by Judith Curry
Failure to communicate the relevant ‘weak link’ is sometimes under-appreciated as a critical element of science-based policy-making.
Lenny Smith and Arthur Petersen have written a very interesting and insightful paper Variations on Reliability: Connecting Climate Predictions to Climate Policy, [link] to complete manuscript. Excerpts:
Our general claim is that methodological reflections on uncertainty in scientific practices should provide guidance on how their results can be used more responsibly in decision support. In the case of decisions that need to be made to adapt to climate change, societal actors, both public and private, are confronted with deep uncertainty. The notions of ‘reliability’ are examined critically, in particular the manner(s) in which the reliability of climate model findings pertaining to model-based high-resolution climate predictions is communicated.
Findings can be considered ‘reliable’ in many different ways. Often only a statistical notion of reliability is implied, but we consider wider variations in the meaning of ‘reliability’, some more relevant to decision support than the mere uncertainty in a particular calculation. We distinguish between three dimensions of ‘reliability’ – statistical reliability, methodological reliability and public reliability – and we furthermore understand reliability as reliability for a given purpose, which is why we refer to the reliability of particular findings and not to the reliability of a model, or set of models, per se.
At times, the statistical notion of reliability, or ‘statistical uncertainty’, dominates uncertainty communication. One must, however, seriously question whether the statistical uncertainty adequately captures the ‘relevant dominant uncertainty’ (RDU). The RDU can be thought of as the most likely known unknown limiting our ability to make a more informative scientific probability distribution on some outcome of interest; perhaps preventing even the provision of a robust statement of subjective probabilities altogether. Here we are particularly interested in the RDU in simulation studies, especially in cases where the phenomena contributing to that uncertainty are neither sampled explicitly nor reflected in the probability distributions provided to those who frame policy or those who make decisions. For the understanding, characterisation and communication of uncertainty to be ‘sufficient’ in the context of decision-making we argue that the RDU should be clearly noted. Ideally the probability that a given characterisation of uncertainty will prove misleading to decision-makers should be provided explicitly.
Science tends to focus on uncertainties that can be quantified today, ideally reduced today. But a detailed probability density function (PDF) of the likely amount of fuel an aircraft would require to cross the Atlantic is of limited value to the pilot if in fact there is a good chance that metal fatigue will result in the wings separating from the fuselage. Indeed, focusing on the ability to carry enough fuel is a distraction when the integrity of the plane is thought to be at risk. The RDU is the uncertainty most-likely to alter the decision-maker’s conclusions given the evidence, while the scientist’s focus is understandably on some detailed component of the big picture. How can one motivate the scientist to communicate the extent to which her detailed contribution has both quantified the uncertainty under the assumption that the RDU is of no consequence, and also provided an idea of the timescales, impact and probability of the potential effects of the RDU? Failure to communicate the relevant ‘weak link’ is sometimes under-appreciated as a critical element of science-based policy-making.
While the IPCC has led the climate science community in codifying uncertainty characterisation, it has hardly paid attention to specifying the RDU. The focus is at times more on ensuring reproducibility of computation than on relevance (fidelity) to the Earth’s climate system; in fact, it is not always easy to distinguish which of the two is being discussed. Instead, the attention has mainly been on increasing the transparency of the IPCC’s characterisation of uncertainty. Being transparent, while certainly a good thing in itself, is not the same as communicating the RDU for the main findings.
A central insight is to note that when the level of scientific understanding is low, ruling out aspects of uncertainty in a phenomenon without commenting on less well understood aspects of the same phenomenon can ultimately undermine the general trust decision-makers place in scientists (and thus lower the public reliability of their findings). Often epistemic uncertainty or mathematical intractability means that there is no strong evidence that an impact will occur; simultaneously there may be good scientific reason to believe the probability of a significant impact is nontrivial, say, greater than 1 in 200. How do we stimulate insightful discussions of things we can neither rule out nor rule in with precision, but which would have significant impact were they to occur?
Statistical reliability (reliability1). A statistical uncertainty distribution, or statistical reliability (denoted by reliability1), can be given for findings when uncertainty can be adequately expressed in statistical terms, e.g., as a range with associated probability (for example, uncertainty associated with modelled internal climate variability). Statistical uncertainty ranges based on varying real numbers associated with models constitute a dominant mode of describing uncertainty in science. One cannot immediately assume that the model relations involved offer adequate descriptions of the real system under study (or even that one has the correct model class), or that the (calibration-)data employed are representative of the target situation.
Methodological reliability (reliability2). We know that models are not perfect and never will be perfect. Especially when extrapolating models into the unknown, we wish ‘both to use the most reliable model available and to have an idea of how reliable that model is’ but the reliability of a model as a forecast of the real world in extrapolation cannot be established. There is no statistical fix here; one should not confuse the range of diverse outcomes across an ensemble of model simulations (projections), such as used by the IPCC, with a statistical measure of uncertainty in the behaviour of the Earth. This does not remotely suggest that there is no information in the ensemble or that the model(s) is worthless, but it does imply that each dimension of ‘reliability’ needs to be assessed.
If the referent is the real world (and not a universe of mathematical models) and the purpose is to generate findings about properties of the climate system or prediction of particular quantities, then ‘reliability1’ is uninformative: one can have a reproducible, well-conditioned model distribution which is reliable1 without ‘reliable’ being read as informative regarding the real world. A methodological definition of reliability, denoted by reliability2, indicates the extent to which a given output of a simulation is expected to reflect its counterpart (target) in reality.
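The reliability1/reliability2 distinction can be made concrete with a toy sketch. The numbers below are purely illustrative, not drawn from any real model intercomparison:

```python
import statistics

# Hypothetical toy "ensemble" of a model-derived quantity
# (say, projected warming by 2100, in degrees C). Illustrative only.
ensemble = [2.1, 2.4, 2.6, 2.8, 3.0, 3.1, 3.3, 3.6, 3.9, 4.2]

# A reliability1-style summary: a perfectly reproducible statistical
# description of the spread ACROSS THE MODELS.
mean = statistics.mean(ensemble)
low, high = min(ensemble), max(ensemble)

print(f"ensemble mean:  {mean:.2f}")
print(f"ensemble range: [{low:.1f}, {high:.1f}]")

# The reliability2 caveat: this range characterises the diversity of the
# simulations, not a probability distribution over the real Earth system.
# A structural error shared by every ensemble member -- a candidate RDU --
# is invisible to this statistic.
```

The printed range is reliable1 in the paper’s sense: anyone rerunning the calculation gets the same numbers. Whether it is reliable2 — informative about the real climate system — is exactly the question the statistic itself cannot answer.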
Public reliability (reliability3). In addition to the qualitative evaluation of the reliability of a model, the reliability of the modellers is also increasingly taken into account in the internal and external evaluation of model results in climate science. We therefore introduce the notion of reliability3 of findings based on climate models, which reflects the extent to which scientists in general and the modellers in particular are trusted by others.
As we argue below, climate scientists can indicate the shortcomings of the details of their modelling results, while making clear that solid basic science implies that significant risks exist. If climate scientists are seen as ‘hiding’ uncertainties, however, the public reliability (reliability3) of their findings may decrease, and with it the reliability3 of solid basic physical insight.
Information on the relevant dominant uncertainty is more useful when it is identified clearly as the Relevant Dominant Uncertainty; it is less useful when buried amongst information on other uncertainties that are well quantified, have small impacts, or are an inescapable fact of all scientific simulation. Extrapolation problems like climate change can benefit from new insights into how to better apply the current science, how to advertise its weaknesses and more clearly establish its limitations, all for the immediate improvement of decision support and the improvement of future studies. This might also aid the challenge of training not only the next generation of expert modellers, but also the next generation of scientists who can look at the physical system as a whole and successfully use the science to identify the likely candidates of future RDUs.
Climate policy on mitigation and decision-making on adaptation provide a rich field of evidence on the use and abuse of science and scientific language. We have a deep ignorance of what detailed weather the future will hold, even as we have a strong scientific basis for the belief that anthropogenic gases will warm the surface of the planet significantly. It seems rational to hold the probability that this is the case far in excess of the ‘1 in 200’ threshold which the financial sector is regulated to consider. Yet there is also an anti-science lobby which uses very scientific sounding words and graphs to bash well-meaning science and state-of-the-art modelling. If the response to this onslaught is to ‘circle the wagons’ and lower the profile of discussion of scientific error in the current science, one places the very foundation of science-based policy at risk.
Failing to highlight the shortcomings of the current science will not only lead to poor decision-making, but is likely to generate a new generation of insightful academic sceptics, rightly sceptical of oversell, of any over-interpretation of statistical evidence, and of any unjustified faith in the relevance of model-based probabilities. Statisticians and physical scientists outside climate science might become scientifically sceptical, sometimes wrongly, of the basic climate science in the face of unchecked oversell of model simulations. This mistrust will lead to a low assessment by these actors of the reliability3 (public reliability) of findings from such simulations, even where the reliability1 and reliability2 are relatively high (e.g. with respect to the attribution of climate change to significant increases in atmospheric CO2). Learning to better deal with deep uncertainty (ambiguity) and known model inadequacy can advance decision support significantly and foster the more effective use of model-based probabilities in the real world.
JC comments: This paper articulates in a different way (and probably better way) many of the issues I have been raising about climate model uncertainty. I particularly like the idea of the Relevant Dominant Uncertainty, but the nature of the uncertainties is often such that it is unclear which among the uncertainties is dominant. What I find to be particularly important is the recognition that ever more precise estimates of statistical uncertainty can be completely meaningless for decision making.
Arthur Petersen is one of a new breed of philosophers of science that is working in the area of the applied philosophy of climate science. Lenny Smith is unique in the climate field by integrating nonlinear science, statistics, economics and increasingly applied philosophy of science to consider broad issues in climate modeling, climate science, and decision making under uncertainty. Note, Lenny Smith will be giving one of the big invited talks in the Atmospheric Sciences Section at the forthcoming Fall AGU meeting.
When benefits are perceived as damages, relying on the perceptions guarantees mistaken policy.
History shows us plainly that sometimes the “best available scientists” should communicate directly with the most courageous and foresighted political leaders:
• Einstein and Szilard to Roosevelt
• Turing to Churchill
• James Hansen to Yasuo Fukuda
Conclusion Not much gets done when big scientific committees try to mollify Big Carbon lobbyists. That’s why the strongest scientists have appealed directly to the most foresighted leaders … and that’s why those wise leaders have listened.
Karl Marx and Friedrich Engels to Vladimir Lenin and Joseph Stalin?
Lysenko to Stalin?
Ayman al-Zawahiri to Osama bin Laden?
Thanks for the link to Dr. James Hansen’s letter to the Prime Minister of Japan, Yasuo Fukuda. The problem may be Big Carbon lobbyists, but it may instead be publicly-financed manipulation of observations. Time will tell.
Seriously, you compare James Hansen to Turing & Einstein? That’s like comparing a cubic zirconia to a diamond.
More like a lump of coal to a diamond.
fan, and for that sake Mann: would you guys put up or shut up? Please reveal what you know of this sinister “Big Carbon” conspiracy. Perhaps start by informing us which blogs or spokespersons are funded by them.
Willful ignorance by r murphy, link by FOMD.
Also, there is Pogo’s immortal “We have met the enemy and he is us!”
Your interest is greatly appreciated, r murphy!
RC, SkS, Gore, Mann…
Like back when the so-called “best available scientists” said the Earth was flat and promoted locking up and punishing anyone who suggested they were wrong, and the so-called “wise leaders” went along with this junk, so-called “science”.
Popular illusion by Herman Alexander Pope, historical link by FOMD.
It is a pleasure to help increase your knowledge of science, Herman Alexander Pope!
FAN, Thank you!
Anytime you can help increase my knowledge of science, please do!
I don’t learn much from people who agree with me.
I learn most from those who present knowledge I never had or from those who disagree about what I think I already know.
Again, Thank you and keep it up.
When I am wrong, I do want and need to know.
Herman Alexander Pope, your outstandingly gracious post is exemplary of a gentleman *and* a scholar. For which, this personal appreciation and sincere thanks are hereby extended to you, Herman Alexander Pope.
Hansen included in the “best available scientists”?
You’re – er – extracting the Michael, right?
Edward Teller to Ronald Reagan:
Concept of SDI
The concept for the Strategic Defense Initiative came from a casual conversation between the renowned scientist and “father of the hydrogen bomb” Dr. Edward Teller and President Reagan.
An important program during the administration of President Ronald Reagan was the Strategic Defense Initiative (SDI). This program was also called “Star Wars” by the popular press. It was a proposed defensive shield to shoot down enemy missiles. The program was very expensive but resulted in winning the Cold War with the Soviets.
Hard to compare Einstein to Hansen: the first said we have the science so we can probably make a bomb, and… we have 90% of the science so we have to change the whole world economy and society…
The various religions prior to the discovery of the scientific method.
This is merely an observation that models of the universe (explanation of how it works) have been around for a very long time. They are designed to help us cope with uncertainty, and sometimes the cost has been pretty big, for instance, the Eugenics projects in Germany, leading to the sterilization and killing of many of the weakest members of society. Fortunately, there were some wise leaders who did not listen to these studies and views.
Unfortunately for us, there is a new religion with its various offshoots, best shown here in this Earth First video. Here in Kooky California, with our weirdo governor, this plays out with high-speed rail trains, a mandate for 30% of energy from renewables by 2020, and storage for a million homes’ worth of energy by 2020. Naturally, this will hurt low-income people, but they don’t know it’s coming.
That’s what adherence to kooky models of the universe get you.
What a pity that this paper was not written 30 years ago. There is so much baggage invested in the IPCC et al processes by both scientists and politicians that it will be difficult for them to take this aboard.
Faustino, your response is exactly what was on my mind as I read this article. As a practicing engineer for 40 years, I could never understand why Reliability was given such a back seat in climate science. It is this very neglect that has made me distrust many science papers I have read thanks to links made available to me on this blog. Perhaps more engineers and outside statisticians need to be involved in the journal review process!
“… why Reliability was given such a back seat in climate science”
Precisely because this undermines the public trust in consensus … which is regarded by AGW activists as their greatest achievement. If this trust is perceived as threatened, CAGW noise becomes absolutely deafening.
not one ”climate model” is fit for the catwalk – it’s all harvested from thin air…
We can expect with a high degree of ‘reliability’ that a significant percentage of the population will not find employment in an economy that is hamstrung by socialist government regulations based upon fears of global warming.
Clearly CO2 is the relevant dominant uncertainty.
Sounds better than control knob.
The only independent variable also is the relevant dominant uncertainty: it’s the Sun, stupid.
My, my, there is only one independent variable?
Methinks you need a refresher class in science.
Pray tell me how to adjust the output of the sun?
What is the correlation between the sun and global temperature anyway?
How many times can you be wrong in one sentence?
True… nominally, there is but a single independent variable. Compared to the Sun everything else is insignificant.
@ bob droege
You missed the word “relevant” – take away the Sun and nothing is relevant in regard to climate – period.
Very good Teddi,
Argument by assertion goes a long way in my book.
Taking away the sun, would that be one of those controlled experiments we can not do with the climate?
I liked wagathon’s “it’s the Sun, stupid” better.
1. often Sun: A star that’s 93 million miles away from Earth.
2. A star that is the center of a planetary system.
3. The radiant energy that is emitted by the sun—e.g., heat and visible light—i.e., sunshine.
4. A sunlike object, representation, or design.
5. That independent variable that somehow exerts its influence on everything else.
6. A mass of incandescent gas that doesn’t just cause different things to be related it accounts for it.
7. A place where no thing lives that we cannot live without.
8. A fearsome nuclear bomb.
9. A place so hot – the temperature is millions of degrees — even metals are gases.
10. Key independent variable responsible for global warming and cooling–nominally, it’s the Sun, stupid.
“…we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible” (IPCC Third Assessment Report, Section 14.2.2.2, p. 774).
What caused fear of global warming? A gaming mentality invaded Western academia — schoolteachers using computers to filter, mix, manipulate, adjust, cut, optimize, remix (i.e., playing games with) data — all in a godawful quest for personal relevance in the cipher age.
Oh my god – the sun!
what is it Simon?
A big fiery ball in the center of our solar system, but that’s not important right now, we’re headed straight towards it.
“From another direction he felt the sensation of being a sheep startled by a flying saucer, but it was virtually indistinguishable from the feeling of being a sheep startled by anything else it ever encountered, for they were creatures who learned very little on their journey through life, and would be startled to see the sun rising in the morning, and astonished by all the green stuff in the fields.”
― Douglas Adams, So Long, and Thanks for All the Fish
The sun is a mass of incandescent gas.
Coal and oil are simply stored Sunlight!
Many apparently suffer from crystalmelanophobia or fear of black crystals, which probably could be as debilitating — at least to others — as fear of having peanut butter stick to the roof of your mouth.
First of all, it is obvious from COP 19 that China and India will do nothing but pay lip service to limiting CO2. The US isn’t going to suddenly decide that the 16th century was actually a really joyous time in human history and stop using fossil fuels. Other than nuclear, there really isn’t much of an alternative to fossil fuels. So, there, that’s what’s up with that.
So, we have some time. We can divert some of the copious money already being spent on climate science and gather more data. For example, the Miller et al. 2013 study could have benefited from ice cores from the ice cap in question. It’s not a very big ice cap, easy to drill I’m guessing. That would give us a clue how the ice has been moving over the centuries. It would have benefited from drilling to the ground under the icecap in various locations to sample the moss. Has the moss really been stationary? Or has the ice moved it around? Is the pattern of the age of the moss what one would expect given the hypothesis? This would have been money spent for a good purpose and would have minimized the need for statistics.
I’m sure there is a lot more data out there that would further our understanding of climate and somewhat remove the need to continually try to infer data. In the meantime, work can continue on the models, but at this point, some are just putting lipstick on the pig.
This view assumes a direction of cause-and-effect for which there is no supporting evidence provided. Certainly, there may be “skeptics” whose skepticism has truly been generated from those skeptical of oversell or over-interpretation of statistical evidence. On the other hand, there are certainly “skeptics” who are predisposed to interpret the evidence in such a way as to justify a conclusion of oversell or over-interpretation.
What is the evidence of the latter? It is well provided by Kahan’s evidence related to “motivated reasoning.” But even more obviously, it is abundantly clear because the vast majority of people who are “skeptics” do not know the science well enough to conclude that the evidence has been oversold or over-interpreted. Like so many of my much beloved “denizens,” it appears that the authors have conflated those “skeptics” who are knowledgeable about the science (for whom it is really impossible to say what the direction of causation is that results in their “skepticism”), with the average Joe “skeptic” who has formulated views on climate change despite not actually having studied the science in any detail (the same could be said, of course, about the average “realist”).
And in the never-ending and recursive irony that manifests in the climate wars, this obvious error, this obvious oversell and over-interpretation of evidence, is being made by people who are experts in evaluating evidence so as to identify oversell and over-interpretation.
Same ol’ same ol’.
Hockey Stick. Hockey Puck.
The ‘science’ of global warming can be pretty easily summarized, as follows: America should feel guilty about causing polar bears to be stranded on small chunks of ice floating in the middle of an ocean and not roaming wild and free across the Arctic as nature intended.
“This view assumes a direction of cause-and-effect for which there is no supporting evidence provided. Certainly, there may be “skeptics” whose skepticism has truly been generated from those skeptical of oversell or over-interpretation of statistical evidence. On the other hand, there are certainly “skeptics” who are predisposed to interpret the evidence in such a way as to justify a conclusion of oversell or over-interpretation.”
You again misread the text. The text offers a prediction.
Overselling the models is likely to LEAD TO
A) bad policy
B) a new generation of academic skeptics.
There is a historical precedent for this in other fields, so it’s not an insane prediction. But your response assumes they are describing a current state of affairs. They are predicting.
Again: when you read, first try to understand what they are arguing.
Still buying your straw in bulk, I see.
Looks like I pasted when I should have cut. Or something like that. Let’s try that again.
Still buying your straw in bulk, I see.
When they make a prediction and you act as if it is a description, you’ve cornered the market in straw and nobody else can build a damn thing.
Address the fact that you consistently misread to invent issues where there are none.
Overselling models in operations research, overselling them in finance, did lead to bad policy, and quite naturally the generations that follow make hay of that. Nothing too strange about that prediction. But it IS a prediction and not a description, as you misread the text.
This is a prediction
“failing to highlight the shortcomings of the current science will not only lead to poor decision-making, but is likely to generate a new generation of insightful academic sceptics, rightly sceptical of oversell, of any over-interpretation of statistical evidence, and of any unjustified faith in the relevance of model-based probabilities.”
This is a description
“failing to highlight the shortcomings of the current science led to poor decision-making, and generated a new generation of insightful academic sceptics, rightly sceptical of oversell, of any over-interpretation of statistical evidence, and of any unjustified faith in the relevance of model-based probabilities.”
See the difference?
so when you present their prediction as a description and then ask for evidence that their description is true, you have engaged in bad faith yet again.
You do this over and over again. level up man.
> when you present their prediction as a description and then ask for evidence that their description is true […]
Predictions are hypotheses. Empirical hypotheses rest on observational evidence. Asking for the observational evidence of hypotheses is legitimate.
Using words like “likely” is likely to lead to statistical tittle tattle.
WIllard: Predictions are hypotheses.
They mean different things. Basically predictions are outcomes from models about future events.
I’d accept Merriam-Webster on the definition of hypothesis as “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.”
You can make a prediction without intending that it would be used as “a starting point for further investigations.” In some cases, a prediction (model outcome) could be a basis for a hypothesis, but even so, it’s still not the same thing, even if it leads to a hypothesis.
> They mean different things.
Indeed, just as “apple” and “fruit” do not have the same meanings.
Joshua and Steve Mosher are on different sides of the Dunning Kruger; this has become clear over time.
Very odd the way some of my posts just disappear. Posts that are in no way a violation of the stipulated rules. I wonder why that is. Let’s try again.
The authors’ predictions oversell and over-interpret the available evidence. Evidence that they failed to even discuss with any depth.
They describe what they think is “likely.” On what basis do they determine what is “likely?” How do they deal with the uncertainties? They don’t. I described elements that make their prediction uncertain.
Now based on past behavior, it is “likely” that steven will continue to misrepresent my views and build straw men to make this about me.
How likely is it that this comment will disappear?
I am deleting comments that have no purpose other than to pick a fight with or insult another commenter. You have no idea how boring this is to other readers, and how it keeps others from commenting here.
Please explain. DK says that competence will reduce confidence because it leads to a misunderstanding of the competence of others. Which of us shows such an attribute?
I just said that discussion would be fruitless. I meant it.
Heh, Dunning-Kruger is one of the last refuges of the sophistical.
There are intermittent studies published in which ‘XXX’ is shown to be damaging to public health. Most of these just die out of their own accord. In some cases this leads to government action to ban compounds; in others, such as the BSE = vCJD and Wakefield’s vaccines = autism scares, there are massive financial and social costs. However, the biggest effect of the headlines, generated from press releases, to the effect that ‘XXX kills’, is the erosion of faith in the honesty of scientists and in the practice of science.
These studies, along with social ‘science’ studies of cause and effect of highly complex behavioral issues, are devaluing the coinage. To get attention, and funding, the shroud waving increases and we have a situation of inflation of risks and an increase in the action threshold of the public.
FOMD pointed out the action of Einstein and Szilard with regard to Roosevelt.
Please note, both Einstein and Szilard were wrong about the ability of the Nazis to acquire, much less deliver, nuclear weapons. What Einstein and Szilard accomplished was to set in motion a massive mobilization of resources into nuclear weaponry, first by the USA and then others, and the incineration of two Japanese cities. Personally, I am unsure whether the acquisition of nuclear weapons by humanity in the 40’s and 50’s was a good or bad thing, but one thing is sure: both Einstein and Szilard were wrong on the cost and complexity of weaponizing nuclear fission, and they engineered the nuclear stand-off that came close to destroying civilization more than once.
Well yes. If you make a statement and then refuse to explain it, it will be fruitless. On the other hand, you could explain what you meant.
Once again. DK describes (1) an overestimation of competence in someone who is not competent. I know I’m not competent, and admit it readily – so on what basis do you think steven over-estimates his competence? (2) DK describes a possible weakening of self-confidence in someone who is competent – due to an overestimation of the competence of someone else. You described steven and myself as opposite ends of a “curve.” That means that it is “likely” that one of us has a weakened self-confidence due to an overestimation of the competence of someone else. Is it I who has weakened self-confidence and who overestimates steven’s competence, or is it steven who has weakened self-confidence due to overestimating my competence? Some other explanation?
Or was that just a cheap throw-away, hence the discussion would be fruitless?
curryja | November 26, 2013 at 10:04 am |
“I am deleting comments that have no purpose other than to pick a fight with or insult another commenter. You have no idea how boring this is to other readers, and how it keeps others from commenting here.”
Thank you very much!
Evidence? I have seen some data on “trust” in scientists over time. They show a moderate drop only among a minority of Republicans. No drop among independents. No drop among Democrats. The drop is concurrent with the growth of the religious right, which is largely in opposition to (does not “trust”) the majority of establishment scientists on issues such as evolution and the formation of the universe. The drop is concurrent with divisions on science-related topics such as abortion and stem-cell research. The drop is concurrent with a wider drop in “faith” in societal institutions and “government.”
You are a scientist. You speak of an “effect.” Where is the evidence of that “effect,” other than your off-the-cuff, anecdotal reasoning? Why would a scientist, repeatedly, offer completely confident descriptions of an “effect,” with no discussion of uncertainty, with no explication of evidence?
The authors did say something about oversell, and about over-interpretation of evidence.
Now didn’t they?
“No drop among independents. No drop among Democrats.”
Really? Then who is it that opposes the science of GM foods and vaccines, and doesn’t that all by itself imply a drop in the belief in science?
“The authors’ predictions oversell and over-interpret the available evidence. Evidence that they failed to even discuss with any depth.
They describe what they think is “likely.” On what basis do they determine what is “likely?” How do they deal with the uncertainties? They don’t. I described elements that make their prediction uncertain.”
MUCH better. Now we can have a discussion.
1. You argue that they oversell based on available evidence.
2. You argue that they don’t present the evidence.
You have a problem: you can’t judge whether they oversell the evidence unless they present it.
Here is what we have as possibilities
Start with their argument.
Overselling models is likely to lead to
A) bad policy
B) a new generation of Academic skeptics
Let’s take them in order.
Overselling models will likely lead to which of these?
A) bad policy
B) good policy
C) can’t tell.
Likely in common usage means greater than half. Now, why do we use models in policy? We use them because we think that a good model will lead to good policy. The whole basis of using a model is that it will lead to better policy. So which of the above three do you think is most likely to be true?
Understand, you don’t need evidence to actually make an argument here.
The argument can be made entirely from the point of internal consistency.
If one believes that having better models will lead to better policy, and that is the reason why we use them, then it would be hard to consistently hold that overselling models leads to better policy or has unknown effects. As far as evidence goes, one has to look at what counts as evidence for this. Do we have any examples from the field of climate science where policy was based on oversold models? If not, do we have any other fields to consider? For example, financial models that drove bad policy. In short, if you don’t think that overselling models will lead to bad policy, then you lose the rational basis for using models at all.
The second point. Will overselling models likely lead to a new generation of academics that question models?
Again, we have three choices
overselling models will lead to
1) more academics getting on board the modelling train
2) more academics questioning the modelling train
3) no change.
One need only look at the change in rhetoric around climate modelling to see that skepticism about modelling is gaining traction within the community. In the US we see a push to move toward having one model as opposed to several. We see more modellers freely admitting issues with models (Tamsin, Hawkins). This is generally what happens in academia when a particular approach is oversold. A new generation sees an opportunity to make its mark by parting with the past or questioning the generation of scholars that went before them.
Your problem, Joshua, is that you have learned important critical thinking skills. For example, why do you think that every argument needs evidence? And where is your evidence that this is the case?
” They show a moderate drop only among a minority of Republicans. No drop among independents. Not drop among Democrats. The drop is concurrent with the growth of the religious right ”
So there is a drop, which you believe can be discounted due to a rise in the levels of religious dogma in AMERICAN society.
However, please note that the two examples I used for a change in the public’s perception of scientific integrity are drawn from the UK, where there are no Democrats/Republicans and no religious right.
(1) Actually, the evidence is that there is no particular association with political ideology. Check out Kahan’s evidence and compare it to your, anecdotal evidence.
(2) The lack of association between political ideology and scientific “consensus” on those issues shows that the claims that we see about some “asymmetry” in rational approach to science (claims made on both sides, ubiquitously), are specious.
You might want to take a look at this book “Science Left Behind” then.
> you cant judge whether they oversell the evidence unless they present it.
Unless they base their prediction on a counterfactual like
[EC] If there was no overselling, less insightful academic sceptics would likely be generated.
which belongs to metaphysics, or at best folk psychology.
Considering their armchair standpoint, no wonder the authors consider the distinction between projection and prediction artificial.
Doc Martyn: Please note, both Einstein and Szilard were wrong about the ability of the Nazis to acquire, much less deliver, nuclear weapons.
Worth remembering. One might be forgiving of mistakes made in the fog of war, but that is not an unambiguous example of appropriate policy advocacy. It has two features in common with climate alarmism: (1) we have to act immediately (without thorough evaluation, discussion and debate); and (2) if we don’t act, the consequences may be disastrous.
So once again, we see a self-described “skeptic” fail to formulate an opinion after a carefully controlled look at the evidence. No, I don’t believe that “[the drop] can be discounted due to a rise in the levels of religious dogma…”
(1) I think that the growth of the religious right is, plausibly, an important influence in that drop among a specific group and,
(2) It isn’t a growth in religious dogma that I am speaking about, but the increased linkage between those who have strong religious identities and politics. There is a boatload of evidence supporting that linkage.
Just so I understand: in the examples you used were you talking about them as a way of describing the cause-and-effect of public opinion/faith in science in a general sense? If so, what evidence did you show of that effect you’re describing. You can’t seriously be using an example of change in public opinion related to one specific issue to show a general effect, can you?
Joshua, it is very hard to understand what the hell it is you mean, as you write in a manner designed to misdirect rather than to inform.
You used the phrase ‘religious right’, not I.
The April 2012 paper in American Sociological Review indicates that members of the public who define themselves as conservative are becoming less trusting of scientists. It may be the ‘religious right’, whatever that means, but the data show that the most educated conservatives are losing trust, and that whilst Church Attendance correlates with being conservative, it does not have an interactive effect.
In the conclusion it is stated:
“Relating to the second pattern, when examining a series of public attitudes toward science, conservatives’ unfavorable attitudes are most acute in relation to government funding of science and the use of scientific knowledge to influence social policy (see Gauchat 2010). Conservatives thus appear especially averse to regulatory science, defined here as the mutual dependence of organized science and government policy”
So, based on peer-reviewed social science, you are talking bollocks, again.
I’m glad that you found Gauchat. But perhaps you should read him in more detail:
Gauchat’s findings may also reflect clashes between supporters of conservative ideology and particular areas of science. The rise of the Christian right, which rejects evolutionary theory and opposes research on embryonic stem cells, may have been particularly important – Gauchat found a similar decline in trust in science among frequent churchgoers. More recently, conservatives have come to fear that global warming will provide an excuse for “big government” to restrict their personal freedom through environmental regulation.
Gauchat’s evidence is part of what confounds the arguments of folks like our much beloved Don – because the decline in “trust” in scientists amongst conservatives has been taking place over decades. Hence, the oft’ found claim in the “skept-o-sphere” that the decline is related to climate scientists doesn’t comport with the evidence.
Although it does comport with their theories based on their own, anecdotal, reasoning.
Now keep in mind, in contrast to your mistaken conclusions about what I have argued, I do not believe that ” [the drop] can be discounted due to a rise in the levels of religious dogma in AMERICAN society.” But I do think that the increased linkage between religiosity and political ideology, a linkage for which there is tons of solid evidence (including many, many explicit statements from the religious right and conservative ideologues both, about their intent to strengthen that linkage), is an influence on the growth of “distrust” in science found only among a minority of a particular demographic.
There is much else in Gauchat that supports the possibility that the decrease in trust among that subsection of the public is associated with political ideology – also interesting and something not considered in the article Judith excerpted.
Which, to get back to my earlier point, is the freakin’ evidence that shows that what the authors “predicted” is oversell and over-interpretation. They are “predicting” something that will happen. Where is their evidence? The evidence shows that if we were going to be making “predictions” about what is “likely” based on empirical analysis of data from the past, distrust in science will continue to drop in association with political ideology. In other words, the determination of “oversell and over-interpretation” of scientists may well lie primarily in the ideology of the observer rather than in what the scientists say. People of one political persuasion will determine “oversell and over-interpretation” and people of another won’t – depending on the issue and how it plays out in a politically-charged context.
And as Kahan points out – this only happens with a tiny minority of issues related to the interpretation of scientific evidence. Only those that are politically charged.
However, let’s not forget that predicting the future based on the past is very problematic.
Come to think of it, that is yet another way that the authors were guilty of oversell and over-interpretation, isn’t it?
Truly understanding what they are arguing involves moving beyond simply grasping that they are making a prediction, to understanding why they are making that prediction, and what they are saying about that prediction.
They argue that “climate scientists” should not oversell models and overdraw conclusions. Not because it is unscientific or immoral to do those things, but because doing so might raise the interest of actual scientists and draw their attention to what you are doing.
It is a propaganda exercise. As is Joshua’s pre-emptive broad brushing of any actual scientist who (having had her eyebrows raised by the unscientific hyping activities of “climate scientists”) might be inclined to pursue investigation of the status of “climate science”. Joshua’s post is perfectly in keeping with what the authors are trying to achieve. He understands.
Is there a doctor in the house?
In the context of what the letter actually said, I do not think they were wrong. The letter basically states that recent work made it possible that a very powerful new bomb could be built. They speculate that it might be too heavy for delivery by air. They state that Germany had already cut off sales of uranium from mines it controlled. They make no claim that Germany was doing anything other than halting sales.
FDR was already interested in uranium research, and was very distrustful of Germany.
Bombs were going to be made. That was probably inevitable. By whom was uncertain. FDR made moves that allowed it to be us.
In 1945 Japan was on the verge of starvation. My father’s Division took severe losses on Iwo Jima, an island about the size of a typical college campus. Around 27,000 (almost 7,000 Americans and almost all of the defenders) men died in action and another 20,000 (almost all U.S.) were wounded in action. The belief that the atomic bombs saved their lives was/is virtually universal among American participants in the Pacific War. After the island campaigns, none of them believed they would survive the invasion of Japan.
With no atomic bombs Japanese casualties would have been significantly higher. The blockade had cut off food imports entirely. There were none. Bad weather had destroyed much of that year’s rice crop. Next up on the conventional bombing schedule was the destruction of their food distribution railroads and road bridges. Attempts to harvest the surrounding ocean meant certain death. The Japanese military had food. They did not care about civilian deaths. Multiple millions were about to die in a very short time span. After the surrender the same disaster nearly happened. At the behest of Truman, former President Hoover quickly assessed the situation in Japan, and went to congress and changed reluctant minds. Hoover has never received the acclaim he richly deserves for one of the most successful humanitarian missions in history. We hated them and we did not want to feed them.
And, of course, the most expensive new US weapon of WW2 was an airplane.
Doc? Has anyone see Doc?
He was here just a little while ago, and now he’s disappeared without a word of explanation.
As we argue below, climate scientists can indicate the shortcomings of the details of their modelling results, while making clear that solid basic science implies that significant risks exist.
NO, the shortcomings of their modeling do imply that they cannot know whether there are any risks.
If climate scientists are seen as ‘hiding’ uncertainties, however, the public reliability (reliability3) of their findings may decrease, and with it the reliability3 of solid basic physical insight.
Yep, they are hiding uncertainties and messing with data and we, the public, do not trust them.
Their solid basic physical insight is NOT solid, not basic and is not insight.
They do not understand Climate or their Theory and Models would perform better.
The “1 in 200” number that is presented in this text is way too high. Their understanding is much less than zero.
This mistrust will lead to a low assessment by these actors of the reliability3 (public reliability) of findings from such simulations,
It really should lead to a low assessment.
even where the reliability1 and reliability2 are relatively high
That has not happened yet and may never happen.
Some men you just can’t reach.
Indeed skepticism should go equally in both directions. It could be worse than the climate model spread or better. If someone had said that the models were underestimating Arctic sea-ice loss, they would have been right.
This is a lesson that the models may be underdoing it, and perhaps there is too much conservativeness in the climate science community that throws out the more scary results, which might have predicted this better. Being conservative about climate change is a likely failing as models forecast the unknown. There may be some selectivism that keeps things more as they are (such as sea ice) rather than letting things evolve faster into unknowable states, because that sets the tipping point dominoes falling faster, and we know models can’t accurately predict those because they are like bifurcation points. None of the published climate models have shown a tipping point by 2100, but that could be because those runs are thrown out and retuned to not do that. There is a disincentive to be an outlier in the IPCC ensembling process because that is like a competition, and conservatism usually wins over sticking your neck out.
Bottom line: uncertainty works both ways.
Without the benefit of the scientific method climate models already are little more than doomsday signs around the necks of insane urban witchdoctors.
In times of rapid change, the scientific method may be too conservative.
So much for the promise of science in helping to overcome superstition and ignorance.
Jim D, right as usual. The coming Eddy Minimum must be stopped by burning more fossil fuels. Just in case. Science is too invested in the old meme of AGW and therefore cannot grasp the cold doom approaching.
“In times of rapid change, the scientific method may be too conservative.”
Say the anti-science people who wish to make rapid changes.
It’s this type of inanity from Wagathon and the other type localities of Lewandowsky’s fake but accurate denier psycho theory that discredits your blog and bores your readers. Joshua’s “vapours” delight and tickle the cold heart of realists.
“In times of rapid change, the scientific method may be too conservative.”
Perfect. Thank you. Let’s scuttle common sense and reason because you’re having trouble sleeping at night. And yet you can’t even quantify…in fact can’t even identify… this “rapid change” you speak of.
Bottom line, you’re all scared rabbits.
The wonderful thing about tipping points is that they have always been, and always will be, in the right direction.
When it is warm, polar sea ice melts and snow falls to increase albedo and cool the earth.
When it is cold, polar waters freeze and stop the snow from falling, and albedo decreases and the sun warms the earth.
The models predict increasingly lower air pressure in the Arctic with increasing global mean temperatures. Lower pressure in the Arctic will make it colder there not warmer. Think positive Arctic Oscillation.
The paper says: “How do we stimulate insightful discussions of things we can neither rule out nor rule in with precision, but which would have significant impact were they to occur?”
Wrong question. Climate models have nothing to do with managing uncertainties, but with keeping a lie alive.
In fact, the IPCC does not need to stimulate any discussion. In the late ’80s (the cold war was ending) the UN had to struggle for survival. So they started to think about global policies, but the sad fact is that they invented a fictitious science,
in order to provide some kind of reliability for those policies.
I understand that you might think that all I am saying is another paranoid conspiracy theory from a madman on the internet. But all this political invasion of science is explained and justified in Al Gore’s 1990s book “Earth in the Balance: Forging a New Common Purpose”.
Have you considered running your thinking by Lew?
Hi Joshua. English is not my first language so I have difficulties in understanding the word “Lew”. Could you express what you mean?
“Hi Joshua. English is not my first language so I have difficulties in understanding the word “Lew”. Could you express what you mean?”
“Loo” is an English euphemism meaning “receptacle of crap”. “Lew” is an alternate spelling for the same thing.
So JJ, how long have you been waiting for the perfect opportunity to spring that one? ;)
I believe that: (a) scientists must identify the fallacies in the IPCC’s climate report, as I did in:
and (b) make them public, as I am doing now.
This is the only way for climate science to remain reliable.
If you have any other theories or opinions, you could try to express them in a logical understandable way (well, if you have brain enough to do it).
Oh sorry forgot the winkie which transforms putdowns into good clean fun. ;)
I see you fixed the broken link to your faculty page too, Wilcox. If I can be of further assistance with complicated computerish stuff in the future please hesitate to ask. ;) ;) ;)
I believe Joshua is referring to several papers by “Lew”. This is Lewandowsky, who has published some really, really crappy papers trying to link people he disagrees with to other people who may believe in conspiracy theories.
JJ, loved the loo/Lew gag. Ignore David; he thinks everyone is as venomous and nasty as he is.
You’re a mind reader now? Amazing.
Ok Bill, I guess then that Joshua could be disappointed when someone on the internet criticises Al Gore. And maybe many Americans who voted for him in 2000 do not accept criticism of such an Oscar-winning, Nobel-laureate almost-US-president. But I am providing you with that reference to his own book. In it, Al Gore argues for political climate action regardless of what science might say. Furthermore, could anyone think that in 1980 (one year after Charney’s report) the UN and the WMO would have had a 0.0000001% chance of creating the IPCC? Of course not. Not even one in a trillion chance!
You guess wrong. Criticize away, without any concerns of me being disappointed. I think Gore’s a blowhard. Always have. I didn’t vote for him because at the time I wasn’t inclined to vote for a lesser of two evils. Funny how Bush turned me around on that perspective.
Jim D: “There is a disincentive to be an outlier in the IPCC”. Ah, must stay in the tent of the “97 %” and toe the consensus line. But I believe our hostess would agree few if any are pulling their punches about ringing alarms. Interesting idea Jim D, but it doesn’t really pass the smell test.
I think the failure to predict the Arctic sea ice loss proves my point.
The failure to predict above average global sea ice is typical of cultists who never grasp the global picture.
They failed because they got the science wrong, it isn’t melting from above, it’s melting from below.
Jim D. you write “I think the failure to predict the Arctic sea ice loss proves my point.”
I am not sure which prediction you are talking about. This year with the ARCUS predictions, only one guess was above the actual minimum. All others guessed too little ice. Is this what you are referring to?
This was the prediction I linked above
It was a century projection that went wrong within 15 years. You don’t hear much about this because the skeptics have a confirmation bias that deletes this kind of information from their memories.
Let’s see what the IPCC said.
Changes in sea ice: There will be substantial loss of sea ice in the Arctic Ocean. Predictions for summer ice indicate that its extent could shrink by 60% for a doubling of carbon dioxide (CO2), opening new sea routes. This will have major trading and strategic implications. With more open water, there will be a moderation of temperatures and an increase in precipitation in Arctic lands. Antarctic sea-ice volume is predicted to decrease by 25% or more for a doubling of CO2, with sea ice retreating about 2 degrees of latitude
maksimovich, yes, and some of this has already happened earlier than they expected. See my graph.
Global sea ice peak this year not seen since 1996.
Not at all as impressive as what’s happening in the Arctic, is it?
This year’s Arctic minimum appears to be roughly close to that of 2005. In any case, it isn’t as bad as some of the past 5 or 6.
The happenings in the Arctic appear to be regional, as in not CO2. Otherwise, we should be seeing the same sort of thing in the Antarctic. I mean, there is just as much CO2 there, isn’t there?
I think looking at year-by-year regional trends is a big mistake, but taking these trends over 30 years is more robust, and since Milankovitch cycles favor Arctic sea ice more now than during the last few millennia, it is clear something else is happening that interrupted the northern cooling. Skeptics pretend not to understand what that something-else could possibly be, and just want to appear clueless, which they do succeed in doing. I give them that much.
Antarctic surface temperatures increased by about 1.5°C since 1960, which is much larger than the global average. So, if you are looking for a reason why there hasn’t been as much of a reduction in sea ice extent, it’s not just a lack of warming. Consider that Antarctica is a big continent surrounded by oceans while the Arctic is an ocean surrounded by land masses. You can’t expect a change in forcing to have the same effect. The albedo feedback is much lower, and the Antarctic minimum sea ice extent is much more constrained than the Arctic’s. When it drops to a value close to zero, there remains this vast ice sheet in the middle. Melt water from the ice sheet also contributes much to freshen surface water and make it more prone to freeze in winter. The warming Southern Ocean also promotes precipitation, which also contributes to freshening surface water and to covering sea ice with snow. To expect CO2 and warming to have the exact same effect in two such vastly different environments isn’t reasonable.
This one is for jim2 and lil miss sunshine.
Hope the paws buster brightens up their day:
Which is the data, which is the model?
Arctic Ice melting much faster than IPCC consensus. Rest of the earth melting much slower than IPCC consensus.
If your point Jim D was that the IPCC consensus science is wrong then you have a point. Otherwise you have no point.
WebHubTelescope (@WHUT) | November 27, 2013 at 2:06 am |
Trick question. Neither is a model. The giveaway for even a casual but well-informed observer (that’s me) is the precise overlay of the well-known 1998 El Nino spike. No model predicts El Nino. Therefore both traces are based on historical observations. It’s like overlaying RSS satellite data on top of ground instrument data. They will be slightly different but both will reflect the 1998 El Nino, and neither can be called a model in the sense of the word that Coupled Ocean/Atmosphere General Circulation Models such as those in the CMIP5 ensemble used by the IPCC are models.
Massive success on the part of the CSALT model, in contrast to Springer’s negativity.
The secular trend of the model is predominantly driven by the value of CO2.
Predicting that and being able to understand and bound the values of natural variability as Springer pointed out makes CSALT a winner.
And massive fail on Springer’s part as he was unable to determine which is the model.
The challenge is still on.
CSALT is not a model. Models predict the future. When does your model say the next severe El Nino will occur? You seem to be claiming it hindcasts the 1998 El Nino so the proof of your “model” would be its ability to forecast a future El Nino. If it can’t do that it isn’t a physics model but rather just a (perhaps unique) way of taking proxy time series of temperature and transforming it into a replica of the instrumental temperature record.
Prove me wrong by using your model to “predict” something that actually comes to pass in future observations. Otherwise: FAIL.
My oh my, how quaint an argument.
In the CSALT model, the strength and timing of the ENSO events are mainly estimated by the SOI pressure difference. Since this value is bounded by physical laws and always reverts to the mean, when an El Nino or La Nina event occurs is largely noise on top of the larger trend.
You will really have to try harder next time.
It reverts to the mean since a pressure difference cannot be sustained in the long term, as the mean atmospheric pressure is determined by the gravity head of the overhead atmosphere.
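The mean-reversion claim can be illustrated with a toy sketch: an AR(1) process standing in for a bounded pressure-difference index like the SOI. The parameter values (`phi`, `sigma`) here are assumptions chosen for illustration, not fitted to any real data, and this is not the CSALT model itself.

```python
import random

def simulate_ar1(n=1000, phi=0.9, sigma=1.0, seed=42):
    """Mean-reverting AR(1): x[t] = phi * x[t-1] + noise, with |phi| < 1."""
    random.seed(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        series.append(x)
    return series

series = simulate_ar1()
mean = sum(series) / len(series)
# Mean reversion: the long-run average stays near zero even though
# individual excursions (the "events") can be large.
print(round(mean, 2))
```

The point of the sketch is the one made in the comment: the timing and size of individual excursions look like noise, while the process as a whole is pinned to its mean.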
BTW, please note that Springer still can not tell which temperature profile is the model and which one is the measured GISS time series.
He is very confused about the nature of a model. Shouldn’t be surprising, as Clarke once said that any sufficiently advanced technology is indistinguishable from magic (where’s Kip? LOL). Basic thermodynamic analysis of energy balance fits into the technology bucket.
Grade = A+
Of course I can tell which time-series is which. I just didn’t bother.
The green line is GISS.
Neither is a model. One is a land-only temperature series taken from thermometers and the other is a proxy temperature series taken from various historical data sets like SOI.
So when does your “model” predict the next El Nino event will happen?
Stop dodging the challenge to make a prediction.
Instead of self-publishing like your peers at Scientific Principia (LOL) try getting your model published in a legitimate peer reviewed journal. Publish or perish, dopey.
Oh sorry. Principia Scientific not Scientific Principia. Not that it makes much difference. If it looks like crank science and reads like crank science it’s probably crank science.
Two peas in a pod.
The equatorial Pacific has not looked like this since the 1980s, so my prediction is El Nino in 2014, but probably not super. Still, should be a top-2 year. Close to 2010, or higher. Paws up.
I would love to see a strong El Nino. La Nina means cold and drought in Texas and we’ve had quite enough of that with back-to-back La Ninas. I don’t figure it’s going to end anytime soon though because this appears to be a repeat of the 1950’s decade-long drought so it’s only half over. This was climatologically predictable. It’s simply the 60-year Atlantic Multi-Decadal Oscillation having its way. Anthropogenic warming didn’t faze it one bit by the looks of things.
This paper seems to talk around the fact that if the climate models cannot predict future temperature then they are simply not fit for that purpose, and it is a serious mistake to use them for that purpose. They may be useful for many other purposes, but to use them as an argument to remake society backfires badly. If their prediction is not true, why would one believe that even if the earth did warm, that would be a bad thing? Am I going to take the climate scientists’ word for it that 3 deg. warming is going to be catastrophic when they can’t even predict the warming? Certainly not. Especially when I can get in my car and achieve 3 deg. warming in several hours of driving. Without death even!
IPCC’s First Assessment Report Predictions:
Based on current model results, we predict:
“An average rate of increase of global mean temperature during the next century of about 0.3°C per decade (with an uncertainty range of 0.2—0.5°C per decade) assuming the IPCC Scenario A (Business-as-Usual) emissions of greenhouse gases; this is a more rapid increase than seen over the past 10,000 years. This will result in a likely increase in the global mean temperature of about 1°C above the present value by 2025”
This prediction is contradicted by observation, which shows a warming of only about 0.5°C above the 1990 level by 2025.
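The arithmetic behind that comparison can be checked directly. The 0.3°C/decade rate is the FAR Scenario A central estimate quoted above; the 0.5°C observed figure is the commenter’s, taken as given here:

```python
rate_per_decade = 0.3           # C/decade, FAR Scenario A central estimate
decades = (2025 - 1990) / 10    # 3.5 decades from the 1990 baseline
predicted = rate_per_decade * decades
observed = 0.5                  # warming above 1990 asserted in the comment

print(round(predicted, 2))      # ~1.05, matching "about 1 C by 2025"
print(round(predicted - observed, 2))
```

So the quoted prediction of "about 1°C above the present value by 2025" is internally consistent with the 0.3°C/decade rate, and the claimed observational shortfall is roughly a factor of two.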
“As an advice-giving profession we are in way over our heads…Even to begin to assess the likely consequences of these policies in anything like a scientific way is clearly well beyond the current limits of our discipline.”
Robert E. Lucas Jr., 1980.
Nat, an excellent paper in which Lucas shows true humility, an example to many who wish to direct public policy without sufficient foundation.
Economics is a social science. The phrase “social science” is a contradiction in terms right from the word go.
we jest need ter establish which
of the goddam uncertainties is
the Relevant Dominant goddam
Yeah, my thought too. And in fairness, Dr. Curry noticed it too: “I particularly like the idea of the Relevant Dominant Uncertainty, but…[often] it is unclear which among the uncertainties is dominant.” And how would you estimate that?
I think that you’d have to take my approach: acknowledge that our capacity to predict the future is very limited, that it will always surprise us, and pursue policies which maximise our capacity to deal with whatever befalls. That is, flexible, non-prescriptive, light-handed policies which encourage personal responsibility, entrepreneurship, innovation etc., and do not presume to be able to determine a specific optimal path forward. So don’t focus on identifying and quantifying the RDU, but on making its values have less importance, whatever it is.
That’s a defensible position. Actually, I think the idea of an RDU is more valuable to researchers than policy makers. Is it possible that Smith and Petersen are confusing these two things? And also, unrelatedly, in the context of risk and uncertainty isn’t it unfortunate to adopt an already-used abbreviation? (The phrase “Rank Dependent Utility” gets 102,000 hits on Google.)
A fine distinctshun here, light-handed rather than
Beth, once in a published paper, I inadvertently referred to “light-hearted,” rather than “light-handed,” regulation. I think I caught it before print, but on reflection light-hearted might have been better.
I’ve given up trying to figure out why the slips are so often an improvement over the first intention. It’s all stolen elsewhere, anyway, so I like to think it’s an ironic reminder from the forgotten source.
> And also, unrelatedly, in the context of risk and uncertainty isn’t it unfortunate to adopt an already-used abbreviation?
The main problem with acronyms is that they may oversell the existence of the entity to which they refer.
Once you have determined the RDU, you need to look inside the RDU and find out what the RDU of the RDU is. After that it is RDUs all the way down.
Actually there is a global abbreviations crisis. I am compiling a list of abbreviations with multiple meanings. We are running out of two- and three-letter combinations. It’s even gotten all the way to fields like parapsychology, where telepathy (TP) is easily confused with toilet paper (TP).
The only answer is a global treaty to allocate these scarce linguistic resources.
We make these slips, Faustino, and they can sometimes be
paradoxically spot on. ) But it pays ter edit and edit agen. I
Just think of Relevant Uncertainty Domain – Economics versus Relevant Uncertainty Domain – Environmental Research
Take your pick RUDE or RUDER
Beth Cooper and David Springer
Winkie roadkill? Does Springer know about this?
Statistical reliability (reliability1). … Statistical uncertainty ranges based on varying real numbers associated with models constitute a dominant mode of describing uncertainty in science.
That isn’t statistical reliability, it is sensitivity analysis. Why do these “communication experts” invent terms that conflate the two?
Especially when extrapolating models into the unknown, we wish ‘both to use the most reliable model available and to have an idea of how reliable that model is’ but the reliability of a model as a forecast of the real world in extrapolation cannot be established.
The unreliability of a model as a forecast of the real world can be established. Why don’t these guys talk about that fundamental component of science while engaging in putative “science communication”?
We have a deep ignorance of what detailed weather the future will hold, even as we have a strong scientific basis for the belief that anthropogenic gases will warm the surface of the planet significantly. It seems rational to hold the probability that this is the case far in excess of the ‘1 in 200’ threshold which the financial sector is regulated to consider.
It “seems rational”? Well, why didn’t you say so in the first place?!!? Here, take all my money. And my autonomy. Certainly can’t hold on to those in the face of the overwhelming proof that is “seems rational”.
Exactly what kind of reliability (TM) is the bald assertion of probability based on “seems rational”, anyways?
Climate scientists need to spend more time learning science and less time boning up on their Public Reliability Propaganda Techniques.
While the authors are much too nice to be smug, I notice some very comfortable assumptions about the truth of warmism, nestling in amongst all that conciliatory verbiage. It’s all rather well done.
More of that “communication”?
Yes, more of that “communication”, which the Judith Goat invariably finds to be “interesting” and without fail “insightful”.
A measure of the madness, the unspoken assumptions loudly give away the game.
I’ve had special youth experience which has enabled me to interpret such “communication”. I once managed (or mismanaged) a French restaurant where, instead of changing the soup, we used to change the name of the soup.
The RDU of the IPCC science is the limited knowledge of the properties of CO2. Because theoretical evaluation is intractable (the equipartition principle does not work) and experimental information is lacking, this is the RDU. Such a fundamental RDU throws all their results into doubt.
Scott Scarborough above shows how easy it is to get a climate 3C warmer and a Melbourne resident can get 3C warmer by moving to Sydney: it happens every day.
“If climate scientists are seen as ‘hiding’ uncertainties, however, the public reliability (reliability3) of their findings may decrease, and with it the reliability3 of solid basic physical insight.” (See e.g. Aesop, 550 BC. “The Boy Who Cried Wolf,” Journal of Supergame-Theoretic Myths, 19:125-32.)
The same goes if scientists are portrayed as “hiding” uncertainties.
I’ll request my institution to subscribe to the Journal of Supergame-Theoretic Myths. It sounds cool.
The forecasts of climate models are no better than random guesses. They are totally unreliable. Models should be ignored until they can make accurate hindcasts. Read this assessment of climate models.
Is there any evidence at all that any of the climate models have any predictive skill whatsoever?
If not – and any such evidence is hidden so well as to be invisible to me –
then the whole exercise is a waste of the modellers’ time and the public’s money.
Ever increasing and convoluted discussion of the intimate details of the models and their deficiencies is completely missing the point.
They do not work. They are not fit for any policy-related purpose. If the effort is to continue – and to continue to be publicly funded – there needs to be a fundamental rethink of their basics. Trying different shades of lipstick on the same old broken-down climate model pig is irrelevant.
Oh, c’mon, Latimer, note the ‘state-of-the-art-modelling’. It’s right there in black and white. Don’t bash ‘well meaning science’, whatever the Hell that means.
These were supposed to be philosophers of science.
You beat me to it, Kim. I was going to remark that ‘state of the art’ can just mean ‘last year’s model with go-faster stripes and furry dice on the mirror’. Or new lipstick on the same old pig.
And ‘well-meaning’ can hide a multitude of sins. The road to hell is paved with good intentions and unintended consequences.
Being state of the art and/or well-meaning is of no consequence when the underlying product is not fit for purpose.
Apart from the obvious reason that their pay, rations, careers and status depend on writing complex papers without offending their colleagues too much, why do academics try to overcomplicate simple things?
The models are broken. Fix them or give up.
I think “well meaning” means their hearts are in the right place but their heads are firmly planted where the sun never shines, but there is much methane.
The models are based on the ideas and wishes of the modeller. I’m a big proponent of simulators when used in the correct way. When not used correctly, they are worse than nothing, as they provide false confidence and confirmation of the modeller’s biases.
But even with all of that, I am okay with GCMs being used to study climate; I’m not okay using GCMs for policy, as they lack fidelity. What’s worse is that climatologists either don’t understand they are lacking, or do and still advocate them for policy.
Based on the many many comments on the various blogs by warmists, most don’t understand they are a caricature of climate.
‘Statisticians and physical scientists outside climate science might become scientifically sceptical, sometimes wrongly, of the basic climate science in the face of unchecked oversell of model simulations’
Translation: ‘Real empirical scientists might start to wonder whether the Climate Model Emperor has any clothes at all. And if they do, it won’t take long before the jig is up. Better try not to attract too much attention. Keep your heads down a bit, boys and girls.’
‘Likely to generate a new generation of insightful academic skeptics’. The worst that could happen, to me.
I think it is fair to say that the IPCC doesn’t trust the models either. Why would they elect to present so many different models that disagree with each other yet collectively claim such a large slice of the available ‘prediction landscape’?
Because many think that the models diverge only due to chaotic influences, and not because the model has failed, the mantra is that they know the physics is right, so the results must be right.
It doesn’t really matter if you have one result that incorrectly describes the system, or 1000. They are still all wrong and you are no closer to the truth.
With stuff like this going on, no wonder the public don’t trust science. The tragedy is that when all the uncertainty filters through to the public consciousness, it will take years to regain trust, irrespective of the fact that science contributes so much to civilisation….
IMO, the “Relevant Dominant Uncertainty” is the probability that the advocated policies, like carbon pricing, will achieve the goals – i.e. to control the climate, the weather and sea levels. But no one seems to have done any research into the probability that the advocated policies will succeed in the real world.
I am sorry. We cannot do controlled experiments on the earth’s atmosphere, so the scientific method cannot be applied to decide whether the hypothesis of CAGW is right or wrong. No-one has any idea what happens to global temperatures as we add more CO2 to the atmosphere. Such little empirical data as we have strongly suggests any effect is negligible.
Jim, you are a man amongst Joshuas and Moshers.
I think we have lots of good data, it’s just that GMT throws almost all of it away. I think we need to look at the trend of nighttime falling temps under clear sky conditions.
But even with all of the empirical data that we do have (I have over 120 million daily samples), it still strongly suggests any effect is negligible.
Good call. I’ve often wondered about testing it that way. Do you have any data available, or point me in the direction of the best sources?
If you follow the link in my name, you’ll find some of this written up.
I’ve looked at falling temps, but have been hesitant to try to extract only clear-sky data. Mostly because no matter how I do it, it will be ignored because I’m “cherry picking”, so mostly I’ve included all the data, but it too shows no clear trend. I did get an inexpensive weather station so I can see how to pick out the clear-sky data, since the data itself doesn’t say whether it was clear all day or not.
While I mention the data source in the blogs, I use NCDC’s Global Summary of Days. Which is daily station data for about 30,000 (iirc) global stations from 1929 on.
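For what it’s worth, the day-over-day difference calculation described above can be sketched in a few lines of Python. This is a toy on synthetic data, not the actual GSOD file format, and the function names are mine:

```python
import random

def day_over_day_diffs(daily_temps):
    """Day-over-day differences of a daily temperature series."""
    return [b - a for a, b in zip(daily_temps, daily_temps[1:])]

def annual_average_diff(daily_temps):
    """Average day-over-day change; sits near zero when there is no trend."""
    diffs = day_over_day_diffs(daily_temps)
    return sum(diffs) / len(diffs)

# Synthetic station with noise but no underlying trend: the average
# day-over-day difference collapses toward zero.
random.seed(42)
min_temps = [10.0 + random.gauss(0, 2) for _ in range(365)]
print(round(annual_average_diff(min_temps), 3))
```

Applied per station and then averaged over a region, this is the kind of annual min/max day-over-day profile being described in these comments.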
Mi Cro, You have to consider the system as a whole.
“You have to consider the system as a whole”
And the only reason seasalt works is because global averaging erases the fact that regional temperature profiles (which are different from each other since the 40’s busting the “global” in global warming) are blended all together, doing what I said it does, by throwing away information.
“Annual average of daily Min and Max day over day differences for Mid Latitude Northern Hemisphere Stations (24.950-49.410Lat, 180.000- -8.000 Lon and 24.950-49.410 Lat, -67.00- -124.800 Lon) divided into Eurasia(32M samples) and the US(24M samples)”
You can see that US stations and Eurasia stations do not have the same measured temp profiles, so the only way seasalt works is average it all together, throwing away information.
“Most of the people that are doing the smoothing incorrectly are the kranks that use filtering to mislead and thus convince rubes that the accepted climate science is wrong”
Lol, Pot Kettle.
BTW, GCMs do not get regional temperature averages correct; it’s only when they average globally that they are able to match temps, and even then it’s a big fail.
A study of the Earth’s albedo (project “Earthshine”) shows that the amount of reflected sunlight does not vary with increases in greenhouse gases. The “Earthshine” data shows that the Earth’s albedo fell up to 1997 and rose after 2001.
What was learned is that climate change is related to albedo, as a result of the change in the amount of energy from the sun that is absorbed by the Earth. For example, fewer clouds means less reflectivity which results in a warmer Earth. And, this happened through about 1998. Conversely, more clouds means greater reflectivity which results in a cooler Earth. And this happened after 1998.
It is logical to presume that changes in Earth’s albedo are due to increases and decreases in low cloud cover, which in turn is related to the climate change that we have observed during the 20th Century, including the present global cooling. However, we see that climate variability over the same period is not related to changes in atmospheric greenhouse gases.
Obviously, the amount of ‘climate forcing’ that may be due to changes in atmospheric greenhouse gases is either overstated, or countervailing forces are at work that GCMs simply ignore. GCMs fail to account for changes in the Earth’s albedo. Accordingly, GCMs do not account for the effect that the Earth’s albedo has on the amount of solar energy that is absorbed by the Earth.
I find this totally unbelievable. Do you have links?
Easy to Google Earthshine –e.g., http://www.bbso.njit.edu/Research/EarthShine/
and, for example, see abstract, below…
I think I’ve never heard so loud
The quiet message in a cloud.
The Earthshine project is almost poetic–e.g.,
We have been to the Big Bear who looks to the sky and have seen the Earth reflected in the eye of the Moon through sunlit clouds.
“GCMs fail to account for changes in the Earth’s albedo. Accordingly, GCMs do not account for the effect that the Earth’s albedo has on the amount of solar energy that is absorbed by the Earth”
Wrong. In terms of surface albedo, the albedo of models is a fairly good match, within 5-10%.
In terms of cloud albedo, I’ll suggest you go back and re-examine earthshine and the criticism of that product.
Reflection from clouds is modelled. There are two issues:
1. matching the cloud observations
2. matching the response
So, the issue is NOT that the models fail to account for the phenomena. They do. It is part and parcel of the radiation models. The issue is how well they match.
E. Pallé, P. R. Goode, P. Montañés-Rodríguez.
Interannual variations in Earth’s reflectance 1999–2007
 The overall reflectance of sunlight from Earth is a fundamental parameter for climate studies. Recently, measurements of earthshine were used to find large decadal variability in Earth’s reflectance of sunlight. However, the results did not seem consistent with contemporaneous independent albedo measurements from the low Earth orbit satellite, Clouds and the Earth’s Radiant Energy System (CERES), which showed a weak, opposing trend. Now more data for both are available, all sets have been either reanalyzed (earthshine) or recalibrated (CERES), and they present consistent results. Albedo data are also available from the recently released International Satellite Cloud Climatology Project flux data (FD) product. Earthshine and FD analyses show contemporaneous and climatologically significant increases in the Earth’s reflectance from the outset of our earthshine measurements beginning in late 1998 roughly until mid-2000. After that and to date, all three show a roughly constant terrestrial albedo, except for the FD data in the most recent years. Using satellite cloud data and Earth reflectance models, we also show that the decadal-scale changes in Earth’s reflectance measured by earthshine are reliable and are caused by changes in the properties of clouds rather than any spurious signal, such as changes in the Sun-Earth-Moon geometry.
Let me blockquote from the paper referenced:
This demonstrates conclusively, from your own reference, that at least one model, the ” ISCCP project “ does indeed account for albedo. Presumably a parametrized “average” at the cell level, which is summed by “the FD product”. GCM’s very definitely don’t “ignore albedo”. They do quite a bit of calculation around it.
The problem is, they don’t “account for changes in the Earth’s albedo” very well, or with acceptable accuracy. This is probably (IMO) because they use an extremely simplistic parametrization scheme to create a number (or low-dimensioned vector) at the cell level, while the real albedo of the planet is determined by aggregation from the droplet level.
But that’s not the same thing, and any “skeptic” claiming that GCM’s “don’t account for changes in the Earth’s albedo” or that “GCMs ignore albedo” just succeeds in making all skeptics look foolish.
Don’t be silly–we know that climate over period was not related to changes in atmospheric greenhouse gases so, irrespective of what the data may have shown–when looked at with a skeptical eye–the amount of ‘climate forcing’ supposedly due to changes in atmospheric greenhouse gases was obviously overstated or countervailing forces such as clouds were simply ignored by GCMs. And, we know that of the two the latter is what is happening: GCMs fail to account for changes in the Earth’s albedo due to clouds and do not account for the effect that clouds have on the amount of solar energy that is absorbed by the Earth.
We don’t know anything of the sort: the most probable explanation is that one or more “countervailing force” was calculated wrong.
No, we don’t know anything of the sort.
You should listen to the warning of Dr. Hans von Storch and don’t step in the Quatsch! Freeman Dyson just does not believe climatologists “understand the climate,” and says, “their computer models are full of fudge factors.” Dyson also says, “I think any good scientist ought to be a skeptic.” Going to the matter of competence, Dyson was no less sparing of brainwashed climate scientists. “The models are extremely oversimplified,” says Dyson. “They don’t represent the clouds in detail at all. They simply use a fudge factor to represent the clouds.” (See, e.g., Climatologists are no Einsteins, says his successor)
How is this different from what I said above?:
While I have great respect for the two scientists you mentioned, I didn’t need them to tell me climate models were simplistic to the point of ridiculousness. My point is that, simplistic or not, the models do take account of albedo, and saying that they don’t is as simplistic, and wrong, as the models themselves.
It is simpleminded to believe you can swagger around with dabs of acrylic on your sleeves and call yourself an artist because your easel is pointed at the sunset.
“In terms of surface albedo the albedo of models is a fairly good match. within 5-10%.”
5-10% probably accounts for trends far larger than what’s actually measured, and since we don’t have measurements anyway, for all we know this alone accounts for all the changes in climate we’ve seen.
“So, the issue is NOT that the models fail to account for the phenomena. They do. It is part and parcel of the radiation models. The issue is how well they match”
If they don’t match, is that not a failure?
“A study of the Earth’s albedo (project “Earthshine”) shows that the amount of reflected sunlight does not vary with increases in greenhouse gases. The “Earthshine” data shows that the Earth’s albedo fell up to 1997 and rose after 2001.”
And (assuming this is true), it is important why? Given that SW solar for the most part passes right through CO2, CH4, and N2O…why do we care about their relationship to the Earthshine data?
Because, GCMs ignore albedo and yet, according to E. Pallé, the effect of albedo over a 20 year period on the reflectance of solar energy, in terms of solar irradiance for comparative purposes, is 20 times greater than the variation in solar irradiance, “from maxima to minima.”
GCMs do take changing albedo into account. For instance:
“General circulation model experiments with surface albedo changes”
I doubt they do so in an accurate manner though.
Steven Mosher | November 26, 2013 at 12:24 pm |
“In terms of surface albedo the albedo of models is a fairly good match. within 5-10%.”
With incoming shortwave about 1360 W/m2 that means “the fairly good match” you mention is within 68-136 W/m2. With the ostensible anthropogenic forcing at around 4 W/m2 it seems that’s an error range an order of magnitude larger than the signal.
GCMs fail to account for changes in the Earth’s albedo.
It is the absence of theory (by the use of a constant) that is at question, e.g. Ramanathan:
“It is remarkable that general circulation climate models (GCMs) are able to explain the observed temperature variations during the last century solely through variations in greenhouse gases, volcanoes and solar constant. This implies that the cloud contribution to the planetary albedo due to feedbacks with natural and forced climate changes has not changed during the last 100 years by more than ±0.3%; i.e., the cloud forcing has remained constant within ±1 Wm–2. If indeed the global cloud properties and their influence on the albedo are this stable (as asserted by GCMs), scientists need to validate this prediction and develop a theory to account for the stability.”
Dyson has street cred among those gathered under the CO2 lamp post looking for the key to climate. He also seems to have a knack for exposition simple enough for hoi polloi to comprehend.
About the albedo, Yu Kosaka and Shang-Ping Xie apparently fed in the SST numbers around the ENSO region. It worked. How about forcing or if the data isn’t good enough, solving for albedo using a GCM?
If this is a good idea, I suppose it’s already been tried.
GCMs have the variable albedo due to cloud, aerosol and ice-cover changes, so I am not sure what else they would need to add here.
Maybe tune the albedo until the output matches the observed temperature. Then compare the tuned albedo to what we know about the observed albedo.
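The “tune the albedo” suggestion can be illustrated with a zero-dimensional energy balance, a toy sketch rather than anything a GCM actually does: invert the balance for the albedo that reproduces a target effective temperature, then compare it with the observed value of roughly 0.30.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def effective_temp(albedo):
    """Effective radiating temperature (K) for a given planetary albedo."""
    return (S0 * (1 - albedo) / (4 * SIGMA)) ** 0.25

def tuned_albedo(target_temp):
    """Invert the energy balance: the albedo that reproduces target_temp."""
    return 1 - 4 * SIGMA * target_temp ** 4 / S0

# "Tune" to the observed effective temperature (~255 K), then compare the
# result with the independently measured planetary albedo (~0.30).
print(round(tuned_albedo(255.0), 3))
```

In this toy the tuned value lands close to the observed one; the interesting question in the thread is whether a full GCM’s tuned albedo would.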
maksimovich | November 26, 2013 at 7:15 pm |
The constant (discounting small dynamic compensation for interannual change in snow cover) differs between models by the 5% – 10% that Mosher mentioned. In absolute terms that means the models differ between 68 W/m2 and 136 W/m2 in how much solar energy they assume is absorbed and thermalized by the surface & atmosphere versus reflected straight back out into space. Ostensible forcing by all anthropogenic sources is a positive ~4 W/m2 and thus the constant albedo chosen by the model builder/user is a fudge factor with a range an order of magnitude greater than the anthropogenic signal they seek to discover.
And they say models aren’t tuned. BS. This is a prime and most egregious example of how models are tuned. In the engineering community we’ve always called crap like this “fudge factors” and the application of same without understanding why because it empirically fixes something “a kludge”. The CMIP5 ensemble is rife with fudge factors and kludges. The proof is in their failure to predict no rise in temperature for the past 17 years despite accelerating anthropogenic CO2 emission the whole time.
You forgot to divide by four (for the ratio between the area of a sphere and the area of its circular cross-section intercepting solar radiation). S/B between 17 W/m2 and 34 W/m2. Still pretty large compared to 4-8 W/m2 for 1-2 doublings of pCO2.
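The divide-by-four correction is just geometry (a sphere’s surface is four times its sunlit cross-section). A quick check of both versions of the arithmetic, using only numbers already quoted in this thread:

```python
S0 = 1360.0               # incoming shortwave at top of atmosphere, W/m^2
mean_insolation = S0 / 4  # averaged over the sphere: 340 W/m^2

# The 5-10% albedo spread between models, expressed in W/m^2
uncorrected = (0.05 * S0, 0.10 * S0)            # (68, 136) as first stated
corrected = (0.05 * mean_insolation,
             0.10 * mean_insolation)            # (17, 34) after dividing by 4

anthropogenic_forcing = 4.0                     # W/m^2, the ostensible signal
print(corrected, corrected[0] / anthropogenic_forcing)
```

Even corrected, the spread is several times the claimed forcing, which is the point being argued either way.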
Good point but I’m not sure how good. Albedo doesn’t matter at night and it’s higher than 30% in the tropics where most of the solar energy enters the system. Rule of thumb is 1000 W/m2 on the surface at noon on the equator. A cloud reflects about 90% of that and there’s a lot of clouds between 30N and 30S latitude. In the Intertropical Convergence Zone it’s almost all cloud all the time, and elsewhere clouds tend to form in the afternoon when insolation is peaking.
“No-one has any idea what happens to global temperatures as we add more CO2 to the atmosphere.”
Sure we have ideas.
The effect was described in 1850 and in the 1890s it was predicted that adding C02 will warm the planet. Not cool the planet, not have zero effect.
Since 1896 C02 has increased and the temperature has increased.
There are 3 simple choices
1. C02 will warm the planet
2. C02 is inert and doesnt interact with radiation
3. C02 will cool the planet
We know that 2 isnt true from the lab.
There is no physical theory which supports #3.
The physical theory which supports #1 was completed by operations research specialists in the 1950s. Yes Jim, guys who worked in OR had to figure out what C02 did in the stratosphere in order to protect American bombers. Their physics says C02 will warm the planet, not cool it.
Later, as we tried to figure out how to image the earth from space, we also had to use this physics. The physics behind that imaging is the exact same physics that says C02 will warm the planet.
Later, Ronald Reagan decided we needed a missile shield. Once again we needed physics that was trustworthy. So we used the physics that says C02 will warm the planet, not cool it. Today, when somebody wants to build a sensor that will detect a rise in C02 in buildings, they rely on the physics that says that C02 will warm the planet, not cool it.
Any method that ignores this experience and argues that 1, 2 and 3 cannot be ordered in terms of probability is non-scientific.
Your problem is mistaking lab science for observation science.
We can never do a controlled test of what happens if an asteroid hits NYC.
But we can apply known working physics, which suggests the effect will not be pleasant. The notion that one must do controlled experiments in order to understand is irrational. We derive understanding and knowledge in many ways other than controlled experiments. We don’t give your children heroin to test whether they will become addicted.
So, it would be cold and getting colder without the AnthroGHG effect, eh?
Steven, you write “Your problem is mistaking lab science for observation science.”
Nonsense. I understand perfectly well what observational science is. It means taking all the observations that we have and then trying to interpret what they mean. The trouble with observational science is that you cannot prove anything using this approach. All you can do is say what is likely to be true.
CAGW is a viable hypothesis. Since we cannot do controlled experiments on the earth’s atmosphere, we cannot prove whether CAGW is true or false. What I have stated over and over again is that, using what you term observational science, the data give a strong indication that the effect on global temperatures of adding CO2 to the atmosphere from current levels is negligible.
I agree that when you add CO2 to the atmosphere from current levels, the most likely thing is that it warms the earth. There is no proof of this, but it is the most likely outcome. What nobody knows is how much the additional CO2 warms the earth. Observational science gives a strong indication that any effect on global temperatures is negligible.
“Your problem is mistaking lab science for observation science.”
Actually I think the problem with climatologists is that they mistake lab science for climate science.
There are lots of lab experiments that show effects that are not normally experienced in nature except in very specific cases. Quantum Mechanics, the Weak force, and Quantum tunneling for example.
I think most skeptics agree with your #1 – that adding CO2 will warm the planet. The question is by how much?
The theory in 1896 just looked at direct effects of CO2 and I believe computed a number really pretty close to the current thinking of direct radiative effects, i.e. about 1.2 C for a doubling of CO2 from 280 to 560 ppm.
What I am most skeptical about is the indirect effect calculation – which pushes the total warming from 1.2 C to 3 or so C.
The more data we get, the smaller the indirect effect seems to become. I have seen it drop from over 2 C to about 0.5 C just in the last year or so.
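The direct (no-feedback) number quoted here can be reproduced from the standard simplified CO2 forcing expression, ΔF = 5.35 ln(C/C0) W/m2, divided by a Planck response of roughly 3.2 W/m2 per degree; everything beyond that is the contested feedback part. A quick sketch:

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified radiative forcing (W/m^2) from a CO2 change, Myhre et al. form."""
    return 5.35 * math.log(c / c0)

PLANCK_RESPONSE = 3.2  # W/m^2 per K, approximate no-feedback restoring rate

delta_f = co2_forcing(560.0)                # one doubling: ~3.7 W/m^2
direct_warming = delta_f / PLANCK_RESPONSE  # ~1.2 C, the "1896-style" number
print(round(delta_f, 2), round(direct_warming, 2))
```

Multiplying that 1.2 C by a feedback factor is what pushes estimates toward 3 C, and it is exactly that multiplier the comment is skeptical of.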
The surface is cooled mostly by evaporation and convection. Furthermore, the so-called ghgs and clouds radiate more than 90% of the solar energy absorbed by Earth to space.
The problem needs to be solved properly.
And even if the net effect is surface warming, it’s likely small and insignificant.
I hope that participants in these discussions do acknowledge the basic absorption effect of CO2. When I first became interested I built myself a simple radiative model of the atmosphere in a spreadsheet. What I saw in my model is the greenhouse surface temperature increase along with an increased radiative lapse rate. So if we limit ourselves to first-order effects, we should evaluate the effect of lapse rate change in parallel with the greenhouse effect. Qualitatively, the lapse rate effect would tend to promote convection and cloud formation. Since these effects enhance surface cooling and solar reflection, respectively, it is in theory possible that the effect of CO2 is neutral or even negative on temperature.
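A spreadsheet model of the sort described reduces, in its simplest form, to the textbook single-layer greenhouse: an atmosphere with longwave emissivity ε lifts the surface above the effective temperature. This is a sketch of that standard idealized model, not the commenter’s actual spreadsheet:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
ALBEDO = 0.30     # assumed planetary albedo

def surface_temp(emissivity):
    """Surface temperature (K) of the single-layer greenhouse model."""
    te4 = S0 * (1 - ALBEDO) / (4 * SIGMA)  # effective temperature ^ 4
    return (te4 * 2 / (2 - emissivity)) ** 0.25

# With no absorbing layer the surface sits near ~255 K; a partially
# absorbing layer (epsilon ~0.78) lifts it toward the observed ~288 K.
print(round(surface_temp(0.0), 1), round(surface_temp(0.78), 1))
```

Note the model fixes the lapse rate, convection and clouds implicitly, which is exactly the first-order limitation the comment is pointing at.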
One of the great unknowns is the relative impact of additional CO2 to the climate system at any given point in time. The specific impact on temperature (as well as other conditions) seems to largely depend upon what else is happening in the system at that time.
It seems to be a pretty large complex system. It seems reasonable to be skeptical of anyone’s conclusions that they know how the system operates until they demonstrate that they have a model that matches observed conditions fairly closely (for a reasonable period of time) for the characteristics their model is designed to simulate.
IMO, many are in far too great a rush to claim that they understand the system and that people should accept the outputs of their simulations. Be skeptical until a model shows that it is reliable, and then continue to be skeptical, because the system will change and a model that works reasonably well for 10 years may not work for the next 100.
I’m a potential-ist; the interaction of radiative energy transport with CO2 in the atmosphere has the potential to ‘warm the planet’, all other things remaining constant.
To unconditionally state that increased atmospheric CO2 content will without question warm the planet implies that it is never possible for the Earth’s climate systems to attain a state of increased atmospheric CO2 content along with a condition of decreased energy content.
What aspect of the Earth’s climate systems ensures that increased atmospheric CO2 content will always, without exception, correspond to increased energy content, and never to decreased energy content?
As to the 5-10% range in the value of the albedo, that alone represents an uncertainty that is greater than the few Watts that the radiative-only approach hypothesizes to be a critically important problem.
No Steven. We don’t know CO2 will warm the planet. There are too many confounding factors both known and unknown to categorically state that.
Even if CO2 can warm the planet, we have no idea whether that effect overwhelms the centennial and millennial scale natural changes, or whether that effect is overwhelmed by natural changes.
PS, it is the second, as we are gradually becoming more and more aware. See, ‘we’ may have no idea, but ‘I’ do.
kim | November 26, 2013 at 11:17 am |
“So, it would be cold and getting colder without the AnthroGHG effect, eh?”
Classic pointed question. Don’t expect an answer from the bandwagon climate boffins or their sycophants. They hate admitting stuff like that.
It’s a monster question, worth asking every alarmist over and over again. The higher the climate sensitivity, the colder it would now be without man’s effect.
We would be far better off if climate sensitivity is low, and most of our temperature record is from natural processes.
kim | November 26, 2013 at 4:03 pm |
“Even if CO2 can warm the planet, we have no idea whether that effect overwhelms the centennial and millennial scale natural changes, or whether that effect is overwhelmed by natural changes.”
And even if it does there is no credible assessment of whether said warming is a net benefit or detriment. All we know for sure is that the process releasing the CO2 (burning fossil fuels) is so hugely beneficial it’s not unwarranted to say modern civilization would not be here without it nor can it continue without it. Only a moron kills the goose that lays the golden eggs without first securing another source of eggs. Warmists are morons by and large. Not playing with a full deck. A few screws loose. Moonbats in the belfry. A couple cans short of a six pack. A few sandwiches shy of a picnic…
“Even if CO2 can warm the planet, we have no idea whether that effect overwhelms the centennial and millennial scale natural changes, or whether that effect is overwhelmed by natural changes.”
David, we do know that effect is overwhelmed by natural changes. We are much like a flea on an elephant.
All we know for sure is that the process releasing the CO2 (burning fossil fuels) is so hugely beneficial it’s not unwarranted to say modern civilization would not be here without it nor can it continue without it.
“1. C02 will warm the planet
2. C02 is inert and doesnt interact with radiation
3. C02 will cool the planet”
Bonk bonk bonk.
Error Error. False trichotomy.
4. CO2 (btw, that’s “O” as in oxygen, not “0” as in zip) has a finite warming effect that’s low enough and overall beneficial enough to
a) not be an immediate problem, and
b) possibly not be a problem in the future, and
c) we have enough time to evaluate adaptations without making panicked and stupid policy decisions now.
Not quite slick enough, Slick.
Jim, while it is true that we cannot do controlled experiments on the Earth’s atmosphere, we can do carefully controlled experiments on key facets of the underpinning assumptions and guesstimates that many models use.
I would take a pipe 100 m long and 1 m in diameter and place it in the ocean, with the lip at the surface, then seal the top and bottom. Leave it for 2 weeks floating in the ocean, then take measurements of the various ions and gases every 5 m down the length and compare it to the fresh sea water at the same depths. I for one would expect that the sealed pipe would have a hugely different chemical composition, w.r.t. depth, as the pipe is at equilibrium and the ocean a biotic steady state.
Doc, you write “I would take “. Fair enough. But you have not actually done this. All I can do is take what has actually been done. I, and many others, have suggested that more effort be put into trying to measure what is important, rather than into trying to improve the climate models (which have not been validated, and probably cannot be validated in any reasonable time period) or into the other hypothetical approaches taken by the warmists.
Ah Jim, therein lies the rub, if you have no background in experimental design, then you are unlikely to design experiments as you do not formulate your thinking in terms of testability of postulates. Identifying your postulates and testing them is the basis of the scientific method. Often, fundamental postulates have been shown to be quite wrong, when tested.
Doc, you write “Ah Jim, therein lies the rub,” On the contrary, I have designed experiments, carried them out, and analysed the results; many times. I know all about how to do that. It is simply that no-one knows how to do controlled experiments on the earth’s atmosphere.
“It is simply that no-one knows how to do controlled experiments on the earth’s atmosphere.”
One would think that a good place to start would be to apply an input, and look at the response.
And guess what, we get an experiment like this everyday.
MiCro, you write “One would think that a good place to start would be to apply an input, and look at the response.”
You have omitted one vital word that I used, namely “controlled”. Sure it is easy to do what you suggest, but it will never be a controlled experiment.
“Sure it is easy to do what you suggest, but it will never be a controlled experiment.”
Well, the conditions can be controlled for: ratio of day to night, temperature, humidity, cloudiness, surface albedo. I would think that would be a good place to start, and we have at least 120 million samples to cull from.
Mi Cro, you write “Well the conditions can be controlled for, ratio of day to night, temperature, humidity, cloudiness, surface albedo,”
I am sorry, but we must understand completely different definitions in the English language. We cannot control ANY of these factors. We may be able to measure them, but we cannot decide what values we want, and then go out and make sure that these values are achieved and maintained throughout any experiment we might conduct.
You left off the end of the statement:
“we have at least 120 million samples to cull from.”
Nature has set up a continuously running experiment; it’s up to us to select the conditions we want for our experiment out of the experimental setup we have. If we didn’t have whole days with the conditions we can sample from, it’d be one thing, but we do.
Now I expect you’ll protest some more, so let me note that this is almost exactly the same as what’s done at the LHC. Sure they accelerate the protons and then collide them, but everything else after that is exactly what I’m suggesting, they collect the results of each group of collisions, and then select for the types of products produced and analyze those specific events, just as I suggest can be done with station data.
But instead, we run all of the station data into a blender, IMO throwing away all of the real evidence of any effect of CO2 in the process.
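A minimal sketch of the condition-matched selection idea described in this exchange (the field names, bin widths, and data below are entirely hypothetical illustrations, not the commenter’s actual method or data): bin station-day records by the listed conditions, then compare temperatures only within bins that occur in both periods.

```python
# Hypothetical sketch: compare two eras of station-day records only within
# matching "condition bins" (cloud fraction, relative humidity), so that
# like is compared with like. All data here is synthetic.
from collections import defaultdict

def condition_key(rec, cloud_step=0.2, rh_step=10):
    """Coarse bin so only days with similar conditions are compared."""
    return (round(rec["cloud"] / cloud_step),   # cloud-fraction bin
            round(rec["rh"] / rh_step))         # relative-humidity bin

def matched_delta(era_a, era_b):
    """Mean temperature difference (era_b - era_a) over shared condition bins."""
    bins_a, bins_b = defaultdict(list), defaultdict(list)
    for r in era_a:
        bins_a[condition_key(r)].append(r["temp"])
    for r in era_b:
        bins_b[condition_key(r)].append(r["temp"])
    common = set(bins_a) & set(bins_b)
    if not common:
        return None  # no comparable conditions in both eras
    deltas = [sum(bins_b[k]) / len(bins_b[k]) - sum(bins_a[k]) / len(bins_a[k])
              for k in common]
    return sum(deltas) / len(deltas)

# Synthetic example: identical conditions, era_b uniformly 0.5 C warmer.
era_a = [{"cloud": 0.1, "rh": 40, "temp": 15.0},
         {"cloud": 0.5, "rh": 70, "temp": 12.0}]
era_b = [{"cloud": 0.1, "rh": 40, "temp": 15.5},
         {"cloud": 0.5, "rh": 70, "temp": 12.5}]
print(matched_delta(era_a, era_b))  # 0.5
```

This is only the selection step; whether such matching isolates a CO2 signal from the real station record is exactly the point under dispute in the thread.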
Mi Cro you write “Now I expect you’ll protest some more,”
It is not that I am protesting. Everything you write is correct. There is empirical data we can use. The issue is that this data cannot prove whether or not the hypothesis of CAGW is correct.
We could only prove whether CAGW is true or not, if we could keep all other conditions the same, increase how much CO2 there is in the atmosphere, and measure how much temperatures rise.
That controlled experiment we cannot do.
@ Jim C,
But we’ve already done that part of the experiment, though our samples have to be taken over 50-60 years. And we have the data to do this. I have the data.
I’ve partially done this already, but unless I put the time in to publish this, which I don’t really have, it’s dismissed by Mosh as wrong :)
Actually I think he’d say it was wrong even if I did publish it, but that’s another topic.
Mi Cro, you write ” I have the data.”
Wonderful!! When it has been published in some form, and is available for all of us to look at, be assured that everyone who matters will give it the attention it will undoubtedly deserve. But until we have all seen the data, CAGW remains a hypothesis.
I haven’t done much around selecting a subset of measurements, but follow the link in my name and you can see various analyses of the GSOD data.
Just note it does not get a Mosh seal of approval.
Jim Cripwell: We cannot do controlled experiments on the earth’s atmosphere, so the scientific method cannot be applied to decide whether the hypothesis of CAGW is right or wrong. No-one has any idea what happens to global temperatures as we add more CO2 to the atmosphere.
I disagree. Laboratory investigations give us a very sound idea that, other things being equal, increased CO2 will produce increased heat accumulation somewhere.
So some obvious questions follow, and they may be amenable to study, including via experimental interventions.
1. What are the other things? e.g., will an increase in atmospheric CO2 produce an increase in natural and agricultural plant growth?
2. Will the other things remain equal? e.g. cloud cover?
3. Where will the heat accumulate? Land vs water? Surface, middle troposphere, upper troposphere?
4. What kind of heat? Sensible heat (increased temperature), latent heat (increased water vapor), or a combination?
5. How much extra heat will accumulate? Is the equilibrium approximation relevant to this calculation?
6. How fast will the heat accumulate? Will this be the same near the Earth surface as it is in other places, such as upper troposphere and deep oceans?
Matthew, you write “I disagree. Laboratory investigations give us a very sound idea that, other things being equal, increased CO2 will produce increased heat accumulation somewhere.”
Fair enough, you disagree. Just answer me a question. If we cannot do controlled experiments, how can you ever prove that an observed rise in temperature was actually caused by an observed rise in CO2 concentration?
And I agree with your observation. I believe increased CO2 will cause an increase of heat somewhere. The question is, how much heat? As I have said over and over again, such little experimental data as we have gives a strong indication that this accumulation of heat produces a negligible rise in global temperatures.
Jim Cripwell: first you said: No-one has any idea what happens to global temperatures as we add more CO2 to the atmosphere.
now you say: I agree that when you add CO2 to the atmosphere for current levels, the most likely thing is that it warms the earth.
Matthew, This is a blog. I don’t vet what I have written to make sure that it is pedantically correct. Sure the first sentence is not strictly accurate. I hope the second one makes my meaning clear.
The first sentence is more than just not “strictly” correct; it’s completely false. Huge difference. A pig is not “strictly” a goat…in fact, not a goat at all. Thousands of people have very concrete and scientifically based ideas about what happens when we add more CO2 to the atmosphere.
Referring back to our volcano conversations.
How long would you reckon that emissions remained in the atmosphere for and what levels would be enough in any one year to impact on temperatures?
It is evident that there were short bursts of high emissions, such as 1258, but there are many other times when there was very little volcanic activity; unless the emissions remained in the atmosphere for decades, it is difficult to see how they could impact on temperatures other than sporadically.
You counter Jim Cripwell’s:
No-one has any idea what happens to global temperatures as we add more CO2 to the atmosphere.
with the comment:
Thousands of people have very concrete and scientifically based ideas about what happens when we add more CO2 to the atmosphere.
Let’s analyze this exchange.
Cripwell has conceded that added CO2 may well result in some increase in global temperature, but this increase could be so small as to be “indistinguishable from zero”. IOW no one has any real idea how much warming could possibly result from this added CO2.
You counter that “thousands of people have very concrete and scientifically based ideas about what happens when we add more CO2 to the atmosphere”.
But this does not mean in any way that these “thousands” (assuming your estimate is correct) have any hard information (based on empirical scientific evidence) to support their ideas that increased CO2 concentrations will lead to global temperature increases that are distinguishable from zero.
And that is the point of discussion here – not whether or not “thousands of people” have “ideas”.
It’s not about “ideas” (i.e. hypotheses); it’s about “empirical scientific evidence” to corroborate these hypotheses.
Do you see the difference, Gates?
R. Gates, you write ” Thousands of people have very concrete and scientifically based ideas about what happens when we add more CO2 to the atmosphere.”
As Max has already pointed out, this is the key issue, and where you are simply not addressing the problems we skeptics are trying to discuss. I am not in the slightest bit interested in anyone’s IDEAS on their own. Ideas are not physics unless and until they are supported by empirical data. Show me the empirical data which supports these ideas you are talking about, and then let us talk.
But please don’t confuse ideas with physics. There are two sides of the same coin in physics. There is the ideas side, where hypotheses are presented. And there is the data side, where these ideas are tested, and proven to be correct or not, as the case might be. There is only one coin. It just happens to have two sides, which are each useless without the other. Nullius in verba.
“It’s not about “ideas” (i.e. hypotheses); it’s about “empirical scientific evidence” to corroborate these hypotheses.
Do you see the difference, Gates?”
Well, I think I do know something about the difference between ideas, conjectures, hypotheses, theories, models, etc. versus hard empirical evidence. Hard empirical evidence of anthropogenic climate change is plentiful and has been discussed at length here on CE, and in hundreds of highly researched papers. But all empirical evidence comes down to interpretation and probability, never certainty. One interpretation of the hard empirical evidence points to a very high probability that humans are altering the climate primarily (but not completely) through the enormous alterations we are making in atmospheric chemistry. There might be other interpretations of the empirical evidence, but none that I know of that would explain the totality of all the changes to Earth systems we are seeing that is as comprehensive and solidly based on known laws of physics as anthropogenic climate change.
Referring back to our volcano conversations.
How long would you reckon that emissions remained in the atmosphere for and what levels would be enough in any one year to impact on temperatures?”
As you know from actually studying the effects of volcanoes, the impact varies with the condition of the atmosphere, the location and size of the volcano, etc. If we take a good-sized volcano (not a mega-volcano like 1257), say something like Pinatubo, we see effects last a few years until the temperatures rebound. But there are dozens of smaller volcanoes that are active all the time around the planet, and we know (from ice core and sediment samples) that the planet goes through periods in which there is a general increase in stratospheric optical depth because of a higher general background level of volcanic aerosols.

These periods of generally increased volcanic activity cannot really be measured simply by going around and directly measuring all the volcanoes going off. Rather, they need to be measured indirectly by the overall increase in optical depth (such as we have directly measured and seen happen during the past 10 to 15 years). Going back in time and looking at the paleoclimate record, the ice core and sediment data can be a good proxy for what the optical depth was during long periods in Earth’s past. Thus, while a few large, and some (like 1257) very large, volcanoes might go off during a 500-year period of generally increased volcanic activity, these only represent spikes of aerosols against a background of generally higher aerosols from numerous smaller volcanoes that are continually more active during long periods of time (i.e. the optical depth remains high for extended periods).

In this way, we know from ice cores, for example, that the MWP was generally a period of lower background volcanic activity (even if there were a few spikes from major volcanoes), and the LIA (from about 1200 to 1900 or so) was a period of higher background volcanic activity, again punctuated by a few large volcanoes.
As we’ve discussed, this forcing from a generally higher or lower period of volcanic activity does not preclude other forcings as well such as from solar variability or internal natural variability.
Thanks for response.
Please refer me to this plentiful ”hard empirical evidence of anthropogenic climate change” specifically as it relates to the premise that increased CO2 concentrations result in significant global warming.
Until I see it, I am highly skeptical that such evidence exists.
As you write, ”all empirical evidence comes down to interpretation and probability, never certainty”
I’d agree that “certainty” is a BIG word, even for those hypotheses that have been corroborated by empirical evidence and have successfully withstood repeated attempts at falsification.
However, the premise that added CO2 has been the direct cause of significant past global warming (let’s say “most of the warming since 1950” as IPCC claims, for example) is NOT corroborated by “hard empirical evidence”.
The RDU, which limits ”our ability to make a more informative scientific probability distribution on this premise” (as the lead post authors point out) is just too great in the case of the premise of CO2 attribution of “most of the global warming since 1950”.
This is so because of not only the “known unknowns” (as stated by the authors), but also because of the “unknown unknowns”. (Our hostess has commented on this repeatedly.)
Inasmuch as we do not have empirical data from real-time physical observations or reproducible experimentation, which specifically provide evidence in support of the idea that significant past warming was caused by increased CO2 concentrations, this remains an uncorroborated hypothesis.
As Jim Cripwell has written:
I’d say that this is the position of many rational skeptics of the IPCC CAGW premise (including myself).
This is your opinion (and also that of IPCC). But it is not supported by direct empirical evidence, as pointed out above.
It is based on the argument: “we can only explain the observed changes if we assume it was caused by AGW” or “no other interpretation explains the observed changes as well as AGW”.
This is a rationalization based on the logical fallacy of “argument from ignorance”, and certainly not empirical evidence.
As Janice Joplin sang: “ya gotta try a little bit harder”, Gates.
The bizarre certainty is reminiscent of the worst kinds of faith, and is encapsulated in Trenberth’s corruption of the Null.
Thanks for your detailed reply.
So, warm MWP, very few volcanoes. Cold LIA, lots of volcanoes. Warm(ish) modern period, very few volcanoes.
You don’t think there appears to be a direct cause and effect here in which the words man made co2 don’t feature?
Jim Cripwell Sure the first sentence is not strictly accurate.
fair enough. everybody goofs.
And now Max is misquoting Janis Joplin,
just goes to show, he can’t ever get anything right.
Pants of many paints.
Although GCMs neglect albedo altogether, according to E. Pallé, the effect of albedo over a 20 year period on the reflectance of solar energy, in terms of solar irradiance for comparative purposes, is greater than the variation in solar irradiance “from maxima to minima” by a FACTOR of 20!!
Now we just have to figure out how the influence of the sun regulates albedo. Is it UV? Is it Cosmic Rays? Is it the response of the biome? Is it all of the above, and yet more?
There is more variance in the Earth’s albedo than climate alarmists realized. What else don’t they know about?
About clouds maybe? Do we understand the relationship between cloud cover and global warming? And, can we explain why clouds change?
• Are these changes due to GHGs? …. No
• Are these changes solar? …. humm!!
• Are they natural variability? ….. probably
Are they natural variability? ….. probably
REALLY!!!!!!!!! PROBABLY NOT!!!!!!!!!
We are well inside the bounds of the high and low temperatures of the past ten thousand years.
ARE THEY NATURAL VARIABILITY?
Temperatures are well inside the range of natural variability.
WHY WOULD THEY NOT BE NATURAL VARIABILITY?????
There is NO reason!
Queries by Herman Alexander Pope, history-and-science links by FOMD.
Conclusions: As established by the first link (the Einstein/Szilard/Fermi letter to Roosevelt), scientists have a duty to recommend even in the presence of substantial uncertainty. As established by the second link (NASA’s biography of Milutin Milankovitch), strong thermodynamical arguments predict, and global-scale observations affirm, that anthropogenic CO2-forcing is almost certainly larger than the Milankovitch forcing that has dominated Earth’s climate for the past several million years.
It is a pleasure to answer your historical and scientific climate-change questions, Herman Alexander Pope!
Polar Sea Ice regulates Albedo.
When Polar Oceans are warm and wet there are more Clouds and there is more Snowfall and both increases Albedo.
When Polar Oceans are cold and frozen there are less Clouds and there is less Snowfall and both decrease Albedo.
The Temperature that Polar Sea Ice Melts and Freezes provides the Set Point for Earth temperature and the Albedo Provides the Forcing to keep temperatures well bounded close to the Set Point.
Some say this is too close to the Poles for Albedo to influence Temperature much.
We are always near or in Equilibrium and a small influence is all that is necessary. Also, this snow is triggered by early Snowfall around the Arctic, but much of the snow that makes a difference falls well outside the Arctic.
Your assumption that it is all the sun seems unwarranted.
I vote for “and yet more?”. My instincts tell me scientific knowledge is only beginning to scrape the epidermis of an elephant.
And what does that give in Hiroshima bomb units?
The paper is worth careful reading. Already I like footnote 1:
The distinction between ‘prediction’ and ‘projection’ is arguably artificial if not simply false. As discussed below all probability forecasts, indeed all probability statements, are conditional on some information.
Thank you again Prof Curry.
I don’t see how that probability statements are conditional on some information renders the distinction between prediction and projection artificial.
willard(@nevaudit): We cannot do controlled experiments on the earth’s atmosphere, so the scientific method cannot be applied to decide whether the hypothesis of CAGW is right or wrong. No-one has any idea what happens to global temperatures as we add more CO2 to the atmosphere.
So you don’t. Have you criticisms of their discussion “below”?
Ah, nuts! I meant to quote this from willard(@nevaudit): I don’t see how that probability statements are conditional on some information renders the distinction between prediction and projection artificial.
Bother. I forgot to reread.
> Have you criticisms of their discussion “below”?
Too bad you can’t see it, Willard.
Maybe the light will come on.
> Keep trying.
The authors mentioned that as an argument.
Let’s see how the argument works.
Due diligence and all.
Willard(@nevaudit): Let’s see how the argument works.
If events turn out different from what was predicted/forecast/extrapolated/projected/etc, then at least some of the basic postulates of the theory are discredited. Thus they are all epistemologically equivalent. Pragmatically and psychologically there may be differences: the word “forecast” usually connotes the unstated assumption “unless something we don’t anticipate happens”; but if the forecast turns out inaccurate because something we did not anticipate did in fact happen, it illuminates an area of our ignorance.
> If events turn out different from what was predicted/forecast/extrapolated/projected/etc, then at least some of the basic postulates of the theory are discredited. Thus they are all epistemologically equivalent.
This applies to what is described, referred to, assumed, and perhaps to any projectable predicate:
If we accept the authors’ argument, we should conclude that descriptions are predictions too.
The only epistemology for which projections and predictions are equivalent might very well be solipsism.
willard(@nevaudit): The only epistemology for which projections and predictions are equivalent might very well be solipsism.
What’s the important shareable difference between them?
> What’s the important shareable difference between them?
Solipsism is the claim that only one’s own mind exists, cf.
At the very best, we could have this debate all over again:
My hunch is that conflating prediction and projection relies on an internalist epistemology.
Perhaps there would be a shortcut: let’s look at specific statements. We’d need an example of a projection claim, and an example of prediction claim. But even that may be tough, as I am not even sure the IPCC makes any projection claim:
Compare with climate prediction:
On the face of these definitions, a prediction can be formulated with a specific claim, while a projection can only refer to a set of claims.
willard(@nevaudit): A projection is a potential future evolution of a quantity or set of quantities, often computed with the aid of a model. Projections are distinguished from predictions in order to emphasise that projections involve assumptions concerning, e.g., future socio-economic and technological developments that may or may not be realised, and are therefore subject to substantial uncertainty
Hence the authors’ reminder that even predictions are conditional. And my comment that the difference is pragmatic, rather than epistemological. If events turn out different from what were forecast/predicted/extrapolated/projected, then we have learned that the bases of them are wrong at least to some extent.
We always know our simplifications are wrong to some extent, but the disconfirmations of expectations show that the inaccuracies are large enough.
What was the point of your introduction of solipsism?
In reference to a ‘prediction’, the term “information” is defined. In reference to a ‘projection’, the term “information” is undefined.
Many of us casualty actuaries like to view the world as a process with certain random elements. E.g., the expected number of serious hurricanes making landfall in a year might be 3; the actual number is a random variable with mean 3. There are then 3 sources of uncertainty:
1. The proper model for number of hurricanes
2. The parameters of that model
3. The actual number of hurricanes, given a model and given parameters.
The uncertainty in #3 can be measured. However, that is thought to be the smallest source of risk. The uncertainty in #1 is perhaps the largest source of risk, and there’s no statistical formula for measuring it.
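The three layers above can be illustrated with a toy Monte Carlo (the specific model choices, parameter spreads, and seed below are invented for illustration only, not an actuarial standard):

```python
# Toy illustration of three uncertainty layers for a yearly hurricane count:
# 1. model uncertainty, 2. parameter uncertainty, 3. process uncertainty.
import math
import random

def draw_poisson(lam, rng):
    """Knuth's inversion sampler for a Poisson count (stdlib has none)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_year(rng):
    # 1. Model uncertainty: we are unsure which count model is right.
    model = rng.choice(["poisson", "gamma-mixed"])
    # 2. Parameter uncertainty: the mean (~3) is itself only an estimate.
    mean = max(rng.gauss(3.0, 0.5), 0.1)
    # 3. Process uncertainty: the year's actual count given model + parameters.
    #    The gamma-mixed branch is overdispersed (a negative-binomial-style mix)
    #    but has the same expected value, 'mean'.
    lam = mean if model == "poisson" else rng.gammavariate(2.0, mean / 2.0)
    return draw_poisson(lam, rng)

rng = random.Random(0)
counts = [simulate_year(rng) for _ in range(10_000)]
print(sum(counts) / len(counts))  # roughly 3
```

Note that averaging over many simulated years recovers the mean but hides layer 1 entirely: two structurally different models can produce the same average, which is exactly why model risk has no simple statistical formula.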
@David in Cal,
But you can look at the prediction accuracy over time to determine if #1 has any value.
Global warming consensus: fact or fiction?
There is a problem with attributing the 20th century warming to increasing CO2: most of the temperature increase occurred from 1910 to 1940, but only a third of the modern increase in CO2 had occurred by then… The emerging hypothesis is that post-1940 temperature change was suppressed by the cooling effect of aerosols (particulate pollution) in the atmosphere, which diminished since the 1970s.
The CO2-temperature causal relation has been mentioned in connection with ice-core data dating to several hundred thousand years ago, correlating CO2 and temperature, but it is increasingly clear that in the geologic past the temperature changes preceded the CO2 changes… Rather than atmospheric CO2 driving temperature changes, it is temperature changes which may have driven past changes in the global carbon cycle…
~Wm. Robert Johnston
The ‘relevant dominant uncertainty’ (RDU) has been defined by Smith and Petersen as:
In the case of past climate change the RDU is the magnitude of past natural climate forcing (including the mechanisms involved).
Here we are dealing not only with “known unknowns”, but possibly also with some (as yet) “unknown unknowns”.
Only after we have cleared up these unknowns can we begin to quantify the magnitude of past greenhouse forcing and, hence, future greenhouse warming (the “outcome of interest”).
We are obviously not there yet, as our hostess has pointed out repeatedly – so any forecasts of future GH warming are intrinsically flawed to the point of being essentially worthless.
In other words, these forecasts are based on the logical fallacy known as “argument from ignorance”, i.e. “our models can only explain the observed past warming if we assume…”.
In discussing the reliability of the IPCC climate models, the authors break this down into three separate facets of reliability (statistical, methodological and public). This is interesting, but does not change the basic conclusion reached above.
Further to your comment:
Around 30% of the incoming solar radiation is reflected back to outer space, primarily by clouds (Kiehl-Trenberth).
The ISCCP observations (Pallé et al.) suggest that the global monthly mean cloud cover decreased by around 4.5% between 1985 and 2000. As a result the Earth’s global albedo decreased by the equivalent of around 5 W/m^2 less reflected solar radiation (= heating of our planet).
This compares with an IPCC estimate for 2xCO2 greenhouse forcing (Myhre et al.) of around 3.7 W/m^2.
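A back-of-envelope check of how these numbers relate (the arithmetic framing here is an editor’s sketch; the ~5 W/m^2 and 3.7 W/m^2 figures are those cited in the comment):

```python
# Back-of-envelope: what albedo change corresponds to ~5 W/m^2 of
# reduced reflection, and how it compares to the 2xCO2 forcing cited.
S0 = 1361.0              # solar constant, W/m^2
insolation = S0 / 4.0    # global mean top-of-atmosphere insolation, ~340 W/m^2
albedo = 0.30            # planetary albedo: ~30% of sunlight reflected

reflected = albedo * insolation
print(round(reflected, 1))   # ~102 W/m^2 reflected in total

# The quoted ~5 W/m^2 drop in reflected flux implies an absolute albedo change:
d_albedo = 5.0 / insolation
print(round(d_albedo, 3))    # ~0.015, i.e. ~5% of the 0.30 albedo

# For comparison, the canonical 2xCO2 forcing (Myhre et al.) is ~3.7 W/m^2,
# smaller than the albedo-change figure being discussed.
```

The point of the comparison in the comment is simply one of magnitudes: a percent-level shift in albedo moves as much energy as the entire cited doubled-CO2 forcing.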
I think I never heard so loud
kim’s ponderings on clouds’
quiet import and the cheshire ::
grin of sunspots. We require
more spotlights on complexity
of climate’s interactive system.
and scientists un- certainties.
A fellow serf.
How do we stimulate insightful discussions of things we can neither rule out nor rule in with precision, but which would have significant impact were they to occur?
Skeptics are open to discussions about any and all of this.
Consensus People are open to discussions about none of this.
Judith, can you name three climate scientists who consistently address RDU in their papers?
Predicting (projecting!) average global surface temperatures may have some political relevance, although we would be more interested in regional predictions, in which field GCMs have no skill whatsoever. But genuine scientific questions are seldom practical and meaningful applications can only come at the end of investigations, not before.
Therefore let me skip reliability N [N=1,2,…] for now and stick to reliability 0, which is defined as our comprehension of physics of the climate system.
Now, physics never studies a single run of a unique system, but multiple runs of a wide class of systems. In this case we are supposed to understand irreproducible non equilibrium quasi stationary thermodynamic systems in general, to which class the terrestrial climate system belongs, but in fact we do not understand them at all. Therefore it seems to be a bit premature to construct computational models of high Kolmogorov complexity of an incomprehensible system.
To some extent we do understand the reproducible variety of systems referred to above, see
however, if microstates belonging to the same macrostate can evolve to different macrostates in a short time, i.e. the system is irreproducible, we do not have a clue. The chaotic nature of climate ensures the climate system is a member of the latter class.
No wonder the Maximum Entropy Production principle does not apply to it, for example. If it did, Earth would be black, as most of the entropy production happens when high color temperature short wave radiation gets absorbed and thermalized, so lowering albedo increases the rate of entropy production. But Earth is not black, as can clearly be seen in any satellite image of it.
So, what I am calling for is a little curiosity, in the best scientific tradition. Climate science is not settled, far from it, but the meme exists, which, unfortunately, sets a high barrier to progress.
For even if terrestrial albedo is pretty high (~0.3), there are tantalizing signs of it being strictly regulated, see
Clear sky albedo of the Southern hemisphere is much lower than that of the Northern one (due to the abundance of water there), but the relative error in the mid-term average of their actual albedo difference is smaller than 0.1%. This symmetry is not shown by GCMs, which is a much more serious issue than overestimating observed warming. It indicates a genuine lack of physical understanding in the theory on which all computational models are based.
Earth’s albedo is regulated by the water cycle, that is, by ice, snow & clouds, so the lack of this regulation in GCMs indicates the water cycle is ill understood and therefore poorly represented in computational models. This regulation is an emergent phenomenon, brought about by turbulent flows and phase transitions on all scales in a highly non linear system with a plethora of internal degrees of freedom.
Under these circumstances serious scientists would let the climate system alone for a while and start studying other instances of irreproducible quasi stationary non equilibrium thermodynamic systems, those which, unlike the climate system, would fit into the lab, so multiple runs could be studied experimentally with controlled parameters. As soon as theoretical understanding of these systems reaches a point when their successful computational modelling becomes possible, one can return to the climate, but not sooner than that.
With unreliable physical understanding it simply does not make sense to proceed to higher logical levels of reliability.
Berényi Péter your post seems at odds with the whole “science is settled” meme :)
Here is how I see it.
In a volume of water, each individual molecule of water has an energy and collectively you have a distribution of energies, some of which will be enough to transfer to the gaseous state, all dependent on temperature.
Which is why warm water evaporates faster than cold water.
The warmer the water, the higher the vapor pressure, the faster the water evaporates.
It is contrary to the properties of water for some effect to raise the rate of evaporation without raising the temperature.
Berényi, I have always wondered why the albedo of the Earth is 1 with respect to IR radiation in all the models. I had always assumed that some fraction should be reflected by both land and oceans.
An albedo of one is total reflectance. You must have read something wrong. IR reflectance is practically zero as far as I know. Which frequency band and which surface type did you think reflects a significant fraction?
Damn, must think backwards.
The models suggest the Earth has an Albedo of 0, with 100% of all downward IR radiation absorbed at ground/sea level.
In thermal IR Earth is pretty black. Its reflectivity is not exactly zero, but it is close to it. However, what I am talking about is shortwave reflectance, that is, reflectance of visible & near-infrared (non-thermal) radiation. It determines the absorbed incoming energy flux, so until its regulation is understood, it is premature to talk about energy balance of any kind.
DocMartyn | November 26, 2013 at 3:52 pm |
“The models suggest the Earth has an Albedo of 0, with 100% of all downward IR radiation absorbed at ground/sea level.”
Sorry, still wrong. Water in the atmosphere absorbs near infrared from the sun.
This is basic encyclopedic knowledge, Doc. Atmospheric physics for 9th grade natural science class. I shouldn’t need to be explaining this to you at this point in time.
DocMartyn | November 26, 2013 at 3:52 pm |
This power spectrum handily illuminates (pun intended) that CO2 attribution is a joke. See the left-hand side of that graph – the UV transmission? No one knows to better than a few percent how much is transmitted and how much is absorbed and turned straight around as longwave in the earth’s ozone layer. Truth be told, the ostensible anthropogenic warming observed since 1950 can be totally explained by change in stratospheric ozone. There is easily enough potential energy throttling there. Circa 1950 is when we started dumping CFCs into the atmosphere and the 1990s is when we largely ceased. The pause could easily be us no longer destroying atmospheric ozone with CFCs. Even more, we just recently learned that while sunspots are a proxy for a variation of only 0.1% in total insolation, the variation in UV power is an order of magnitude larger, which again is more energy being denied to the earth’s surface (or not) than the entire calculated surface forcing increase from anthropogenic emissions. The increase at the surface could be rejected at the top of the atmosphere simply by a change in ozone concentration and/or UV power coming from the sun.
It’s not a good time to be a warmist. The whole house of settled-science cards is falling about their heads and shoulders.
That looks like quite an instructive selection of spectra: Water vapor punching some big holes in the incoming radiation. So presumably more water vapor possibly leads to absorption occurring higher up in the atmosphere?
What alternative theory is Springer up to right now? # umpteen?
As JimD says it boils down to ABCDEFGH
A greybody behaves exactly like a blackbody under far-from-equilibrium conditions. Just because an object appears to be light colored does not mean its equilibrium temperature is any different from a blackbody. There will be dynamic differences. A black asphalt roadway heats up (and cools down) faster than a light concrete one, but the equilibrium temperature depends on the characteristic of the emission spectrum of the object, not on its albedo. Set an oven at 400 deg F and anything you put in it will come to the same temperature, 400 deg F, regardless of its albedo. It is a mistake to assume a priori that the earth’s albedo determines its equilibrium temperature. Other factors are more important.
michael hart | November 26, 2013 at 10:09 pm |
“That looks like quite an instructive selection of spectra: Water vapor punching some big holes in the incoming radiation. So presumably more water vapor possibly leads to absorption occurring higher up in the atmosphere?”
Not necessarily more water vapor. As an aside, all H2O phases, not just vapor, are excellent IR absorbers, so we must include ice and liquid. But it’s not necessarily more water. The same amount of water extending higher up in the atmosphere changes things too. That’s called lapse rate feedback and it’s a well known negative feedback that counteracts surface temperature rise from non-condensing greenhouse gases like CO2. In a nutshell, the modus operandi is that clouds form at a higher altitude but at the same temperature due to a reduction in lapse rate. At the higher altitude there is a lesser amount of non-condensing greenhouse gases above the cloud top and more below the cloud bottom. Thus there is a less restricted path for radiative cooling of the cloud top to space and a more restricted path for so-called back-radiation from the cloud bottom to reach the earth’s surface below.
Hence a miscalculation in cloud dynamics that fails to accurately predict their behavior under increasing non-condensing GHGs will cause a model failure. We don’t have good observational data about changes in clouds but we are improving our instrumentation in that regard and the better instrumentation has already revealed that lapse rate feedback has been underestimated.
Contrary to Webby’s plaintive cries that I change my story more often than he changes his clothes (weekly I suppose) this is no change whatsoever. I have consistently pointed to poor cloud modeling as the most likely source of error in GCM ensembles. That’s not to say it’s a single sourced phuck-up. Undoubtedly the inaccurate physics in these models is legion and I could be wrong but the odds are against it.
Berényi Péter | November 26, 2013 at 3:10 pm | Reply
“Earth’s albedo is regulated by the water cycle, that is, by ice, snow & clouds, so the lack of this regulation in GCMs indicates the water cycle is ill understood, therefore it is poorly represented in computational models.”
A dead giveaway is the proxy temperature recorded in ice cores dating back a half million years or so through several interglacial epochs. At the beginning of each interglacial we can see temperature shoot up like a rocket as the high albedo snow & ice melts and becomes very low albedo ocean surface.
The key bit to look at is the maximum temperature attained at the end of the melt. It’s nearly identical each time and is the highest value attained during the entire interglacial. Temperature shoots up like a rocket, then hits a ceiling that stops any further rise dead in its tracks. It is my supposition that the temperature is capped when shade-producing clouds reach a point where they starve the ocean of solar energy such that no more clouds can form. A balance is obtained and this squares with all the observations. Glaciers reform at (heh heh) glacial pace and this is reflected in the slow decline in temperature after the peak is attained. As glaciers reform the ocean surface area is reduced and hence global average albedo slowly increases. At some point the earth gets cold enough that there is a dearth of shade-producing clouds, then the first small perturbation (such as Milankovitch orbital and rotational dynamics) causes a tipping point which starts another rapid melt. Lather, rinse, and repeat.
pochas | November 27, 2013 at 5:27 am |
“Set an oven at 400 deg F and anything you put in it will come to the same temperature, 400 deg F, regardless of its albedo.”
An oven heats by conduction not radiation. If that horribly flawed analogy is the best you can do then you’ve an exceedingly poor understanding of the earth’s heat budget. Maybe this will help you along the path to greater understanding:
Texas A&M University online textbook “Introduction to Physical Oceanography”
5.6 Geographic Distribution of Terms in the Heat Budget
When you feel comfortable being able to describe the where, why, and how of each term and its distribution then you’ll have a solid basis for further understanding the nuances of how anthropogenic factors like CO2 and soot can change the global heat budget.
In Springer’s world “could be” is a sure bet !
Remember Springer’s theory of a “liquid GHG” that explains everything?
FSM .. LOL
Strawman about “explaining everything” aside, the ocean as a greenhouse fluid explains a lot, and I’m still waiting for some experimental evidence that a body of water free to evaporate can be heated from above by mid-infrared radiation. I know you warmist boys don’t do much in the way of experiments to confirm your assumptions, but in this case I’m afraid I must insist, because the evidence (cool skin layer 1000 micrometers deep, total absorption of mid-infrared in less than 100 micrometers) argues that you cannot heat water from above with infrared; all you can do is accelerate evaporation. Prove me wrong.
And that failure to warm the main body of the ocean with mid-infrared that only accelerates evaporation is precisely why lapse rate feedback has been underestimated. It speeds up the hydrologic cycle and places cloud tops at a higher altitude in the process.
Over dry land the story is different, because rocks don’t evaporate; they must instead get warmer in the classic T^4 response. This results in warmer runoff from the continents, and that is why ARGO can’t detect OHC rise first passing through the mixed layer – it enters at the shore and hugs the bottom, avoiding ARGO buoys because the buoys avoid the shore.
I know you must hate it that this all makes sense but I’m sure you’ve learned by now how to deal with failure in real life. Exercise your failure acceptance skills during blog trolling too. You may not like it at first but it will make a better man out of you and it certainly can’t hurt because you can’t sink lower than the bottom of the barrel where you currently reside.
David, the evaporation rate is a direct function of the temperature.
You can’t change one without changing the other.
Epic fail, or as an astute blogger once remarked
“I prefer the ocean has waves theory”
“the evaporation rate is a direct function of the temperature.”
I don’t think this rules out mid-IR failing to warm surface waters, though. Imagine individual water molecules capturing one (or many) IR photons and evaporating quickly enough that little to no heat is transferred past the surface layer. If I understand evaporation correctly, to evaporate, a molecule has to reach the energy it would have at the boiling point of the liquid at the specific vapor pressure, even if the liquid as a whole is far from boiling. This is how evaporative cooling works, again if I understand it correctly.
I don’t see this being contrary to David’s statements.
Perfectly still water has a thermal diffusivity of 0.143 mm^2/second at 25C.
You can get a feel for how far the heat will diffuse from the surface in 1 minute by invoking the square root heuristic
sqrt(60s*0.143mm^2/s) = 2.9 mm.
which means that yes, the heat will not move too far from the surface in this situation. That is why it takes a long time to heat a pot of water via an overhead infrared heat source. Try it.
Yet, the reality is that the ocean’s surface has waves and eddies and features lots of turbulent mixing. The actual effective diffusivity is on the order of 100 mm^2/second or greater. See Roy Spencer here:
So given that knowledge, we can find out how far the heat will mix in real water after 60 seconds.
sqrt(60s*100mm^2/s) = 77 mm
even in one second the heat is already starting to mix into the depths, which means it is not going to evaporate!
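The two square-root estimates above are easy to reproduce. A minimal sketch, using the diffusivity figures quoted in the comment (0.143 mm²/s for still water at 25 C, ~100 mm²/s as an assumed effective eddy diffusivity for a wavy surface):

```python
import math

def diffusion_length_mm(diffusivity_mm2_per_s, seconds):
    """Square-root heuristic: characteristic distance heat spreads in a given time."""
    return math.sqrt(diffusivity_mm2_per_s * seconds)

# Perfectly still water at 25 C (molecular thermal diffusivity)
still = diffusion_length_mm(0.143, 60)   # roughly 2.9 mm in one minute

# Turbulently mixed ocean surface (effective eddy diffusivity, per the comment)
mixed = diffusion_length_mm(100.0, 60)   # roughly 77 mm in one minute

print(f"still water: {still:.1f} mm, mixed ocean: {mixed:.1f} mm")
```

The heuristic only gives an order of magnitude, which is all the argument needs: turbulent mixing moves heat tens of millimeters in the time molecular diffusion moves it a few.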
This is reality and I know that it must absolutely pain the deniers such as Springer and MiCro that they can’t use their FUD-filled arguments against people that actually understand physics, but there you go.
I get that; what you didn’t define is how far IR penetrates the surface before the photon is captured, because if it’s only a molecule or two, it might just evaporate before it has a chance to get mixed, and in doing so might take additional heat from the bulk water with it.
Where is the experiment showing how mid-infrared illumination (~10 micrometers) from above slows heat loss in a body of water free to evaporate?
No hand waving. I want to see a controlled, repeatable experiment document exactly what happens. I can’t find one and no one else has produced it either. Surely in the 150 years since Tyndall started experimenting with absorption of calorific rays, someone has performed an experiment with the same illuminating a body of water. I find it fascinating that none of you anointed physics experts can produce the experimental results of such a common physical occurrence that ostensibly underpins most of the CO2 warming hypothesis.
Retired solar physicist Doug Hoyt (Raytheon) says that incident IR at ~10um will not affect the temperature of a body of water. In the comments he references an experiment using a 10.3 um laser and a temperature sensor a centimeter below the surface. The temp sensor registers no increase in temperature. He also mentions dermatologists use lasers at that frequency to cut and burn the top layer of skin while not harming deeper tissue.
The collapse of arguments for high climate sensitivity
February 7th, 2007 by Warwick Hughes
Guest essay by Dr Doug Hoyt
The last sentence in 2 above, “at best … slows any cooling”, is ostensibly what the climate boffin Gavin Schmidt says happens. But when and how much does it slow cooling? How do relative humidity, wind speed, and turbulence affect this?
These questions cannot be definitively answered by flippant hand waving but they can be answered by carefully controlled experiment. The answers are important.
The penetration depth of IR in water is around 10µm for most wavelengths and at least 2µm for all wavelengths of thermal IR. Thus it’s around 7000 molecular distances or more.
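Pekka’s “7000 molecular distances or more” follows from simple division. A minimal sketch, where the ~0.3 nm intermolecular spacing of liquid water is an assumed round number:

```python
penetration_m = 2e-6        # minimum penetration depth of thermal IR in water (2 um, per the comment)
molecule_spacing_m = 3e-10  # rough intermolecular distance in liquid water (~0.3 nm, assumed)

layers = penetration_m / molecule_spacing_m
print(f"~{layers:.0f} molecular distances")
```

At the more typical 10 um penetration depth quoted above, the same arithmetic gives several tens of thousands of molecular layers, so “only a molecule or two” understates it considerably.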
Thanks Pekka, even @7000 molecules, 2-10u is a very thin layer, that’s 1980’s semiconductor minimum feature size.
A person’s skin is not in turbulent motion.
The denier’s denial mechanism is strong.
Singer puts out the FUD and Springer runs with it like a loyal lapdog.
Singer/Springer, what’s the diff?
WebHubTelescope WebHubColonoscope what’s the difference?
Yes, 10um is very thin, and it’s stone cold fact that twice as much heat escapes the ocean as latent heat than as radiation. In the tropics it’s even more than that. See figures 5.8 (longwave flux) and 5.9 (latent flux) below and note that tropical ocean sheds 130-160 W/m2 as latent and only 40-60 W/m2 as radiation, while sensible heat flux is 5-10 W/m2.
The reason there is very little conduction is that the water surface temperature is very close to the surface air temperature. Same goes for heat loss via longwave flux – downwelling IR from the humid surface air is very near the same intensity as upwelling IR from the water, so they largely cancel out, leaving evaporation and convection as the dominant mechanisms removing heat from the ocean.
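Taking the midpoints of the tropical flux ranges quoted above (illustrative numbers lifted from the comment, not independent measurements), the partitioning works out as:

```python
# Mid-range tropical ocean heat-loss fluxes quoted in the comment (W/m^2)
fluxes = {"latent": 145.0, "longwave": 50.0, "sensible": 7.5}
total = sum(fluxes.values())

for name, flux in fluxes.items():
    print(f"{name}: {flux / total:.0%} of total heat loss")
# latent heat dominates, at roughly 70% of the total
```

On these round numbers, latent heat accounts for roughly 70% of the loss and radiation about a quarter, which is the point the comment is making about evaporation being the ocean’s main cooling channel in the tropics.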
The question thus begged is how much a 4W/m2 increase in downwelling IR increases each of the three fluxes leaving the ocean – conductive, radiative, and latent. Given that it is completely absorbed in the top 10 microns of surface water and there is no mechanism to transport the heat downward (the surface layer is colder than the bulk below it) there’s not much left for the extra 4W/m2 DWLIR to do except go into latent heat of vaporization.
WebHubColonoscope says turbulence mixes the heat downward. This is false on the face of it. It is a well known fact that the top millimeter of the ocean is colder than the water below it. It’s called the cool skin layer. Even if the cool skin layer mixed downward to any large degree (surface tension prevents this except in breaking waves, by the way), it’s mixing colder water downward, which STILL fails to transfer any heat to the bulk of the ocean and in fact has the opposite effect of bringing warmer water to the surface. WebHubColonoscope earns his name by having the effect of turbulent mixing of the skin layer bass ackwards.
Yeah, but then you have all that nice water vapour for the positive feedback that yields high climate sensitivity. That’s the ticket.
Whaddya mean this ticket’s fake? They’re still selling ’em, right over there.
Springer bucks all experimental evidence that has shown that the heat is diffusing downward as indicated by the OHC studies.
And Webby still doesn’t get that they measured only a couple percent of the ocean’s heat for a few decades before proclaiming it was conveniently going “up”.
Nice post, informative, clarifying, interesting links, Berényi Péter
One must, however, seriously question whether the statistical uncertainty adequately captures the ‘relevant dominant uncertainty’ (RDU). The RDU can be thought of as the most likely known unknown limiting our ability to make a more informative scientific probability distribution on some outcome of interest; perhaps preventing even the provision of a robust statement of subjective probabilities altogether.
The RDU, the most likely known unknown, is the complete lack of application of modern scientific software development procedures and processes to any Climate Science software. The sources of the projections / predictions, the numbers, are the software.
Software reliability (reliability0). Until this aspect is determined, everything else, everything without exception, is a distraction.
Indeed, focusing on the ability to carry enough fuel is a distraction when the integrity of the plane is thought to be at risk.
Indeed, focusing on reliability and evaluation ( better known as validation ) and fit for purpose are distractions when the integrity of the numerical solutions of the discrete approximations have not yet been verified.
Statisticians and physical scientists outside climate science might become scientifically skeptical, sometimes wrongly, of the basic climate science in the face of unchecked oversell of model simulations.
As do plain ol’ engineers, and all others who are skeptical, frequently correctly, of Bumper Sticker Grade Climate Science.
Joshua asked where the evidence is for the claim that certain behavior by scientists weakens trust in science more widely and thereby backfires. That wasn’t a surprise, because he asked the same question when Judith presented the same argument, when I presented the same argument, and I’m sure also of several others who have presented it.
All of us who present the same argument believe it by intuition. I’m certain that very many scientists think similarly, but it may well be that this way of thinking is more common among scientists than among the wider public. I do still think that the effect is real and significant, but do we really have evidence of that?
What we have evidence of is that people are inclined to filter information so as to confirm biases. If you read the work of Kahan, you will see that he presents solid evidence showing that as people display more expertise on these types of controversial issues, the more they tend to be polarized in their views (not on an individual basis, but across groups). His studies show that people assess the value of “expert” input in ways that are strongly associated with ideology and group identification. “Skeptics” and “realists” alike, on the whole, fail to account for the evidence that he presents. The picture painted by his carefully controlled and assessed evidence is quite different from the phenomenon that the authors “predict” is “likely.”
I don’t doubt that what they describe exists to some extent – the question, however, is how relevant what they describe is to understanding the larger picture. My opinion is that in the way they outline their “prediction,” they have oversold and over-interpreted the evidence. They have some anecdotal evidence of the sort you describe. They present no, none, zilch, nada, niente, bupkis evidence that supports their “prediction.” Despite the fact that we all believe the phenomenon they describe exists to some extent, they have not presented actual evidence. They are making unsupported assumptions.
Rather ironic given their areas of expertise.
Joshua | November 26, 2013 at 6:30 pm | Reply
‘”What we have evidence of is that people are inclined to filter information so as to confirm biases. If you read the work of Kahan, you will see that he presents solid evidence”
You likely read Kahan because he confirms your biases.
You have now caught yourself up in a recursive trap that’s like a black hole pulling with irresistible force only this hole attracts asininity instead of matter.
Actually, there is a fair amount that I disagree with him about. I do agree with much of his analysis, but more importantly, he presents empirical data on the phenomena we’re discussing – something no one else, including the authors, does.
Ironic, given that they and so many others are focused on quantifying uncertainty.
> You likely read Kahan because he confirms your biases.
You’re likely a mind reader now? Amazing.
His array still takes noise for signal.
Oh, dang, w; you’re gonna completely dodge that one.
Joshua | November 26, 2013 at 8:24 pm |
“Actually, there is a fair amount that I disagree with him about.”
That would be the part you ignore. Bias works both ways – accepting that which agrees with preconceived beliefs and rejecting that which does not. Let me know what parts of that your bias won’t allow you to understand.
Compare and contrast this:
Wrong. Once again we see your tendency to formulate opinions without basing them in evidence.
I don’t ignore his opinions that I don’t agree with – I engage in discussion with him on those issues.
You are just flat out wrong. You formulated a conclusion, with certainty, merely out of some fantasy that you have about me.
This begins to give an empirically-based picture of what I’m talking about.
Yes, Pekka, we really have evidence, but you really have to look for it.
Here you go, Pekka:
Notice that the report doesn’t tell us anything about the impact over time.
Typically, “skeptics” try to use those cross-sectional data to support longitudinal conclusions. That’s not something a skeptic would do.
Notice that the report doesn’t tell us anything about how climate scientists compare in that regard to plumbers, or priests, or grave-diggers, or “skeptics.”
Typically, “skeptics” try to use that report to draw conclusions about climate scientists or scientists more generally, although its data aren’t controlled so as to support such conclusions. That’s not something a skeptic would do.
Do you do those things, Don?
Joshua waves his hands and expects us to watch them, still.
Compare and contrast this:
Need I say it, Judith?
Josh – Maybe you could take those sour grapes and make yourself a bottle full of whine.
Pekka, look at the MMR vaccination rate
Wakefield’s Lancet paper was launched in mid-98 and has had a catastrophic impact on vaccination rates worldwide. No matter what the establishment stated, parents distrusted the ‘science’.
It is one thing to ask the question ‘Do you trust science and scientists?’ and log the numbers; quite another to have someone inject your newborn with something.
The general proposition that models do not have to be exactly right to be somewhat useful seems to be true. How they turn out to be wrong can tell us a lot about the systems they are attempting to capture.
However, where questions of policy (or for that matter safety) are involved we need to be able to assess the relative reliability of a particular model. Is it “half right”? and, if so, which half?
Take a climate model which ascribes say 10% of a carefully measured temperature increase to solar variation, 10% to aerosols, 10% to other sources of natural variation leaving 70% of that temperature increase to be explained. This model would be mildly useful for policy because it would tell policy makers that at least 20% of the temperature increase was beyond human control. The 10% aerosols could be studied to determine the human contribution. The 70% “to be explained” can be usefully examined as well. Critically, how much of that 70% can conceivably be attributed to human causes and therefore can conceivably be controlled by policy action is a good question even in conditions of radical uncertainty.
Relevant dominant uncertainty would be useful in this example because it can categorize the uncertainty in the 70% of temperature rise which cannot be explained. For instance, uncertainty as between the contribution of clouds and ice reflectivity is only relevant where there are policy options which would make a difference to either of those variables.
The most fundamental question of relevance is whether there is a possibility a particular policy will have a measurable effect on the variable to be controlled. And here you have to be quite careful; if what you are seeking to control is temperature a policy which reduces CO2 emissions may or may not have an effect on that variable.
So, in a temperature example, if you have a policy which will cost “x” the relevant dominant uncertainty will be how much temperature reduction you will get for your “x”. If a scientist can’t tell a policy maker with a fair degree of certainty what the effect of a particular action will be on temperature that policy maker would be very unwise to take action.
The general proposition that models do not have to be exactly right to be somewhat useful seems to be true.
That might make sense if it was true.
The Truth is that the models are not anywhere close to being exactly right. Totally wrong for 15 or 17 years is not close to being even a little bit right. Totally not useful is what they are, and worse, decisions are made based on worse-than-useless output that is harming our economy and energy production.
“Totally not useful is what they are, and worse, decisions are made based on worse-than-useless output that is harming our economy and energy production.”
I work with data; one of the things I tell my customers is that the worst thing I can do is give them bad data that they don’t know and can’t tell is bad. We are fortunate that we can tell that GCM results are bad; unfortunately many are not smart enough to know better than to use bad data for a tens-of-trillions-of-dollars decision that will affect billions of us.
However, where questions of policy (or for that matter safety) are involved we need to be able to assess the relative reliability of a particular model. Is it “half right”? and, if so, which half?
There is no problem here. There is no model that is anywhere close to half right. Both halves are totally worse than just wrong.
This model would be mildly useful for policy because it would tell policy makers that at least 20% of the temperature increase was beyond human control.
When the models show no skill, there is nothing to tell policy makers anything about what is beyond human control. Mainly, all of the temperature control is natural and beyond human control.
There is no actual data that shows humans have any control over earth temperature or sea level.
Well, well, well. Ptolemaic astronomy used to get celestial positions of planets almost right. It was useful indeed for astrological purposes. Does it make astrology sound? Do occasional deviations of computed positions from observed ones tell us anything about the system under study?
The first version of heliocentric Copernican astronomy was in fact less accurate and scientists of that time were aware of this discrepancy perfectly well. It needed Kepler, who abandoned the flawed notion of circular orbits and replaced them with elliptical ones to surpass Ptolemy.
However, unexplained deviations of the old model were utterly useless in finding the correct solution, because they belonged to the model itself, not reality.
The US Navy teaches celestial navigation using a Ptolemaic universe because imagining all the heavenly bodies as moving around a stationary Earth is a lot simpler for navigational purposes. Of course, it’s just a coordinate transformation, not a causal theory, but one shouldn’t jump to conclusions about the usefulness of wildly wrong models. Every schematic diagram presents an unrealistic geometry that aids certain kinds of correct mental operations.
And if you wanna get REALLY wild, Micronesian navigators use a frame of reference in which the ocean canoe stands still while the ocean and islands move. This is a total trip:
And the most relevant question: Who made the decision that a temperature increase–ANY temperature increase–was bad and MUST be controlled, based on what criteria? We have been so propagandized as to the horrors of the rise in the ‘temperature of the earth’, that no one ever gets around to asking ‘Really? How so?’. Is it important how MUCH rise?
It is noteworthy that when CAGW first arrived on the scene as ‘settled science’, the postulated temperature rise in the next hundred years was in the ten-degree range (I’m not going to do the research to get exact figures; basically, they were just picked out of thin air, and varied). Now they are in the 2+/-1 degree range for a doubling of CO2 (not even close yet), but the remedies have remained the same: ANY temperature rise is catastrophic and must be countered immediately by decreasing anthropogenic CO2 by 90+% through a combination of taxes, regulations, and artificially increasing the cost of energy while reducing its supply. Oh, and by redistributing billions in OPM to every tinhorn thug with a Swiss bank account that the UN can identify, although the connection between the redistribution and the TOE remains a bit nebulous. If one were a cynic, he would be suspicious that instead of having a problem in search of a solution, we had a solution in search of a problem.
This seems again to be a case of the professional tree experts coming by to expound their statistical ‘relevant dominant uncertainty’ (RDU) analysis of tree physiology and tree psychology. Rigorous statistical analysis and systematic investigation of data uncertainties and the perplexity of natural ongoing variability of various climate effects is of course a good thing and can lead to a better understanding of the complex climate system physical processes – and, all the better to confuse decision-makers with.
Decision-makers need to be looking more at what is happening to the forest, and not be so pre-occupied with what individual trees may or may not be doing. There are many uncertainties of climate variability that are never going to become ‘predictable’ in any preemptive sense. So, decision-makers (and the general public) will just have to learn how to deal with the consequences of those unpredictable climate events when they do happen.
Decision-makers need instead to understand the Relevant Dominant Certainty (RDC) of global warming and the changing climate system. The basic facts and physics are very clear. There is virtual certainty that atmospheric CO2 is increasing (now at 400 ppm) because of fossil fuel burning. There is also certainty that atmospheric CO2 is the principal non-condensing greenhouse gas which acts to prop up the terrestrial greenhouse effect, and that water vapor and clouds are the fast feedback effects that simply multiply the CO2 greenhouse warming by a factor of three or four.
The bottom line is that atmospheric CO2 is the principal control knob that governs the global temperature of Earth. Decision-makers need to understand that there really is no significant uncertainty in the basic cause-and-effect relationship of atmospheric CO2 and global warming, and the impending consequences of sea level rise and environmental disruption.
The decision-makers should keep the basic facts and physics in mind, and act accordingly.
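The “factor of three or four” in the comment above is a claim about feedback amplification. As a sketch of the arithmetic only (the no-feedback response and gain values below are illustrative assumptions, and whether they are “certain” is exactly what is disputed downthread):

```python
# Arithmetic of feedback amplification only; all values are illustrative
# assumptions, not endorsements of the certainty claim above.  If
# feedbacks return a fraction f of the response as additional forcing,
# a no-feedback warming dt0 becomes dt0 / (1 - f).
def amplified(dt0, f):
    if not 0 <= f < 1:
        raise ValueError("f must be in [0, 1); f >= 1 means runaway")
    return dt0 / (1.0 - f)

dt0 = 1.2  # K, an often-quoted no-feedback response to doubled CO2 (assumed)

# A "factor of three" multiplication corresponds to f = 2/3,
# a "factor of four" to f = 0.75:
print(amplified(dt0, 2.0 / 3.0))  # 3 * 1.2 = 3.6
print(amplified(dt0, 0.75))       # 4 * 1.2 = 4.8
```

The point of the sketch is how sensitive the headline number is to f: moving the assumed gain from 2/3 to 3/4 changes the result by a full degree.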
If I were advising a policy maker would I tell him that there is a linear relationship between CO2 and temp or a logarithmic one?
How is the “control knob” calibrated?
What would be the reduction in temperature if I reduced CO2 emissions by 5%?
If it cost 100 billion dollars to reduce overall emissions by 5% would this amount to a net benefit for a) my country, b) my region, c) the world?
These are the sorts of questions policy makers confront. If the uncertainty in the science is such that they cannot be answered then talking about “control knobs” is simply dealing in the false coin of metaphor.
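On the first of those questions, the standard treatment has warming scale with the logarithm of concentration, not linearly. A minimal back-of-envelope sketch (the 3 K sensitivity and the 1 ppm figure below are assumptions for illustration, not estimates):

```python
import math

# Back-of-envelope sketch only.  Assumes warming scales with the log of
# CO2 concentration and an illustrative equilibrium sensitivity of
# 3 K per doubling.  Both are assumptions, not settled values.
SENSITIVITY_PER_DOUBLING = 3.0  # K per doubling, illustrative

def delta_t(c_new_ppm, c_old_ppm):
    """Equilibrium warming implied by a concentration change (logarithmic)."""
    return SENSITIVITY_PER_DOUBLING * math.log2(c_new_ppm / c_old_ppm)

# A full doubling (280 -> 560 ppm) returns the assumed sensitivity:
print(delta_t(560, 280))  # 3.0 by construction

# A 5% *emissions* cut changes concentration far less than 5%; if it
# shaved, say, 1 ppm off a 400 ppm baseline, the implied equilibrium
# difference is on the order of a hundredth of a degree:
print(delta_t(400, 399))
```

That smallness is what gives the third and fourth questions their bite: the temperature payoff of a marginal emissions cut is tiny under any plausible sensitivity, so the cost-benefit question cannot be waved away with a metaphor.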
Policy makers should not be concerned with linear or logarithmic relationships. That is way too far down in the weeds, and you should know that.
Outcomes and probabilities, risks versus the costs of both action and inaction: these are what should be discussed.
Anthropogenic climate change is like playing Russian roulette http://en.wikipedia.org/wiki/Russian_roulette. The best science can do is let policymakers know:
1) How many bullets are likely in the cylinder
2) What kind of bullets they are
3) What the likely outcome would be should a bullet of any given type be fired (rubber, hollow point, etc.)
4) What the costs would be to prevent that particular type of bullet from being fired by removing it from the gun. Are the costs greater than the likely damage itself?
5) What other risks (other than anthropogenic climate change) might require our attention even more? This would be akin to a second or third revolver being placed against our head, which ends up doing damage that we could have prevented had we not been so preoccupied with the AGW revolver.
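The five questions above amount to an expected-value calculation. A toy sketch of the comparison in point 4 (every number below is a placeholder, not an estimate of anything):

```python
# Toy expected-value comparison implied by questions 1-4 above.
# Every number here is a placeholder, not an estimate of anything.
def expected_damage(p_fire, damage):
    """Expected loss from the chance that a 'bullet' fires."""
    return p_fire * damage

def net_benefit_of_prevention(p_fire, damage, prevention_cost):
    """Positive if removing the bullet beats accepting the risk."""
    return expected_damage(p_fire, damage) - prevention_cost

# A 1-in-6 chance of a 100-unit loss, versus two prevention price tags:
print(net_benefit_of_prevention(1 / 6, 100, 10))  # positive: prevention pays
print(net_benefit_of_prevention(1 / 6, 100, 25))  # negative: it does not
```

Question 5 then extends the same comparison across competing risks: prevention spending on one revolver has an opportunity cost measured in the expected damage of the others.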
Oh, sorry it had to be a Texan:
“On February 28, 2000, a man from Houston, Texas died after attempting to play Russian roulette with a semi-automatic pistol. The man was apparently unaware that semi-automatic pistols automatically insert a cartridge into the firing chamber when the gun is cocked. He was posthumously awarded a Darwin Award.”
Demonizing warming while ignoring cooling is pretty much turning the weapon upon oneself.
The simplistic repetition of unscientific mumbo jumbo about “control knobs” and “fast feedbacks” is the hallmark of professional propagandists. I’ll take the “tree experts,” who at least show some scientific understanding of the components of the “forest.”
(Do not show scientific understanding.)
How non-belligerent and non-insulting a two-sentence comment can be.
willard is right.
I know the precise scientific meaning of “control knob” and “fast feedback” because I have invested the time and worked out the math for myself.
I didn’t realize homework assignments are considered “propaganda”.
JS, a real gem of a paragraph; it’s shiny, it sparkles.
Like the ignorant deniers, primitive tribes and fawning acolytes are all easily influenced by shiny beads and sparkly baubles.
Willard will link the Sinatra rendition via YouTube if I am a mind-reader.
“There is also certainty that atmospheric CO2 is the principal non-condensing greenhouse gas which acts to prop up the terrestrial greenhouse effect, and that water vapor and clouds are the fast feedback effects that simply multiply the CO2 greenhouse warming by a factor of three or four.”
So when, precisely, is water going to get itself off the couch and do these “certain” things?
A Lacis writes,
I consider that an essential logical error. The certain facts are not a sufficient basis for decisions, because there are not enough of them. AGW by itself, without quantification, is not a basis for acting, and the certain lower limits of the damages are so low that they are no reason for acting either. Knowing what’s certain does not lead to decisions to act in the case of climate policies.
What should make decision-makers act is what’s both likely enough and serious enough. Understanding the nature of the uncertainties, and the likely consequences of proposed policies, are the essential issues, not what’s certain.
I could add that perpetuating that error is perhaps the most serious mistake several climate scientists have made. That’s perhaps the most essential reason for the difficulties in getting the message through.
Environmental economists like Nordhaus, Weitzman, and Tol do not make that error; pure climate scientists often seem to make it, either explicitly or implicitly. Both may end up with the same recommendations, but they argue for them differently.
It’s natural that many skeptics argue that nothing should be done until we have certainty of CAGW. Reacting to that by pointing out what we know for certain is the wrong reaction. More emphasis should be put on pointing out that certainty is not required at all, not even a very high likelihood at the level of 95% or 90%.
Those arguing for action should not fall into the trap built by the opposition and start to make arguments that cannot be fully substantiated. They should stick with the sound arguments and explain why they are sufficient. That may be difficult, but it’s the only sound way for scientists.
Pekka, nothing gets done (not counting the profiteering by the rich and bureaucrats) even in the countries that did agree on “action” (Kyoto). It’s just bureaucratic verbiage, and the loss and damage caused by the scare is enormous.
+ 10 Of course, any action must be constrained so as to have a higher expected rate of return than alternative uses of resources.
Pekka, that reply was to your 4.14 post.
Quite a lot has been done in several countries, perhaps most in Germany, but the results are poor. That’s related to the problems of getting the correct message through. The political atmosphere in Germany has been very positive towards renewable energy, and the climate issue is an essential factor in that.
When getting a balanced and rather complex message through is so difficult, the outcome may be simplistic policies in one direction or another. The problems of Germany’s policies are starting to grow so large that changes are very likely to come. Errors in the other direction take much longer to become as obvious, perhaps so long that the interpretation remains controversial even then.
The more complex arguments that I support are not as alarmist as the simplistic ones presented right now. They would not lead to steps as strong and immediate as countries like Germany have taken, but they might still be more effective in the long run. Most certainly they would have a much better benefit-to-cost ratio.
Emphasizing maximal short-term effect too much leads to errors and high costs.
Even if the pessimistic predictions turn out to be correct, the changes become clearly visible only very slowly, probably too slowly to support the policy push that has been initiated by scare-laden argumentation.
Perhaps you consider my next two posts more problematic, but then perhaps you find the 5:21am post again more acceptable.
Germany? That’s a great example of the bureaucratic verbiage.
“Germany’s role as a leader has already taken a hit during this conference. The country’s emissions rose 1.8 percent last year despite headway in building renewable energy and a focus on the Energiewende. In the European Union as a whole, emissions were down 1.3 percent, according to the Center for International Climate and Environmental Research in Oslo.
What’s more, Germany’s new coalition government has already agreed in negotiations to drastically cut the country’s offshore wind target for the next 15 years. It’s the kind of change that worries investors and sends a signal that renewables are risky. One major energy firm, DONG Energy, would not have invested in a number of offshore wind farms with the new, lower levels of support, according to BusinessGreen.”
“So, perhaps you’ve heard about Germany’s heroic green revolution, about how it’s overhauling its entire energy infrastructure to embrace renewable energy sources? Well, in reality, our chimney stacks are spewing out more than ever, and coal consumption jumped 8 percent in the first half of 2013. Germans are pumping more climate-killing CO2 into the air than they have in years. And people are surprised.”
People are surprised because they have been misled.
Several parallel processes have been going on in Germany:
– support of wind and solar power by very high feed-in tariffs since 1991 (the tariffs for new installations have gone strongly down in recent years, but the costs remain high for long)
– allowing new lignite production in the former DDR, while that was not possible in the western part,
– shutting down nuclear power plants.
The first one has been far too costly, but the other two have also been necessary for CO2 emissions to increase.
Changes are large, but not very useful.
Heh, Pekka’s cock crows fearlessly this dawn.
Agreed. It leads only to arguing about what is certain.
The problem, however, is that it will lead only to arguing about what’s both likely enough and serious enough.
Policy changes that are both rapid and persistent are often difficult to achieve, and so they are in the case of climate policy. As the issue is a long-term issue, my choice would be to aim at a slow and persistent change rather than a rapid one that leads to a partial dead end.
There’s so much discussion of the urgency, of how the solution gets so much more expensive and ineffective if we don’t act immediately, but looking at the numbers and the delays at every step of the change, the outcome is not really that dependent on the speed of acting. A more efficient solution introduced several years later may well lead to a better final outcome. Postponing everything by decades is a different thing; that might well have very negative effects.
Switching from argumentation based on the assumption of extreme urgency to one aiming at long term success might help in lessening the acute controversies.
The real problem, IMO, is widespread lack of imagination, especially in policymakers and those advising them.
Let’s start with the assumption that most technology tends to develop in a fashion analogous to yeast growth, with one or more periods of close to exponential growth with the growth rate dependent on a number of factors, many of them subject to political manipulation. In this case, the optimum manipulation would be to foster rapid growth of technology that, when mature, will be able to solve the problem of atmospheric pCO2.
Technologies that remove CO2 from the atmosphere or ocean surface for some profitable use are an ideal subject for this. Even if the carbon is returned to the atmosphere during the early growth and maturation of the technology, once it’s mature diverting part of its product to some sort of sequestration could probably be done without nearly so much pain as trying to impose restrictions on CO2 emissions immediately.
Similarly, use of policy manipulations that foster the growth of industries without immediately raising the cost/price of energy would be much less painful. Perhaps the current efforts to stimulate economies by pumping money into them could be channeled into R&D with likely long-term rewards involving such technology.
Rather than direct subsidies, how about tax breaks, or allowing tax-paying industries to direct some of their owed taxes into research of their own choosing, with partial patent rights accruing? Also, how about streamlining and rationalizing the intellectual property laws in ways that foster more R&D, and more effort to achieve growth and maturity in desired technologies by private investors?
I have become rather pessimistic concerning great breakthroughs in energy technologies. That’s based on past performance. The oil crises of the 1970s led all industrial countries to initiate substantial special funding for research in energy technologies. The results have so far been disappointing. More than 35 years of special support has not produced much that would have essentially changed the energy picture.
Several technologies have certainly progressed, but in comparison to what has happened in the information technology – without similar special support – the changes are minimal. Great breakthroughs come when science and basic technologies are ripe for that, not when they are needed.
By volume the most important energy technologies are old; newer solutions have not yet reached the maturity needed for full-scale implementation.
The submodels used to describe technology change in integrated assessment models can probably not take properly into account the special nature of changes in energy technology. Therefore they risk being far too optimistic (the opposite is also possible, but not predictable).
Different forms of support are best at different stages of development. Tax breaks are perhaps most suitable when the technology is already mature but requires very heavy investments, which may also be somewhat risky; at earlier phases they are not an effective incentive. At an early phase, direct subsidy for research, development, and demonstration is probably most effective. All the other forms of support may have their place under some conditions, but a wrong form or level of support leads only to a waste of resources.
The only real replacement for baseload electricity at close to the same operating cost is nuclear, which at least in the US has been strangled by regulation and lawsuits. With no, or few, new systems going on line, why would there be a push to develop advanced technology? Computers, on the other hand, have had lots of development because there’s a market for them.
We could, however, change this, and if we must reduce CO2 and don’t want to kill millions of poor people around the world, nuclear is the only technology we can use.
Nuclear could certainly have a bigger role. One of its problems is that too little has been done for quite a long time to develop it further. The extensive public opposition has led to that.
Concerning the safety requirements and regulation, the problem is perhaps not too much regulation, but too poor regulation. Nuclear safety is essential for the future of nuclear power, but there must be better ways of assuring it than those applied presently. Developing reactor types with good inherent safety properties is perhaps the best path, but it’s also possible to improve concepts not too different from the present plants.
Up to now, and also in the near future, nuclear plants require very competent and responsible operating personnel as well as regulators, at a level that restricts the suitability of nuclear energy in much of the world.
One issue is also the availability of the U-235 needed in the present nuclear plants. Really significant expansion would soon require other nuclear fuel cycles (breeders based on uranium and plutonium, or thorium).
I’m not sure I agree that there isn’t too much regulation, but if I were in charge I would go on a tear building light-water plants (clear out the lawsuit logjam, work on the regulations, which could include more), and then provide tax incentives and R&D for thorium burners and fusion.
Fission could displace fossil fuels in a decade or so, thorium in not much longer than that, and maybe we’d have worked out fusion by then.
We should do this regardless of AGW.
Andy, you write ” There is also certainty that atmospheric CO2 is the principal non-condensing greenhouse gas which acts to prop up the terrestrial greenhouse effect, and that water vapor and clouds are the fast feedback effects that simply multiply the CO2 greenhouse warming by a factor of three or four.
The bottom line is that atmospheric CO2 is the principal control knob that governs the global temperature of Earth.”
One wonders why you write this nonsense. There is barely a shred of empirical data to support any of this. It is all based on hypothetical musings and the output of non-validated models.
Who do you think on this blog is going to take you seriously?
The alarmist faithful will. What else they gonna do?
“It is all based on hypothetical musings and the output of non-validated models.”
no its fundamental physics. stop your nonsense old man
statistical infilling is not “fundamental physics”
Steven, you write “no its fundamental physics.”
The two issues are not mutually exclusive. Hypotheses are usually based on fundamental physics, but just because they are based on fundamental physics does not mean that they are correct. We do not know whether a hypothesis is correct until it has been verified by empirical data, which in the case of CAGW is completely absent. Models can be based on fundamental physics, but until they have been validated, they are useless for making predictions about the future.
I am sure there is no fundamental reason why CO2 cannot work as a control knob. But it is more likely to be Bedknobs and Broomsticks.
In physics problems need to be solved correctly.
Come on Mosher, you don’t agree that
“and that water vapor and clouds are the fast feedback effects that simply multiply the CO2 greenhouse warming by a factor of three or four.”
is known with certainty. And if it were, we would face the Kim Corollary that we would be freezing our asses off right now if not for the last decade or so of ACO2.
I had the exact same thought as A Lacis:
i.e. Nevermind the weak link. It’s the strong link (RDC = relevant dominant certainty) that cannot be sensibly ignored.
The set of permissible climate model states IS LIMITED BY LAWS of large numbers & conservation of angular momentum IN A VERY SIMPLE WAY.
From the article above:
“[…] scientists who can look at the physical system as a whole and successfully use the science to identify […]”
There are still a few days left in November.
(Remember the challenge issued in early October??)
Have you yet managed to reproduce, sensibly interpret, and realize the simple implications of Dickey & Keppenne’s (NASA JPL 1997) figure 3a&b?
Or is uncompromisingly deep ignorance of section 8.7 still the guiding principle in climate academia? (I sure hope not as that certainly doesn’t inspire trust.)
End-of-November report card will read?…
A. “Showed no interest in knowing nature — never even bothered to try.”
B. “Totally got it — natural functional numeracy wizard!”
We’ll know soon. (Not betting on B.)
Compare and contrast this:
Selective reasoning is selective, Judith.
How do you know that most of the insults haven’t been deleted before you even saw them?
They could be.
I doubt it.
And even if they were, the fact is that they remain, undeleted – in contrast to my comments which are not as insulting and which are in response to comments like those. Judith applies a double standard. It is stealth activism.
> How do you know […]
Chuck Norris simply knows.
Others can use an RSS reader.
(Hi Bob, MiniMax.)
at least you are in good company
I know your typo ;-)
Philosophical reflections on the reliability of climate models are nice. But policy makers and the general public do not understand technical discussions on statistical and methodological reliabilities. To put it bluntly, scientists should just say the models are unreliable and cannot be used for forecasting and policy decision making.
If I had used the climate models, inputting my financial information for the last couple of decades instead of climate data, and then used those climate models to predict my future financial position.
And then had gone to the bankers to finance some large project, how would the bankers react, knowing the history of those models’ past predictions?
I suspect it would be a short and sharp: You’ve got to be bloody joking if you want money on the basis of those models’ predictions.
Andrew Lacis demonstrates precisely why scientists should keep quiet when they are uncertain. Too often they present mere conjecture as irrefutable fact. Too often they ignore the real fact that nothing out of the ordinary is actually happening, regardless of how much they expect it to.
“The bottom line is that atmospheric CO2 is the principal control knob that governs the global temperature of Earth.” is actually 100% conjecture, not a fact, and is not even logical, never mind scientific, because there is no CO2 mechanism for cooling the Earth. I.e., if CO2 grows to its maximum extent during warming events, then only a sudden and massive carbon sink appearing out of nowhere can cause cooling.
This is why the “pause” is inexplicable to mainstream AGW believers: in their simplistic model you simply cannot have dominant CO2 and cooling events! Hence they have to invent stories about manmade cooling masking manmade warming, or about nature somehow hiding the warming somewhere we can’t see.
Nature however doesn’t follow their suppositions and natural variability still includes cooling events which are clearly well capable of dominating over any putative manmade warming.
The real facts are screaming that the hypothesis is busted. No warming of the troposphere or cooling of the stratosphere since 1995 and no ocean warming since accurate measurements (Argo) records replaced guesswork.
This quote spoils it for me ” Yet there is also an anti-science lobby which uses very scientific sounding words and graphs to bash well-meaning science and state-of-the-art modeling.”
Apparently they do not see the scientific debate over AGW per se, so they miss the point. Also policy is not based on mathematical decision theory.
Any skeptic might write: ‘There is a pro-science lobby which uses scientific words and graphs to bash activist science and to show that the models are not fit for purpose.’
And these guys are supposed to be philosophers of science. It’s all a bit shocking.
Conclusion-I It’s good that WWII scientists utterly ignored the recommendations of Climate Etc denialists.
Conclusion-II The most foresighted of present day climate-scientists are wisely heeding the same WWII principles as Einstein, Fermi, and Szilard.
Conclusion-III Fermi, Szilard, and Einstein were guided by Hansen-style thermodynamical physics, NoT by “weak beer” statistical/phenomenological reasoning. That is why their recommendations proved to be correct.
It is a pleasure to help you (and Climate Etc readers too) to a better appreciation that the principle “Keep Silent About Climate-Change” is utterly lacking in scientific, historical, and moral foundation, JamesG.
Yes you’re right. I should have written that scientists who are utterly certain about blatantly uncertain things should qualify their remarks by stating that it is just their opinion based on as yet unverified assumptions.
When used for policy facts must be separated out from opinions by somebody.
That CO2 is a greenhouse gas is a fact.
That it feasibly may cause warming of up to 1K per doubling is very close to an accepted fact (though based on 1D thinking).
That CO2 is the main climate driver is not a fact.
That an extra 2K+ will result from positive feedbacks is not a fact.
That manmade warming can be teased out of natural warming is not a fact.
That observations so far invalidate the models is a fact.
Alas there are no Einsteins in the climate field.
Plus, as far as I am concerned, anyone who denies that the current pause in warming is a refutation of the cataclysmic scenario is the real denialist. It is gradually dawning on the public and policymakers that these denialists are wasting money, halting growth and handing our children a major and unnecessary energy crisis.
Don’t seek to claim the high moral ground when the ideas you support may be responsible for the avoidable deaths of people in the here and now, based on pessimistic projections of what might happen in 100 years.
I’m actually in favour of renewables research, and am even involved in pursuing it (if nuclear fusion is classed as renewable). But I’m not stupid enough to believe we can just stop using fossil fuels before the alternatives are ready to replace them.
“Yet there is also an anti-science lobby which uses very scientific sounding words and graphs to bash well-meaning science and state-of-the-art modelling.”
What about the science lobby [skeptics] who use scientific words and graphs to educate the ill-meaning science and state-of-the-artifice modelers,
not to mention the anti-science lobby [think Greenpeace] who use very unscientific-sounding words and emotions to bash and maul well-meaning scientific skeptics.
Or ice hockey fans.
Is it possible to have two legitimate sides to an argument without the obligatory free backhander?
one could suggest that the AGW movement is as anti-science in its presentations as the anti-vaccination movement, but it might make this thread tremble a bit.
‘ill meaning science and state of the artifice modelers.’ Ooh, I’m jealous.
A chink in the wall, with this very interesting article on NoTricksZone about the Austrian Weather Service admitting that models are not reflecting reality.
A chink in the official wall.
Pingback: Social cost of carbon: Part II | Climate Etc.
Reflection on reliability of climate models…
“The fact that all of the models have been peer reviewed does not mean that any of them have been deemed to have any skill for predicting future temperatures. In the parlance of the Daubert standard for rules of scientific evidence, the models have not been successfully field tested for predicting climate change, and so far their error rate should preclude their use for predicting future climate change” (Harlow & Spencer, 2011)
In written testimony before the U.S. Environment and Public Works Committee, Spencer noted that the magnitude of global-average atmospheric warming between 1979 and 2012 is only about 50% of that predicted by the climate models.
Gee if this was an undergrad physics lab experiment, the students would flunk with an error that big. But this is IPCC, the modelers get more funding and a Nobel Prize.
I- P- C-C U-N C- O-T-E-R-I-E-S !
Something there is that doesn’t love a wall …
H/t Robert frost.
So in the end, climate scientists are shown to have been absolutely correct in saying the problem is one of communication.
For, avidly following the duplicitous advice given early on by the pre-committed alarmist Stephen Schneider, they have systematically, doggedly and deliberately failed to advertise the weaknesses and clearly establish the limitations of current climate science. Only idiots now actually believe what they are telling us; support for them is entirely politically motivated.
The only question now is: how long will it take to un-egg 20 years of over-egging the climate pudding?
I appreciate your topic ‘reflection on reliability of climate models’, which in my opinion expresses your open-minded attitude to the potential reasons for global warming. I relate this especially to the issues you have been raising about climate model uncertainty. There is reason to believe that the uncertainty of model results is mainly associated with the assumed role of anthropogenic CO2 emissions, for which there is no empirical evidence. Anybody could abandon the hypothesis that climate warming is caused by anthropogenic CO2 emissions once he or she learned that anthropogenic CO2 emissions alone have had an indistinguishable influence on the recent increase of CO2 content in the atmosphere, and that even the recent total increase of CO2 content in the atmosphere has had an indistinguishable influence on the recent global warming. Then there would no longer be any such climate model uncertainty; global warming could not be proved to be caused by anthropogenic CO2 emissions.
Judith Curry; http://judithcurry.com/2013/11/24/warsaw-loss-and-damage-a-climate-for-corruption : ”UNFCCC seems to assume that every weather hazard is associated with anthropogenic global warming.” I interpret this statement of yours, Judith Curry, as saying that there is no evidence for the recent warming being caused by anthropogenic CO2 emissions, but that this is essentially based on assumptions.
The fact is that there is no empirical evidence available according to which the recent global warming could be dominated by total CO2 emissions, still less by anthropogenic CO2 emissions. Thus the paradigm of anthropogenic warming adopted by both the UNFCCC and the IPCC can be questioned. I have understood that even you, Judith Curry, would want to base the investigation of climate change mainly on empirical observations instead of hypothetical climate model calculations. On the basis of my own scrutiny I am convinced that the hypothesis of anthropogenic warming is false, whereas there are several lines of evidence, based on empirical observations, according to which natural factors have controlled even the recent global warming:
1) NATURAL INCREASE OF CO2 CONTENT IN ATMOSPHERE IS DOMINATING
In earlier comments of mine (e.g. http://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992 ) I have argued – consistently with Segalstad and Salby – that nowadays the anthropogenic share of the total atmospheric CO2 content is only about 4% at most, whereas the natural share of the total CO2 content in the atmosphere is correspondingly about 96%. Even in the recent increase of CO2 content in the atmosphere, the share of anthropogenic CO2 emissions has been only about 4%. For instance, the recent increase of about 2 ppm of CO2 a year in the atmosphere has been dominated by natural CO2 emissions; the share of anthropogenic CO2 emissions has been only 0.08 ppm. As a matter of fact, the yearly record-breaking increase of anthropogenic CO2 emissions alone should cause an increase of only 0.005 ppm of CO2 in the atmosphere. The real increase of 0.08 ppm of anthropogenic CO2, and even the total increase of about 2 ppm in the atmosphere, are caused by decreased CO2 absorption from the atmosphere into the warming sea surface water in the areas of oceanic CO2 sinks.
2) GLOBAL WARMING IS NOT DOMINATED BY INCREASE OF CO2 CONTENT IN ATMOSPHERE
During the last 17 years the climate has not warmed, in spite of an exponential increase of CO2 content in the atmosphere. According to Arno Arrak, during the recent 35 years there has not been any global warming trend caused by greenhouse gases. Bob Tisdale states that the recent increase of CO2 content in the atmosphere has not caused any warming in the oceans.
3) INCREASE OF CO2 CONTENT IN ATMOSPHERE FOLLOWS WARMING AND NOT VICE VERSA
For instance, in my earlier comment above I have stated: ‘Being based on measurements in reality during the last three decades, Lance Endersbee claims: “Oceans are the main regulators of carbon dioxide”, http://www.co2web.info/Oceans-and-CO2_EngrsAust_Apr08.pdf . – – – This means that the global mean sea surface temperature mainly controls the CO2 content in the atmosphere: when the mean sea surface temperature is rising, the CO2 content in the atmosphere is increasing.’ I have also explained the mechanism by which the rise of the mean temperature of global sea surface waters has been mainly controlled by warming of sea surface waters in the areas where the sea surface CO2 sinks are. The warming of sea surface water in the areas of CO2 sinks slows the absorption of CO2 from the atmosphere into the sea surface, in consequence of which more CO2 from the total CO2 emissions remains in the atmosphere to increase its CO2 content.
Anthropogenic CO2 emissions do not cause any distinguishable global warming. Earlier I have already stated: ”As to energy policy, there is no role for curtailment of CO2 emissions, because it has been shown not to work as a means of controlling global warming. As to actions, the first priority has to be to protect the availability of competitive energy that is produced cleanly enough.” As a necessary addition to that priority, I regard that we have to learn, well enough, to adapt ourselves to natural events concerning both weather and climate.
Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?