by Judith Curry
Insights from Ed Lorenz, pioneer of chaos theory, on the detection of anthropogenic global warming.
Ed Lorenz was arguably the most influential meteorologist in history, having laid the foundations of chaos theory. From his 2008 obituary:
In discovering “deterministic chaos,” Dr. Lorenz established a principle that “profoundly influenced a wide range of basic sciences and brought about one of the most dramatic changes in mankind’s view of nature since Sir Isaac Newton,” said a committee that awarded him the 1991 Kyoto Prize for basic sciences.
The Bulletin of the American Meteorological Society has just published what is probably Lorenz’s last interview, on the topic of the limits of predictability and the impact on weather modeling.
MIT has a complete online collection of Lorenz’s publications [here].
Of particular relevance to topics under discussion at Climate Etc., I refer to this paper:
E.N. Lorenz (1991) Chaos, spontaneous climatic variations and detection of the greenhouse effect. Greenhouse-Gas-Induced Climatic Change: A Critical Appraisal of Simulations and Observations, M. E. Schlesinger, Ed. Elsevier Science Publishers B. V., Amsterdam, pp. 445-453.
Relevant excerpts (JC bold):
In the minds of many of us who are gathered here, the most important question concerning greenhouse gases is not whether they will produce a recognizable global warming, but when will they do so? Probably we take it for granted that, barring some catastrophe that halts or overwhelms the accumulation of carbon dioxide and other constituents, the warming predicted by theoretical studies will eventually occur. The apparent upward trend of global-average temperature during the most recent century, and the unusually warm and dry weather that has invaded parts of the world during parts of the most recent decade, have led some of us to speculate that the greenhouse warming is already being felt. In this talk I wish to examine the basis for speculating that the greenhouse effect is not the main cause of what we have been experiencing and, particularly, that the suggested warming is due to processes purely internal to the atmosphere and its immediate surroundings.
The term “chaos” currently has a variety of accepted meanings, but here we shall use it to mean deterministically, or nearly deterministically, governed behavior that nevertheless looks rather random. Upon closer inspection, chaotic behavior will generally appear more systematic, but not so much so that it will repeat itself at regular intervals, as do, for example, the oceanic tides.
In view of these considerations, how are we to know when a stronger greenhouse effect is finally making its presence felt? First, we must realize that we are not looking for the onset of the effect. Presumably there is no critical concentration of CO2 or some other gas beyond which the greenhouse effect begins to operate; more likely the absorption of terrestrial re-radiation by CO2 varies continuously, even if not truly linearly, with the concentration. What we are looking for is the time when the effect crosses the threshold of detectability.
It has sometimes been objected that in dealing with this problem we have relied too heavily on theory, but I would maintain that the problem cannot be wholly dissociated from theoretical considerations. Imagine for the moment a scenario in which we have traveled to a new location, with whose weather we are unfamiliar. For the first ten days or so the maximum temperature varies between 5° and 15°C. Suddenly, on two successive days, it exceeds 25°C. Do we on the second warm day, or perhaps on the first, conclude that somebody or something is tampering with the weather? Almost surely we do not; we are quite familiar with this sort of behavior, and we take it for granted that this is what the weather often does.
Consider now a second scenario where a succession of ten or more decades without extreme global-average temperatures is followed by two decades with decidedly higher averages; possibly we shall face such a situation before the 20th century ends. Does this scenario really differ from the previous one? We may feel that it does; for example, we may believe that if the atmosphere is subjected to similar external influences over separate long intervals, say decades, the average conditions should be similar, with the short-period fluctuations tending to cancel. If so, our conclusions have been reached through theory, that is, through what we believe is demanded by the physical laws, even though the theory may be qualitative and not worked out in detail. Certainly no observations have told us that decadal-mean temperatures are nearly constant under constant external influences.
At this point we may, in the second scenario, turn to statistical procedures. We may introduce a null hypothesis, which could say that the mean of the population of decadal mean temperatures to which the last two observations belong is not different from the mean of the population to which the earlier observations belong. We would then seek the probability that a discrepancy as large as the one that we have observed would occur, if the null hypothesis is true. If the probability is rather small, we would be likely to reject the null hypothesis, and conclude that the populations do indeed have different means. If the probability is large, the populations may still have different means, but we will lack a basis for concluding that they do.
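Lorenz’s first null hypothesis is, in modern terms, a two-sample comparison of means. A minimal sketch in Python, using entirely made-up decadal means (the numbers and the Welch statistic are illustrative, not anything Lorenz computed):

```python
import math

# Hypothetical decadal-mean temperatures (degrees C): ten "earlier"
# decades and two decidedly warmer "later" decades, as in the scenario.
earlier = [14.1, 13.9, 14.0, 14.2, 13.8, 14.0, 14.1, 13.9, 14.0, 14.1]
later = [14.5, 14.6]

def welch_t(a, b):
    """Welch's t statistic for the difference of two sample means."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
    vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
    return (mb - ma) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(earlier, later)
print(f"difference of means: {sum(later) / 2 - sum(earlier) / 10:.2f} C, t = {t:.2f}")
```

A large t argues for rejecting the null hypothesis of a common population mean — but note the caveat Lorenz raises shortly: if decadal means are persistent (autocorrelated), the samples are not independent and a test like this overstates the evidence.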
Returning to the second scenario, should we assume that decadal mean temperatures are also highly persistent? Our observations are insufficient to yield an answer, but we may turn to theory. The high persistence revealing itself in Figs. 3-5 suggests the possibility that real atmospheric decadal-mean temperatures are persistent; at least, it indicates that there is no obvious theoretical reason for hypothesizing that they are not persistent, no matter what intuition might tell us. There is a good chance, then, that in a real situation resembling the second scenario, we might not be able to reject the null hypothesis, that is, we might have to say that any change in the climate, including a change brought about by the greenhouse effect, has yet to cross the threshold of detectability.
If our only evidence were observational, we might have to pause at this point, and wait for more years of data to accumulate. However, since we do have theoretical results, and since, in fact, the entire greenhouse effect would have remained unsuspected without some theory, we can put the theory to use. Different models agree reasonably well as to the increase in globally averaged sea-level temperature that would be produced by a prescribed increase in CO2 concentration. We are therefore equally justified in introducing a second null hypothesis, which would say that the difference between the means of two populations, one to which the earlier decadal mean temperatures belong, and one to which the later ones belong, is not different from the numerical value that the consensus of the theoretical studies stipulates. Again, we can ask whether anything as unusual as the difference between the observed sample means would be likely to have occurred, if the new null hypothesis is correct. Again, there is a good chance that we might lack sufficient evidence for rejecting the new hypothesis.
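Lorenz’s second null hypothesis inverts the test: take the model-stipulated warming as the null and ask whether observations can reject *it*. A sketch with invented numbers (the 0.45, 0.12, and 0.55 values are placeholders, not data from any study):

```python
# Placeholder numbers (not from any study): an observed warming between
# two periods, its standard error, and a hypothetical model-consensus value.
observed_diff = 0.45   # C, observed difference of the two period means
se_diff = 0.12         # C, standard error of that difference
predicted_diff = 0.55  # C, warming stipulated by the theoretical consensus

# Second null hypothesis: the true difference equals predicted_diff.
t = (observed_diff - predicted_diff) / se_diff
print(f"t = {t:.2f}  (a small |t| gives no basis for rejecting the theory)")
```

With these numbers neither null hypothesis is rejected, which is exactly the symmetric situation Lorenz describes: insufficient evidence that the effect is present, and equally insufficient evidence that it is absent.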
This somewhat unorthodox procedure would be quite unacceptable if the new null hypothesis had been formulated after the fact, that is, if the observed climatic trend had directly or indirectly affected the statement of the hypothesis. This would be the case, for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.
What we would conclude, then, if the second scenario is realistic, is that, although we may lack sufficient direct evidence that an increased greenhouse effect is influencing our climate, we just as surely lack direct evidence that it is not. If the effect is important, we may have to wait a few years to verify that it is, but, by the same token, if it is not important, we may have to wait a few years to verify that it is not. The implications of such a conclusion for future decision making speak for themselves.
Unfortunately, recognizing a system as chaotic will not tell us all that we might like to know. It will not provide us with a means of predicting the future course of the system. It will tell us that there is a limit to how far ahead we can predict, but it may not tell us what this limit is. Perhaps the best advice that chaos “theory” can give us is not to jump at conclusions; unexpected occurrences may constitute perfectly normal behavior.
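The predictability limit Lorenz describes can be seen directly in his own 1963 system: two trajectories differing by one part in 10^8 soon bear no resemblance to each other, although both remain on the bounded attractor. A self-contained sketch (standard parameters; the initial conditions are arbitrary):

```python
# Sensitive dependence in the Lorenz (1963) system, integrated with a
# simple fourth-order Runge-Kutta step. Standard parameters sigma=10,
# rho=28, beta=8/3.

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + w)
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)  # perturbed by one part in 10^8
dt, steps = 0.01, 3000      # 30 model time units
for _ in range(steps):
    a, b = lorenz_step(a, dt), lorenz_step(b, dt)

gap = max(abs(ai - bi) for ai, bi in zip(a, b))
print(f"separation after 30 time units: {gap:.3f}")
```

The separation grows to the size of the attractor itself, yet both trajectories stay within the same bounded region — chaos limits prediction of the path, not the overall envelope of behavior.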
JC comments: Lorenz’s paper was published in 1991. Recall, 1990 was the year that the IPCC FAR was published, which found “The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability.” The year 1991 was 62% into the warming period from 1976-2000.
If Lorenz were looking at the climate data in 2013, how would he interpret it? Frankly, I don’t think the AGW detection arguments have advanced much since 1991. Paleoclimate hockey sticks do not resolve global or hemispheric climate variability on timescales of several decades. Hence, the IPCC resorts to climate models to compare AGW-forced climate change against simulations without AGW forcing – yet these same models do not adequately simulate natural internal variability at time scales beyond about 15 years. The resulting circular reasoning has the IPCC assuming that climate variability on time scales beyond 15 years is externally forced.
The prospect of the current hiatus lasting until the mid-2030s (as per the stadium wave and related theories of natural variability) is a decisive test for the IPCC’s AGW detection arguments. Detection of AGW is a prerequisite for the IPCC’s attribution arguments. The IPCC’s statements of 95% confidence that most of the warming is anthropogenic, and expectations of substantial warming between now and 2036, have the IPCC skating on very thin ice, in my opinion.
Interesting times Judith. The work of Lorenz certainly has highlighted the need for caution when making conclusions about the future direction of climate, which is the culmination of a data series extending over many millennia.
Peter Davies said:
“The work of Lorenz certainly has highlighted the need for caution when making conclusions about the future direction of climate…”
Nope. This is not what deterministic chaos is telling us at all, and thinking it does misses the whole point. It is all about time frames and over what period of time we can see the anthropogenic signal versus the noise of natural variability, or internally forced quasi-periodic cycles such as the “stadium wave.” The ultimate future direction, at least so long as GHGs continue to increase, is to a warmer world. Lorenz was telling us to use caution in looking at too narrow a time-slice of that path, or you might confuse signal and noise.
R Gates, if you had quoted my full sentence I would agree with your paraphrasing of Lorenz advising caution in looking at too narrow a time-slice of that path, lest you confuse signal and noise. This has been my pet peeve with the AGW debate so far.
I was certainly not alluding to his introduction of non-linear analyses nor with the problem that chaos theory has in relation to the choice of initial conditions impacting greatly on the outcomes of various models.
The overall future direction isn’t in question; even Lorenz knew that. It really comes down to how wiggly the path is getting there, and so the caution comes in trying to draw any conclusion about that direction from too short a slice of the path. Warmists and skeptics are both guilty of this, as confirmation bias will direct their attention to whatever slice of the wiggle supports their point of view.
I agree with you about confirmation bias R Gates, it is indeed a problem with the current debate. I am still sitting on the fence about the overall future direction of climate because the slices of wiggles that we are looking at are not at all conclusive one way or the other.
I would be interested in your opinion as to which regions are likely to be impacted given that you believe that the global warming metric will continue to rise, because it seems to me that some regions may well get colder and dryer rather than warmer and wetter.
I’ll offer a response.
It seems to me there is a lack of persuasive evidence that AGW is a serious threat. I’ve asked warmists repeatedly for evidence to support their beliefs that AGW would be CAGW, but they avoid and dodge the question. So I am more convinced than ever that there is no persuasive evidence that AGW is CAGW.
Therefore, to answer your question about which regions are likely to be impacted I’d suggest this:
1. The poles will get significantly warmer. That’s excellent news.
2. The mid latitudes will get a little warmer, but mostly in winter and at night – so a longer growing season. E.g. Sydney will get Brisbane’s weather. That will save Sydneysiders the cost of uprooting to move to a ‘better climate’, and likewise Melbourneites moving to Sydney. More rain will see a reduction in the areal extent of our deserts (as per AR4 WG1, Chapter 6). So more food for all, and fewer deaths from cold, which outnumber deaths from heat. Pineapples, bananas and passionfruit in Sydney.
3. Tropics will be only slightly warmer, mostly at night.
All good. A better world.
Peter Lang. Thanks for your thoughts on the possible impacts if the globe does indeed warm significantly. You seem to believe that all areas will get warmer, more or less, and that the poles in particular will also follow suit.
The global average temperature metric is obviously an artifact, since there are always cold areas and warm areas with just about everything in between. IMO the likely impacts will be impossible to evaluate because of differences between the oceans (70% of the globe) and land (30% of the globe).
If the poles do indeed get warmer, it is highly likely that the jet streams and other oscillations of the Atlantic and the Pacific oceans will change as well, with indeterminate effects on the land areas bounding them. With greater circulation of cooler winds, the tropical areas could well become cooler.
My comments are from IPCC. I was just deconfusing them and providing perspective :)
Touche, PL; I wasn’t half through that before scoffing to myself that there was no way you should be so certain.
Peter Lang said:
“1. The poles will get significantly warmer. That’s excellent news.
2. The mid latitudes will get a little warmer, but mostly in winter and night – so longer growing season.”
If the global temperature is flat and the Arctic is warming that means another region is cooling. The Arctic usually warms under negative AO/NAO conditions which is accompanied by cooling in the mid latitudes. This temperature differential exists at all scales.
http://wattsupwiththat.com/2013/08/23/the-medieval-warm-period-in-the-arctic/#comment-1398577 (and following comments)
Peter Lang you got me there. I can see just how weaselly the IPCC has become, with words that can be bent to suit any occasion and any future scenario!
@Ulric Lyons said, “If the global temperature is flat and the Arctic is warming that means another region is cooling.”
Yes, but don’t forget how small in area the Arctic actually is on the globe/planet (versus in the GISTEMP maps). It also doesn’t necessarily mean that any specific or equal area region is cooling.
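The point about the Arctic’s small share of the globe is just spherical-cap geometry; the fraction of the surface poleward of the Arctic Circle works out to about 4% (polar-stretching map projections make it look far larger):

```python
import math

# Fraction of a sphere's surface north of a given latitude:
# area of the spherical cap / total area = (1 - sin(lat)) / 2.
def cap_fraction(lat_deg):
    return (1.0 - math.sin(math.radians(lat_deg))) / 2.0

arctic = cap_fraction(66.5)  # poleward of the Arctic Circle
print(f"Arctic fraction of globe: {arctic:.3f}")  # roughly 4%
```

So even strong Arctic warming moves a global area-weighted average only modestly, and a flat global mean does not pin the compensating cooling to any particular region of equal size.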
To amplify what RG said, the chaos in the climate system is essentially noise riding on top of a mean 289K thermal bath.
Internal chaos won’t spontaneously cause this number to change unless some huge volcanic eruption burps up vast volumes of magma or darkens the sky. But then we have bigger problems if that were indeed the case.
What GHGs can do is raise this average thermal bath value from 289K to a higher value. The GHG of CO2 acting as a control knob was completely responsible for raising the temperature of the earth from its original steady-state radiative value of 255K to 288K, and it will continue to raise the temperature as CO2 continues to increase.
The following figure is the so-called SALT model which maps the global temperature anomaly to an increasing ln(CO2) concentration, perturbed by 4 fluctuation terms.
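The SALT model itself is the commenter’s own construction, so the following is only a generic sketch of the core idea of regressing a temperature anomaly on ln(CO2): synthetic data are generated with an assumed sensitivity (2.9 C per doubling, an arbitrary input) plus noise standing in for the four fluctuation terms, and ordinary least squares recovers it:

```python
import math
import random

random.seed(0)

# Synthetic illustration only: fake CO2 ramp and fake anomalies built
# from a known sensitivity, with the SOI/aerosol/LOD/TSI fluctuation
# terms of the commenter's model lumped into Gaussian noise.
sensitivity = 2.9  # C per doubling of CO2 (assumed input, not a result)
co2 = [290 + 1.1 * t for t in range(72)]           # fake 1940-2011 ppm ramp
x = [math.log(c / co2[0]) for c in co2]
y = [sensitivity / math.log(2) * xi + random.gauss(0, 0.1) for xi in x]

# Closed-form ordinary least squares for the slope in ln(CO2).
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
print(f"recovered sensitivity: {slope * math.log(2):.2f} C per doubling")
```

The recovered value lands near the assumed 2.9 C, which only demonstrates that the fitting machinery works; whether real temperatures are well described this way is precisely what is in dispute in this thread.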
All a _scientist_ would need to do is show that there is any evidence of ‘GHG’ (sic) having any effect whatsoever on the steady state. All the models use the initial CO2 ‘forcing’ to produce more ocean evaporation and thus more atmospheric water vapor, which in turn creates a lot more forcing and then runs recursively into a tipping point. However, there is no observational evidence of any increase in atmospheric humidity – and there is no tropical tropospheric hotspot (despite a large effort to find it) – meaning that there is no _observational evidence_ for the water vapor forcing that is ESSENTIAL for the AGW hypothesis. The AGW hypothesis is therefore falsified.
To paraphrase Feynman, “It doesn’t matter how beautiful and sophisticated your model is, it doesn’t matter how smart you are. If it doesn’t agree with observational evidence, it’s wrong.”
The way the transient effect is playing out is that the drier areas, like the Arctic and land areas are warming fastest, and I think that is undeniable. This has led to a less obvious initial rise in water vapor, but only because the warming of the tropical oceans has been relatively slow so far. It is little comfort that the land has warmed twice as fast as the global average over the last 30 years. It is misleading to think that the warming will be uniformly fast everywhere. It is only at equilibrium that the oceans finally catch up.
A warmer ocean would be nice exiting the Holocene. Would be. Especially would be if it could forestall the exit, but that’s probably one hope too far.
kim, even a warmer ocean would be cool relative to what the land would be like. Nice? Not so much.
Climate is the continuation of the oceans by other means. It’s as simple as A d B Ceedoceous.
Ian W said ” ‘GHG’ (sic) ”
and then mumbled “no observational evidence of any increase of atmospheric humidity ” … “Feynman”
Please click on Pielke Pere’s reply, too, upper right @ his site. Thanks, Web, for the link.
Heh, more four year old ‘resent’ stuff. Classy penguin reading.
Kim remarkably said:
“kim | October 14, 2013 at 6:57 am |
Climate is the continuation of the oceans by other means. It’s as simple as A d B Ceedoceous.”
Nice of you to finally get that. When the oceans show net loss of energy on a decadal basis, you could finally say with accuracy, “the globe is cooling.”
Since that huge 289K thermal bath is pretty important.
That is a plot of different parts of the surface of the thermal bath. The well-mixed part, 30S-60S, seems to filter out most of the noise naturally. It reveals what appears to be a long-term secular trend, imagine that?
The westerly wind belt at the higher latitudes (50-65S), through salt-spray cloud condensation nuclei (enhanced by around 20%), shows in models a banded latitudinal area that has a negative cloud radiative forcing of -0.7 W/m^2, which together with the negative radiative forcing of O3 is greater than the so-called AGW forcing. E.g. Korhonen et al. (2010); Ozone Assessment Report, Chapter 4.
Do not confuse Lorenz’s description “behavior that nevertheless looks rather random” with thermal noise. (I admit you appear to be in company with Richard Muller on this.) Until you clear this up in your own mind, nothing you say about natural variability (or detection or attribution, imho) can be of value.
Nothing that you say can be of any value until you contribute something of substance to the discussion.
The Gibbs free energy for a closed system is
dG = -S dT + V dP
If you break the SALT model down, one can see the differential components of Gibbs. The pressure differential in SOI, the extra free energy terms in TSI, volcanic aerosols and CO2, and the long-term LOD as a proxy for kinetic energy changes.
This is all energy balance stuff, which is remarkably well modeled by minimizing the error in the SALT terms with respect to global average temperature, dT.
Remember that, from outer space, the earth is just another object that we can apply macro thermodynamic principles to.
Webster, who is dealing with a closed system? Several people have tried to explain to you that you are dealing with several open systems where the dissipation from one influences the response of the next. If you want to deal with just a closed system, then stick with the TOA while assuming albedo and TSI are constant.
Boo hoo, little baby Cappy whines when he can’t keep up with the math of variational analysis.
Webster, “Boo hoo, little baby Cappy whines when he can’t keep up with the math of variational analysis.”
No I chuckle when I see people playing with the noise. Then people matching the noise that is changing because it has a “cooling” bias :) Especially people that damn one scientist for getting lost in the red noise weeds while running amok in their own patch of weeds :)
That is your signal
Now it is changing :)
You ready to take another look at that 20 year signal in DYE3? :)
previous question was addressed to @WHUT, in case that was unclear
Linear trend lines for the annual average of ~116 million global surface station records from 1940-2011
Rel Humidity: y = -0.0122x + 70.207, R^2 = 0.0394
Sea Level Pressure: y = -0.0084x + 1015.2, R^2 = 0.1123
Mean Temp: y = 0.0154x + 51.514, R^2 = 0.0098
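For readers wondering how trend lines of the form y = mx + b with an R^2 are produced: ordinary least squares plus the coefficient of determination. A self-contained sketch on toy data (the numbers below are illustrative, not the station data above):

```python
# Ordinary least squares fit y = m*x + b, plus R^2 (the coefficient
# of determination: 1 - residual variance / total variance).
def trend(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = my - m * mx
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return m, b, 1 - ss_res / ss_tot

# Toy data: a weak upward drift buried in alternating noise,
# qualitatively like the low-R^2 temperature row above.
xs = list(range(10))
ys = [0.1 * x + ((-1) ** x) * 0.5 for x in xs]
m, b, r2 = trend(xs, ys)
print(f"y = {m:.4f}x + {b:.4f}, R^2 = {r2:.4f}")
```

A small R^2, as in the rows above, means the straight line explains only a few percent of the year-to-year variance; the trend may still be real, but most of the wiggle is something else.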
Jim D | October 14, 2013 at 4:40 am |
“The way the transient effect is playing out is that the drier areas, like the Arctic and land areas are warming fastest, and I think that is undeniable.”
So, they also cool the fastest at night, although that’s not why Arctic temps have gone up as much as they have. Many of the stations in the Arctic are near water, not areas of low humidity.
When looking at only these stations for 1940-2011:
Rel Humidity linear trend: y = 0.026x + 77.694, R^2 = 0.0867
though there are far fewer station samples in 1940 (251) than in 1950 (10,473); 1950-2011 has this trend: y = -0.0216x + 79.928, R^2 = 0.164
Mi Cro, the Arctic area is dry in terms of water vapor amount because it is cold. Water vapor above water surfaces doubles for each 10 degrees C rise in temperature. The water vapor over the Arctic is not large and CO2 has a big effect on surface temperatures there.
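Jim D’s “doubles for each 10 degrees C” is the Clausius–Clapeyron scaling of saturation vapor pressure; it can be checked with the standard Magnus approximation (an empirical fit; the rule holds almost exactly near 0 C and weakens somewhat at warmer temperatures):

```python
import math

# Magnus approximation for saturation vapor pressure over water (hPa),
# with temperature in degrees C -- a standard empirical fit.
def e_sat(t_c):
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

# Check the "doubles per 10 C" rule of thumb at a few temperatures.
for t in (0, 10, 20):
    print(f"{t:>2} -> {t + 10} C: factor {e_sat(t + 10) / e_sat(t):.2f}")
```

Each 10 C step multiplies the saturation vapor pressure by a factor close to 2 at cold temperatures and somewhat less at warm ones, which is why cold Arctic air holds so little water vapor in absolute terms even at high relative humidity.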
You are correct that at those low temps (average 1940-2011 = 17.53F) the absolute water vapor with a relative humidity of 78.68% is rather low, and CO2 might dominate the GH effect, but temps drop like a rock once the Sun goes down. I see multi-degree/hour rates in NE Ohio (41 Lat) on clear nights. The same thing happens in deserts, where you can see a 30-40F drop in temps at night.
I hope you realize that a constant relative humidity indicates an increasing specific humidity as the air temperature warms. So any relative humidity at or above the current value means the amount of water vapor is rising as predicted, and that added water vapor is itself a greenhouse gas, acting in a positive feedback fashion.
Webby, of course I do, but how much does measured temps change?
y=0.0154x +51.514 R^2=0.0098 in case you forgot.
“The GHG of CO2 acting as a control knob was completely responsible for raising the temperature of the earth from its original steady-state radiative value of 255K to 288K, and it will continue to raise the temperature as CO2 continues to increase.”
Not necessarily so. You are assuming that as CO2 rises nothing else could possibly be going on, like positive or negative feedbacks (reacting to the CO2 or something else), or other natural variables fluctuating.
This is the same as the calorie-in-minus-calories-out-equals-added-weight oversimplification in nutrition, which assumes (or ignores) that the absorption functioning of the intestines is not a factor.
The null hypothesis of CO2 seems to be that CO2 going up is always directly proportional to the global temperature going up. Yet that is exactly what the “CO2 divergence problem” is all about – the hypothesis is being well strained by the “hiatus” since 1998 (or 1997 or 1995, depending on whom one asks about the starting point of the hiatus). CO2 has not abated, per Mauna Loa, but global temps have, leaving people wondering: if additional heat is entering (or being retained, which is the same thing), where is it going? That the deep ocean is sequestering it is still just a conjecture with inadequate evidence.
Not many millennia: from 900 to 3000 years. More info below, and in my “Refuting IPCC’s claims …”:
By analogy, if one flea is added to an elephant each day, there is absolutely no doubt the elephant’s weight will increase.
With all of the other daily activity of the elephant, how long will it be before you can unambiguously detect the anthropologically induced weight gain in the elephant?
By analogy, if one flea is added to an elephant each day, . . . how long will it be before you can unambiguously detect the anthropologically induced weight gain in the elephant?
Suggested final typographical revision:
By analogy, if one flea is added to an elephant each day, . . . how long will it be before you can unambiguously detect the arthropodically induced weight gain in the elephant?
Sorry but I fail to see the connection between fleas and anthropologically induced weight gain. Correlation and causality are moot.
Hence, …………… how long will it be before you can unambiguously detect the entomologically induced weight gain in the elephant? ;)
Ectoparasites – symptoms include restlessness, irritability and weight loss.
The analogy has been scratched.
One itch, on one damned elephant, stampeded the herd, scratching badly the members, some mortally.
Fleas remove weight – they do not add weight.
Currently we have added more than 40% to the weight of CO2 compared to what was there naturally, just for some perspective.
Jim D, we only have continuous background measurements since ~1960 when it was ~315 ppm. Now it’s ~400 ppm, so it’s 85/315 = 27% increase. A significant part of that may have been natural (most IMHO).
Fossil fuel budgets say we have actually already added almost as much CO2 as was in the atmosphere, from ice core records, prior to the start of the burning. About half of this is now in the ocean, that consistently also now has a lower pH due to more dissolved CO2.
You seem to have not grasped the analogy. The elephant is the entire climate system. Maybe the weight of the entire atmosphere would be more apropos?
Of course the elephant will do whatever it is that elephants do to control its flea population — dust baths, mud baths, water baths, rub against tree trunks etc. There is no reason to believe that the elephant’s equilibrium flea population will increase with the addition of exogenous fleas.
Similarly, there may be (likely are?) feedbacks that keep the earth’s “temperature” within a narrow range so that no CO2 induced heating is detectable.
Now does the big fella stand in the sunshine or the shade?
Again, this is an oversimplification. It says nothing about what the fleas do after you add them. Do some fall off? Do some see a fatter elephant and choose to jump on that one instead? Do some die and then fall off? Do some mate and make even more fleas? Do the fleas affect the well being of the elephant to where it gets diarrhea? Does the elephant have friends who groom it and get rid of some of the fleas? Does the elephant bathe itself enough that some fleas leave so as to not be drowned? Does the bath cake up any mud on the elephant?
It’s not a simple A+1 = B equation.
The IPCC is probably correct in its predictions of continual degradation of climate due to the burning of fossil fuels, but with the wrong emphasis. The effects of heat emissions from fossil fuel combustion (and nuclear power) will continue to proceed, resulting in glacial melting and rising oceans. This cannot be mitigated by capturing and sequestering CO2, since the cooling potential of CO2 through photosynthesis may exceed the heat potential caused by infrared absorption. Through photosynthesis, CO2 absorbs 5000 BTUs of solar energy per pound of CO2 that otherwise would become heat. Scientists should consider that both heat emissions and CO2 may contribute to global warming and should make an unbiased effort to determine the magnitude of each.
The 21st Century’s Three-Step Program
for Resolving Edward Lorenz’s Uncertainty
• Step 1 Switch from local temperature measures to global energy-balance measures as “the best available climate science”, then
• Step 2 Observe secular increases in sea-level and ice-mass loss, free of decadal fluctuations, and consistent with persistent energy imbalance.
• Step 3 Reliably conclude that CO2-driven climate change is real, accelerating, and serious.
Conclusion Global-scale altimetry, gravimetry, and ocean thermometry (Jason, GRACE, and Argo) — instruments unknown to Edward Lorenz’ generation — have irrevocably altered the 21st century conception of “best available climate-change science.”
It’s not complicated, Climate Etc readers! Climate-change science has fundamentally altered since Edward Lorenz’ era … altered for the *better*.
Edward Lorenz would approve of this transformational strengthening of the observational *and* theoretical foundations of climate-science!
By switching from regional meteorology to considerations of global metrics, climate science has entered into the field of cosmology, and its work has become of academic interest only, IMO.
The politicisation of the issue of observed global warming has arisen due to the normative basis of modern social science, of which climate science has become a glittering example.
All temperatures, like politics, are local.
Indeed he would approve of this global energy balance perspective as he fully recognized the far more variable and lower thermal inertia properties of the atmosphere, such as he expounds on in this paper:
You list a three step enhancement to the Lorenz essay and add:
I doubt it, Fanny. Lorenz got it right without your suggested “improvement”.
And it is highly doubtful in my mind that Climate-change science has fundamentally altered for the *better* since Edward Lorenz’ era – it’s just gotten more political and corrupted by IPCC’s forced consensus process.
Lorenz seemed to understand that it was unwise to jump to conclusions and that one should have patience and wait until enough data was accumulated. I also did not notice him saying that if the data disagreed with the model, one should look closely at the data to see where it should be adjusted. When we introduce new measuring devices such as Argo, PIOMAS, etc., it would really be wise to collect data with them for 15-20 years before jumping to conclusions (or changing the data).
There once was a time when suggesting that ‘global temperature’ was a bad metric would have landed you in the skeptic bin.
but global temperature has such great marketing appeal… well, it used to.
It’s still a pile in many locations around the planet, a bigger pile prior to the early 70’s, and waist deep prior to about 1950, and I suspect space-based energy balance measurements fail to meet even this standard of accuracy.
The conclusion that Lorenz drew, was that given that such small variations can create such massive variation in output, it was impossible to “model” a weather system. (See, “10 Billion Butterfly Sneezes—More chaos than you can shake a stick at,” by Andi Cockroft)
A day without Lorenz is like a day without Sunshine.
“For the first ten days or so the maximum temperature varies between 5° and 15°C. Suddenly, on two successive days, it exceeds 25°C…” (a)
“Consider now a second scenario where a succession of ten or more decades without extreme global-average temperatures is followed by two decades with decidedly higher averages…” (b) – Lorenz.
It’s my understanding that in some cases, Scale doesn’t matter.
(a) is in days
(b) is in decades
Otherwise it seems to be the same thing.
In the case of (a), we conclude nothing has really happened, usually.
While I agree with Dr. Curry on the use of models for the most part, Lorenz himself was using a model when he made his great discovery.
This is an interesting quote for me:
“We are therefore equally justified in introducing a second null hypothesis, which would say that the difference between the means of two populations, one to which the earlier decadal mean temperatures belong, and one to which the later ones belong, is not different from the numerical value that the consensus of the theoretical studies stipulates. Again, we can ask whether anything as unusual as the difference between the observed sample means would be likely to have occurred, if the new null hypothesis is correct. Again, there is a good chance that we might lack sufficient evidence for rejecting the new hypothesis.”
Now that we have another one changing the null, perhaps we could investigate whether the null is falsified. Are we out of the range of all the paleo reconstructions yet or not? How about the instrumental record before AGW is said to have had a discernible influence?
RSS and UAH do not falsify at this point, but GISS may, depending on how you do the math, but this seems a much tougher test.
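Lorenz's two null hypotheses in the excerpt quoted above can be illustrated with a toy calculation: test the observed difference in decadal means once against zero, and once against the theoretically stipulated warming. A minimal sketch in Python; the decadal means and the 0.30 C stipulated shift are made-up numbers for illustration, not real data:

```python
import math
import statistics as st

# Made-up decadal-mean temperature anomalies (deg C), for illustration only.
earlier = [0.02, -0.05, 0.04, -0.01, 0.00]   # earlier decades
later = [0.25, 0.31, 0.28, 0.35, 0.30]       # later decades
stipulated = 0.30  # warming stipulated by theory (hypothetical value)

def shifted_t(a, b, shift):
    """Welch-style t statistic for H0: mean(b) - mean(a) == shift."""
    diff = st.mean(b) - st.mean(a) - shift
    se = math.sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    return diff / se

t_no_warming = shifted_t(earlier, later, 0.0)           # first null: no difference
t_as_predicted = shifted_t(earlier, later, stipulated)  # second null: warming as stipulated
print(round(t_no_warming, 2), round(t_as_predicted, 2))
```

With these illustrative numbers the "no warming" null is soundly rejected while the "warming as stipulated" null is not, which is exactly the asymmetry Lorenz describes.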
” How about the instrumental record before AGW is said to have a discernible influence.” The early part of the instrumental record has a margin of error of about 0.25C while the newer part is about 0.125C, but is a bit suspect given the satellite data, especially SST, tending to show less warming/more cooling. With roughly +/-0.35C of “slop” in the instrumental record, nearly 2/3rds of the warming is questionable without alternate theories of why. Add to that normal swings of +/-0.25C for ENSO and no, you have to be very creative statistically to “discern” a global CO2 signal.
With this for sampling
I find this ” The early part of the instrumental has a margin of error of about 0.25C while the newer part is about 0.125C” hugely optimistic.
Bob, you might get a kick out of this.
When I noticed that ERSSTv3b no longer included satellite data because of a “cooling bias” I just had to check it out :)
Doubled rainbows arc over doubled waves in the stadium below.
Micro, “I find this ” The early part of the instrumental has a margin of error of about 0.25C while the newer part is about 0.125C” hugely optimistic.”
I agree, but comparison of the different products is about that from 1880 to present and there is good agreement with other data sets. Absolute temperature and seasonal is way worse, but averaging anomalies cures a lot of ills.
capt, they’re bogus, well unless taking temp measurements in 1-2 5 dozen locations globally makes for a well sampled planet. And if you don’t see an increase in error between 1971 or 1972 and 1973 or 1974 it’s suspect as well.
Micro, They aren’t bogus, they are just measuring different things. The land data is dominated by northern hemisphere changes and has lots of variance and seasonal differences. You can slice and dice the data a dozen ways and “Globally” end up with about +/- 0.2 C, but it gets worse the further you get from your baseline. That is mainly because 70% of the data is oceans that don’t change very fast.
Temperature isn’t a great metric for energy on a “global” scale if you focus on the small land percentage, though. That temperature ranges from about 35C to -80C. To warm a region from a -25C average to a -20C average doesn’t take much, but a huge FIVE C DEGREE heat wave sure sounds impressive :)
The NCDC data set has 22 stations for the entire world in 1930; then by decade: 1940: 324, 1950: 1679, 1960: 3770, 1970: 2755, 1980: 8511, 1990: 9712, 2000: 8279, 2010: 11448. They can’t all produce a global average with the same margin of temperature error.
Sergey Kotov used the mathematics of chaos to analyze a 4000 year Greenland ice core temperature record and found a pattern in the data. Extrapolating into the future Kotov’s approach predicts cooling to 2030, then warming, and then 300 years of cooling beginning in 2100.
Curious, given that chaos is intrinsically unpredictable.
Not if you are Russian.
“Presumably there is no critical concentration of CO2 or some other gas beyond which the greenhouse effect begins to operate; more likely the absorption of terrestrial re-radiation by CO2 varies continuously, even if not truly linearly, with the concentration. ”
Odd that he seems unaware of Arrhenius’ law.
Odd that you seem to be unaware that there is no such thing as Arrhenius’ law in the context you use.
Are you perhaps referring to his equation relating to the speed of chemical reactions relative to temperature?
If so, why would Lorenz need to acknowledge that he knew, or did not know, of its existence? I would be grateful if you could provide me with additional information.
Live well and prosper.
Google Arrhenius greenhouse. Kinetics wasn’t the only thing he did.
how can one ”detect” something that doesn’t exist; as ”anthropogenic GLOBAL warming”?…
Good. So now we can get rational with policies. No more Kyoto Protocols or carbon taxes until AGW has been detected. In the meantime we can proceed with adaptation and economically rational ‘no regrets’ mitigation policies.
Back around the same time as Lorenz’ essay, IPCC had just completed its first assessment report.
In this report (Ch.8, p.253) IPCC conceded that the enhanced greenhouse warming signal had not yet unequivocally been detected, and proposed that this would be the case when the global average temperature rose by 0.5C above the 1990 values.
This was projected to occur between 2000 and 2047.
It’s now 2013, and we have not reached that point yet.
The HadCRUT4 anomaly was around 0.25C in 1990, so would have to rise to 0.75C before we recognized an AGW signal.
We are now at around 0.45C, so that means another 0.3C warming.
Assuming warming resumes at the average rate seen since 1990 (0.11°C per decade), this should occur in 27 years, or by 2041 – still within the range of “before 2047” as predicted by IPCC back in 1990.
So let’s see if by 2041 there is an enhanced greenhouse signal or not.
If the current slight cooling trend continues for another two decades before warming resumes, it will take until well beyond mid-century before we can recognize an unequivocal enhanced greenhouse signal.
Until this occurs, Peter, I’d agree with you that we should hold off trying to mitigate against added CO2 emissions.
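The extrapolation in the comment above is simple arithmetic; a quick check, using only the figures quoted in the comment itself:

```python
# Figures from the comment above (HadCRUT4 anomalies in deg C).
anomaly_1990 = 0.25              # anomaly around 1990
anomaly_now = 0.45               # anomaly around 2013
threshold = anomaly_1990 + 0.5   # IPCC FAR detection level: +0.5 C over 1990

remaining = threshold - anomaly_now   # warming still needed
rate_per_decade = 0.11                # average warming rate since 1990 (C/decade)
years_needed = remaining / rate_per_decade * 10
print(round(remaining, 2), round(years_needed))  # → 0.3 27
```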
Interesting, thanks. And we have no shortage of transport fuels to worry about it seems. I’ve just read this: http://www.mauldineconomics.com/frontlinethoughts/sometimes-they-ring-a-bell/
[you’ll need to subscribe to read it]
manacker, you took the 1990 peak at 0.25 C. The long-term value then was less than 0.15 C. The trend line from 1970 till now would give us that 0.5 C rise by 2020.
No, Jim. You are wrong (once again).
IPCC FAR used the temperature rise by the year 1990 as the basis for the detection of an enhanced greenhouse signal in the observations, with an added 0.5C as the “magic number” that would have to be reached before such a signal was unequivocally detected.
HadCRUT4 in 1990 was at 0.29C; in 1991 it was 0.25C
The average from 1988 to 1991 was 0.22C
So, using the 1988-1991 average, it would take 25 years (rather than 27) to reach an unequivocal enhanced greenhouse signal, i.e. by 2039, rather than 2041.
Not much difference, Jim
Take a look at this (again).
You could also choose 1994 that was 0.2 C cooler than 1990, which was a local warm peak 0.1 C above the trend line. Extrapolate this to 2020 and you can see how easy it is to have that 0.5 C rise by then. 1980-2010 had a similar rise, as did 1970-2000, so 1990-2020 is just the same again.
You cite the “trend line from 1970” as the basis for future warming projections, apparently preferring this to the one I used (the past two decades).
This was the observed warming trend during the late 20thC warming cycle.
That cycle lasted around 30 years and ended around 2000.
Since 2001 we have an observed slight cooling trend.
Get used to it. It is silly to use this trend for future projections.
If we take the long-term trend without the multi-decadal fluctuations that are arguably caused by natural variability, this is closer to 0.05C per decade and it would take even longer to detect the enhanced greenhouse signal in the observations.
“Before 2047 at the latest”, as predicted by IPCC back in 1990, is probably as good a guess as anything (unless the current cooling phase continues for another decade or two).
– It is not “ME”, it is “IPCC” who chose the 1990 date as the baseline.
– This was done long before 1994 (and also before Mount Pinatubo in 1991, which caused temporary cooling in subsequent years, including 1994).
Finagling the numbers won’t get you anywhere, Jim.
The “enhanced greenhouse warming effect” has not yet been unequivocally identified according to the guidelines set by IPCC in its first assessment report, and its estimate then that this should happen before 2047 appears to still be valid, barring a continuation of the current slight cooling trend, in which case this could slip into the second half of this century.
manacker, I use this graph fairly often now to illustrate that the pause still fits in with previous variability. We have excursions to -0.1 C that don’t last long and rebound back towards +0.1. These rapid changes of 0.2 C occur in very few years, and we are due for one. The climate does not warm steadily when looking at 12-month running means, and is characterized by these 0.2 C oscillations more than anything.
manacker, with the benefit of hindsight we can see that 1990 was not a good year to choose, just like 2013 will not be for the opposite reason.
Agree that your curve shows very well how the current pause fits in with natural variability. But we see that the trend is beginning to move outside the +/-0.1C amplitude range. Another year with no warming (or even slight cooling) and the curve would be outside the envelope.
I’d also prefer to use a curve that covers the entire period, rather than one that starts in 1970, around the beginning of the latest warming cycle, because that introduces a warming bias to the underlying trend.
This curve shows a rough sine curve with an amplitude of around +/-0.25C on a tilted axis, with a long-term warming trend of around 0.05C per decade.
The amplitude can be partly explained by things such as cyclical natural variability (“stadium waves”?) and random forcings, such as volcanoes, etc.
Syun-Ichi Akasofu suggests that the long-term trend is attributable to the natural recovery from the Little Ice Age (without identifying specific mechanisms), but it could be (at least partly) explained by the net forcing from AGW.
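The picture described above, a rough sine of amplitude +/-0.25C riding on a 0.05C/decade tilt, is easy to write down as a toy model. The 60-year period and zero phase at 1880 are assumptions for illustration only:

```python
import math

def toy_anomaly(year, base_year=1880):
    """Toy anomaly: 0.05 C/decade tilt plus a +/-0.25 C multidecadal sine.

    The trend and amplitude come from the comment above; the 60-year
    period and zero phase at 1880 are assumptions for illustration.
    """
    t = year - base_year
    trend = 0.005 * t                               # 0.05 C per decade
    cycle = 0.25 * math.sin(2 * math.pi * t / 60.0)
    return trend + cycle

# Over a whole 60-year cycle the sine cancels, leaving only the tilt:
print(round(toy_anomaly(1940) - toy_anomaly(1880), 2))  # → 0.3
```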
Bringing in the earlier period also brings in solar effects and rising aerosols prior to the main CO2 rise, whose rate has increased by a factor of four since 1950. These other factors are not periodic or continuing, so they don’t help us much in visualizing the future trend, so I think going back to 1970 is sufficient for extrapolation in the near future. This rate should continue to accelerate gradually.
You believe one should not use the entire past record for making extrapolations for the future, but rather start at the beginning of the latest multi-decadal warming spurt in 1970.
The entire record tells us much more, because it incorporates several of those multi-decadal periods of rapid warming or light cooling, rather than just the latest one.
So we agree to disagree.
Words of wisdom indeed. And important principles that many in the climate science community seem to have forgotten.
I have said many times that the idea that every little wiggle in the temperature record has to be “explained” as being due to some “forcing” is as ridiculous as trying to “explain” why it rained today but not yesterday. Complicated non-linear systems undergo natural irregular oscillations even when the applied forcing is constant. In the case of the climate, we simply don’t know what the range of these natural fluctuations is, and climate models cannot tell us the answer since they are so simplified, under-resolved and over-damped.
Here’s an interesting recent article that touches on these issues in the context of climate models:
Well said. It has to be remembered that with the ‘global average’ what we are seeing in the resultant charts is an averaged wiggle which probably doesn’t exist in any individual data set.
Tony, I was hoping you’d turn up. Lorenz’s comments about decadal variations surely are supported by the work you have been doing?
Thanks for your comment. I wrote two articles on decadal periods of noticeable climate change.
This was a follow up, figure 5 is especially interesting
We are affected by even short periods of change, be they annual or decadal. We seem to have got hung up on 30-year and century-long periods of climate change and read all sorts of meanings into them. They in turn need to be put into context with the much broader picture, which does tell us something: that our overall climate is hugely variable and does not conform to the notion, now the norm, of a static climate until 1880 or so.
BTW I enjoyed your comment about there being three responses to a problem of which one is ‘do nothing.’
In my experience politicians seem incapable of this very sensible course of action as they always have to be seen to be ‘doing something.’
Yup, very often we would recommend doing nothing, but they would come back and say they had to be seen to be doing something. So, we tried to figure out the cheapest and least binding way to do just that! – J
On their shields, or without them.
“Doing nothing.” Johanna, for years (some time ago) Queensland Treasury’s Microeconomics Branch served the public well by getting ridiculous ideas from many departments knocked back. But we were criticised because we put forward fewer proposals for action than the departments whose nonsense we kept in check.
Many thanks for cross posting this to WUWT.
In recognition I will ask Big Oil to increase your already very sizeable annual stipend.
Woo-hoo! I will be cracking a bottle of The Widow in anticipation of this unexpected infusion of funds. :)
Faustino – they should have been careful what they wished for. I expect that proposals for micro-economic reform in their own areas would have been less than welcome.
Thanks for posting the link to a very interesting essay by atmospheric physicist, John Reid.
This quote stuck in my mind:
Well, they went to war with the weapons they had, not with ones that wouldn’t break.
manacker, re: testing the models. For example, they could run the model without increasing CO2 and see if that also produces the warming. Wait, I think they did that many times and it didn’t. What about running the model without CO2 or with half the CO2 to see if they can get anything like the current climate? What other test would you suggest for these models to see if the AGW part is working?
You “test” a model in real life, not by more model simulations, but by comparison with actual empirical data, based on real-time physical observations or reproducible experimentation (Feynman).
GCMs have not yet “passed” this test.
And “scientific hypotheses” should be tested by conscientiously trying to disprove them.
For example, the current observed slight cooling trend (“pause” or “travesty” or whatever you want to call it), despite unabated GHG emissions and concentrations reaching record levels, should be cause for trying to disprove the previously held CAGW hypothesis (as outlined by IPCC in AR4 and being repeated in AR5).
Instead, it was countered with all sorts of rationalizations why CAGW was still valid despite the observations to the contrary (Chinese aerosols, heat “disappearing” into the deep ocean, natural variability, etc.).
The skeptics are very reluctant to accept natural variation as a cause for the pause, not the AGW side. The pause is a negative natural variation on top of a rising trend. The trend is always up or sideways, not downwards, you may have noticed. This is consistent with a rise and natural non-trending variability of 0.2 C amplitude superposed.
The AGW side acknowledges natural variability on a time scale up to 10-15 years. They explicitly disavow longer term natural internal variability as being significant, and do not consider it in their attribution of late 20th century warming. A line has been drawn in the sand, whereby the AGW side expects warming to resume momentarily, whereas skeptics expect the pause or even cooling for the next two decades.
Our hostess says “a line has been drawn in the sand”. But what she does not say is what should be done about it. Do we wait x years to see what will happen to global temperatures over the upcoming decades, or do we engage in a SCIENTIFIC discussion with the Royal Society, and all the other learned institutions who support CAGW, to determine which science is most likely to be correct? And who is going to force such a discussion to take place, if that is the proper course of action?
JimD, I think it is because Team Denier lack the skills to do any kind of detailed analysis. It is clear that we can model natural variations as a sole contributor to the pause. Consider this: the following link is a multiple regression model fit applied to temperature data back to 1880, using indices such as SOI, aerosols from volcanic activity, LOD, and TSI, which I refer to as SALT, to more than adequately describe the pause:
Skeptics will put on blinders rather than address this — they won’t because
(1) they can’t admit to it
(2) they can’t do it
Using a Mosh-style analysis, (1) is understandable, but (2) is embarrassing to their ego.
My own prediction for reasons I have outlined elsewhere is a 0.2 C rise is due within very few years as we are at one of the local minima in running annual temperatures that are typically followed by sharp rises of up to 0.2 C. Then the pause talk will be gone, and we can return to the real climate change issues again.
There is an interesting recent paper on something called “stadium waves” that models long-term variation using characteristics such as Length of Day (LOD) data to correlate temperature changes against. As it turns out, the LOD contributes very little to a sustained trend since 1880, and so a multiple regression fit that incorporates LOD still points to CO2 as almost a 100% contributor to the overall warming trend.
Seeing is believing.
Yup, I keep looking at that flat global surface temperature trend for the past 15+ years
“The AGW side acknowledges natural variability on a time scale up to 10-15 years. They explicitly disavow longer term natural internal variability as being significant, and do not consider it in their attribution of late 20th century warming. A line has been drawn in the sand, whereby the AGW side expects warming to resume momentarily, whereas skeptics expect the pause or even cooling for the next two decades.”
I am actually amazed at your oversimplification of the “AGW side” and your monolithic approach to positions on that side. There are many who fully acknowledge the presence of multi-decadal natural variability. Just as there are many who fully realize that the most accurate approach is not to dwell too closely on merely tropospheric temperatures or energy, especially as measured only by sensible heat, as that doesn’t reveal the full story of energy flowing in and being stored or removed from the Earth system. Yes, some on the AGW side expect warming of the atmosphere to resume “momentarily” (as in the next few years), but others think it might not be until the early to mid-2020’s. Some areas of the planet (like Australia) have already shown the continuation of the upward march in atmospheric temperatures. And of course, as measured by ocean heat content, there never was a “pause”. Personally I think it a better than even chance that 2010-2019 as a decade will be warmer than 2000-2009, and certainly a much better chance still that 2020-2029 will be warmer still.
Well there may be many, but not the IPCC, and deviations from the IPCC are not treated kindly. If the pause continues to the mid 2020’s, the global climate models are pretty much falsified in terms of providing useful future scenarios of climate change.
Yup, I keep looking at that flat global surface temperature trend for the past 15+ years
It seems almost intransitive.
Instead of Webby’s wobbly waffles you write:
Now, let’s see if we can define “within a few years”.
Our hostess says this could be a decade or two.
Or do you think this will most likely be shorter?
If so, how short?
You then apparently predict (as IPCC did in AR4) warming of 0.2C per decade, somewhat higher than the observed warming during the late 20thC warming cycle (IPCC’s “poster period”) or the statistically indistinguishable early 20thC warming cycle (before there were any appreciable human GHGs). Is this really the rate of warming you predict over an extended time period, or are you simply talking about a short “blip”?
Try getting specific, Jim.
It won’t cost you anything but your credibility.
My “prediction” would be that our hostess is probably correct in guessing that the current pause (i.e. slight cooling trend) will continue for “another decade or two” (let’s say 15 more years) before the long-term warming trend resumes, and that it will then warm at a similar rate as seen in the early and late 20thC warming cycles (around 0.16C per decade) for around 3 decades before switching to a slight cooling trend again for the next 3 decades.
So we’d have:
– Continued cooling of around 0.03C until 2030 to an anomaly of ~0.42C
– 30 years of warming at 0.16C/decade to an anomaly of ~0.9C by 2060
– 30 years of slight cooling at 0.03C/decade to an anomaly of ~0.81C by 2090
IOW the cumulative warming from today to the end of the 21stC would be around 0.36C
Adding in the 0.2C from the “1986-2005 mean” used by IPCC, we would have total 21stC warming of around 0.6C (similar to the 20thC)
(But we’ll all be long gone before then – and not as a result of CAGW).
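The scenario above can be checked in a few lines, using exactly the rates and durations stated in the comment:

```python
start = 0.45  # today's anomaly (deg C), as stated in the comment

# (change in deg C, endpoint) for each segment of the scenario
segments = [(-0.03, "2030"),      # slight cooling, ~0.03 C total
            (0.16 * 3, "2060"),   # 0.16 C/decade warming for 3 decades
            (-0.03 * 3, "2090")]  # 0.03 C/decade cooling for 3 decades

anomaly = start
for change, year in segments:
    anomaly += change
    print(year, round(anomaly, 2))

print("cumulative warming:", round(anomaly - start, 2))
```

This reproduces the ~0.42, ~0.9 and ~0.81C anomalies and the 0.36C cumulative figure given in the comment.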
You’ll get no argument from me that the usefulness of the IPCC as well as the consensus seeking approach are dead. But you know that there are many scientists working well outside the confines of IPCC who are still firmly on the “AGW side”, but have looked at the issue from alternative perspectives and this is where the next generation of both new climate discoveries and advanced models will come from. Who knows, perhaps even the stadium-wave will be part of the models some day.
manacker, yes, 0.2 is possible by 2016. I think it will be 0.4 C warmer than now by 2030, just by extrapolation, but that has to be plus or minus 0.1 for a given year.
curryja | October 14, 2013 at 8:12 am |
The AGW side acknowledges natural variability on a time scale up to 10-15 years. They explicitly disavow longer term natural internal variability as being significant, and do not consider it in their attribution of late 20th century warming. A line has been drawn in the sand, whereby the AGW side expects warming to resume momentarily, whereas skeptics expect the pause or even cooling for the next two decades.
So, alarmists. Are you feeling lucky?
Using a Mosh-style analysis, (1) is understandable, but (2) is embarrassing to their ego.
Science doesn’t coddle theories; it tries to break them by smashing them into a brick wall at 90 mph. If the theory is sound, the pieces lying around will be those of the wall. Ladies and gentlemen, start your engines….
@jc: If the pause continues to the mid 2020′s
And if 2020 is 0.5 °C above 2010?
Well, 0.5C in one decade is much larger than can be accounted for by AGW, so another explanation would be needed, either for a continued pause or for a very large increase
Even with a rise of 0.5 degrees C you couldn’t necessarily ascribe the rise to Human GHGs. Furthermore, were the rise from HGHGs, the implied climate sensitivity means that absent HGHGs, the world would be a much colder place.
So be grateful, Human GHGs have been beneficial in either case.
@JC: 0.5C in one decade is much larger than can be accounted for by AGW
@kim: Even with a rise of 0.5 degrees C you couldn’t necessarily ascribe the rise to Human GHGs.
I’ll go with Judith’s stronger statement: you would necessarily not ascribe the rise to rising CO2. For 2010-2020, I doubt that CO2 (with all feedbacks) could account for more than around 40-45% of a rise of that magnitude.
@kim: Furthermore, were the rise from HGHGs, the implied climate sensitivity means that absent HGHGs, the world would be a much colder place.
Aha, a counterfactual in the running with “Were the moon made of green cheese the cow could jump over it.”
That’s not to say I consider a rise of 0.4-0.5 °C unlikely. As I’ve said before, if that’s not what 2010-2020 witnesses it will be back to the drawing board for my model (the one I posted about in December). If it does happen less than half will be due to CO2.
V, the higher the climate sensitivity to human GHGs, the lower the temperature would now be without them. So pick a climate sensitivity that frightens you and calculate how cold the earth would now be without man’s effect.
We should hope that natural variability dominates the recent temperature picture, for fossil fuels are finite.
And the cow jumped over the moon.
Sorry, kim, I should have gone into more detail and agreed with you that if CO2 was the sole cause of a 0.5 °C rise during this decade then you’d be right about the climate sensitivity having to be absurdly large.
CO2 is likely to rise about 7% during this decade. log_2(1.07) is about 0.1, and my model (the one from December that Matthew Marler comments on occasionally) has climate sensitivity (neglecting ocean warming) at a steady 2.1 °C/doubling for the last century. So we should expect CO2 to add about 0.21 °C between 2010 and 2020.
Natural fluctuations largely canceled last decade’s CO2-induced rise. My model has them adding 0.2-0.3 °C to CO2’s 0.2 °C this decade, the stadium wave model as I understand it forecasts the opposite. Neither model however even faintly imagines that rising CO2 could cause a 0.5 °C rise in that time. That impossibility was what was behind my offhand “counterfactual” remark.
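The arithmetic in this comment is easy to verify; note that the 2.1 °C/doubling sensitivity and the 7% CO2 rise are the commenter's own model assumptions, not established values:

```python
import math

sensitivity = 2.1   # deg C per CO2 doubling (the commenter's model value)
co2_ratio = 1.07    # assumed 7% CO2 rise over the decade

doublings = math.log2(co2_ratio)     # fraction of a doubling
expected_rise = sensitivity * doublings
print(round(doublings, 3), round(expected_rise, 3))  # → 0.098 0.205
```

Consistent with the ~0.21 °C quoted in the comment, which rounded log2(1.07) to 0.1 before multiplying.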
Paul, four years old, but still excellent.
Oops! Thanks. So not “recent” at all.
We can ask what the comments mean from the point of view of pure science and what they mean from the point of view of decision-making.
In pure science there’s nothing wrong in testing many different null hypotheses. Each of the tests tells whether one specific statement can be rejected at a specified level of certainty. Rejecting many statements limits the range of likely correct states of understanding. Based on the data available in 1991 he concludes that neither of the two hypotheses he describes can be rejected and that it will take time to get enough data for rejecting either one.
From what has happened since 1991 we know that the evidence speaks against the assumption of no GHE, as the temperature rose rapidly in the 1990s and has stayed at the higher level since 2000. The evidence may be stronger than Lorenz expected. How strong it is, is still an issue being argued upon. There are several methodological issues that affect the conclusion.
Another approach is to make estimates for parameters that describe the strength of the possible influence of additional CO2, assuming that the temperature is linearly dependent on the logarithm of the concentration, but making no preset assumptions on the coefficient, i.e. allowing also for no dependence. The model could be made more complicated by adding some dynamics where the deviation of the concentration from some “equilibrium value” determines the rate of change of the temperature.
The best known example of the latter approach is that of estimating TCR. Finding out that TCR has a specific positive lower limit at a given level of certainty tells that the existence of GHE warming is confirmed at that level of certainty. The statement of IPCC on most warming since 1951 with 95% certainty is a statement of this nature and seems to be based on a methodologically valid analysis, although there are certainly again issues that can be argued upon.
I return to the case of decision-making in a separate comment.
Now on the point of view of decision-maker. Here a recent comment of Steven Mosher demonstrates the logic well.
The decision-maker cannot avoid making decisions, deciding to not react at all is also a decision. Making decisions based on information includes
– choosing sources of information,
– deciding how much weight to give on each source,
– deciding how to weight uncertain outcomes that have great influence if they turn out to be true.
It’s obvious that the reaction to a likely serious threat is almost the same as the reaction to the same threat considered even more certain (90% is no different from 95% or 99% in that).
It’s obvious that also serious threats that are less than 50% likely but still not very unlikely should be considered. That’s done in all areas of decision-making all the time.
Decision-making is not dependent at all on the certainty of more than 50% warming due to AGW, for decision-making the primary threshold is that AGW is not highly likely to be very weak. The primary threshold does, however, determine only that the issue must be taken seriously, choosing the actual policies requires much more information of various types. More accurate estimates of the strength of the AGW would also be valuable at that level.
True. But in the case of GHG emissions, firstly, we do not have persuasive evidence of a serious threat (temperature change is not a threat on its own, unless the net of damage costs and benefits can be conveyed convincingly, and it has not been yet) and, secondly, those arguing most strongly for GHG mitigation policies are, in most cases, the same people arguing for ridiculously uneconomic and useless policies – like carbon pricing and renewable energy – and arguing against ‘no regrets’ and economically viable policies.
Pekka P. said:
“The decision-maker cannot avoid making decisions, deciding to not react at all is also a decision.”
This is fallacious. It implies that every time we don’t do anything, it is a conscious decision. There may be nothing to react to.
You and Mosher have a lot to learn about the policy process. I don’t comment on physics, because I am humble enough to admit that I don’t have the expertise. A bit of reciprocity would go a long way.
You put words in my mouth that I have not said, and you assume that I have thoughts that I do not have. If I understand Steve Mosher correctly you do the same with him.
The acts of real people are conscious to a variable extent – and to an extent that nobody can really tell (including themselves). Comments written here cannot cover all the caveats; at best they try to convey a partial message correctly.
Just like Mosher’s “models are the best we have” mentality, you say:
“at best they try to convey a partial message correctly.”
I thought physics was about precision? I certainly would not have got away with a sloppy statement like the one you made about decisions as a policy analyst. As previously mentioned, you need to stop assuming that your expertise in one area is easily transferable to another. Not to mention assuming that people in other fields could not be as intelligent and skilled as you are.
It is one thing to be aware of reasons to act or not and decide not to act, and another to not act through mere ignorance of the facts. I don’t think there is a middle road.
I have not worked at a ministry or other comparable government office, but I have a lot of experience from working with ministries and other decision makers over a period of more than 20 years. I agree on much that you have written based on your experiences. I have tried to take all that into account in formulating my comments that relate to decision-making.
It’s not to be expected that different people would have identical views on how the processes of decision-making operate in detail, but perhaps you could see my comments in a different way if you would make different assumptions about what I might have taken into account.
I’m sure that all readers of my comments interpret them often in a way that I don’t have in mind. In some cases I try to explain them better, but very often I conclude that such an attempt would be futile, nobody would perhaps read an explanation comprehensive enough to cover everything.
The last paragraph of my second comment above should tell clearly enough that I’m not at all ready to support the strongest policies that have been proposed to mitigate global warming. Accepting IPCC WG1 full reports as balanced does not mean that I accept all proposals claimed to be based on them.
jim D, another amateur, comes to the fore.
You would be surprised how few hard “facts” there are in policy. There are competing spins on hard facts, there are “studies” presented as facts (which they often are not), but if you think that it is like 2 + 2 = 4, you are demonstrating that as a policy analyst, I would make a good astrophysicist, or vice versa.
Pekka, I accept your comment in the spirit that it was given. But a lot of the damage that has been caused to science and to the economies of countries by CAGW alarmism has been caused by people stepping outside their areas of expertise (while pretending not to).
And, your apparent contrition has not been bolstered by claiming that I put words into your mouth, not to mention Mosher’s. And that this is what I routinely do.
I do not argue with you about physics, because you have infinitely more expertise in it than I do. And if you think that engaging with the bureaucracy as a scientist is the equivalent of more than 25 years in the belly of the beast, you are fooling yourself.
Let’s just stick to our knitting, shall we?
For example, Queensland could have acted far in advance to prevent their floods, but didn’t because, it appears, of ignorance rather than any deliberate decision. They thought their one dam was all they needed. Perhaps there were people who proposed more dams, but who knows what happened in the policy/decision stage. A lot of reliance on 500-year flood information is getting outdated, and acting may be called for, but it is politically not good because it also costs money, so they may prefer to chance it. This is what is happening on the larger scale with global climate change and policy.
JimD, “For example, Queensland could have acted far in advance to prevent their floods, but didn’t because,”
LOL Queensland acted further in advance to prepare for the droughts!
Now that is one prepared cowboy :)
Pekka continues to miss the point that any warming man seems capable of causing is far more likely to be net beneficial than net detrimental.
Thanks again, Jim D, for demonstrating that you do not have a clue about how things actually work.
Politics is full of what Americans have aptly called “Monday morning quarterbacks”. Or the old, but good, “hindsight is 20/20 vision”.
I have no particular opinion about Queensland flood preparation, except that it wasn’t perfect. There may well have been non-objective decisions involved – heaven knows I’ve seen plenty of them.
But the notion that there is some kind of scientific formula for public policy decisions is simply naive.
For what it’s worth, Pekka’s claim only implies that “doing nothing” is still doing something. This has very little to do with economics or policy analysis. OTOH, it has been studied in philosophy under many angles: metaphysics, phenomenology, philosophy of mind, collective action theory, and whatnot.
Johanna’s promotion of the laissez-faire policies she favours rests on the idea that IF we have no compelling reason to intervene, we should not. No wonder we never see her putting forward any reason to intervene, and her defense is that doing nothing is not intervening, this time on the basis that only policy analysts can get that profound argument.
The most direct way to refute such claptrap is to observe that the laissez-faire is a myth (h/t AK) that has no bearing in reality. Most fossil fuels are subsidized, for instance. A less direct way would be to show that the counterfactual is not realistic: policies under uncertainty are made all the time.
Anyway, please continue.
I’d 100% agree that it’s a myth, but the second part of that statement is, at best, an extremely careless misunderstanding of what’s being hat-tipped:
Such “action”, as a class, does include inaction; however, the very fact that the myth of laissez-faire incents such inaction means it has great “bearing in reality.”
Which is the rationalization of the denier.
From the Wikipedia entry for Denial
There is a difference between “action” (doing something specific to reach a desired goal – “walking the talk”) and “actionism” (giving the appearance of doing something specific – “talking the talk”)
Politicians are well known for “actionism”.
But we have three even more basic problems here.
1. First of all, we are not even certain that we have a potential problem from AGW at all – nor whether or not this problem has a chance of becoming a potential serious threat to humanity or our environment some day in the future, nor (least of all) when this potential serious threat could possibly occur.
All we have is the output of GCM simulations, which have proven to be unable to predict actually observed climate trends, consistently overestimating the impact of AGW, and no real analyses of “winners and losers” from a slightly warmer world with somewhat higher CO2 levels.
2. We have also not defined any actionable proposals, which could be implemented, with a cost/benefit analysis of what these proposals could achieve and what they would cost. All we have is political arm waving: “reduce CO2 emissions by year X to Y% of what they were in year Z”, or even more absurd, “hold global warming to no more than 2C” (these are obviously not actionable proposals, which could be implemented); the proposal to “levy a direct or indirect carbon tax” is “actionable” but obviously will have no impact on our climate – no tax ever did.
3. We have not examined what the unintended negative consequences of any actions might be.
So the politicians who choose “actionism” rather than “action” are making the right choice.
Is this inadvertent?
Or is it because politicians realize that the wind shifts position frequently and yesterday’s bold initiative can very easily become today’s major mistake?
You are closer to the scene than most of us here. What do you think?
Webster, “Which is the rationalization of the denier.”
Oddly, that was a rationalization of Guy Callendar, who predicted that CO2 increases would have an impact on climate. If you look at the BEST data, the greatest increase in temperature is in the winter months, with the least in the summer months. Spring/Fall have become more “stable”, likely due to the combination of land use and CO2. The diurnal range decreased until ~1985 and has since stabilized or increased slightly. All in all, a 0.8 C increase Globally has been fairly pleasant, especially with the recent decrease in Hurricane activity.
For an Engineer, Callendar did a fair job “Sciencing” :)
Rather than simply spouting drivel on “denialism” from Wikipedia, why don’t you seriously address the issue of potential “winners and losers” from a world that is 1 to 2C warmer than today, with CO2 levels that are say 50% higher.
Because you can’t – that’s why.
The interaction between politics and policy is a huge subject, well beyond the scope of a comment here. But an important factor is what the Germans so brilliantly called the zeitgeist.
CAGW was a 97% :) fit.
> the very fact that the myth of laissez-faire incents such inaction means it has great “bearing in reality.“
Sure, and by the same token creationism has great bearing on reality. Perhaps there are other ways to interpret “bearing on reality”, i.e. that it corresponds to the facts of the matter.
Then I’m the one who gets accused of semantic quibbling.
That laissez-faire incents inaction remains to be seen. Even an inactive Congress can decide to keep its gym open:
Is shutting down a government laissez-faire?
With justice! Look at the context of your statement that “laissez-faire is a myth (h/t AK) that has no bearing in reality.”
This default policy is, in fact, the central object of incentivization of the whole laissez-faire myth: don’t interfere with anybody’s freedom of action without a “good” reason. The fact that the detailed reality is somewhat different doesn’t change the meaning of the myth, nor the fact of how successful societies that apply it have been.
An incentive tends to encourage people to act in certain ways. It doesn’t compel them. Nor does it necessarily specify the precise details of such action.
Most proponents of laissez-faire would (AFAIK) approve of gridlock, within limits. Whether the current situation surpasses those limits is probably a matter of divergent opinion among them.
> The fact that the detailed reality is somewhat different doesn’t change the meaning of the myth, nor the fact of how successful societies that apply it have been.
Perhaps, but one has to admit that the atomic model is a myth that has a different bearing in reality than creationism. One does not embrace fictionalism by turning “a myth that has no bearing in reality” into an oxymoron.
The simple fact is that governments exist and do interfere with markets, even those which advertise some kind of laissez-faire, perhaps even more so if we consider the indirect measures [1, 2], and notwithstanding externalities like military interventions [3, 4].
Max, I only pick “winners and losers” when it comes to doing the science. And you are losing Big Time (as vice satan Darth Cheney once said).
yes, Pekka, Johanna is putting words in my mouth. At some point she also thought I had no knowledge of Weber. Oh well.
This is even better
““The decision-maker cannot avoid making decisions, deciding to not react at all is also a decision.”
This is fallacious. It implies that every time we don’t do anything, it is a conscious decision. There may be nothing to react to.”
somebody should teach her some logic. And of course by slipping the word conscious into her sentence she thought she gave herself an out.
Deciding there is nothing to react to is a decision. Whether it is conscious or not is immaterial. More importantly, the context I provided was one where the policy maker was collecting information to make a decision.
we are always deciding.
Johanna also wrote just recently a comment where she told how she had been explicitly involved in deciding not to act.
Now she’s telling that choosing (1) is not a decision.
Hmmm, from whom would I rather take policy advice? Tough decision.
Better, from whom would I be most likely to get useful policy advice?
It’s two minutes to midnight, and Pekka, willard, moshe and johanna huddle around the table, listening to the faint signal of the transcript being read elsewhere. Big ears. Big mouths. Gotta decide quick, now.
Of course. And if you actually have an index of everybody’s comments here, you’ll find some of mine pointing out that very thing. (I don’t, if I did I might use it.) What the myth of laissez-faire accomplishes is to tilt the political playing field in the direction of solving problems with freer market approaches than would otherwise be applied. That tilt is applied in an environment where a bunch of other incentives exist, with the ultimate result depending on a complex political process.
But the myth has its effect, which is why it’s incorrect to state that it “has no bearing in reality.”
The same is true of Creationism, as any politician can tell you.
“somebody should teach her some logic. And of course by slipping the word conscious into her sentence she thought she gave herself an out.
Deciding there is nothing to react to is a decision. Whether it is conscious or not is immaterial. More importantly, the context I provided was one where the policy maker was collecting information to make a decision.
we are always deciding.”
This kind of sophistry might win you points in meetings of the high school Philosophy Club, but it doesn’t work in the real world.
So, if I am asleep, I am deciding. If I am staring into space, thinking about nothing, I am deciding. If I am walking down the street and a plane is flying overhead, I am deciding that it will not fall on me.
Decision making, as it is being discussed here, is about responses to a specific stimulus which is intended to provoke a reaction. Note, intended.
In a policy/political environment, players are bombarded with such stimuli. See my example about domestic pets above.
Saying that “do nothing” as the most common response demonstrates some kind of ideological commitment to laissez-faire economics is nonsense. It is a rational response to constant lobbying by people whose views are passionately held and often extreme (otherwise they wouldn’t need to lobby).
When you have spent some of the best years of your life reading letters demanding the death penalty for everything from people who mistreat their pets to graffiti vandals, you get a perspective on these things that is unavoidably lacking in some of the high-falutin’ scientists and players in the Climate Wars. I can spot you a mile away, no matter how you dress it up.
Jim D @ 10/14 8.40 am: For the record: “For example, Queensland could have acted far in advance to prevent their floods, but didn’t.” “Far in advance” (the 1980s), the then Queensland government proposed to build the Wolfdene Dam to the south of Brisbane. It was an excellent site for a deep dam, and would have served SE Queensland’s water needs for many years. The ALP was elected in 1989 on a promise that the dam would not proceed. If it had done, the perceived need to keep the Wivenhoe Dam full in the summer of 2010-11 would not have existed, nor would we have wasted billions in recent years on unused water infrastructure, including a de-sal plant (a strange choice for a government gung-ho on emissions reductions).
In the shorter term, the prospect of exceptional rain and the strong risk of severe flooding was raised three months before the flood, and various proposals were made to avert disaster. The “responsible” minister never responded to the advice passed to him.
When the situation appeared critical, a week or more before the major flood, many argued for precautionary releases of water: but the “keep-the-dam-full” mentality persisted. Some in-house suggestions to release water were turned down because it would have caused relatively minor flooding in rural areas below the dam, cutting some river crossings. It was only when there was a danger of dam collapse (the stored water was twice nominal capacity) that water was belatedly released. Given the Wolfdene Dam hadn’t been built, we probably couldn’t have avoided some flooding in 2011, but most expert analysis and modelling shows that it would have been far less severe with better management, in line with suggestions made and ignored before the flood.
It was not ignorance but deliberate decisions (and poor dam management, ignoring the manual at the critical time, in part because of political pressure) that caused severe flooding.
The 2011 flood reflected bad government; it had nothing to do with thinking “only one dam” was needed or with out-dated flood knowledge. Many affected properties were built below the flood line determined by the 1974 floods, because it suited government and property developers for building to proceed there.
I can’t see any parallels with “global climate change and policy.”
Other than it is the same crowd (‘progressives’) who blocked the dam development and gave us the carbon tax! :(
BTW: The exposure drafts for repeal of the carbon tax were released today: http://www.environment.gov.au/carbon-tax-repeal/consultation.html
I’m coming in late here, so won’t address all points. My main point, as a highly experienced policy adviser, is that I’ve found that Johanna’s posts on CE consistently show an understanding of policy-making which accords with my own; her take on policy is near-identical to mine. When it comes to questions of policy, Johanna is one of the first you should listen to.
Broadly, on the no-action question: ministers and bureaucrats have a pressure to be seen to be “doing something,” and are bombarded with suggestions, mainly from vested interests and people with limited knowledge, as to what that “something” should be. Often, it is in their perceived self-interest to “do something,” but it is rarely in the public interest. Most often, “do nothing” is the superior choice, and winding back government interventions and regulations is generally superior to expanding them. I don’t say this from an ideological position, I say it from having seen many, many policy proposals and implementation of policies from advising Prime Ministers and state Premiers on them. Anyone who has worked in central policy departments and is genuinely committed to the public interest (and even given that people might have different views of what that is) will tend to favour smaller and less intrusive government.
How to handle small issues introduced by special interest groups is one issue, how to handle major issues is another.
Thanks, Faustino. It is irksome to keep being accused of proceeding from some ideological perspective just because you try to explain that the thousands of “suggestions” that a Minister receives each year are mostly non-starters.
Pekka P. then said:
“How to handle small issues introduced by special interest groups is one issue, how to handle major issues is another.”
I assume that he means that issues that he raises are “major”, while other people’s are not. Here’s a tip, Pekka: in a democracy your views on pets can gain or lose you a vote.
Of course there is a hierarchy of issues. Balancing practical politics and sound policy is what it is all about.
BTW, did you know that e = mc squared?
Pekka, the scale of the issue does not affect the approach, which remains the same: seek to understand the issue, ascertain what options there are to address it (including “do nothing”), evaluate those options and present them without bias in such a way that the decision-makers can clearly understand the issue and the merits of options, and feel that they have sufficient information and understanding to make a decision.
In passing, most of the many Cabinet submissions, NCP reviews etc on which I briefed did not comply with the above, more than 90% when I was in the Queensland Premier’s office. Sadly, my efforts to raise standards were seen as obstructive.
If the scale does not matter then I disagree totally.
On important matters choosing not to act is an active choice. It’s a decision on equal status with decisions to act. On some small issues I could understand that not acting can be justified solely on the ground that the particular interest group for which it matters has not presented proper justification, on matters of national significance that’s clearly not a real option.
Pekka, as a germane example, when I advised the Queensland government on whether or not to support the Kyoto protocol, I did a great deal of reading, including many scientific papers (my capacity to comprehend them was much greater than it is now), and surveys of work in the fields which covered more than 200 research papers. I drew on experts for information and advice, and directed CGE modelling to ascertain the potential impacts on the state economy, and the paper was given to other departments and external experts for comment. The process was very comprehensive.
The degree of effort reflected the importance of the issue. Most issues could be dealt with using far less effort. But the process would be the same.
For the record, I advised the government in 1997 that, while the potential impacts of AGW were not known, there were sufficient grounds for insuring against them; and that the costs to the economy of meeting the proposed emissions reductions, while significant, were not insurmountable – modelling suggested that state domestic product would grow 35% over the next ten years without GHG reductions, 32% with them [this was by way of comparison rather than forecast]. The Kyoto target period was 2008-2012; I advised the government to support the protocol but to take only no-regrets measures in the short term [if such could be identified; this has been discussed on CE before], and in the meantime to pursue developments in the science and revise policy as required. I argued that prior to 2008 we should have a much clearer understanding of the potential significance of global warming; the approach was to make a measured response while understanding developed.
The government accepted my submission, and advised the Commonwealth government (much to its horror) that it supported the KP.
[As it happens, by 2008 I would have been arguing that the Australian and Queensland governments’ anti-GHG emissions policies were excessive and had no demonstrated net benefits.]
As far as I can see, we agree totally on this issue. I just wonder what made you think that we would disagree.
At the same time I disagree very clearly with some of the statements of Johanna, and she has also stated that she disagrees with me.
Pekka @ 6.05. Pekka, the scale does not affect the process, but it determines the effort you put into it. And the decision on whether or not to take action is not affected by whether vested interests or great national interests are at stake; a recommendation to take no action would be made only if, after due consideration of the issue and the options, that was the most beneficial option. And that would often be the case even in areas of great significance, e.g. where the expected impacts of government action would exacerbate the issue rather than improve it. I assure you that that is very often the case, and I have at times quoted authorities such as the World Bank in support. So while I agree that doing nothing is an active choice, in my experience it is often preferable to the alternatives, even in matters of great importance. For many reasons, governments often make poor decisions, and even when they make good decisions, implementation is often flawed.
Of course, we pay more attention to issues of major significance. But I repeat, as I think that Johanna has done, that the process remains the same, as I outlined above.
Pekka @ 6.21 In general, I appreciate your comments and that you have a better background than me on many issues. It appears from, e.g., your 6.05 comment that you do not “totally agree,” but perhaps the further posts have removed some or all of the differences. I can’t comment on the minutiae of your disagreements with Johanna but, as I said earlier, she and I have a very similar framework; perhaps some misunderstandings have arisen between you.
Johanna, I’ve just seen your 5.54 comment, perhaps Pekka and I have worked some things out since then.
Perhaps the problems are even semantic, or due to reading into comments something that’s not written but assumed to hide between the lines.
There seem to be differing interpretations of such issues as:
– What is a decision?
– What is a model or theory?
– What constitutes a fact?
People try to use the language in support of their preferences even when those attempts cause only confusion. All net discussions are full of misunderstandings based either on ambiguously written comments or on less than careful, and for some reason biased, reading of the comments.
Pekka, that “some reason” tends to be involved with ego. I’ve worked hard to reduce the impact of ego, to be able to observe dispassionately. I’m far from ego-less, but it is perhaps why I get less embroiled in disputes than many on this site.
> So, if I am asleep, I am deciding. If I am staring into space, thinking about nothing, I am deciding. If I am walking down the street and a plane is flying overhead, I am deciding that it will not fall on me.
This just proves that some actions are not decision-based, and would only disprove a literal interpretation of “we are always deciding”, so that the “always” becomes irrelevant for our discussion.
Suppose Prime Minister PM asks a policy analyst PA to read her mail and filter out the crap, which PA does (he does have a knack for crap, real or not). And since PA’s a good advisor, he figured out the cheapest and least binding way to make it appear that PM does something, while doing nothing most of the time.
PM may be responsible for PA’s decision, but she is not deciding for him. PM defers to PA’s decision. On the other hand, PM will always pretend that she reads her mail when asked, say by the press, except perhaps comes the time to throw PA under the bus.
What is clear is that it is not PM who filters out the crap and decides what’s what: it’s PA. Now, in what way do PA’s decisions about crap imply any voluntary act from PM? In the same way as for any chain of command: it rules the overall behaviour of PA without ruling any specific instances, if we exclude spot-checking of course.
This case should be enough to show that, in the actual context under discussion, doing nothing is a decision, as Pekka claimed. It also shows that not all decisions imply conscious acts from the same agent all the way down, which refutes johanna’s point that Pekka must presume so.
Tough to know if johanna has any experience in reading anything else but crap.
“If Lorenz were looking at the climate data in 2013, how would he interpret it? ”
For the decadal analysis, he would have found the probability to be rather small, and rejected the null hypothesis, as he put it. At the time he wrote, the probability was much higher.
This is an interesting point:
“This somewhat unorthodox procedure would be quite unacceptable if the new null hypothesis had been formulated after the fact, that is, if the observed climatic trend had directly or indirectly affected the statement of the hypothesis.”
Indeed. As noted by Callendar in 1938, after discovering world temperature and CO2 going up ‘at the same time’: “The course of world temperatures during the next twenty years should afford valuable evidence as to the accuracy of the calculated effect of atmospheric carbon.”
Great citation from 1938. I read every word.
The most remarkable thing one concludes from it is that there has been little improvement in understanding of the effect of anthropogenic CO2 in the last 75 years, nor is there any additional observational evidence that changes either Callendar’s conclusions or the criticisms.
Callendar calculated about 1.5C per CO2 doubling and said this appeared to be low based on observational evidence. Of course Callendar didn’t know that beginning in 1940 a 30-year “hiatus” in global temperature rise would take place and that would ratchet back observational evidence of CO2 warming.
Interestingly, the criticisms of his paper were failure to address the potential vertical distribution of temperature due to convection, and assigning a constant value to humidity and reflection. Exactly the same criticisms are leveled today. There has been virtually no improvement either in understanding of the strictly radiative transfers, which were good in 1938 and are good today, or in the response of the hydrologic cycle and albedo, which were poor then and remain poor today.
In addition, Callendar stated that additional surface radiation in the shortwave (“sun heat”) is not the same as longwave (“atmospheric heat”) and that the latter effect would be much smaller. This is a point I make fairly often today – all radiation (forcing) is not equal. Shortwave is almost completely absorbed by the ocean at depths of up to 100 meters, while longwave is absorbed by a hair-thin surface layer and immediately transformed into latent heat of vaporization. Only on a dry surface are shortwave and longwave forcing roughly equivalent.
So we basically got 75 years of climate science since 1938 and nothing to show for it.
Rococo baroque, oh!
That is naively true because there is not much to the basic mechanism when viewed empirically. It’s essentially a sensitivity to CO2 that is logarithmic, and then there are a few natural variability terms that mask the warming at low doses of CO2.
The SALT model of natural variability is essentially SOI, aerosols (volcanic), LOD, and TSI, which is not enough collectively to slow down the inexorable effects of CO2:
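As described, a SALT-style decomposition would be a logarithmic CO2 term plus linear terms for SOI, volcanic aerosols, LOD and TSI. A minimal sketch in Python, with purely illustrative placeholder coefficients (none of these values are taken from the actual model being discussed):

```python
import math

def salt_model(co2_ppm, soi, aerosol, lod, tsi,
               co2_base=290.0, sensitivity=2.0,
               a=0.1, b=-0.2, c=0.3, d=0.05):
    """Temperature anomaly (deg C) as a logarithmic CO2 'forced' term
    plus linear natural-variability terms (SOI, aerosols, LOD, TSI).
    All coefficients here are hypothetical placeholders."""
    forced = sensitivity * math.log(co2_ppm / co2_base, 2)
    natural = a * soi + b * aerosol + c * lod + d * tsi
    return forced + natural

# With the natural terms zeroed, doubling CO2 from the baseline
# returns exactly the assumed sensitivity:
print(salt_model(580, 0, 0, 0, 0))  # prints 2.0
```

In practice such coefficients would be fitted by regression against a temperature record; the point of the sketch is only the structure: a slow logarithmic trend with superposed terms that can mask it over a decade or so.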
Webster, your SALT is a perfect example of instability caused by over-simplification. Your solution is an exponential curve that is unsustainable :)
Cappy D doesn’t realize that this curve uses CO2 and is based on purely empirical data:
No exponential in sight.
Cappy is skunked yet again.
Webster, “Cappy is skunked yet again.”
Oh, I stand corrected, you actually have something new for a change. You joined team cyclomania and “corrected” the “Global” observational data. Good show! Now correct for the remarkably long recovery from past cooling events, and you might be on to something :)
There is a little spreadsheet with buttons for the two CO2 curves to play with :)
I have written extensively about Callendar in a previous article. I even obtained his archives, which confirmed he was cherry-picking his CO2 readings. These readings were then taken up by Keeling.
Callendar’s theory was elegantly demolished by Giles Slocum in the 1950s.
In his last year of life, according to his biography, he came to believe his theory was wrong.
He had a VERY interesting and honourable war record however.
Slocum cast doubt on whether CO2 was actually rising. Probably he lived to see Keeling demonstrate that Callendar was more likely right after all.
oops! that link is botched.
That should work I hope :(
…except the bill.
Callendar in 1938:
Too bad nobody listened to Callendar back then.
CO2 in 1938 = 308 ppmv (Siegenthaler et al 1986, ice core data)
CO2 in 1958 = 315 ppmv (Mauna Loa, checks with Siegenthaler)
ln(C2/C1) = 0.0225
ln(2) = 0.6931
assumed 2xCO2 TCR = +1.6ºC
theoretical GH warming from added CO2:
+1.6ºC * 0.0225 / 0.6931 = +0.05ºC
1938 to 1958 HadCRUT4 linear trend = -0.07ºC per decade (-0.14ºC cooling over 2 decades, instead of +0.05ºC warming).
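As a check on the arithmetic above, the calculation can be reproduced in a few lines of Python. The logarithmic response formula and the 1.6°C transient climate response are the commenter’s assumptions, not established values:

```python
import math

def ghg_warming(c1_ppm, c2_ppm, tcr_per_doubling=1.6):
    """Theoretical transient warming (deg C) for a CO2 rise from c1 to c2,
    assuming a logarithmic response: dT = TCR * ln(C2/C1) / ln(2)."""
    return tcr_per_doubling * math.log(c2_ppm / c1_ppm) / math.log(2)

# 1938 ice-core value vs. 1958 Mauna Loa value, as quoted above
dt = ghg_warming(308, 315)
print(round(dt, 2))  # prints 0.05
```

This reproduces the +0.05°C figure quoted, which the comment contrasts with the observed HadCRUT4 cooling over the same two decades.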
none of us have lived long enough to see the proof of that yet.
Callendar was right that CO2 was rising, and it stood to reason because of the rate of burning going on, and because of his correct conjecture that the ocean couldn’t keep up; this was later proved right by Keeling and other records, despite Slocum’s cold water.
It was rising according to those CO2 readings he decided to select. However, he chose those at the lowest end of the range rather than those from the middle/upper end of the range.
The readings he used were from the same scientists and instruments that have subsequently been called unreliable because they are higher than the modern narrative suggests.
Charles Keeling admitted he was very much influenced by Callendar’s work and used his figures.
Whether this all means that for over 130 years (from around 1820) numerous scientists using reliable and tested equipment got all their CO2 readings completely wrong (except apparently the ones Callendar used), which Keeling, who had little experience at the time in this field, managed to immediately put right at Mauna Loa, is something we can all argue over.
At the very least, if the old-time scientists were wrong for so many years (except of course for the readings that Callendar used), it illustrates that perhaps our modern climate scientists might also be proved wrong in due course.
It seems to me that “climate science” over the last 30 years has been an attempt to obscure Callendar’s conclusions as they are not alarming enough. Callendar also thought this slight warming should be beneficial rather than harmful.
I had been put off by Callendar’s manipulations of the 10,000 prior CO2 measurements for some time, and I still hold some reservations, but the CO2 sensitivity curve he ascertained is considerably more plausible than the IPCC sensitivities of the past.
The most remarkable thing one concludes from it is that there has been little improvement in understanding of the effect of anthropogenic CO2 in the last 75 years …
After reading that paper it is more than that; the similarities are honestly quite shocking, with the same arguments being made to a tee. It’s as if their late-1930s hockey-stick uptick is now our hockey-stick uptick, and here we are back at the very same point and temperature as in 1998. Talk about a Groundhog Day scenario. Please wake me up in 2056 so I have a few years to prepare to capitalize on the next one as it passes by. ;-)
Thank you so much for gophering out that old paper X Anonymous! Quite enlightening.
Judith writes: “…these same models do not adequately simulate natural internal variability at time scales beyond about 15 years.”
They also do not properly simulate natural internal variability within time scales of 15 years.
Wayman has a crackpot post at WUWT describing some imaginary massive failure of climate science in trying to predict the precise signature of noise.
Energy balance is winning for those keeping score at home.
A precise signature of noise on this blog is “whut”.
If you’d reply to me by name, WebHubTelescope, I might be able to find your nonsensical replies.
If you somehow believe my post at WUWT is flawed, please write a rebuttal blog post. If you were capable of contributing anything of value, you’d be able to write a rebuttal blog post in a few hours. Then ask Judith to publish it for you. Simple.
Additionally, if you haven’t figured this out yet, no one cares about energy balance. They only care about two metrics: surface temperatures and sea level. And because many people live inland, they really only care about surface temperatures.
Looking forward to seeing your rebuttal blog post published here at Judith’s blog..
Have a nice day.
Models will never and can never properly simulate “natural internal variability” if by “properly” you mean exactly as it occurs. They can randomly simulate it if you want to see pretty wiggles, but there is an inherent unpredictable component of deterministic chaos. This was Lorenz’s key finding. Why can’t some people get this?
Wouldn’t you agree that modelers often think something is “an inherent unpredictable component of deterministic chaos” before the system is understood well enough to be modeled accurately? Models need not be perfect by any means. They need to perform within a reasonably tight margin of error for each characteristic being forecasted for the time periods being forecasted.
GCM’s currently don’t meet that description. Not for global temperature and not for the other conditions they are designed to forecast. My guess is that regional models forecasting for a few decades will be the needed change in focus to get the required accuracy.
Actually, given that one of the most important contributors to natural variability is ENSO, and Lorenz specifically states that “chaotic behavior will generally appear more systematic, but not so much so that it will repeat itself at regular intervals,” you might want to reconsider your statement. You’re misrepresenting his science.
I think that Lorenz was very weak in two areas of knowledge:
1) Ocean-to-atmosphere energy transfers, with specific focus on the natural variability that might lead to more or less energy coming from the ocean to the atmosphere. This was a critical point of weakness, considering that at any given time more than 50% of the energy in the atmosphere comes directly from the ocean. A small change in ocean-atmosphere energy transfer makes a huge difference in tropospheric temperatures, far more than solar variability and even greater than aerosols.
2) Large-scale atmospheric circulation dynamics. The study of the Brewer-Dobson circulation really came into its own late in Lorenz’s career, but by that time he’d gone down the path of developing deterministic chaos, for which we are of course all the better. Still, earlier in his career, knowledge of the BDC would have helped him immensely. Knowledge of the BDC would also help those who continually prattle on about the lack of the “tropospheric hotspot”. The observed enhancement of the BDC (an alternative hypothesis to the tropospheric hotspot) is a huge gap in many nonspecialists’ knowledge.
Gates, “Knowledge of the BDC would also help those who continually prattle on about the lack of the ‘tropospheric hotspot’.”
Knowledge of the BDC should have prevented the modelers from predicting a tropospheric hot spot and as much warming as they did. Kinda the point, doncha know. Assuming a nice orderly up/down radiant model was not exactly a skeptic concept :)
The tropospheric hotspot and the enhancement of the BDC are not fully compatible with each other, and the latter has been confirmed whereas the former was indicated in earlier models. The process of warmer air moving from the troposphere into the stratosphere being enhanced as GH gases accumulate, affecting everything from ozone to SSWs, is an exciting field with great opportunity for young researchers.
R. Gates aka Skeptical Warmist: “Models will never and can never properly simulate ‘natural internal variability’ if by ‘properly’ you mean exactly as it occurs.”
For multidecadal variations, they have to simulate it as it has and will occur. See Ruiz-Barradas, et al. (2013):
“If climate models do not incorporate the mechanisms associated to the generation of the AMO (or any other source of decadal variability like the PDO) and in turn incorporate or enhance variability at other frequencies, then the models ability to simulate and predict at decadal time scales will be compromised and so the way they transmit this variability to the surface climate affecting human societies.”
For ENSO, climate models simply have to be able to simulate the processes, and they’re far from being able to do that. See Guilyardi et al (2009):
“Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes”
Referring back to Ruiz-Barradas, et al. (2013), who included the PDO in the key quote: because the PDO is an aftereffect of ENSO, climate models will have to be able to simulate the multidecadal variability of ENSO before they’ll be of any value.
Ed Lorenz has been arguably the most influential meteorologist in history, having laid the foundations of chaos theory.
This definitely has to be corrected.
E. Lorenz, influential as he was, neither discovered nor developed chaos theory. He merely made it popular, because meteorology is a communication vehicle that everybody listens to.
The true father and discoverer of chaos theory was H. Poincaré, yes, the same one as in relativity theory, the Poincaré recurrence time and the famous Poincaré conjecture.
Poincaré had already written about the chaotic properties of solutions to the 3-body problem in his famous memoir of 1889, 70 years before Lorenz!
In studying the 3-body problem, Poincaré found that the evolution of such a system is often chaotic, in the sense that a small perturbation in the initial state, such as a slight change in one body’s initial position, might lead to a radically different later state than the unperturbed system would produce.
If the slight change isn’t detectable by our measuring instruments, then we won’t be able to predict which final state will occur.
So Poincaré’s research proved already 100 years ago that the problem of determinism and the problem of predictability are distinct problems, which is the foundation of nonlinear dynamics.
What the chaos means for gravitationally interacting systems can be seen here : http://www.Vimeo.com/11993047.
The 3- (or N-) body system is also an example of a non-ergodic system, i.e. there exists no pdf giving the probability that the system will reach a given final state. In other words, there is no answer to the question “What is the probability that one of the bodies escapes to infinity?” or to the question “What is the set of initial conditions that lead to stable orbits?”
Expanding on Poincaré’s work, the celebrated KAM theorem studies the conditions under which orbits remain stable under perturbation (http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%E2%80%93Moser_theorem).
What Lorenz did was discover numerical instabilities in numerical models of the atmosphere, followed by his famous equations representing a 2D forced convection flow, which give rise to the “butterfly” chaotic attractor showing the orbits of the system in 3D phase space (http://hypertextbook.com/chaos/21.shtml).
So from the historical point of view, most of the foundations of nonlinear dynamics were laid by Poincaré 100 years ago. Lorenz showed that (numerical) solutions of the Navier-Stokes equations exhibited the same irregular behavior that Poincaré found in celestial mechanics. The term (deterministic) chaos was coined somewhat later, in the 1970s, by Li and Yorke, and rapid development followed through famous names like Mandelbrot, Ruelle, Feigenbaum etc.
Despite the importance of this branch of physics, and despite the early important theoretical work accomplished already 100 years ago, the fundamental ideas began to penetrate the scientific community outside of astronomy only slowly, starting in the 1980s. This explains why the principles of nonlinear dynamics are still widely ignored and/or misunderstood by many scientists who did not have the opportunity to be taught them during their graduate studies.
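The sensitive dependence that both Poincaré and Lorenz described can be shown numerically in a few lines, using Lorenz’s 1963 convection equations with their standard parameters (a sketch: the step size, integration length and perturbation size are arbitrary choices, and crude forward Euler is used only because it is the shortest scheme that exhibits the effect):

```python
# Integrate the Lorenz (1963) system twice, from initial states that
# differ by one part in a billion, and record how far apart they get.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the three coupled ODEs.
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # perturbation far below any instrument precision
max_sep = 0.0
for _ in range(5000):  # 50 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_sep = max(max_sep, abs(a[0] - b[0]))

# The two trajectories end up on entirely different parts of the attractor:
# the initial 1e-9 difference is amplified by many orders of magnitude.
print(max_sep > 1.0)
```

This amplification of undetectably small differences is precisely why determinism and predictability are distinct problems, as argued above.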
it’s counterintuitive to plain experience, and yet.
This is kind of analogous to the Einstein-Lorentz (Hendrik) / Michelson-Morley thing I was discussing with jim2 on another thread. In that case, I blame K-12 education. In this case, it’s more a matter of obscurity. But don’t be surprised if science books in high schools start parroting this. K-12 textbooks are uniformly awful. Seems like there was some guy named Feynman who had some choice words about green stars, sums of temperatures and other gems like that.
Poincare did indeed lay the foundations. He was the first to imagine the importance of phase space. Hadamard in his eulogy said that Poincare was alone among his contemporaries in seeing this. (Unlike a number of other important discoveries in mathematics, which were made independently by two or more great minds.)
But I think Lorenz deserves more credit than you seem willing to give. You emphasize the numerical aspect of his work, but fail to mention the significance of this: Lorenz was the first to describe sensitive dependence in a dissipative dynamical system, i.e. a real dynamical system as they are in the sub-celestial realm. Thus he was necessarily the first to observe a chaotic attractor, which promoted chaos to the first rank of robustness.
To my knowledge, no one else found any signs of long-term sensitive dependence in dissipative (down-to-earth) dynamical systems before Lorenz, with the exception of Smale, who discovered his horseshoe in relaxation oscillations, but did not mention attractors at the time.
Later research advanced our knowledge of chaotic attractors through theory (e.g. a rather difficult mathematical proof that something like the butterfly is an attractor). But the door was opened by Lorenz.
with the exception of Smale … and Y. Ueda, who found it in analog computers, but spent years understanding Birkhoff before publishing in 1973.
As usual, I am mainly interested in our hostess’s comments. She points out that the IPCC seems to be “skating on thin ice” with respect to its 95% claim. How do we get the Royal Society, the APS, AGU, etc. to discuss this question in a completely scientific way? The RS had a recent 2-day ‘love in’ discussing the AR5 when, so far as I can ascertain, this sort of question, difficult for warmists, was deliberately excluded from any discussion. No scientist who is a skeptic was invited to speak.
We can allude to this issue until kingdom come, but until the warmists, the RS, etc. agree and accept that a SCIENTIFIC problem exists, nothing is going to happen.
Who is going to bell the cat?
The skeptics are still all over the map in what they are saying, so until they get a viewpoint that makes some coherent sense, it is going to be difficult. Incoherence doesn’t help their cause, as you can see.
The chattering teeth add noise through the frosted breath.
Well, I think on the contrary that the overwhelming majority of skeptics say something very coherent:
1) The numerical models are all over the place, spatially as well as temporally.
2) Irrelevant as it is, the ECS is all over the place too.
3) There has been no progress in 1) and 2) in 30 years despite massive funding.
4) How long will we have to wait for some people to acknowledge that the numerical models are neither right nor useful, and to start looking for something more useful? (Skeptics may differ about the details of what they think more useful, but when one is in a dead end, diversity of valid ideas is very good news indeed.)
To that most of the skeptics add :
5) 1 or 3 degrees in one or two centuries leads to no demonstrated important effects, and there is more than enough time anyway. Almost all the skeptics I know are much more skeptical of WG2 than of WG1.
Seems remarkably coherent to me.
You can read the summaries by AGU, AMS, RS, NAS, etc., and you get a coherent idea of the center of the AGW view and this is because the science that they evaluated themselves has backed it up. What is an equivalent skeptical statement? Would it be that there is good certainty that nothing unusual has been happening in the climate and won’t for all foreseeable fossil fuel burning? Who, even here, would sign on to that? Or, if that isn’t the skeptical view, they are not conveying anything coherent like a phrase such as this (or the society climate statements) could encapsulate.
“Would it be that there is good certainty that nothing unusual has been happening in the climate and won’t for all foreseeable fossil fuel burning? Who, even here, would sign on to that?”
I sign on to that, if you mean global climate and significantly unusual. 0.1 deg C or so does not count.
Edim as the leader of Team Denier is a prime example of the incoherence that JimD is pointing out.
Here you have a guy, Edim, who cannot even accept that CO2 will contribute anything more than 0.1C to the warming, and who has actually asserted elsewhere that CO2 is a cooling agent.
Team Denier has huge problems with dissension in their ranks.
Whut, I have nothing to do with the Team Denier (warmists). I have been skeptical since ~1990s.
As Emerson wrote,
As another skeptic, I would agree with all your points, adding
6) There has been no empirical evidence (from actual observations or reproducible experimentation) to corroborate the CAGW premise (as outlined specifically by IPCC in AR4, and now AR5); therefore, it remains an uncorroborated hypothesis.
Max, “therefore, it remains an uncorroborated hypothesis.”
The current hypothesis is bulletproof: “We are 95% certain that man contributed at least 50% of whatever may have happened.”
Of course, the CAGW hypothesis is “bulletproof” (by definition).
But it is also “uncorroborated by empirical evidence”.
You write that “the skeptics are all over the map”.
Well, so is the “consensus”
2xCO2 ECS is estimated (or ASS-U-MEd) to be between 1.5C (no problem for humanity or our environment) and 4.5C (could become a problem in the late 21stC).
That’s “all over the map”, Jim. (And it’s supposed to be a “consensus”, at that.)
Max, “But it is also “uncorroborated by empirical evidence”.”
The empirical evidence is itself corroboration. How could we know that it has warmed by a few tenths of a degree without surface stations? :) It is a self fulfilling prophecy. Without “global” coordination of temperatures which requires a manufactured “Global” day and a “Global” normal day, we would be blissfully ignorant of our “Global” impact. In the 1850s the variance in the “Global” temperature was nearly 4 times the AGW impact. In order to tease out the change, man had to do all the selection of baselines and metrics. Man is most certainly the cause of “Global” warming :)
manacker, I was answering Jim Cripwell who wonders why these societies make statements supporting AGW. It is probably because AGW is well defined and supported and everyone knows what it is and what its consequences imply. The skeptical side, not so much. It is their fault for stating their case so badly, with a few loudmouths and cranks dominating their public viewpoint. Try harder.
EDim is such a contrarian that he refuses to be considered an AGW denier. Kind of like a 3-year-old who says no to everything. That’s EDim.
Love to eat them mousies,
Mousies what I love to eat;
Bite they little heads off,
Nibble on they tiny feet.
H/t B. Kliban
Seems like a whole new side of you. You are having a smashing day.
“The prospect of the current hiatus lasting until the mid 2030′s (as per the stadium wave and related theories of natural variability) is a decisive test for IPCC’s AGW detection arguments.”
Let’s assume you are correct and there is no additional warming, on average, for the next 17 years. Given what we have seen with the latest IPCC report, isn’t it reasonable to assume that the AGW coalition will simply manipulate the data to conceal this extended pause? Of course you, and others, will detect this manipulation and object, but such objections will be dismissed as coming from deniers and ignored by the main stream media. Is there any compelling reason to believe that a longer pause would force the AGW coalition to admit a problem?
Big cat. Old cat. Teeth falling out.
But he still gots claws. Don’t mess with Biff.
Here you go, a recipe for concealing the pause:
What’s more, it’s all natural ! Nature is concealing the pause for us! As alarmists, we don’t have to do any of the work !
What’s more, we can use the natural LOD index (courtesy of Curry) instead of AMO to really nail the pause:
This is fun stuff. We just have to sit back and let nature do all the work.
Do not worry that there are more things between heaven and earth than are found in your science. Do worry that there are more things found in your science than there are between heaven and earth.
my how biblically pretentious.
Is Theo Goodwin an alternate take on Theocratic God Wins?
Stick to the science. We all know that you can’t model everything but we aren’t Luddites as you appear to be.
WebHubTelescope (@whut) | October 14, 2013 at 4:03 pm |
The original line is from Hamlet. The adaptation that includes the reference to science is from Nelson Goodman.
The point is that the usual work of theorizing is known to generate a few fictions. The serious scientist will actively pursue those fictions and will jump for joy when a critic of his work identifies one.
Kim, here is a 2012 peer reviewed reply to Dessler’s 2010 argument:
No trend in water vapor observations.
Yes, but not robust; send more acorns. Thanks for the link.
Of course it isn’t robust. The data is going the wrong way. It was much more conclusive when the data ended at 2005. That is the data Dessler would have seen when making his argument and the data that was commented on in AR4.
Speaking of trends …
“Americans Eugene Fama, Lars Peter Hansen and Robert Shiller won the Nobel prize for economics on Monday for developing new methods to study trends in asset markets.
“The Royal Swedish Academy of Sciences said the three had laid the foundation of the current understanding of asset prices.
“While it’s hard to predict whether stock or bond prices will go up or down in the short term, it’s possible to foresee movements over periods of three years or longer, the academy said.
“These findings, which might seem surprising and contradictory, were made and analyzed by this year’s laureates,” the academy said.
They can’t predict the weather but they can the climate.
“While it’s hard to predict whether stock or bond prices will go up or down in the short term, it’s possible to foresee movements over periods of three years or longer, the academy said.
Which is of course BS. If it weren’t, everybody (and the authors first) would be multimillionaires within 3 years. Observation shows that they are not; it follows that the statement is BS. It is one thing to develop hypotheses about the past and another to make correct predictions about the future.
And those who don’t know that they don’t know.
I think the source of the confusion is that people intuitively get that thermodynamics applies to more than just physical things; i.e. there’s a long-term equilibrium that applies to phenomena such as economics.
The part that people miss is the fact that in many systems, equilibrium never happens because the goalposts move chaotically and constantly.
In most systems, equilibrium is either instant or never.
Harold, “In most systems, equilibrium is either instant or never.”
Right. What is an instant to a planet?
Harold @ 11.42: “The part that people miss is the fact that in many systems, equilibrium never happens because the goalposts move chaotically and constantly.” Which is why, as I regularly argue, “sustainability” is a nonsense: nothing is sustainable in an always-changing world.
If someone seeks a “sustainable” outcome, you know that they lack understanding of reality.
Harking back to the “do-nothing” discussion, how often do you see the “do-somethingers” seek “sustainable” outcomes?
“Bubbles are one of the most tangible manifestations of the disagreement between Mr. Shiller and Mr. Fama. The housing crash that began in 2006 is widely regarded as evidence that prices had climbed to irrational heights, and Mr. Shiller’s accuracy in diagnosing the problem suggests that future bubbles could be identified, too.
Janet L. Yellen, nominated earlier this month to lead the Fed, has said the central bank needs to reconsider its traditional view that bubbles cannot be spotted and should not be popped — or restrained from growing too large.
But Mr. Fama, like other proponents of efficient markets theory, is dismissive of Mr. Shiller’s record as a forecaster and more broadly, of claims that it is possible to consistently identify asset bubbles before a collapse.
Asked in 2010 about those who warned that housing prices would crash, he responded, “Right. For example, Shiller was saying that since 1996.”
They don’t even agree with each other.
That is a double oxymoron. It cannot be determined to be chaotic unless all possible external forcing (such as solar) can be decidedly excluded, and if it is stochastic by nature, any outcome cannot be determined.
Meanwhile the case for short term solar forcing is strengthening:
The CO2 signal is the faster-than-“natural” temperature increase in 1976-1998 and the lack of cooling during the current pause. Less, of course, all other effects, including but not limited to: volcanoes, +aerosols, -aerosols, land-use changes, groundwater and reservoir irrigation, carbon black, eutrophication of the ocean, etc. Easy-peasy.
The AGW due to CO2 is faster than the response :) From ~1950 it should take 200 years to “see” a statistically significant response to anthropogenic CO2. It should take only 15 years to “see” a statistically significant response to land use, regionally, and about 60 years to have enough data on “natural” variability to filter that out, since the accuracy of the measurements is about the same as the magnitude of the natural internal variability.
LOL, there is currently about twice as much error in the data as there is CO2 forcing :)
Pingback: Words Of Wisdom From Ed Lorenz | The Global Warming Policy Foundation (GWPF)
Works of wisdom from Judith Curry
Needless to say, it’s entirely possible — even likely! — that short-term “dynamical fluctuators” (like Ed Lorenz) and long-term “energy balancers” (like James Hansen) are *both* scientifically correct … a celebrated SkS animation vividly illustrates short-term/long-term reconciliation.
Career Question How can young climate scientists (like Marcia Wyatt) contribute to *both* short-term *and* long-term climate understanding?
The Lorenz/Hansen/”Stadium Wave” strategy
• Step 1: Craft a Lorenz-style minimal dynamical model that exhibits Wyatt/Curry stadium-wave fluctuations.
• Step 2: Guided by the minimal dynamical model, tune the parameters of large-scale dynamical circulation models such that Wyatt/Curry stadium-wave fluctuations are exhibited.
• Step 3: Predict whether stadium-wave fluctuations are robust in the face of CO2-driven Hansen-style increases in energy imbalance.
If stadium waves were shown to induce decadal fluctuations in surface temperatures, while ocean heat content and/or ice-mass loss both increased steadily *without* decadal fluctuations, that would be *very* much in accord with climate observations!
Soberingly, in the absence of Step 1 (especially), as a necessary guide to Lorenz-style dynamical understanding, it scarcely seems likely that “Wyatt/Curry statistical cycles” will be any more influential than innumerable previous cycle-chasing statistical models (which have not contributed much).
Optimistically, completion of Steps 1-3 would ensure that Wyatt/Curry stadium-waves were an enduring contribution to a unified understanding of climate-change.
Best wishes for continued progress in understanding short-term and long-term climate-change, Marcia Wyatt and Judith Curry!
Lorenz is applying that “null hypothesis” to decadal climatic variations. But the minimum statistical sample is 30 (as a convention); and in climate this should be 50 or 100. So changes in climate could only be evaluated after 900 to 3000 years.
More info on the appropriate statistical approach:
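The 900-to-3000-year figure above is just the product of the two numbers (a sketch; the sample-size convention of 30 and the decadal-to-century averaging periods are the commenter’s premises):

```python
# Years of record needed for n independent climate samples, where one
# sample is one non-overlapping averaging period of the given length.
def years_needed(n_samples, period_years):
    return n_samples * period_years

print(years_needed(30, 30))   # 900 years at the conventional minimum of 30 samples
print(years_needed(100, 30))  # 3000 years if 100 samples are demanded
```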
At the Alamo. Col. Travis assembled the defenders, drew his sword, and drew a line in the sand. Then he said WWTE, “You can cross the line, and leave honorably, or you can stay and defend The Alamo to the death”. No-one crossed the line, and the rest is history.
What matters is not that a line was drawn but what people DID about it.
Uh … Jim … in science and mathematics, whenever you discover that your scientific opposition “has got there first-est with the most-est“, then you’re *SUPPOSED* to give up your ignorance.
It’s gettin’ t’be time for climate-change denialists t’surrender, ain’t it Jim Cripwell?
“For what it’s worth, Pekka’s claim only implies that “doing nothing” is still doing something. This has very little to do with economics or policy analysis. OTOH, it has been studied in philosophy under many angles: metaphysics, phenomenology, philosophy of mind, collective action theory, and whatnot.
Johanna’s promotion of the laissez-faire policies she favours rests on the idea that IF we have no compelling reason to intervene, we should not. No wonder we never see her putting forward any reason to intervene, nor her defense that doing nothing is not intervening, this time on the basis that only policy analysts can get that profound argument.”
This is just incoherent. On one hand, the claim that “doing nothing” has little to do with economics or policy. On the other, it has “profound” effects.
Make up your mind.
As for the general argument, I can only assume that you have never been exposed to a politician’s mail and other representations. To take an example, there are people who think that household pets (especially dogs and cats) are an unmitigated good that should be supported and funded by the State. There are people who loathe them, for practical reasons or sheer dislike. Then there are the Animal Liberationists, who object to animals being managed in any state except being allowed to run free.
What you don’t get is that a pollie’s mailbag is stuffed with the outpourings of fringe believers who care passionately about the subject. Nobody writes in because of their balanced view of keeping pets.
I welcome a well-informed and talented amateur, but am tired of anyone who has an opinion pretending to be an expert on public policy.
> This is just incoherent.
In other words, it makes no sense.
> On one hand, the claim that “doing nothing” has little to do with economics or policy.
Here’s what I claimed: Pekka’s argument only implies “doing nothing” is still doing something. This implication (i.e. doing nothing is still doing something) has little to do with economics or policy.
That doing nothing is still doing something is something philosophers have discussed for a while. That claim has very little to do with policy analysis.
> On the other, it has “profund” effects.
Here’s what I claimed: there’s no need for any special expertise in policy analysis to get the profundity of johanna’s argument:
Not only does Pekka need no such implication, but such implications have little to do with policy analysis.
(Web might note that the “nothing to react to” rests on counterfactual thinking. He may like to pay due diligence to this kind of reasoning.)
Unless johanna can demonstrate that policy analysts are the authorities we should turn to for an analysis of action, decision, and consciousness, I suggest she observe her own policy and keep to her own knitting.
Lorenz: “This somewhat unorthodox procedure would be quite unacceptable if the new null hypothesis had been formulated after the fact, that is, if the observed climatic trend had directly or indirectly affected the statement of the hypothesis. This would be the case, for example, if the models had been tuned to fit the observed course of the climate. ”
Which of course is _exactly_ the case. Multiple model parameters are tuned to do just that, to the extent that the models do a very poor job of reproducing earlier variability.
Nor do these models actually model the real CO2 effect; they pump it up with assumed feedbacks. Yet another reason why, in Lorenz’s terms, the new null is “unacceptable”.
Pingback: These items caught my eye – 14 October 2013 | grumpydenier
Lorenz is quoted as follows:
“In view of these considerations, how are we to know when a stronger greenhouse effect is finally making its presence felt? First, we must realize that we are not looking for the onset of the effect. Presumably there is no critical concentration of CO2 or some other gas beyond which the greenhouse effect begins to operate; more likely the absorption of terrestrial re-radiation by CO2 varies continuously, even if not truly linearly, with the concentration. What we are looking for is the time when the effect crosses the threshold of detectability.”
What a strange paragraph. After carefully explaining that there is no “onset of the effect,” he finishes with the claim that “we are looking for the time when the effect crosses the threshold of detectability.” In this last sentence, neither ‘threshold’ nor ‘crosses’ has a literal meaning. So far so good. But then the semantic blunder. He finishes with the word ‘detectability’. In the language of science, though not the language of the man in the street, to say that something is detectable is to say that it is factual; that is, it is to say that it does in fact exist. By contrast, he could have used words or phrases such as ‘high probability’ or ‘justified inference’. As the old saying goes, with his left hand he takes away what his right hand gives.
His first sentence gives it away. It is not whether but when it will be detectable that was the question in 1991. Skeptic, he is not.
Very well said.
Interestingly, for those who would point to Lorenz and his deterministic chaos theory as a reason for anything, it would be a reason why the models will never be exactly right, ever. Yet Lorenz not only was brilliant enough to know that increasing GH gases would warm the climate, he also knew that the models were useful, even if they would never get all the wiggles on the way up exactly correct as the planet warmed.
What a strange paragraph. After carefully explaining that there is no “onset of the effect,” he finishes with the claim that “we are looking for the time when the effect crosses the threshold of detectability.”
He never claims there is no onset of the effect. He claims that we are not looking for an onset; the reason is that there is no critical concentration.
“In this last sentence, neither ‘threshold’ nor ‘crosses’ has a literal meaning. So far so good. But then the semantic blunder. He finishes with the word ‘detectability’. In the language of science, though not the language of the man in the street, to say that something is detectable is to say that it is factual; that is, it is to say that it does in fact exist.”
Wrong again. Both threshold and cross have a meaning (it’s your notion of literal which is in question). Detectability simply refers to a phenomenon passing a specified test (always with uncertainty). On the basis of that result, existence is posited, because positing the existence allows one to quantify over the posited entity. Ontology, “what exists”, is a pragmatic matter. I assume you exist because it is simpler than other explanations for your words appearing here.
Heh, the frontier between the Peoples’ Republic of Estimation, and the Democratic Republic of Measurement.
“wrong again. Both threshold and cross have a meaning ( it’s your notion of literal which is in question).”
Why say that they have meanings and then not specify them?
“Detectability simply refers to a phenomena passing a specified test (always with uncertainty).”
Saying that it is detectable presumes that it exists. A physician might tell a patient that her cancer has been detected. Obviously, the detection might involve no uncertainty at all. A psychologist would never tell a patient that his Oedipus Complex had been detected.
Why say that they have meanings and then not specify them?
Simple. The meaning of any term is dependent on the context. So I might establish a threshold of 2, and that threshold is crossed when I go above two or below two, depending on the test I am doing.
“Detectability simply refers to a phenomena passing a specified test (always with uncertainty).”
Saying that it is detectable presumes that it exists.
No, it presumes nothing. It’s rather a decision to posit something as existing rather than the alternative.
“A physician might tell a patient that her cancer has been detected. Obviously, the detection might involve no uncertainty at all. A psychologist would never tell a patient that his Oedipus Complex had been detected.”
Sure he would. He might not use that locution, but if it’s helpful and useful to the patient to posit a thing called an “Oedipus complex”, he might use sentences that are translatable to “your Oedipus complex has been detected”. Of course the patient may say that positing the existence of such a thing doesn’t help him, and perhaps he would visit a priest who would tell him to pray to God. That might work, in which case God would exist for that person, because positing the existence of that entity worked for him.
I believe that Dr. Curry’s quotation from Lorenz is helpful if proper emphasis is placed on the sentences that she presented in bold font. However, there is a huge problem lurking elsewhere. If this quotation is taken as stating a criterion for deeming something chaotic then it is hopelessly vague and, for that reason, misleading.
Chaos Theory plays a very strange role in climate science. Everyone agrees that weather is chaotic. Fancy that! In the technical sense of Chaotic, everyone agrees that climate is chaotic. However, no one seems to know what this claim means in a practical way. None of us have engaged in scientific or formal arguments that such-and-such a phenomenon is an attractor, for example. (Formal discussion of “tipping points” is discussion of an attractor. Informal discussion of “tipping points” is hysteria.) Apparently, no one is advancing the argument that Earth’s chaotic climate cannot be modeled. It seems that no climate scientists who are active in climate research are using Chaos Theory to advance an explanation of some particular aspect of weather or climate. So, why do we all engage in “hand waving” references to Chaos Theory?
The butterfly effect implies that hand-waving affects the climate. I just scratched my nose and will cause a shift in the jet stream over Ulan Bator in 2040.
There’s another very relevant quote from Lorenz, in a different paper.
“A one- or two-degree local temperature change is not a spectacular event. The significance of Fig. 4 is that the globally and annually averaged temperature is changing by this amount, and corresponding changes in the real atmosphere seem to be of comparable magnitude. In all likelihood an overall warming or cooling of the atmosphere resembling what appears in Fig. 4, say from year 30 to year 60, year 115 to year 135, or year 350 to year 375, would, once detected, be interpreted as a climatic change by many observers, and attempts would be made to determine the cause.
In Fig. 4 the changes simply represent the model’s natural variability; there are no variations in external conditions. However, the nonlinearity associated with the moist processes leads to weak interactions between the mean temperature and the cross-latitude temperature gradient. If these interactions were suppressed, T0 would in due time approach equilibrium. It appears, then, that the variability in the temperature gradient and its associated westerly wind current, i.e. the index cycle, is acting as a weak quasi-random forcing upon the mean temperature, producing the “climatic” variations.”
That was 1986.
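The mechanism in that quoted passage (a fast, quasi-random “index cycle” acting as weak forcing on a slowly responding mean temperature) can be sketched in a few lines. All parameters below are invented for illustration; this is not Lorenz’s 1986 model:

```python
import random

random.seed(1)

# Toy version of the quoted mechanism: a fast, quasi-random "index cycle" x
# weakly forces a slowly damped mean temperature T.  All parameters are
# illustrative inventions, not values from Lorenz's paper.
dt, per_year, years = 0.1, 10, 400
x, T, annual = 0.0, 0.0, []
for k in range(per_year * years):
    x += dt * (-x) + random.gauss(0, 1) * dt ** 0.5   # fast red noise
    T += dt * (-0.02 * T + 0.05 * x)                  # slow, 50-"year" damping
    if k % per_year == 0:
        annual.append(T)

# Multidecadal "climatic" excursions appear with NO external forcing at all.
drift = max(annual) - min(annual)
print("range of purely internal 'climatic' variation: %.2f" % drift)
```

The slow variable wanders on multidecadal timescales even though nothing external ever changes, which is exactly the trap Lorenz warns observers would fall into when they “attempt to determine the cause.”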
Thank you for the link to that paper, NiV. That is of course a very insightful statement by Lorenz, of the kind you rarely read: they would, and have, gone off in search of the ‘cause’ of the 1–2°C of noise assumed to be ‘climate change’. And this was written in the late ’80s? Most climate scientists can’t seem to help it, ignoring selected traits; a flaw in human nature, I suppose.
Lorenz said: “This somewhat unorthodox procedure would be quite unacceptable if the new null hypothesis had been formulated after the fact, that is, if the observed climatic trend had directly or indirectly affected the statement of the hypothesis. This would be the case, for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.”
In other words, all D&A methods that rely on models constructed with knowledge of the observed climatic trend are UNACCEPTABLE. There is a correlation in the IPCC’s models between climate sensitivity and sensitivity to aerosols, which allows these models to produce roughly the observed trend in temperature over the 20th century*. Therefore, we have strong evidence that the IPCC’s D&A methods – which were “unorthodox” from the start – would be unacceptable to Lorenz.
As best I can tell, experience with “perturbed physics” ensembles has shown that: a) there is no reliable way to identify an optimum set of parameters for a climate model using OBSERVATIONS OF CURRENT CLIMATE and b) the variability in climate sensitivity exhibited by such ensembles makes D&A essentially impossible. In Section 10.1 of WG1 AR4, the IPCC basically admits that the multi-model ensemble of AOGCMs shouldn’t be used for D&A:
“Since the ensemble is strictly an ‘ensemble of opportunity’, without sampling protocol, the spread of models does not necessarily span the full possible range of uncertainty, and a statistical interpretation of the model spread is therefore problematic.”
D&A certainly involves making a statistical interpretation of the model spread as do all projections of global warming.
Ilya Prigogine (thermodynamicist, Nobel Prize in Chemistry):
“What becomes of Laplace’s demon in the world described by the laws of chaos? Deterministic chaos teaches us that it could predict the future only if it knew the state of the world with infinite precision. But we can now go further, for there exists an even stronger form of dynamical instability, such that trajectories are destroyed whatever the precision of the description.” [translated from the French]
Dr. Curry, I think you might be seeing the importance of such a statement, since you highlighted the last sentence, and possibly you also see that this applies to more than the chaos-theory subject. If the models use any data pre-adjusted on the basis of such assumptions, they have violated the very thing they were attempting to shed light upon. Can the models use adjusted temperature records of any sort, rather than the raw data, letting variances fall as they may and cancel naturally? We have all seen how these records have been systematically adjusted to add a ~+0.7°C slant to the trend. Is this not exactly what Lorenz was pointing out? The same might be said of the assumption that our lower atmosphere is not already so opaque in the primary CO2 lines that an increase in concentration has only a marginal ability to raise temperatures further. Those assumptions are in the equations placed within the models, not in the data, and to me that addition falls into the same category of tainted influences.
Is that why they are all off so far?
That statement of Lorenz refers only to the fact that standard statistical tests cannot use data that has influenced the model or hypothesis being tested, nothing more.
Please address the preceding sentence, which reads:
“This would be the case, for example, if the models had been tuned to fit the observed course of the climate.”
Tuning a model to the observed data is using data to influence the model. So, hindcasts cannot be used to validate a model.
Nothing that has influenced in any way the model has full value in testing its validity. That means also that including such data in the test means that standard formulas cannot be used in determining significance levels.
That does not mean that data that has influenced the model has always zero value in judging the validity of the model, but the value is never full and determining, how much value it has is very difficult. Therefore it’s usually best to exclude such data totally from the tests.
What’s problematic is expressed by my words: in any way. That means that indirect influence during the model building process is enough to disqualify the data. Not only data that has been used in explicit tuning, but everything known directly or indirectly during the model definition phase is suspect.
Hindcasts will never be as good as real observations, Lorenz simply points out that the process is unorthodox.
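A toy demonstration of why a hindcast over the tuning data cannot validate a model: here a deliberately overfitted model (one that simply memorizes its training points, a hypothetical stand-in and not any actual climate model) hindcasts perfectly yet shows its true, much larger error on withheld data:

```python
import random

random.seed(2)

def truth(x):
    return 0.5 * x                    # the "real" underlying relationship

def sample(n):
    """n noisy observations of the truth (illustrative toy data)."""
    pts = []
    for _ in range(n):
        x = random.uniform(0, 10)
        pts.append((x, truth(x) + random.gauss(0, 0.5)))
    return pts

train, test = sample(50), sample(50)

def predict(x, data):
    # 1-nearest-neighbour: pure memorization of the tuning data.
    return min(data, key=lambda p: abs(p[0] - x))[1]

def mse(data, model_data):
    return sum((predict(x, model_data) - y) ** 2 for x, y in data) / len(data)

print("hindcast (in-sample) error :", mse(train, train))   # exactly 0.0
print("out-of-sample error        :", mse(test, train))
```

The in-sample score is zero by construction, which is precisely why, as Pekka says above, data that influenced the model carries no standard significance value in a test of it.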
Having no warming trend, then adding CO2, then seeing if there is an effect, is one thing. “Knowing” that glaciers are in retreat, that rapid changes are occurring in the Arctic, that warming is in the newspapers, that flowers are blooming earlier, and that people are talking about the weather (has anything changed?), and then realizing that GHGs are rising, is “after the fact”.
Arguably the data gave people like Callender the idea for the hypothesis. Lorenz acknowledges that this is “unorthodox” and that extra care must be taken. He does not say the Hypothesis is unacceptable.
During the 1970s better justified estimates were made than that of Callender. Manabe was certainly involved, perhaps Lorenz as well. All that was before the warming started. I think Lorenz had those estimates in mind.
Perhaps; however, the decadal null hypothesis put forward by Lorenz above, which turned out to be rejected, was virtually the same as Callendar’s, which, as manacker has noted, would have to be accepted.
Which could be why he added “Again, there is a good chance that we might lack sufficient evidence for rejecting the new hypothesis.”
Since in this case the opposite is now true, two decades of warming (or the lack of it) don’t settle anything. We are left with:
“although we may lack sufficient direct evidence that an increased greenhouse effect is influencing our climate, we just as surely lack direct evidence that it is not.”
From the linked MIT collection, Lorenz’s “1991: Reply to questionnaire. For Kyoto Award” also contains some sage advice to potential young scientists:
“Your problem must be tractable.”
Unfortunately he doesn’t appear to specifically comment on what the young scientist should do if the problem is not tractable in less than one human lifespan.
He also finishes his Kyoto Award Lecture with some more sobering advice.
Is chaos theory an airhead 90s fad, a chance to state the obvious at extreme length and with needless complexity? Nobel fodder rather than intellectual nourishment?
Global warming, Antarctic Sea Ice, etc….
Is chaos theory an airhead 90s fad, a chance to state the obvious at extreme length and with needless complexity?
This is just a (small) part of chaos theory. Hardly obvious, but indeed complex, because nature is so. Without these insights we would not even understand why and how planetary orbits can be quasi-stable, which is a necessary condition for us to be here.
As for Lorenz and his discovery of chaos in Navier-Stokes (and by extension in dissipative systems), mentioned by Bruce Stewart:
This discovery was already announced by Poincaré in 1903:
If we knew exactly the laws of nature and the situation of the universe at the initial moment, we could predict exactly the situation of that same universe at a succeeding moment. But even if it were the case that the natural laws had no longer any secret for us, we could still only know the initial situation approximately. If that enabled us to predict the succeeding situation with the same approximation, that is all we require, and we should say that the phenomenon had been predicted, that it is governed by laws. But it is not always so; it may happen that small differences in the initial conditions produce very great ones in the final phenomena. A small error in the former will produce an enormous error in the latter. Prediction becomes impossible, and we have the fortuitous phenomenon. – in a 1903 essay “Science and Method”
But sensitivity to initial conditions had been discovered and studied even earlier, by Hadamard.
There is also van der Pol and his chaotic oscillator, which predates Lorenz.
Without diminishing Lorenz’s merits, what he basically did was to put together pieces that had existed 70 years before his numerical experiment.
It is a mystery why Poincaré (or Hadamard) didn’t do it much earlier instead.
But it is the same story as with special relativity: Lorentz (the other one) and Poincaré also had the pieces, but it was Einstein who put them together correctly.
Apparently intelligently assembling existing knowledge is as difficult as discovering the partial knowledge.
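Poincaré’s “small differences in the initial conditions produce very great ones in the final phenomena” takes only a few lines to demonstrate numerically. The logistic map below is a textbook chaotic system chosen for brevity (it is not one Poincaré or Hadamard studied):

```python
# Two orbits of the chaotic logistic map x -> 4x(1-x), started a
# billionth apart, diverge until they bear no resemblance to each other.
def logistic_orbit(x0, n):
    orbit = [x0]
    for _ in range(n):
        orbit.append(4 * orbit[-1] * (1 - orbit[-1]))
    return orbit

a = logistic_orbit(0.400000000, 60)
b = logistic_orbit(0.400000001, 60)   # initial difference: 1e-9

gaps = [abs(u - v) for u, v in zip(a, b)]
print("initial gap : %.1e" % gaps[0])
print("gap at n=60 : %.3f" % gaps[60])
```

The gap grows roughly exponentially (doubling per step on average here) until it saturates at the size of the whole interval, which is Poincaré’s point: an approximate initial state does not yield an approximate final state.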
The turbulent flap of a butterfly’s wings makes no difference to the glide-path of a 747, because large-scale energy-balance dominates smaller-scale turbulent fluctuations. Fortunately!
If you add +/- 1C to the turbulent path of climate, it wouldn’t matter there either :) If you try flying 747s in Blue Angels routines, you might want to watch that turbulence.
The turbulent flap of a butterfly’s wings can (but usually doesn’t) determine the path of a tornado a week later. And that tornado will make a difference to any path a 747 takes in its vicinity.
You are right of course that the idea that Lorenz ‘discovered’ chaos is misleading, it goes back at least to Poincaré. Part of the answer to the ‘mystery’ of why more wasn’t done earlier is that it’s difficult to study chaotic systems without computer simulations, which is why the subject really took off in the 70s.
Poincare was the first to realize the importance of phase space; he discovered the significance of stable and unstable manifolds, homoclinic tangles, and sensitive dependence on initial conditions. Beyond that, he established whole fields of mathematics, e.g. differential topology, because he saw them as essential to progress in understanding dynamical systems.
What he did not do, to my knowledge, was mention that chaos can take the form of an attractor in a dissipative dynamical system. Nor did Hadamard, van der Pol, or Smale.
If chaos is known only in conservative systems, one may need to make a special choice of initial condition to see a chaotic orbit. Chaotic attractors are much more robustly observable. (The van der Pol oscillator is non-conservative, but what van der Pol and van der Mark described as noise seemed to be transient behavior.)
To characterize Lorenz’s discovery as a numerical result, or a result in the field of fluid dynamics, is too narrow, imho.
As Paul Matthews suggests, the discovery of attractors was facilitated by computers. To prove mathematically that an attractor exists is a very tall order, one needs to prove that all orbits (in a basin) settle onto it. But the same property of attractors makes them hard to avoid in simulation. The difficulty is understanding what it means. Lorenz himself said his first thought was his computer was broken.
There was a sort of parlor game at specialist conferences on chaos in the 80s: who had numerical or experimental evidence of chaotic attractors tucked away in their drawer without realizing it, until Jim Yorke spread the word about Lorenz?
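Bruce Stewart’s point that chaotic attractors are “hard to avoid in simulation” is easy to see numerically: start the classic Lorenz-63 system (σ=10, ρ=28, β=8/3) from scattered initial conditions and every orbit settles onto the same bounded attractor. A rough sketch with a fixed-step RK4 integrator (the step size, transient length, and initial-condition ranges are arbitrary choices):

```python
import random

random.seed(3)

# Classic Lorenz-63 parameters.
S, R, B = 10.0, 28.0, 8.0 / 3.0

def deriv(s):
    x, y, z = s
    return (S * (y - x), x * (R - z) - y, x * y - B * z)

def rk4_step(s, h):
    def add(a, b, c):  # componentwise a + c*b
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = deriv(s)
    k2 = deriv(add(s, k1, h / 2))
    k3 = deriv(add(s, k2, h / 2))
    k4 = deriv(add(s, k3, h))
    return tuple(s[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

def settle(s, steps=5000, h=0.01):
    """Integrate long enough for the transient to decay."""
    for _ in range(steps):
        s = rk4_step(s, h)
    return s

# Start far apart; after the transient, every orbit lies in the same
# bounded region of state space -- the attractor.
finals = [settle((random.uniform(-20, 20),
                  random.uniform(-20, 20),
                  random.uniform(5, 40))) for _ in range(5)]
for f in finals:
    print("(%7.2f, %7.2f, %7.2f)" % f)
```

This robustness is exactly the contrast with conservative chaos drawn above: no special choice of initial condition is needed to observe the chaotic set.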
“Apparently intelligently assembling existing knowledge is as difficult as discovering the partial knowledge.” Tomas, probably more so; it is the capacity to make connections between apparently disparate facts and ideas that is critical to many breakthroughs.
Not wishing to be immodest, but as an economist with a broad interest in drivers of economic growth rather than a narrow speciality, I found part of my value was in being able to make connections which others could not. It requires having an open and receptive mind and a broad interest in many apparently unrelated fields. I suspect (while having nothing to cite at hand) that many great discoveries have come from people with broad fields of interest as well as a particular speciality or specialities.
Faustino I agree with your thoughts on what I call “intuitive” thinking, which is what climate science really needs. Mosher is another such thinker IMO.
Peter, I occasionally go to presentations or lectures in fields in which I have little or no background, and find I can apply the “connecting” skill there. E.g., on one recent occasion, I made the first (and only) comment on a medical research presentation, and the researcher said excitedly “You’ve gone right to the heart of the matter!” There seems to be increasing specialisation in many fields, which lessens the opportunity to develop the, to me, vital skill at integrating from a wide field of knowledge. If you don’t know about things, you can’t connect them.
Re: Climate Etc., “Words of wisdom from Ed Lorenz”, 10/13/13
In Post WWII Los Angeles, car salesman Madman Muntz (see Wikipedia) ran a huge billboard on Wilshire Boulevard in the heart of the Hollywood district. In black letters several feet tall, it said [in today’s dollars], “Your Car May Be Worth $100,000!*” If one parked his car and walked up to the billboard with his glasses on, he could just make out the asterisk at the bottom of the ad and the tiny letters that said, “*Very Unlikely”.
Is climate predictable? Of course! IPCC proves that. It predicts a catastrophe in the next century from the equilibrium climate sensitivity: a nominal 3ºC rise in surface temperature estimated to follow from a doubling of CO2, followed by an explicit or implicit asterisk. Using IPCC’s own probability density estimates for climate sensitivity (AR4, Figure 9.20, p. 720), an ECS of 0.5 to about 0.7, as measured today, is an order of magnitude less likely than IPCC’s least likely subjective score: “exceptionally unlikely”, meaning less than one chance in a hundred.
The lesson here is that the question of whether climate is predictable is incompetent without some error bounds. The question as posed exposes a lack of scientific literacy.
Lorenz (1963) was careful to describe his experiment as one in numerically solving a certain system of equations which showed that “slightly differing initial states can evolve into considerably different states”, a condition others labeled as chaos. He was not so careful, though, in his use of the word system. With respect to his calculations, system referred to a set of mathematical equations, but in discussion it was sometimes a physical phenomenon, and at other times, it was vague. As a result, his papers are difficult to use to answer the (incompetent) question whether climate is chaotic, meaning (unqualifiedly) unpredictable. As Lorenz said later in his interview with Dr. Reeves, “I wasn’t really concerned at all whether this equation really represented any physical phenomenon”. Bold added. That casualness led him to speculate ambiguously,
>>The result has far-reaching consequences when the system being considered is an observable nonperiodic system whose future state we may desire to predict. It implies that two states differing by imperceptible amounts may eventually evolve into two considerably different states. If, then, there is any error whatever in observing the present state–and in any real system such errors seem inevitable–an acceptable prediction of an instantaneous state in the distant future may well be impossible. LORENZ (1963) p. 133.
As he would explain later, he was trying to replicate data produced by a system of nonlinear equations using a smaller set of linear equations. If he were to apply that method to climate, he would first have to extract a set of measurements for reproduction with a certain class of equations, and starting from a “present state”, a surrogate subset of his measurements to serve as initial conditions. So his “acceptable prediction” would have again been an output of his equations, not a property of the natural phenomenon of climate, or any other natural phenomenon.
The underlying problem here is the practice of scientists to attribute properties of their models to the real world they are supposed to represent. Except as a conjecture, this is an error, and it is often compounded by failure to use words with precision. The very definition of chaos can be an example of chaos. See, e.g., Wolfram MathWorld. http://mathworld.wolfram.com/Chaos.html . What Lorenz discovered was that the numerical solutions to his equations were so sensitive to initial conditions and to the values of constants that they did not produce any stable output state. Accordingly, the equations had no predictive power, whatever the properties of the natural world they were supposed to model. Far from discovering attributes of the natural world, his equations failed to fit the natural real world.
The natural world has no equations. Moreover, it has neither coordinate systems, nor parameters, nor units, nor arithmetic to be observed, much less measured and reduced to facts (Lorenz’s observations). Those are all manmade concepts. Therefore, no part of the natural world can be Lorenz chaotic. Nor, for that matter, can any part of it be linear or nonlinear, a binary state defined only in mathematics. Man can insert his concepts into the manmade part of real world, or falsely attribute them to the natural part, but the natural real world never inherits attributes of its models. Whatever a scientist’s qualifications, when he says climate is chaotic (AR4 §7.6, p. 566; TAR Technical Summary, p. 78) or nonlinear, much less “highly nonlinear”, (AR4, Appendix 3A, p. 336), he reveals deficits in scientific literacy. What he is actually doing is whining that his model of climate was too complex to work while confessing that his model was unable to predict. He blames the real world for his failure to produce a successful scientific model.
But failure to predict has many other causes without the excuse of blaming real world nonlinearities and chaos. A common cause is what is known in statistics as confounding variables. An even simpler view is that IPCC and AGW are trying to predict climate with the wrong variables. The time has come again to call the Greenhouse Effect by its old name, the Callendar Effect, and not just because Guy Callendar (1938) was the first to estimate the Equilibrium Climate Sensitivity (at 2ºC). Nor is it because he was the last to recognize that climate in its warm state is regulated by the powerful negative feedback of cloudiness. Callendar should be honored because of what he prompted his snarky lead reviewer to observe:
>>Sir George Simpson [Director, MetOffice, 1920-1938] expressed his admiration of the amount of work which Mr. Callendar had put into this paper. It was excellent work. It was difficult to criticise it, but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, and it also received heat by transfer from one part to another. In the second place, one had to remember that the temperature distribution in the atmosphere was determined almost entirely by the movement of the air up and down. This forced the atmosphere into a temperature distribution which was quite out of balance with the radiation. One could not, therefore, calculate the effect of changing any one factor in the atmosphere, and he felt that the actual numerical results which Mr. Callendar had obtained could not be used to give a definite indication of the order of magnitude of the effect. CALLENDAR (1938), Discussion, p. 237.
The Callendar Effect traces back and forth over almost a century in each direction to notables published in the field (J. Fourier (1824), Pouillet (1827), Tyndall (1859), Arrhenius (1896), Chamberlin (1899), Revelle (1957), C. Keeling (1958, et seq.), and IPCC, among others.) Regardless, for Lorenz’s and Simpson’s reasons, and for several other elementary scientific considerations trashed by IPCC, no combination of CO2, the Greenhouse Effect, Radiation Transfer, and Radiative Forcing is predictive of Earth’s climate.
Thanks, very helpful to me.
Callendar constructed his model from an understanding of the system which exceeded present day climatologists. Practically miraculous, but the real wonder is why his understanding was lost.
Re: Kim, 10/17/13, 7:55 am
Some of Callendar (1938) agrees with IPCC’s model, and some does not. Callendar modeled holding cloudiness constant so that climate would respond to CO2 concentration. Accord, IPCC. Callendar recognized that cloudiness regulates climate (as long as the ocean surface remains liquid); IPCC does not. Neither modeled dynamic cloudiness, a good reason being that cloudiness is a powerful negative feedback that mitigates warming from any source, and frustrating the AGW conjecture. At the same time, it is a positive feedback that amplifies TSI, making the Sun instead of man the source of warming.
Callendar (1938) and IPCC rely on the atmospheric absorption of longwave radiation by CO2. George Simpson presciently warned that such calculations wouldn’t mean anything. IPCC persisted, only to learn that “the uncertainty in RF [radiative forcing] is almost entirely due to radiative transfer [RT] assumptions and not mixing ratio [CO2 concentration, humidity, lapse rate] estimates”. AR4, ¶2.3.1 Atmospheric Carbon Dioxide, p. 140. Those radiative forcing estimates provide the Equilibrium Climate Sensitivity, the nominal, catastrophe-inducing 3ºC caused by man’s CO2 emissions, “very unlikely [<10% chance] to be less than 1.5°C”. AR4 SPM p. 12. Yet analysis of measurements shows it to be in the range of 0.7ºC, give or take a bit, far less likely than one chance in a hundred according to IPCC’s own probabilities. Radiative transfer is quite precise, meaning lots of significant figures, but nonlinear in, and sensitive to, hourly random variations in atmospheric parameters, making it on-topic for this thread: chaotic, impossible to calculate with much significance. It’s not the imaginary chaos in the climate that caused the failure of the GCMs, but the chaos in the RT routine.
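For reference, the arithmetic connecting forcing to the sensitivity figures traded in this paragraph can be sketched with the widely quoted simplified CO2 forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998). This expression is a standard approximation, not anything computed in the comment above:

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998 fit, an approximation).
def co2_forcing(c, c0=280.0):
    """Forcing in W/m^2 for CO2 concentration c (ppm) relative to c0."""
    return 5.35 * math.log(c / c0)

F2x = co2_forcing(560.0)          # forcing for a doubling, ~3.7 W/m^2
for ecs in (0.7, 1.5, 3.0):       # sensitivities mentioned in the thread
    lam = ecs / F2x               # implied sensitivity parameter, K/(W/m^2)
    print("ECS %.1f K  ->  lambda = %.2f K/(W/m^2)" % (ecs, lam))
```

The same ~3.7 W/m² doubling forcing underlies every ECS number in the thread; the disagreement is entirely over the implied sensitivity parameter λ.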
Callendar and IPCC knew that CO2 readily dissolves in seawater, and both assumed a bottleneck existed so that enough manmade CO2 would accumulate in the atmosphere to make the AGW conjecture work. Thus both assumed CO2 to be long-lived in the atmosphere. Callendar compared arbitrary residence times of 2000 and 5000 years. Revelle and Suess (1957) invented the Revelle Factor to account for Callendar’s bottleneck, but they couldn’t make the numbers work without money from the then-upcoming International Geophysical Year. The long-lived assumption also made atmospheric CO2 well-mixed. That assumption converted Keeling’s regional data, supported by fresh IGY money, into global data. Now all CO2 stations could be calibrated to agree with the Keeling Curve. IPCC didn’t try to account for why Keeling shows CO2 increasing at 3 to 5 times the rate estimated by Callendar for the globe.
IPCC author David Archer urged that the surface layer of the ocean was in perpetual thermodynamic equilibrium (no part of the climate is) so that carbonate equations of equilibrium would apply, forcing new CO2 in the atmosphere to queue up until the ocean made room for it by sequestering dissolved CO2 in precipitates. Archer estimated that took 35,000 years, too much even for IPCC to swallow, so IPCC accepted a (physically impossible) three-branch time constant from “30 years” to maybe “many thousands of years”. AR4, ¶220.127.116.11, p. 517. IPCC provides an ordinary formula from high school physics in its last two Glossaries that puts the residence time at 1.5 to 3.2 years with IPCC’s data, first including leaf water flux and second excluding it because IPCC ignored it in its carbon cycle.
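The “ordinary formula from high school physics” alluded to here is presumably the bulk residence time τ = M/F, reservoir content divided by gross outflow. With round illustrative numbers (not IPCC’s exact figures) it does land in the cited 1.5 to 3.2 year range:

```python
# Bulk residence time tau = M / F.  M and the flux range below are round
# illustrative numbers, not the figures from the IPCC glossaries.
M = 800.0                  # carbon in the atmosphere, GtC, roughly
for F in (250.0, 500.0):   # gross uptake flux out of the atmosphere, GtC/yr
    print("F = %3.0f GtC/yr  ->  tau = %.1f yr" % (F, M / F))
```

The contrast with multi-century “adjustment time” estimates is thus a disagreement about which flux belongs in the denominator, not about the formula itself.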
The bottleneck conjecture alters Henry’s Law especially for climate by making the coefficient of dissolution depend on water pH, and moreover, making it different for anthropogenic than for natural CO2, the latter being immune to the bottleneck. When IPCC tried to rehabilitate the failed Revelle Factor, it re-discovered Henry’s Coefficient, and suppressed the discovery “in order not to confuse the reader.” See Figure 4 and discussion at http://www.rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html . Neither Callendar (1938) nor IPCC saw fit to mention Henry’s Law.
Both Callendar and IPCC observed natural warming occurring since the last glacial minimum and since the Little Ice Age, and occurring simultaneously with increasing atmospheric CO2, both since the Industrial Revolution. Like tyros believing that correlation implies causation, they conjectured that CO2 must have caused the warming. Science settles this with the principle of causality: a cause must precede all of its effects. Callendar didn’t have the data to estimate the lead. When IPCC learned that the paleo record showed CO2 lagging temperature, it rationalized the effect away. The lag, if valid, means that CO2 could not have been the cause of warming over the last half million years. Regardless, IPCC continues to urge “a positive CO2 feedback” to temperature, giving as an example cooling initiated by Milankovitch cycles and amplified by reduced atmospheric CO2. AR4 FAQ 6.1, p. 449. This response is analogous to the biologist whose lecture on deforestation referred to the Sahara Forest. When corrected that he should have said the Sahara Desert, he said, “Sure, now!”
Thanks, Jeff, a very useful precis, especially after reading it three times over. Quite dense that forest. A desert rich.
The warmists are manufacturing data to propagate their hoax. Even with constantly revised and manufactured data, the amount of warming is negligible. Maybe all the "climate scientists" focused on AGW research should start learning how to drive cabs.
The turbulent flap of a butterfly’s wings makes no difference to the glide-path of a 747, because large-scale energy-balance dominates smaller-scale turbulent fluctuations. Fortunately!
I don’t know what such a statement is doing in a thread dedicated to Lorenz and chaos, but it shows so many misconceptions that a comment is in order.
Of course a flap of wings is not turbulent; it is the atmosphere that is. Then, when one is interested in studying the trajectory of a plane, one is interested ONLY in the spatial scales (much) smaller than the size of the plane. It is common sense that what happens 1 km away from the plane has little to no importance for its dynamical behavior.
What this means can be nicely seen on this picture : http://en.wikipedia.org/wiki/File:Airplane_vortex_edit.jpg
While the small scale turbulence (mm to m) has little influence on the flight dynamics, be very sure that the large scale turbulence seen behind the plane matters a great deal to any plane that happens to fly into it. Countless airplane crashes have happened precisely because of large scale turbulence.
This dumb statement would also imply (among other things) that the trajectory and intensity of a hurricane can be accurately predicted, because all this turbulence and chaos doesn’t matter. The contrary is true: even if a hurricane scrupulously observes energy and momentum conservation, its trajectory is chaotic and unpredictable. The uncertainty in the computed hurricane trajectory increases exponentially, and the reason is explained by chaos theory. So the fact that we have empirical formulas for small scale turbulence within a given flight envelope of a plane has absolutely no relevance to the understanding of turbulence at all scales, and no relationship to deterministic chaos either.
Every pilot knows that if the plane gets outside the envelope it has been designed for (e.g. stalls), then its trajectory becomes chaotic and control becomes hard, sometimes impossible. That is the reason why most crashes happen at takeoff or landing, when the speed and attitude of the plane bring it near the limit where the trajectory becomes chaotic (i.e. extremely sensitive to initial conditions). So the correct analogy from flight mechanics actually shows exactly the contrary of what the author of the statement thinks it does.
As for the last part of the quoted statement, it only shows that the author ignores physics in general and fluid dynamics in particular. Everybody knows, or should know, that small scale dynamics are absolutely not dominated by some primitive large scale energy balance. That is actually the reason why Navier-Stokes cannot be solved in the general case: while most of the energy is contained in the large scales, most dissipation occurs in the small scales. Unfortunately, the latter is not constrained by the former, which is what makes the Navier-Stokes problem very "hard", as Terry Tao (Fields medal) wrote.
I realize that these insights are far above the abilities of the author of the quoted statement, but for interested readers I warmly recommend: http://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/
Reading it carefully allows one to understand what scale invariance in fluid dynamics means and why the small scale dynamics are NOT constrained by large scale energy balances.
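[The exponential growth of uncertainty described above can be seen directly in the Lorenz system itself. A minimal sketch, with standard chaotic parameters and a fixed-step RK4 integrator assumed for illustration: two trajectories starting 1e-8 apart end up macroscopically separated.]

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the three Lorenz ODEs."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 3000                  # integrate to t = 30
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])      # perturb one coordinate by 1e-8

max_sep = 0.0
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    max_sep = max(max_sep, float(np.linalg.norm(a - b)))

print(f"maximum separation over the run: {max_sep:.2f}")
```

[The separation grows roughly as exp(lambda * t), with lambda on the order of 0.9 for these parameters, until it saturates at the size of the attractor; that saturation is why prediction fails beyond a finite horizon, not any failure of energy or momentum conservation.]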
Also very helpful! You and Jeff are explaining a field of which I know little.
Thanx for the illuminating history of chaos and nonlinearity, including the great work of Poincare.
A good example of the Lorenz butterfly attractor (or the similar Roessler attractor) is the PDO. Bob Tisdale has shown that the PDO is an epiphenomenon of ENSO. The alternating phases of El Niño and La Niña dominance, the warm and cold phases respectively of the PDO, are the two wings of the butterfly attractor.
A good example of the Lorenz butterfly attractor (or similar Roessler attractor) is the PDO.
One should be very careful with such qualitative analogies.
When you see the "picture" of the Lorenz or any other 3D attractor, you must realize that the 3D space it is drawn in is not our ordinary space. It is the 3D phase space of the Lorenz system, which has 3 dimensions only because the system is defined by exactly 3 differential equations. The number of dynamical equations could be, and generally is, more than 3, and then it is no longer possible to represent the attractor by a "picture" with wings etc.
Now the PDO, or any other oceanic oscillation, is something completely different. It is a spatio-temporal pattern. The space it happens in is the ordinary 2D surface of a sphere (the Earth). You can always make a picture of it at different times, and if you make very many pictures and color the points according to some convention, you will have a movie showing how every spatial point oscillates in time.
But, and this is the purpose of this post, the oscillating spatial pattern you see has nothing to do with attractors like Lorenz's, because a spatio-temporal pattern is infinite dimensional. Indeed, every point P of the pattern oscillates, and you need to know every function Fp(t) to know how it oscillates. But as there is an infinity of points, you need an infinity of functions; that is why spatio-temporal chaos is said to be infinite dimensional and much more complex than the merely temporal chaos described by Lorenz attractors etc.
Actually, there exist attractors for spatio-temporal chaos too, which could justify a kind of analogy to Lorenz. However, they are infinite (or very high) dimensional structures that cannot be represented by "pictures", even though they are rigorously well defined mathematically (attracting subsets of a Hilbert space of square integrable functions).
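[For readers wanting a hands-on feel for the temporal vs. spatio-temporal distinction, a standard toy model of spatio-temporal chaos is the coupled map lattice (purely illustrative, nothing oceanic about it): every spatial site carries its own state variable, so the dimension of the phase space equals the number of sites and grows without bound as the lattice is refined.]

```python
import numpy as np

def logistic(x, r=4.0):
    """Fully chaotic logistic map on [0, 1]."""
    return r * x * (1.0 - x)

def cml_step(x, eps=0.3):
    """Diffusively coupled logistic maps on a ring (coupled map lattice)."""
    fx = logistic(x)
    return (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

n_sites = 200                  # phase-space dimension = number of sites
rng = np.random.default_rng(1)
x = rng.uniform(0.1, 0.9, n_sites)

for _ in range(1000):
    x = cml_step(x)

# The field stays irregular in both space and time: no single
# low-dimensional "butterfly picture" can capture its attractor.
print(f"{x.size} coupled degrees of freedom, spatial std {x.std():.2f}")
```

[Refining the lattice (increasing n_sites) adds dimensions without limit, which is the discrete analogue of the infinite-dimensional function space described above.]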
Tomas Milanovic | October 18, 2013 at 5:40 am | Reply
A good example of the Lorenz butterfly attractor (or similar Roessler attractor) is the PDO. One should be very careful with such qualitative analogies.
Thanks for the clarification. Once every point in the multi-dimensional phase space has a whole dimension to itself, then we are talking about full-blown chaos. But nonlinear emergent pattern occurs before full chaos is reached; we are not interested in full chaos / turbulence in this case.
In the interesting borderline region short of full chaos there will be a reduced number of effective dimensions. Since Hopf bifurcations have only just started and the system is short of full chaos, the time plot of the system will show alternating periods of higher and lower amplitude in certain dimensions (y axis); see the second figure in this article:
This looks not dissimilar to temperature plots of the alternating phases of the PDO.
I have argued previously that the BZ reactor is loosely a model of ENSO in that it is driven by what is called (e.g. by Bertram) an "excitable medium", i.e. one where potential exists for intermittent positive feedbacks. This requirement is met for ENSO by the Bjerknes feedback.
If ENSO is a nonlinear oscillator (and many still hold out for linearity and stochastic triggering / forcing) then having an attractor such as a Lorenz butterfly attractor would not be unusual.
The Lorenz attractor is a result of the three Lorenz ODEs for a given range of control parameters. These ODEs describe a 2-D convection problem.
It is pretty clear that ENSO is not equivalent to a 2-D convection problem, from which it follows that, chaotic or not, ENSO cannot present the Lorenz attractor (or the Roessler one, for that matter).
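[For reference, the three ODEs in question, Lorenz's 1963 truncation of 2-D Rayleigh-Benard convection, are

dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz,

where x measures the intensity of the convective motion and y and z measure horizontal and vertical temperature variations; the butterfly attractor appears for the standard parameter values σ = 10, ρ = 28, β = 8/3.]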