by Judith Curry
Insights from Ed Lorenz, pioneer of chaos theory, on the detection of anthropogenic global warming.
In discovering “deterministic chaos,” Dr. Lorenz established a principle that “profoundly influenced a wide range of basic sciences and brought about one of the most dramatic changes in mankind’s view of nature since Sir Isaac Newton,” said a committee that awarded him the 1991 Kyoto Prize for basic sciences.
The Bulletin of the American Meteorological Society has just published what is probably Lorenz’s last interview, on the topic of the limits of predictability and the impact on weather modeling.
MIT has a complete online collection of Lorenz’s publications [here].
Of particular relevance to topics under discussion at Climate Etc., I refer to this paper:
E.N. Lorenz (1991) Chaos, spontaneous climatic variations and detection of the greenhouse effect. Greenhouse-Gas-Induced Climatic Change: A Critical Appraisal of Simulations and Observations, M. E. Schlesinger, Ed. Elsevier Science Publishers B. V., Amsterdam, pp. 445-453.
Relevant excerpts (JC bold):
In the minds of many of us who are gathered here, the most important question concerning greenhouse gases is not whether they will produce a recognizable global warming, but when will they do so? Probably we take it for granted that, barring some catastrophe that halts or overwhelms the accumulation of carbon dioxide and other constituents, the warming predicted by theoretical studies will eventually occur. The apparent upward trend of global-average temperature during the most recent century, and the unusually warm and dry weather that has invaded parts of the world during parts of the most recent decade, have led some of us to speculate that the greenhouse warming is already being felt. In this talk I wish to examine the basis for speculating that the greenhouse effect is not the main cause of what we have been experiencing and, particularly, that the suggested warming is due to processes purely internal to the atmosphere and its immediate surroundings.
The term “chaos” currently has a variety of accepted meanings, but here we shall use it to mean deterministically, or nearly deterministically, governed behavior that nevertheless looks rather random. Upon closer inspection, chaotic behavior will generally appear more systematic, but not so much so that it will repeat itself at regular intervals, as do, for example, the oceanic tides.
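JC illustration (not part of Lorenz's paper): his own 1963 three-variable convection system is the canonical example of such behavior. The sketch below, with the standard parameter values, integrates two copies of the system from initial states differing by one part in 10^8 and shows the tiny difference growing until the trajectories are unrelated:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz (1963) system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # perturb one coordinate by 10^-8
early = None
for step in range(3000):             # 30 model time units
    a, b = lorenz63_step(a), lorenz63_step(b)
    if step == 100:
        early = np.linalg.norm(a - b)   # separation after 1 time unit
late = np.linalg.norm(a - b)            # separation at the end
print(early, late)  # the initially negligible difference grows by many orders of magnitude
```

The governing equations are fully deterministic, yet the output passes casual inspection as random; this is exactly the "deterministically governed behavior that nevertheless looks rather random" of the excerpt.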
In view of these considerations, how are we to know when a stronger greenhouse effect is finally making its presence felt? First, we must realize that we are not looking for the onset of the effect. Presumably there is no critical concentration of CO2 or some other gas beyond which the greenhouse effect begins to operate; more likely the absorption of terrestrial re-radiation by CO2 varies continuously, even if not truly linearly, with the concentration. What we are looking for is the time when the effect crosses the threshold of detectability.
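JC illustration: the "threshold of detectability" idea can be caricatured in a few lines. All numbers here are hypothetical, chosen only to show that a continuously operating effect has no onset, merely a date on which it first stands out from the noise:

```python
# Hypothetical numbers, for illustration only: a steady warming rate
# buried in year-to-year noise. The effect operates continuously from
# year zero; what changes is when it crosses a detection threshold.
rate = 0.02      # assumed warming, degrees C per year
sigma = 0.15     # assumed interannual standard deviation, degrees C

# Crude detection criterion: cumulative warming exceeds twice the noise level.
year = 1
while rate * year < 2 * sigma:
    year += 1
print(year)  # first year the accumulated signal exceeds the 2-sigma noise band
```

Nothing changes physically in that year; the effect was present from the start and simply becomes distinguishable from the background variability.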
It has sometimes been objected that in dealing with this problem we have relied too heavily on theory, but I would maintain that the problem cannot be wholly dissociated from theoretical considerations. Imagine for the moment a scenario in which we have traveled to a new location, with whose weather we are unfamiliar. For the first ten days or so the maximum temperature varies between 5° and 15°C. Suddenly, on two successive days, it exceeds 25°C. Do we on the second warm day, or perhaps on the first, conclude that somebody or something is tampering with the weather? Almost surely we do not; we are quite familiar with this sort of behavior, and we take it for granted that this is what the weather often does.
Consider now a second scenario where a succession of ten or more decades without extreme global-average temperatures is followed by two decades with decidedly higher averages; possibly we shall face such a situation before the 20th century ends. Does this scenario really differ from the previous one? We may feel that it does; for example, we may believe that if the atmosphere is subjected to similar external influences over separate long intervals, say decades, the average conditions should be similar, with the short-period fluctuations tending to cancel. If so, our conclusions have been reached through theory, that is, through what we believe is demanded by the physical laws, even though the theory may be qualitative and not worked out in detail. Certainly no observations have told us that decadal-mean temperatures are nearly constant under constant external influences.
At this point we may, in the second scenario, turn to statistical procedures. We may introduce a null hypothesis, which could say that the mean of the population of decadal mean temperatures to which the last two observations belong is not different from the mean of the population to which the earlier observations belong. We would then seek the probability that a discrepancy as large as the one that we have observed would occur, if the null hypothesis is true. If the probability is rather small, we would be likely to reject the null hypothesis, and conclude that the populations do indeed have different means. If the probability is large, the populations may still have different means, but we will lack a basis for concluding that they do.
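JC illustration: the procedure Lorenz describes is essentially a two-sample test on decadal means. A minimal sketch, assuming (as in his first null hypothesis) that decadal means are independent draws, with entirely hypothetical numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decadal-mean temperature anomalies (degrees C):
# ten "earlier" decades and two "later", decidedly warmer decades,
# as in the second scenario.
earlier = rng.normal(0.0, 0.1, size=10)   # assumed natural scatter
later = rng.normal(0.25, 0.1, size=2)     # two warmer decades

def welch_t(a, b):
    """Welch's t statistic for the difference of the two sample means."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (b.mean() - a.mean()) / np.sqrt(va + vb)

t = welch_t(earlier, later)
print(round(t, 2))  # large |t| -> small probability under the null -> reject
```

If the corresponding probability is small we reject the null hypothesis of equal population means; if it is large, the means may still differ, but, as Lorenz says, we lack a basis for concluding that they do.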
Returning to the second scenario, should we assume that decadal mean temperatures are also highly persistent? Our observations are insufficient to yield an answer, but we may turn to theory. The high persistence revealing itself in Figs. 3-5 suggests the possibility that real atmospheric decadal-mean temperatures are persistent; at least, it indicates that there is no obvious theoretical reason for hypothesizing that they are not persistent, no matter what intuition might tell us. There is a good chance, then, that in a real situation resembling the second scenario, we might not be able to reject the null hypothesis, that is, we might have to say that any change in the climate, including a change brought about by the greenhouse effect, has yet to cross the threshold of detectability.
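JC illustration: the reason persistence matters for this test can be shown by simulation. The sketch below (my construction, not Lorenz's) generates annual series as AR(1) "red noise" and compares the typical gap between the last two decadal means and the first ten, for independent years versus strongly persistent years:

```python
import numpy as np

rng = np.random.default_rng(1)

def decadal_gap(phi, n_years=120, n_trials=2000):
    """Average over trials of |mean of last two decades - mean of first ten|
    for an AR(1) annual series with lag-one autocorrelation phi."""
    gaps = []
    for _ in range(n_trials):
        eps = rng.normal(0.0, 1.0, n_years)
        x = np.empty(n_years)
        x[0] = eps[0]
        for t in range(1, n_years):
            x[t] = phi * x[t - 1] + eps[t]
        decades = x.reshape(12, 10).mean(axis=1)   # twelve decadal means
        gaps.append(abs(decades[-2:].mean() - decades[:10].mean()))
    return float(np.mean(gaps))

white = decadal_gap(phi=0.0)   # independent years
red = decadal_gap(phi=0.9)     # strongly persistent years
print(white, red)  # persistence makes large decadal excursions far more common
```

With persistent (red) noise, two anomalously warm decades arise spontaneously far more often, so a test that assumes independence will reject its null hypothesis too readily; conversely, a test that allows for the persistence may be unable to reject it at all.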
If our only evidence were observational, we might have to pause at this point, and wait for more years of data to accumulate. However, since we do have theoretical results, and since, in fact, the entire greenhouse effect would have remained unsuspected without some theory, we can put the theory to use. Different models agree reasonably well as to the increase in globally averaged sea-level temperature that would be produced by a prescribed increase in CO2 concentration. We are therefore equally justified in introducing a second null hypothesis, which would say that the difference between the means of two populations, one to which the earlier decadal mean temperatures belong, and one to which the later ones belong, is not different from the numerical value that the consensus of the theoretical studies stipulates. Again, we can ask whether anything as unusual as the difference between the observed sample means would be likely to have occurred, if the new null hypothesis is correct. Again, there is a good chance that we might lack sufficient evidence for rejecting the new hypothesis.
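JC illustration: Lorenz's second null hypothesis amounts to testing the observed difference of means against a stipulated nonzero value rather than against zero. A sketch with hypothetical numbers (the +0.25°C "model consensus" figure is invented for illustration):

```python
import numpy as np

def t_against_predicted(earlier, later, predicted_diff):
    """t statistic for: (observed difference in means) - (predicted difference).
    predicted_diff = 0 recovers the usual no-change null hypothesis."""
    va = earlier.var(ddof=1) / len(earlier)
    vb = later.var(ddof=1) / len(later)
    observed_diff = later.mean() - earlier.mean()
    return (observed_diff - predicted_diff) / np.sqrt(va + vb)

# Hypothetical decadal means: earlier decades near 0 C anomaly, two later
# decades near +0.2 C, and a model consensus predicting +0.25 C of warming.
earlier = np.array([0.02, -0.05, 0.01, 0.04, -0.03, 0.00, 0.03, -0.02, 0.05, -0.01])
later = np.array([0.18, 0.24])
t1 = t_against_predicted(earlier, later, predicted_diff=0.0)    # first null: no change
t2 = t_against_predicted(earlier, later, predicted_diff=0.25)   # second null: predicted warming
print(round(t1, 1), round(t2, 1))
```

In a case like this the first null hypothesis (no change) is rejected while the second (model-predicted warming) is not; the same data that cannot demonstrate the greenhouse effect also cannot rule it out.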
This somewhat unorthodox procedure would be quite unacceptable if the new null hypothesis had been formulated after the fact, that is, if the observed climatic trend had directly or indirectly affected the statement of the hypothesis. This would be the case, for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.
What we would conclude, then, if the second scenario is realistic, is that, although we may lack sufficient direct evidence that an increased greenhouse effect is influencing our climate, we just as surely lack direct evidence that it is not. If the effect is important, we may have to wait a few years to verify that it is, but, by the same token, if it is not important, we may have to wait a few years to verify that it is not. The implications of such a conclusion for future decision making speak for themselves.
Unfortunately, recognizing a system as chaotic will not tell us all that we might like to know. It will not provide us with a means of predicting the future course of the system. It will tell us that there is a limit to how far ahead we can predict, but it may not tell us what this limit is. Perhaps the best advice that chaos “theory” can give us is not to jump at conclusions; unexpected occurrences may constitute perfectly normal behavior.
JC comments: Lorenz’s paper was published in 1991. Recall, 1990 was the year that the IPCC FAR was published, which found “The size of this warming is broadly consistent with predictions of climate models, but it is also of the same magnitude as natural climate variability.” The year 1991 was 62% into the warming period from 1976-2000.
If Lorenz were looking at the climate data in 2013, how would he interpret it? Frankly, I don’t think the AGW detection arguments have advanced much since 1991. Paleoclimate hockey sticks do not resolve global or hemispheric climate variability on timescales of several decades. Hence, the IPCC resorts to climate models to compare AGW-forced climate change against simulations without AGW forcing, yet these same models do not adequately simulate natural internal variability on timescales beyond about 15 years. The resulting circular reasoning is that the IPCC ends up assuming that climate variability on timescales beyond 15 years is externally forced.
The prospect of the current hiatus lasting until the mid-2030s (as per the stadium wave and related theories of natural variability) is a decisive test for the IPCC’s AGW detection arguments. Detection of AGW is a prerequisite for the IPCC’s attribution arguments. The IPCC’s statement of 95% confidence that most of the warming is anthropogenic, and its expectation of substantial warming between now and 2036, have the IPCC skating on very thin ice, in my opinion.