The Principles of Reasoning. Part III: Logic and climatology

by Terry Oldberg
copyright by Terry Oldberg 2011

As originally planned, this essay was to end after Part II. However, Dr. Curry has asked me to address the topic of logic and climatology in a Part III. By the following remarks I respond to her request.

I focus upon the methodologies of the pair of inquiries that were conducted by IPCC Working Group 1 (WG1) in reaching the conclusions, in its year 2007 report, that:

  • “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change…” [1] and
  • the equilibrium climate sensitivity (TECS) is “likely” to lie in the range 2°C to 4.5°C [2].

I address the question of whether these methodologies were logical.


This work is a continuation from Parts I and II. For the convenience of readers, I provide the following synopsis of Parts I and II.

A model (aka theory) is a procedure for making inferences. Each time an inference is made, there are many candidates (often infinitely many) for the inference that is made, of which only one is correct. Thus, the builder of a model persistently faces the task of identifying the one correct inference. The builder must make this identification, but how?

Logic is the science of the principles by which the one correct inference may be identified. These principles are called “the principles of reasoning.”

While Aristotle left us the principles of reasoning for deductive logic, he failed to leave us the principles of reasoning for inductive logic. Over the centuries, model builders coped with this lack through use of the intuitive rules of thumb that I’ve called “heuristics” in identifying the one correct inference. Among these heuristics were maximum parsimony (Occam’s razor) and maximum beauty. However, each time a particular heuristic identified a particular inference as the one correct inference, a different heuristic identified a different inference as the one correct inference. In this way, the method of heuristics violated the law of non-contradiction, the defining principle of logic.

The problem of extending logic from its deductive branch through its inductive branch was known as the “problem of induction.” Four centuries ago, logicians began to make headway on this problem. Over time, they learned that an inference had a unique measure: the missing information in it for a deductive conclusion per event, the so-called “entropy” or “conditional entropy.” In view of the existence and uniqueness of this measure, the problem of induction could be solved by optimization. In particular, that inference was correct which minimized the conditional entropy or which maximized the entropy, under constraints expressing the available information.

By 1963, the problem of induction had been solved. Solving it revealed three principles of reasoning. They were:

  • the principle of entropy maximization,
  • the principle of maximum entropy expectation and,
  • the principle of conditional entropy minimization.
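
The principle of entropy maximization can be made concrete with a minimal sketch (mine, not part of the historical solution): among candidate assignments of probabilities to a fixed set of outcomes, and in the absence of any constraining information, the assignment that maximizes the entropy is the uniform one; the other candidates fabricate information.

```python
from math import log2

def entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(q * log2(q) for q in p if q > 0)

# Three candidate assignments of probabilities to four possible outcomes.
candidates = {
    "uniform": [0.25, 0.25, 0.25, 0.25],
    "skewed":  [0.70, 0.10, 0.10, 0.10],
    "certain": [1.00, 0.00, 0.00, 0.00],
}

for name, p in candidates.items():
    print(f"{name:8s} H = {entropy(p):.3f} bits")

# With no constraining information, entropy maximization selects the
# uniform distribution (H = 2 bits); the other candidates claim
# information that is not available.
```

When information is available, say a known expected value, the maximization is carried out subject to constraints expressing that information rather than freely.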

Few scientists, philosophers or statisticians noticed what had happened. The vast majority of model builders continued in the tradition of identifying the one correct inference by the method of heuristics.

Illogical methodologies

When an inquiry is conducted under the principles of reasoning, the model that is constructed by this inquiry expresses all of the available information but no more. However, when people construct a model, they rarely conform to the principles of reasoning. In constructing a model, people exhibit “bounded rationality” [4].

A consequence of an illogical methodology can be for the constructed model to express more than the available information, or less. When it expresses more than the available information, the model fails through falsification by the evidence. When it expresses less than the available information, the model deprives its user of information.

In constructing models, people commonly fabricate information that is not available to them. The penalty for fabrication is for the model to fail in service or when it is tested. If the model is tested, its failure is observable as a disparity between the predicted and observed relative frequencies of an outcome. In particular, the observed relative frequency lies closer to the base rate than is predicted by the model; the “base rate” of an outcome is the unconditional probability of this outcome.

In the intuitive assignment of numerical values to the probabilities of outcomes, people commonly fabricate information through neglect of the base rate [5]. Casino gamblers fabricate information by thinking they have discovered patterns in observational data that the casino owners have eliminated by the designs of their gambling devices [6]. Physicians fabricate it by neglecting the base rate of a disease in estimating the probability that a patient has this disease given a positive result from a test for this disease [7]. Physical scientists fabricate information by assuming that a complex system can be reduced to cause-and-effect relationships, the false proposition that is called “reductionism” [8] [9]. Statisticians fabricate it by assuming the existence in nature of probability density functions that are not present in nature. Dishonest researchers fabricate information by fabricating observed events from observations never made, a practice that is called “dry labbing the experiment.”
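
The physicians’ error can be made concrete with a short Bayes-rule sketch; the base rate, sensitivity and specificity below are illustrative numbers of my choosing, not drawn from [7].

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    p_pos_given_d = sensitivity
    p_pos_given_not_d = 1 - specificity
    p_pos = prior * p_pos_given_d + (1 - prior) * p_pos_given_not_d
    return prior * p_pos_given_d / p_pos

# Hypothetical numbers: base rate 1%, test 90% sensitive, 95% specific.
print(posterior(0.01, 0.90, 0.95))  # ≈ 0.154
```

Neglecting the base rate, one intuits a probability near the test’s 90% sensitivity; honoring it, the probability is only about 15%, far closer to the 1% base rate.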

In constructing models, model builders commonly waste information that is available to them by failing to discover patterns that are present in their observational data. Statisticians waste information by assuming the categories on which their probabilities are defined rather than discovering these categories by minimization of the conditional entropy; to waste information in this way is a characteristic of the Bayesian method for construction of a model.

Inquiries and logic

If an inquiry was conducted, then how can one tell whether its methodology was logical? Using tools developed in Parts I and II, it can be said that a logical methodology would leave a mark on an inquiry in which:

  • the claims that came from the inquiry would be based upon models (aka theories) and,
  • processes using these models as their procedures would make inferences and,
  • each such inference would have a unique measure and,

and so on and so forth.

Seemingly, by following this line of reasoning to its endpoint, one could compile a list of traits of an inquiry that was conducted under logical methodology for use in determination of whether the methodology of any particular inquiry was logical.

Need for disambiguation

In reality, the task is not so simple. Ambiguity of reference by terms in the language of climatology to the associated ideas may obscure the truth. Thus, there may be the necessity for disambiguation of this language before the truth is exposed.

My methodology

Thus, in going forward I’ll work toward three complementary ends. First, as needed, I’ll disambiguate the language of climatology. Second, using the disambiguated language, I’ll identify traits of an inquiry that was conducted under a logical methodology. Third, I’ll compare these traits to the traits of WG1’s inquiries; from this comparison, I’ll reach a conclusion on the logicality of each inquiry.

Once I’ve judged the logicality of each inquiry, I’ll reach some conclusions about the claims that came from the two inquiries. The evidence on which I shall base these conclusions will include the text of the 2007 report of WG1 and the reports of the several researchers who have written on ambiguities of reference by terms in the language of climatology.

Disambiguating “model”

In the language of climatology, the word “model” ambiguously references: a) the procedure of a process that makes a predictive inference and b) the procedure of a process that makes no predictive inference. As making judgments about the correctness of inferences is what the principles of reasoning do, this ambiguity muddies the waters that surround the issue of the logicality of a methodology.

To resolve this ambiguity while preserving the semantics of “model” that I established in Part I, I’ll make “model” my term of reference to the procedure of a process that makes a predictive inference. I’ll make the French word modèle my term of reference to the procedure of a process that makes no predictive inference.

By my definition of the respective terms, a model provides the procedure for making “predictions” but not “projections.” A modèle provides the procedure for making “projections” but not “predictions.” Later, I’ll disambiguate the terms “predictive inference,” “prediction” and “projection” among other terms.

Past studies

First, however, I’ll summarize the findings of past studies. The linguistic ground that I am about to cover was previously covered by the studies of:

  • Gray,
  • Green and Armstrong and,
  • Trenberth.

The language in which the findings of these studies were presented left the ambiguity of reference by the word “model” unresolved. This ambiguity left the findings unclear. In the following review of these findings, I clarify the findings by making the model/modèle disambiguation. By the evidence that is about to be revealed, WG1’s claims were based entirely upon climate modèles and not at all upon climate models, for the latter did not exist for WG1.

In the course of his service to the IPCC as an expert reviewer, Vincent Gray raised the issue with the IPCC of how the IPCC’s climate modèles could be statistically validated [10]. No modèle could have been validated because no modèle was susceptible to validation. However, at the time Gray raised this question the waters surrounding the issue of the validation were muddied by the failure of the language of climatology to make the model/modèle disambiguation.

According to Gray, the IPCC un-muddied the waters by establishing an editorial policy for subsequent assessment reports. Under this policy: 1) the word “prediction” was changed to the word “projection” and 2) the word “validation” was changed to the word “evaluation”; the distinction between “prediction” and “projection” that was made by the IPCC under this policy was identical to the distinction that I make in this article.

The IPCC failed to consistently enforce its own policy. A consequence was for the words “prediction,” “forecast” and their derivatives to be mixed with the word “projection” in the subsequent IPCC assessment reports. In Chapter 8 of the 2007 report of WG1, Green and Armstrong [11] found 37 occurrences of the word “forecast” or its derivatives and 90 occurrences of the word “predict” or its derivatives. Of 240 climatologists polled by Green and Armstrong (70% of whom were IPCC authors or reviewers), a majority nominated the IPCC 2007 report as the most credible source of predictions/forecasts (not “projections”) though the IPCC’s climate modèles made no predictions/forecasts.

For the reader who assumed the IPCC’s pronouncements to be authoritative, it sounded as though “prediction,” “forecast” and “projection” were synonymous. If made synonymous, though, the three words made ambiguous reference to the associated ideas. By failing to consistently enforce its policy, the IPCC muddied the very waters that it had clarified by its editorial policy.

In establishing the policy that the word “validation” should be changed to the word “evaluation” the IPCC tacitly admitted that its modèles were insusceptible to statistical validation. That they were insusceptible had the significance that the claims of the modèles were not falsifiable. That they were not falsifiable meant that the disparity between the predicted and the observed relative frequency of an outcome that would be a consequence from fabrication of information was not an observable.

In retrospect, it can be seen that no modèle could have been statistically validated because no modèle provided the set of instructions for making predictions. Every modèle could be statistically evaluated but while “statistical validation” and “statistical evaluation” sounded similar, they referenced ideas that were logically contradictory. They were contradictory in the sense that a model’s claims could be falsified by the evidence but a modèle’s claims could not be falsified by the evidence.

On the evidence that most polled climatologists understood the IPCC’s climate modèles to make predictions, Green and Armstrong concluded (incorrectly) that these modèles were examples of models. They went on to audit the methods of construction of the modèles under rules of construction for models. The modèles violated 72 of 89 rules. In view of the numerous violations, Green and Armstrong concluded that the modèles were not models hence were unsuitable for making policy.

In response to this finding of Green and Armstrong, the climatologist Kevin Trenberth [12] pointed out (correctly) that, unlike models, the IPCC modèles made no predictions. According to Trenberth, they made only “projections.”

Disambiguating “prediction”

A “prediction” is an assignment of a numerical value to the probability of each of the several possible outcomes of a statistical event; each such outcome is an example of a state of nature and is an observable feature of the real world.

Disambiguating “predictive inference”

A “predictive inference” is a conditional prediction.  Like an outcome, a condition is a state of nature and an observable feature of the real world.

In a predictive inference, a numerical value is assigned to the probability of each condition and to the probability of each outcome AND condition, where by “AND” I mean the logical operator of the same name. By these assignments, a numerical value is assigned to the conditional entropy of the predictive inference. The conditional entropy of this inference is its unique measure in the probabilistic logic.
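
As an illustration (my sketch, with made-up probabilities), the conditional entropy of a predictive inference can be computed directly from the assignments just described: the probability of each condition and of each outcome AND condition.

```python
from math import log2

def conditional_entropy(joint):
    """H(outcome | condition) in bits.

    joint[c][o] is the probability of condition c AND outcome o."""
    h = 0.0
    for outcomes in joint:
        p_c = sum(outcomes)          # marginal probability of the condition
        for p_co in outcomes:
            if p_co > 0:
                h -= p_co * log2(p_co / p_c)
    return h

# A hypothetical 2-condition, 2-outcome predictive inference.
joint = [[0.40, 0.10],   # condition 1: P(outcome 1 AND c1), P(outcome 2 AND c1)
         [0.15, 0.35]]   # condition 2
print(conditional_entropy(joint))  # ≈ 0.80 bits
```

A deductive conclusion would have conditional entropy 0; the 0.80 bits here measure the information missing, per event, for such a conclusion.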

The “predictive inference”/”NOT predictive inference” pair

In the disambiguated language, a “model” is a procedure for making inferences, one of which is a predictive inference. While a modèle is a procedure, it is not a procedure for making a predictive inference. Thus, the idea that is referenced by “predictive inference” and the idea that is referenced by “NOT predictive inference” form a pair.

Disambiguating “projection”

A “projection” is a response function that maps the time to the value of a dependent variable of a modèle.

The “predictions”/”projections” pair

Using a model as its procedure, a process makes “predictions.” Using a modèle as its procedure, a process makes “projections.” Thus, the idea that is referenced by “predictions” and the idea that is referenced by “projections” form a pair.

Disambiguating “statistical population”

The idea of a “statistical population” rests upon a time sequence of independent statistical events; the description of each such event pairs a condition with an outcome. An event for which the condition and outcome were both observed is called an “observed event.” A “statistical population” is a collection of observed events.

Disambiguating “statistical ensemble”

A “statistical ensemble” is a collection of projections. This collection is formed by variation of the values that are assigned to the parameters of the associated modèle within the ranges of these parameters.

The “statistical population”/”statistical ensemble” pair

The idea of a “statistical population” is associated with the idea of a model. The idea of a “statistical ensemble” is associated with the idea of a modèle. Thus, the idea that is referenced by “statistical population” and the idea that is referenced by “statistical ensemble” form a pair.

Disambiguating “statistical validation”

“Statistical validation” is a process that is defined on a statistical population of a model and on predictions of the outcomes in a sample of the observed events from this population. Under this process, the predictions collectively state falsifiable claims regarding the relative frequencies of the outcomes in the sample. A model for which these claims are not falsified by the evidence is said to be “statistically validated.”
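
One simple way such a falsifiable claim might be checked is to ask whether the observed relative frequency of an outcome in the sample lies within a sampling band around the predicted probability. This is a sketch of mine, not a procedure drawn from the text:

```python
from math import sqrt

def validate(predicted_p, observed_successes, n, z=1.96):
    """Crude frequency test: is the observed relative frequency of an
    outcome within a ~95% sampling band of the predicted probability?"""
    observed_f = observed_successes / n
    band = z * sqrt(predicted_p * (1 - predicted_p) / n)
    return abs(observed_f - predicted_p) <= band

# A model predicting probability 0.59 for an outcome, tested on a
# sample of 126 observed events in which the outcome occurred 70 times.
print(validate(0.59, 70, 126))  # 70/126 ≈ 0.556, within the band
```

If the observed relative frequency fell outside the band, the model’s claim would be falsified by the evidence; that this falsification is possible is what makes validation a test.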

Disambiguating “statistical evaluation”

“Statistical evaluation” is a process that is defined on a statistical ensemble and on a related observed time-series; there is an example of one at [3]. The example features projections that map the time to the global average surface temperature plus a selected observed global average surface temperature time series.

The projections belonging to a statistical ensemble state no falsifiable claim with respect to the time series. Thus, a modèle is insusceptible to being falsified by the evidence that is produced by its evaluation.

The statistical validation/statistical evaluation pair

Statistical validation is a process on the predictions of a model. Statistical evaluation is a process on the projections of a modèle. Thus, the ideas that are referenced by “statistical validation” and “statistical evaluation” form a pair.

Disambiguating “science”

In common English, the word “science” makes ambiguous reference to two ideas. One of these references is to the idea of “demonstrable knowledge.” The other is to the idea of “the process that is operated by people calling themselves ‘scientists’.”

Under the Daubert rule, testimony is inadmissible as scientific testimony in the federal courts of the U.S. if the claims that are made by this testimony are non-falsifiable [13]. In this way, Daubert disambiguates “science” to “demonstrable knowledge.”

The “satisfies Daubert”/”does not satisfy Daubert” pair

A model is among the traits of an inquiry that satisfies Daubert. A modèle is among the traits of an inquiry that does not satisfy Daubert. Thus, “satisfies Daubert” and “does not satisfy Daubert” form a pair.

The “falsifiable claims”/“non-falsifiable claims” pair

A model makes falsifiable claims. A modèle makes non-falsifiable claims. Thus, “falsifiable claims” and “non-falsifiable claims” form a pair.

Grouping the elements of the pairs

I’ve identified a number of pairs of traits of an inquiry. Each trait can be sorted into one of two lists of traits. The traits in one of these lists are associated with a model. The traits in the other list are associated with a modèle.

By this sorting process, I produce a “left-side list” and a “right-side list.” They are presented immediately below.

left-hand list                          right-hand list

model                                   modèle
predictive inference                    NOT predictive inference
predictions                             projections
statistical population                  statistical ensemble
statistical validation                  statistical evaluation
falsifiable claims                      non-falsifiable claims
satisfies Daubert                       does not satisfy Daubert

The traits of an inquiry under a logical methodology

A remarkable feature of the two lists is that each member of the left-hand list is associated with making a predictive inference while each member of the right-hand list is associated with NOT making a predictive inference. That predictive inferences are made is a necessary (but not sufficient) condition for the probabilistic logic to come to bear on an inquiry. Thus, I conclude that elements of the left-hand list are traits of an inquiry under a logical methodology while the elements of the right-hand list are traits of an inquiry under an illogical methodology.

The logicality of the first WG1 claim

An inquiry by WG1 produced the claim that “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change…” An AOGCM is an example of a modèle. The traits of the associated inquiry are those of the right-hand list. Thus, the methodology of this inquiry was illogical.

The logicality of the second WG1 claim

WG1 defines the equilibrium climate sensitivity (TECS) as “the global annual mean surface air temperature change experienced by the climate system after it has attained a new equilibrium in response to a doubling of atmospheric CO2” [14]. In climatology and in this context, the word “equilibrium” references the idea that, in engineering heat transfer, is referenced by “steady state”; the idea is that temperatures are unchanging.

From readings in the literature of climatology, I gather that TECS is a constant; this constant links a change in the CO2 level to a change in the equilibrium temperature by the relation

ΔT =  TECS * log2 (C/Co)                            (1)

where ΔT represents the change in the equilibrium temperature, C represents the CO2 level and Co represents the CO2 level at which ΔT is nil.
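
To make Equation (1) concrete, here is a minimal numerical sketch; the CO2 levels of 280 and 560 ppm are illustrative values of mine, chosen so that C/Co = 2:

```python
from math import log2

def delta_t(t_ecs, c, c0):
    """Equation (1): change in equilibrium temperature at CO2 level c."""
    return t_ecs * log2(c / c0)

# For a doubling of CO2, log2(C/Co) = 1, so ΔT equals TECS itself.
for t_ecs in (2.0, 3.0, 4.5):
    print(t_ecs, delta_t(t_ecs, 560, 280))
```

The logarithmic form means each successive doubling of C adds the same increment TECS to ΔT.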

Using Equation (1) as a premise, WG1 claims it is “likely” that, for a doubling of C, ΔT lies in the range of 2°C to 4.5°C. That it is “likely” signifies that WG1 assigns a value exceeding 66% to the probability that ΔT lies in this range.

If Equation (1) is false, WG1’s claim is baseless. Is it false? This question has no answer, for the change ΔT in the equilibrium temperature is not an observable. As ΔT is not an observable, one could not observationally determine whether Equation (1) was true or false. As the claims that are made by Equation (1) are non-falsifiable, Equation (1) must be an example of a modèle.

As ΔT is not an observable, it would not be observationally possible for one to determine whether ΔT lay in the range of 2°C to 4.5°C. Thus, the proposition that “ΔT lies in the range of 2°C to 4.5°C” is not an observable. However, an outcome of an event is an observable. Thus, the proposition that “ΔT lies in the range of 2°C to 4.5°C” is not an outcome. WG1’s claim that it is “likely” that, for a doubling of C, ΔT lies in the range of 2°C to 4.5°C references no outcome. As the claim references no outcome, it conveys no information about an outcome from a policy decision to a policy maker.

As the “consensus” of climatologists plus some of the world’s most prestigious scientific societies seem to concur with WG1’s claim, a policy maker could reach the conclusion that this claim conveyed information to him/her about the outcome from his/her policy decision. The signers of the Kyoto accord would seem to have reached this conclusion. The issuers of the U.S. Environmental Protection Agency’s “endangerment” finding would seem to have reached the same conclusion.  However, as I just proved, this conclusion is false.

Moving to a logical methodology

The cost from abatement of CO2 emissions would be about 40 trillion US$ per year [15]. Before spending that kind of money, policy makers should have information about the outcome from their policy decisions: the more information the better the decisions. By moving to a logical methodology for their inquiries, climatologists would convey the maximum possible information about the outcome from a policy decision to a policy maker.

It might be helpful for me to sketch out a path by which such a move can be accomplished. Under a logical methodology, one trait of an inquiry would be a statistical population. The data of Green and Armstrong suggest that the majority of WG1 climatologists confuse the idea of a statistical population with the idea of a statistical ensemble. This confusion, perhaps, accounts for my inability to find the idea of a population in WG1’s report.

On the path toward a logical methodology for a climatological inquiry, a task would be to create this inquiry’s statistical population. Each element of this population would be an observed independent statistical event.

A statistical population is a subset of a larger set of independent statistical events. On the path toward a logical methodology, this set must be described. In describing it, it is pertinent that, in a climatological inquiry, the Earth is the sole object that is under observation. Thus, a climatological inquiry must be of the longitudinal variety.

That an inquiry is longitudinal has the significance that its independent statistical events are sequential in time. That these events are statistically independent has the significance that they do not overlap in time. Thus, the stopping point for one event must be the starting point for the next event in the sequence. The state that I’ve called a “condition” is defined at the starting point of each event. The state that I’ve called an “outcome” is defined at the stopping point.

I understand that the World Meteorological Organization (WMO) sanctions the policy of defining a climatological observable as a weather observable when the latter is averaged over a period of 30 years. I’ll follow the WMO’s lead by specifying that each independent statistical event in the sequence of these events has a period of 30 years. For the sake of illustration, I stipulate that the times of the starts and stops of the various events in the sequence are at 0 hours Greenwich Mean Time on January 1 in years of the Gregorian Calendar that are evenly divisible by 30; thus, they fall in the time series …, 1980, 2010,…

A model has one or more dependent variables, each of which is an observable. At the end of each observed event, a value is assigned to each of these variables. For the sake of illustration, I stipulate that the referenced inquiry’s model has one dependent variable: the HADCRUT3 global temperature, averaged over the 30-year period of an event. There is a problem with this choice; later, I’ll return to it.

For the sake of illustration, I stipulate that the model has two outcomes. The first is: the average of the HADCRUT3 over the period of an event is less than the median in the model’s statistical population. The second is its negation: the average of the HADCRUT3 over the period of the event is NOT less than the median in the model’s statistical population.
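
The construction of these events and outcomes might be sketched as follows; the temperature series is synthetic, and for simplicity the events are counted off from 1850 rather than aligned to the Gregorian boundaries stipulated above:

```python
from statistics import median

def make_events(annual_temps, start_year, period=30):
    """Partition an annual series into consecutive, non-overlapping
    events of `period` years and average the series over each one."""
    events = []
    for i in range(0, len(annual_temps) - period + 1, period):
        avg = sum(annual_temps[i:i + period]) / period
        events.append((start_year + i, avg))
    return events

def outcomes(events):
    """Binary outcome per event: is its average below the population median?"""
    m = median(avg for _, avg in events)
    return [(year, avg < m) for year, avg in events]

# Illustrative made-up anomalies for 1850-2009 (160 years -> 5 events).
series = [0.002 * t for t in range(160)]  # a synthetic warming trend
evts = make_events(series, 1850)
print(outcomes(evts))  # two events below the median, three not
```

Note that 160 years of annual data yield only five 30-year events, foreshadowing the sample-size problem discussed below.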

A model has one or more independent variables, each of which is an observable. At the start of each observed event, a value is assigned to each of these variables. Practical considerations prevent a model from having more than about 100 independent variables. In the selection of these 100, one looks for time series or functions of several time series that singly or pair-wise provide a maximum of information about the outcome. Some possible time series are:

  • CO2 level at Mauna Loa observatory,
  • time rate of change of CO2 level,
  • HADCRUT3 global temperature,
  • HADCRUT3, lagged by 1 year,
  • HADCET (central England temperature),
  • Precipitation at Placerville, California,
  • Time rate of change of sea surface temperature off Darwin, Australia,
  • Zurich sunspot number,
  • Jeffrey pine tree ring index, Truckee California
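
The selection criterion just described, that a candidate independent variable singly provide a maximum of information about the outcome, can be sketched with an empirical mutual-information estimate; the event data below are made up for illustration:

```python
from math import log2

def mutual_information(pairs):
    """I(X; Y) in bits, estimated from observed (x, y) event pairs."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# Toy observed events as (candidate variable, outcome) pairs. A variable
# that carries information about the outcome scores well above 0 bits.
informative = [(0, 0)] * 40 + [(1, 1)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10
uninformative = [(0, 0)] * 25 + [(0, 1)] * 25 + [(1, 0)] * 25 + [(1, 1)] * 25
print(mutual_information(informative))    # well above 0
print(mutual_information(uninformative))  # ≈ 0
```

Candidates would be ranked by scores of this kind, singly and pair-wise, and the roughly 100 most informative retained.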

If one or more of the AOGCMs were to be modified to predict the outcomes of events, the predicted outcome would be a possible independent variable. Internal variables of the AOGCM, such as the predicted spatially averaged temperature at the tropopause, would be possible independent variables. If an AOGCM were adept at predicting outcomes, it would generate theoretical or “virtual” events, the count of which would add to the count of the observed events in the assignment of numerical values to probabilities of outcomes.

If an independent variable space (perhaps containing some of the above referenced time series) were to be identified then this space could be searched for patterns. Under a logical methodology, this search would be constrained by the principles of reasoning. If successful, the search for patterns would create the maximum possible information about the outcomes from policy decisions for conveyance to policy makers.

The search for patterns might be unsuccessful. In this case, the maximum possible information that could be conveyed to a policy maker would be nil. The chance of success at discovering patterns would grow as the size of the statistical population grew.

The need for more observed events

Earlier, I flagged the 30-year period of the events in my example as a problem. The problem is that, with a period this long, the temperature record supports the existence of very few observed events. The HADCRUT3 extends backward only to the year 1850, thus supporting the existence of only 5 observed events of the type I’ve described. That’s far too few.

In long-range weather forecasting research, it has been found that a population of 130 observed events is close to the minimum size at which pattern discovery is successful. To supply at least 130 events of 30 years each, the HADCRUT3 would have to extend back at least 3,900 years (130 × 30). In reality, it extends back only 160 years.

Thermometers were invented only 400 years ago. A lesson that can be taken from these facts is that temperature has to be abandoned as an independent variable and a proxy for temperature substituted for it.

Logic and meteorology

A logical methodology has been employed in at least seven meteorological inquiries. Experience gained in this research provides a bit of guidance on the designs of logical climatological inquiries. The degree of this guidance should not be overblown, for the problems that are confronted by meteorological and climatological research differ. In particular, the number of observed events that are required for pattern discovery to be successful might be quite different.

The results from seven meteorological inquiries that were conducted under a logical methodology are summarized by Christensen [16]. In brief, the aim of the research was a breakthrough that would facilitate forecasting of seasonally or annually averaged surface air temperatures and precipitation in the far Western states of the U.S. at mid- to long-range. This aim was achieved.

In the planning of the inquiries, the original intent was to exploit the complementary strengths of mechanistic and statistical models by integrating an AOGCM with a statistical pattern matching model. The AOGCM would contribute independent variables to the pattern matching model and possibly would contribute to the counts of the statistical events that were the source of information for the projects. However, a role for the AOGCM was abandoned when it was discovered that the AOGCM grew numerically unstable over forecasting periods longer than a few weeks. As the goal was to forecast with statistical significance over periods of as much as a year, the instability rendered the AOGCM useless as an information source.

In one of the inquiries [17] [18], 126 observed statistical events were available for the construction and statistical validation of the model. In the construction, 100,000 weather-related time series were examined for evidence of information in them about the specified weather outcomes. Additional time series were generated by running the 100,000 time series through filters of various kinds; for example, one kind of filter computed the year-to-year difference in the value of an observable. From these data, the 43 independent variables of the model were selected for their relatively high information content about the outcome and their relatively high degree of statistical independence.

Using a pattern discovery algorithm as its procedure, a process searched for patterns in the Cartesian product space of the sets of values that were taken on by the 43 independent variables. This process discovered three patterns.

One of these patterns predicted precipitation 36 months in advance, a factor of 36 improvement over the prior state of the weather forecasting art. This pattern was:

  • normal or high Pacific Ocean surface temperatures 2 summers ago in the western portion of the ±10° equatorial belt AND,
  • normal or high sea surface temperatures 3 springs ago in the northeastern portion of the equatorial belt AND,
  • moderate or low precipitation at Nevada City 2 years ago.
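A match to the pattern is simply the conjunction of the three discretized conditions. A hypothetical sketch (the variable names and category codings are invented for illustration):

```python
# Hypothetical discretized inputs; category labels are invented for illustration.
def matches_pattern(sst_west_2sum, sst_ne_3spr, precip_nc_2yr):
    """True when an event matches the three-condition precipitation pattern."""
    return (sst_west_2sum in ("normal", "high")
            and sst_ne_3spr in ("normal", "high")
            and precip_nc_2yr in ("moderate", "low"))

hit = matches_pattern("high", "normal", "low")
miss = matches_pattern("low", "normal", "low")
```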

A match to this pattern assigned 0.59±0.11 to the probability of above-median precipitation in the following year at a collection of precipitation gauges in the Sierra Nevada east of Sacramento.
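The figure 0.59±0.11 has the form of a conditional relative frequency with an uncertainty band. The sketch below uses hypothetical event counts, chosen only to reproduce roughly the quoted numbers, together with a plain binomial standard error; the estimator actually used in the inquiry may differ.

```python
import math

def conditional_probability(matches, successes):
    """Relative frequency of the outcome among pattern-matching events,
    with a simple binomial standard error."""
    p = successes / matches
    se = math.sqrt(p * (1.0 - p) / matches)
    return p, se

# Hypothetical counts: of 22 events matching the pattern, 13 were followed
# by above-median precipitation.
p, se = conditional_probability(22, 13)   # roughly 0.59 and 0.10
```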

A consequence of this inquiry was the discovery of the significance of the El Niño Southern Oscillation (ENSO) for mid- to long-range weather forecasting. Exploitation of information theory made it possible to filter out the vast amount of noise in the underlying time series and to tune into the signal being sent by the ENSO about the precipitation to come in the Sierra Nevada.


A pair of WG1’s inquiries were examined for evidence of whether or not their methodologies were logical. The traits that mark a logical methodology were found to be absent. However, their absence was obscured by the ambiguity with which climatology’s language references the associated ideas. As a consequence of this obscurity, it could appear to a policy maker that the IPCC conveyed information to him/her about the outcomes of his/her policy decisions. It was shown that, under WG1’s illogical methodology, the IPCC conveyed no such information.

With a move to a logical methodology, it would become possible for the IPCC to convey information to policy makers about the outcomes of their policy decisions. A path toward such a methodology was sketched.

Moderation note: this is a technical thread; it will be moderated for relevance.

Works cited

[1] Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Chapter 8: Executive Summary.” URL =

[2] Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Chapter 10. Mean temperature.” URL =

[3] Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Frequently Asked Question 8.1: How Reliable Are the Models Used to Make Projections of Future Climate Change?” URL =

[4] Kahneman, Daniel, “Maps of Bounded Rationality.” Nobel Prize lecture, December 8, 2002.

[5] Kahneman, Daniel and Amos Tversky, “On the reality of cognitive illusions.” Psychological Review, 1996, Vol. 103, No. 3, 582-591. URL =

[6] Ayton, Peter and Ilan Fischer, “The hot hand fallacy and the gambler’s fallacy: Two faces of subjective randomness?” Memory & Cognition, 2004, 32 (8), 1369-1378. URL =

[7] Casscells, W, A Schoenberger and TB Graboys, “Interpretation by physicians of clinical laboratory results.” N Engl J Med, Nov 2, 1978; 299:999-1001.

[8] Scott, Alwin, “Reductionism revisited.” Journal of Consciousness Studies, 11, No. 2, 2004, pp. 51–68. URL =

[9] Capra, Fritjof, The Turning Point: Science, Society and the Rising Culture, 1982.

[10] Gray, Vincent: Spinning the Climate. URL =

[11] Green, Kesten C. and J. Scott Armstrong: “Global Warming: Forecasts by Scientists vs. Scientific Forecasts,” Energy and Environment, Vol 18, No. 7+8, 2007. URL =

[12] Trenberth, Kevin. URL =

[13] “Daubert Standard.” Wikipedia 18 January 2011. URL =

[14] Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Chapter Definition of climate sensitivity.” URL =

[15] Lomborg, Bjorn, “Time for a smarter approach to global warming.” Wall Street Journal, Dec. 15, 2009. URL =

[16] Christensen, Ronald, “Entropy Minimax Multivariate Statistical Modeling-II: Applications,” Int J General Systems, 1986, Vol. 12, 227-305

[17] Christensen, R., R. Eilbert, O. Lindgren and L. Rans, “An exploratory Application of Entropy Minimax to Weather Prediction. Estimating the Likelihood of Multi-Year Droughts in California.” Entropy Minimax Sourcebook, Vol 4: Applications. 1981. pp. 495-544. ISBN 0-938-87607-4.

[18] Christensen, R. A., R. F. Eilbert, O. H. Lindgren, L. L. Rans, 1981: Successful Hydrologic Forecasting for California Using an Information Theoretic Model. J. Appl. Meteor., 20, 706–713. URL =

182 responses to “The Principles of Reasoning. Part III: Logic and climatology”

  1. “As ΔT is not an observable, it would not be observationally possible for one to determine whether ΔT lay in the range of 2°C to 4.5°C. Thus, the proposition that “ΔT lies in the range of 2°C to 4.5°C” is not an observable. However, an outcome of an event is an observable. Thus, the proposition that “ΔT lies in the range of 2°C to 4.5°C” is not an outcome. WG1’s claim that it is “likely” that, for a doubling of CO2, ΔT lies in the range of 2°C to 4.5°C references no outcome. As the claim references no outcome, it conveys no information about an outcome from a policy decision to a policy maker.”

    Thank you.

    • But the policymakers think it does.

      • Roger Andrews writes “But the policymakers think it does.”

        So what? I have been saying this for years, and I can find few people who take me seriously; Fred Moolton (?sp) is amongst them. I am not concerned with policy makers. I am concerned with organizations like the Royal Society and the American Physical Society, and also with scientists who hold senior positions in governments.

        If this statement is so blindingly obvious, which it is, why does Dr. Curry still cling to the idea that CAGW has some sort of scientific basis?

      • Jim:

        I’m not sure whether you are agreeing with me or pouring on the scorn here. But to make my position clear, the purpose of my brief comment was simply to point out that even if scientists are able to reach consensus on uncertainties, the interface with policymakers still remains to be negotiated. And so far the policymakers have shown a regrettable tendency to make up their own science (“We KNOW that global warming is real and threatens life on earth” etc. etc.)

        Nevertheless, I think Dr. Curry is right in trying to put “science” back into “climate”. We can argue that CAGW has no scientific basis (an argument I happen to agree with) but if this is indeed the case we need to demonstrate it scientifically.

      • Roger, You write “but if this is indeed the case we need to demonstrate it scientifically.”

        This is my point. Terry Oldberg HAS just demonstrated it scientifically. To me what he has written ought to be emblazoned in neon lights wherever CAGW is discussed. Surely, if Terry is right, and he is right, then organizations like the RS and the APS ought to completely, and unequivocally withdraw their support for CAGW.

        And yes, I am agreeing with you.

      • Jim:

        Thank you. You have a point.

        I also agree that the position of some scientific bodies on the CAGW issue isn’t supported by the science, which has to make you wonder how scientific some of these bodies really are.

      • Once before I witnessed unanimous support by pseudo-scientists in power to results from a logically unsound methodology. By crushing my dissent, they and their financial backers were successful in making these results the basis for public policy on a safety issue. This experience leads me to doubt that the logical shortcomings of IPCC climatology will be corrected simply by exposing them. In order for logic to be brought to climatology, an organized and well funded pressure group would have to be created for the purpose of contesting and overcoming the power of the pseudo-scientists.

  2. An interesting post.

    It would seem to support my conclusion that there is not a scientific consensus on the subject of climate change except in pretty narrow areas.

  3. “As the “consensus” of climatologists plus some of the world’s most prestigious scientific societies seem to concur with WG1’s claim, a policy maker could reach the conclusion that this claim conveyed information to him/her about the outcome from his/her policy decision. The signers of the Kyoto accord would seem to have reached this conclusion. The issuers of the U.S. Environmental Protection Agency’s “endangerment” finding would seem to have reached the same conclusion. However, as I just proved, this conclusion is false.”

  4. A good post indeed. The confusion between projections (from given scenarios) and predictions (deriving a falsifiable value for a state of affairs), and the parallel confusion between the spread of outcomes among an ensemble of projections and models on the one hand and the statistical distribution of probabilities for the values of a random variable on the other, have been pointed out many times, though seldom with such rigour and clarity.

  5. That was a long, but interesting article!
    I would like to write a summary of what I got from this.
    Consensus Climate Theory and Models do not pass the Logic Test!
    I knew that!

  6. Wow – very impressive.

    Terry – could one increase the number of events by changing the 30-year period to 1 year – or would that not work because it is too short a period?

    That is the only way we could get 130 events out of the current temperature record.

    Moving to a proxy seems problematic, as it seems that all the proxies tend to dampen out the natural variability, and sort of compress it to a sort of 10 year or so average.

    Do you have any thoughts on what proxy, if any, might be suitable to use instead of the temperature record (if one year is just too short to allow more than 130 events)?

    • RickA:

      Good questions!

      What the event period should be and whether a proxy should be used for temperature are among many hard problems. Acting by myself, I’m unable to solve these problems.

      Thus, my thoughts run toward identification of the organizational response that would be appropriate in addressing these problems. Evidence presented in my article is consistent with belief in the proposition that, as a group, professional climatologists are deficient in their understanding of information-theoretically optimal model building.

      In addressing the hard problems, climatologists need this understanding. Thus, an appropriate organizational response would be for this understanding to be imparted to climatologists. In my view, the people of the world need a crash effort toward imparting this understanding.

  7. Interesting post, thanks.

    Perhaps this is obvious to other readers, but not to me:

    You say that deltaT ‘is not an observable’. I take it that by this you mean ‘it cannot be measured directly’. But isn’t it usually assumed (by the IPCC, The Consensus, etc.) that the change in global mean temperature is closely related to (perhaps ‘tracks’?) the change in equilibrium temperature?

    You seem to damn present climate science more or less entirely by saying ‘deltaT is not an observable so propositions about delta T cannot be tested and are therefore meaningless’ (I paraphrase). It might make sense to clarify exactly what you’re saying about deltaT and why (e.g. why can’t you just measure the change in global mean temperature as a proxy for deltaT? Seems like the global mean temperature should be monotonic with deltaT, so you can always say ‘deltaT is greater than the observed global mean temperature’ ).

    I’m sure somebody will helpfully point out if I’ve failed to grasp the obvious here…

    • Ceri, You write “You seem to damn present climate science more or less entirely by saying ‘deltaT is not an observable so propositions about delta T cannot be tested and are therefore meaningless’ (I paraphrase)”

      I hope Terry replies to you. To me, what you have written is blindingly obvious. deltaT is purely hypothetical and completely meaningless.

    • David L. Hagen

      What evidence do you have for measuring “the change in equilibrium temperature?”
      What is the “equilibrium”?
      Compare glaciations etc.

    • Ceri Reid,

      I agree it is a very interesting post. You make a good point about delta T being possibly monotonic with global mean temperature but I suppose that in itself does not imply that the change is in equilibrium temperatures. Not sure about that. Another point is that, in the context of the equation, delta T is pre-supposed to be exclusively linked with – or caused by – a change in CO2 (there are no other factors in the equation). This is an assumption and I would have thought that makes it not observable in this context. I don’t claim these points are answers to your question though. Just musing, really…


    • Ceri:

      When I say deltaT is not an observable, I do not mean it cannot be directly measured but rather that it cannot be measured. How would one measure it?

      • Thanks for taking the time to reply, Terry.

        I think it’s a valuable article, and everyone involved in climate modeling should be forced to read it, and think hard about what it is they’re actually doing versus what they claim to be doing. Saying that politicians should also read it is obvious, but equally obviously would never actually happen.

        Measuring deltaT: I’m not confident I know what deltaT actually is. My impression (from the article) is that it’s the change in average temperature that would occur if the atmospheric CO2 level was constant at some value, compared with some previous constant level. (Both maintained for the 30 year period you mention). Obviously, I may have misinterpreted the meaning of deltaT…

        But if that’s what deltaT is, then we can use ‘some method’ to estimate it, even though we can’t measure it directly (because we can’t run an experiment with the world at two different constant levels of CO2 maintained for 30 years each, with everything else held constant). For example, we can take the CO2 level in 1900-1930 and the temperature record for 1900 to 1930 and (by some calculation) estimate one value of equilibrium temperature. And we can take some other period (1970 to 2000, say) and estimate another value of equilibrium temperature. We can use a range of statistical techniques to come up with a range of values for the equilibrium temperatures. And from those, we can estimate deltaT (again assuming that my understanding of deltaT is correct). No-one would say that we had a very accurate estimate of deltaT, but it would be an estimate and it might be a useful one. (And we could repeat the process for other 30 year periods and come up with other values).
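        If Equation (1) of the article, ΔT = TECS·log2(C/C0), is taken at face value, the naive two-period estimate described here can be sketched as follows. The CO2 and temperature figures are hypothetical round numbers, not measurements, and the calculation embodies exactly the assumption under dispute, namely that the observed change is entirely the equilibrium CO2 response:

```python
import math

def naive_sensitivity(c1_ppm, c2_ppm, delta_t_obs):
    """Back out a sensitivity estimate from delta_T = S * log2(C/C0),
    assuming (unrealistically) that the observed temperature change is
    entirely the equilibrium CO2 response."""
    return delta_t_obs / math.log2(c2_ppm / c1_ppm)

# Hypothetical round numbers, not measured values: CO2 rises from
# 300 to 370 ppm while the mean temperature rises 0.6 K.
s_est = naive_sensitivity(300.0, 370.0, 0.6)   # K per doubling of CO2
```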

        I think the criticism I’m making (albeit based on not really being sure what deltaT actually is!) is that just because something isn’t directly observable doesn’t mean we can’t obtain a useful estimate of it; and from that estimate (and especially the error bars on that estimate) decide if the climate sensitivity equation is actually a useful tool.

        It occurs to me that perhaps you mean something like: ‘Although we could observe changes in global mean temperature in an attempt to estimate deltaT, in fact those changes are driven by multiple factors that we can neither identify nor understand, so trying to obtain an estimate for deltaT in this way will lead to a meaningless value’. But if that’s what you mean, then aren’t you begging the question a bit? – I think some climate scientists would claim that they do understand how global average temperatures change, and that they can identify deltaT by removing the contamination of the temperature by other (natural) sources. If you’re asserting that they can’t do this, then maybe that assertion should be in the article.

    • I think the point here is that they are assigning specific probabilities to unfalsifiable projections, which doesn’t make any sense.

      We have been inundated with this in the media when attempts are made to link the latest flood, drought, or snowstorm to global warming theory as somehow a “successful prediction” of it.

      This is the whole “consistent with” hand waving.

      Then of course there is the “Some scientists say…” meme where one paper becomes the consensus position overnight in hindsight after its prediction is trending correctly (such as hurricane frequency and strength).

      It’s basic science. Create theory, test theory. But in order to have credibility you have to predict the specific outcomes of your test before the test is run. And you have to state exactly how these outcomes are measured so others can reproduce your test. Validity of the test is another subject entirely.

      Climate science has a huge case of fear of failure now, due to the overstatement of certainty. The lack of falsifiable predictions given the billions spent on research is a bit distressing.

      Hurricane season predictions have been embarrassing themselves regularly lately, but I give them credit for making clear predictions, and then discussing what went wrong (or right) after the season is over. They do not pretend they are something they are not (to satisfy an agenda).

  8. “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change…”

    Well, maybe when an ensemble of several models all show similar anomalies it might seem that way.
    When you see the runs as absolute temperature and the several models vary by as much as 5C…. not so much.

  9. “This question has no answer, for the equilibrium temperature ΔT is not an observable. ”

    In stating that ΔT is not an observable, please show your work!

    You know we can wait until CO2 concentration has doubled and then observe the temperature change.

    Or we can look at paleo data and find where a doubling of CO2 concentration has occurred and look at the change in temperature.
    I think the Ontong Java event caused CO2 to come at least close to doubling, and there are others, so there are at least some observables.

    I don’t buy your handwaving, and just because something is hard to falsify, doesn’t mean it is impossible to falsify.

    • delta T is not an observable since the earth never reaches any equilibrium temperature. If it can’t be measured directly, it isn’t an observable. That’s pretty much what observing means.

      • That means the earth can’t have any observable temperature, which means we can’t even have the discussions we are trying to have, which means we’re back to square one.

        How is that for the fallacy reductio ad absurdum?

        Or how about that nothing is measured directly. One does not measure temperature, one measures the height of a liquid against a scale or the resistance of a circuit.

        You are just handwaving.

        The earth does have an average temperature whether or not it is at equilibrium. If you can measure the temperature of a kilogram of material, then you can measure the temperature of 10^24 kilograms of material, since temperature is just an average anyway.

      • Phillip Bratby

        Temperature is an intensive variable, so how would you measure the temperature of the earth?

      • You are right that temperature is an intensive variable, so it doesn’t depend on the amount of what you are measuring, thus you can measure the temperature of the surface of the earth the same way you measure the temperature of anything else.

        For example, wrap the earth with a wire and measure the resistance, then convert to temperature.

    • bobdroege:

      That a quantity is an “observable” signifies that it can be measured in the real world by some sort of instrument. In the real world, surface air temperatures fluctuate but the equilibrium temperature does not fluctuate. Thus, the equilibrium temperature is not an example of an observable.

      • Do you mean the equilibrium temperature of the earth does not fluctuate?

        Maybe you should look up the definition of a kelvin, which is defined in terms of something in equilibrium. That something is an observable, since we observe it; or else every time we measure something in kelvins, we are observing something not observable.

        Or you can explain what you mean by equilibrium temperature.

      • bobdroege:

        I understand that “equilibrium temperature” is a term that is used among planetary astronomers. One can get a variety of descriptions of what they mean by the term by Googling on “equilibrium temperature” and “planet.” In doing so, I’ve reached the conclusion that what they mean by the term is that the temperatures at referenced space points (e.g., at Earth’s surface) are unchanging.

      • I think your conclusion is in error.

        What the planetary astronomers are doing is measuring temperature remotely, by using other available parameters that they can measure more directly, like albedo and the amount of incident radiation from the nearest star.
        Clearly, at least to me, if the parameters they are measuring are changing then the resulting temperature would be changing.
        Much better than a blog post would be to look at any college first course in astronomy to get more specific information.

        But that equilibrium temperature of a planet is certainly an observable, though it usually gives an inaccurate answer except on planets that have little greenhouse effect.

      • bobdroege:
        Given that the term “equilibrium temperature” signifies an unchanging temperature, the equilibrium temperature is not an observable, for in the real world temperatures fluctuate. Do you agree?

      • That is where your error is, in stating that the term equilibrium temperature signifies an unchanging temperature.

        No one ever said that measuring the temperature of a planetary surface by measuring albedo and incident irradiation would lead to an unchanging number.

        Are you saying that since a value is not changing, therefore it is unobservable?

        And by the way, “equilibrium” as an adjective is not modifying temperature; it is referring to the radiative balance of incoming and outgoing radiation at the planetary surface.

      • bobdroege:
        Please describe an instrument by which the equilibrium temperature could be measured.

      • Take a look at this paper, possibly a work in progress, but it has pictures of an instrument that could measure the temperature at radiative equilibrium.

        Are you still claiming that it is temperature that is at equilibrium in the face of evidence that it is something else that is at equilibrium?

        You know that if the axioms presented in a proof are shown to be false, then the proof is invalid.

      • bobdroege:
        I skimmed through the paper which you cited, in search of a description of an instrument for measuring the equilibrium temperature. I didn’t find such a description. However, I did find fodder for discussion.

        It looks as though the author has the idea of backing bounds on the numerical value of TECS out of time series from radiometers carried on a satellite. Lindzen, Spencer and their colleagues are trying to do something similar. If one could measure TECS in this way then one would have a measuring device for the equilibrium temperature, for the current CO2 level would determine the equilibrium temperature via Equation (1) of my article. However, there is a catch. The catch is that in measuring TECS, one has to assume a non-falsifiable modèle. For this purpose, the author uses the differential equation numbered (25), page 15. Though he does not directly address the non-falsifiability of his Equation (25), the author does touch on this issue. He states (page 15) that “there is no reason to expect the dependence of any of the feedback effects on temperature to be close to linear, but to obtain more precision would be beyond the scope of this paper.”

        In getting to Equation (25), the author has assumed linearity. To assume linearity is a technique that can be used in reducing a complex system to cause and effect relationships. However, in making this reduction, one fabricates information. This is a logical flaw in the proposition that one can measure the equilibrium temperature in this particular manner.

      • Ok,
        There is a picture of the instrument to measure the radiative equilibrium temperature of earth on page three with a drawing and description on page 30.

        This equilibrium temperature is calculated after measuring the earth’s albedo, the solar radiance, the area of the solar disk, the temperature of the sun’s surface, the distance from the sun to the earth, and the earth’s emissivity, then plugging it all into an equation that can be found here.
        which leads to this, which can be found in many places.

        That source specifies what is meant by “equilibrium” and supports the idea that temperature is not what is in equilibrium.
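        For what it’s worth, the textbook calculation being described can be sketched as follows; the solar-constant and albedo inputs are round illustrative values, and the point of the result is that “equilibrium” names a balance of radiation, not a constancy of temperature:

```python
# Radiative-equilibrium temperature: the temperature at which a planet's
# emitted thermal radiation balances its absorbed sunlight. It is the
# radiation budget, not the temperature itself, that is "in equilibrium".
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar_constant, albedo):
    """T_eq = [S * (1 - A) / (4 * sigma)]**0.25, assuming unit emissivity."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# Earth with round inputs: S ~ 1361 W/m^2, Bond albedo ~ 0.30.
# The result is roughly 255 K, well below the ~288 K observed surface
# mean; the gap is the greenhouse effect.
t_eq = equilibrium_temperature(1361.0, 0.30)
```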

        Your equation (1) only calculates a temperature change due to a change in CO2 concentration; you could not calculate a temperature with it even if you knew the climate sensitivity.

        Your whole scheme of classifying TECS as a modèle rests on the equilibrium temperature being a non-measurable entity, and this is patently false.

        And as for your concept of falsifiability, if that is even a word, do you require more than an imaginable experiment that would falsify a hypothesis?

      • bobdroege:

        With reference to the first of your two remarks, the instrument at page 30 is a radiometer. It doesn’t measure the temperature (equilibrium or otherwise) but rather the intensity of the electromagnetic radiation.

        With reference to the second of your two remarks, I assume this remark references Equation (X.4) in the book Planetary Science. Equation X.4 is a modèle of a fictional planet which, unlike the planet Earth, has a spatially invariant surface temperature.

      • You are still failing to grasp the concept of equilibrium temperature and how it is measured. Equilibrium temperature refers to the temperature of a body at radiative equilibrium, not to the temperature being at equilibrium. Failure to properly grasp this concept has led you down a long path of faulty reasoning.

      • bobdroege:
        Your definition of “equilibrium temperature” is not the one that is the basis for my conclusion. As I stipulated earlier, mine makes the “equilibrium temperature” the equivalent of the temperature that, in engineering heat transfer, is called the “steady state temperature.” I’ve gotten the impression that this is the definition among planetary astronomers by surfing their Web sites. For example, the Web page at states, in reference to “The equilibrium temperature,” that “Equilibrium means no change with time.” Thusly defined, the “equilibrium temperature” at Earth’s surface is not an observable.

        Rather than being an observable, the equilibrium surface temperature is an abstraction from the real world. I’m not exactly sure what you mean by “the temperature at radiative equilibrium,” but it sounds as though there is the additional abstraction that the heat transfer is entirely radiative.

    • It is unclear how we would separate out the delta-T that was due to CO2 doubling versus that caused by all the other influences on temperature. It looks like a case of trying to unscramble scrambled eggs.

      The entire AGW theory rests on CO2 having a larger than natural forcing due to other positive feedbacks, yet it is unclear to me how this forcing is measured even if we can jump forward 30 years. How is this done? Modeling?…ugh.

      This area would be ripe for abuse if it was allowed to be calculated according to a future defined formula. Because the theorized CO2 signal in global temperature has such low SNR, this is not a trivial exercise.

  10. “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change…”

    Based on demonstrated success at estimating future climate? Not likely. Even the IPCC changed the wording from “prediction” to “projection” last time around.

  11. John Costigane


    Thank you for this logical/illogical insight into climatology. This approach will form a sound basis for the future of climatology, which can be of far greater value than the current system. Have you an algorithm to cover all this detail?

    • John:

      When you ask whether there is an algorithm, I’m not certain what you mean. To take a stab at an answer: in the optimization of each of the inferences that are made by a model, a vast amount of computational effort is required. In practice, virtually all of this effort is expended by a process that is running on a computer, using a computer program as the associated procedure.

      • John Costigane


        Climatology, at present, suffers from a poorly designed computer algorithm, which has put into doubt the “Hockey Stick”: a visual representation of the supposed unprecedented average temperature rises. A proper algorithm, the basis for an improved version of climatology, can be critically analysed by others and readily adapted to new knowledge. Climatology can be a great science but based on the scientific method.

      • John:
        Yes, there is a better algorithm. This algorithm is logic. Modern climatology suffers from illogic.

      • Really?

        How far do you think that we get on logic?

      • Pekka:
        Through the use of logic as an algorithm, climatologists would get to generalizations that were formed under the principles of reasoning. Currently, these generalizations are formed under the method of heuristics with consequent violation of the law of non-contradiction. Through their reliance on the method of heuristics, climatologists have made errors as serious as representing to policy makers that their generalizations about TECS convey information to these policy makers about the outcomes from their policy decisions when these generalizations convey no such information. By itself, this error nullifies the value of past climatological research for policy making. Errors of this type and many other types would not occur were climatologists to replace the method of heuristics with the principles of reasoning.

      • Terry,
        People make errors in logic. Those are worth correcting, but that is where the power of logic ends on matters important for science. And that is not far from the starting point.

        It is not once or twice or three times that I have read or heard philosophers telling how the things of the physical world really are, and, exactly as Feynman put it, they are usually wrong.

        It is not enough to use logic or to argue how the world should be. Real physical observations, and physical theories that are often very different from those expectations, are what tell us about the real world.

        Scientists like Einstein have emphasized principles of a philosophical (and aesthetic) nature, and those had their share in the birth of general relativity, but similar arguments also led Einstein to thoughts about quantum mechanics that were found to contradict reality. One never knows whether reasoning gives correct or wrong answers until the conclusions are compared with the real world.

        There are similarities in the relationship of mathematics with physics and other theories of the real world. It is more the rule than the exception that theoretical physicists have developed mathematical methods unknown to mathematics and originally lacking proper justification of some of their features. That has not led to their dismissal; rather, mathematicians have in their own research caught up with the physicists years later.

        The scientific process is not a precise logical process; it has its apparently illogical deviations. In time the discrepancies are removed, but insisting that everything should immediately fit nicely together would have slowed down the progress of science severely.

        While you can show specific illogicalities in the work of scientists, the idea that the whole of climate science would in some relevant way contradict logic, or that some of its methodologies could be replaced by logic, is just nonsense.

      • Infinite computer time solves nothing if you have the incorrect algorithm. Try writing a chess (or Jeopardy!) algorithm. It is the algorithm that matters, not a fast computer.

        As an unfalsifiable projection of my own, I rate the chances of the current GCM algorithms being mature enough to be reliable and useful for policy as “very unlikely”.

      • John Costigane


        Terry seems to hold a top-down view of the GCM climate system, which assumes all matters are settled. This is of course an artificial setup, since the relevant parameters, including the unknowns, cannot all match reality. The system is therefore incomplete. Is it possible to add more real events, of a cyclical nature for example?

      • John:
        You’ve guessed wrongly about my view. If you wish to locate it on the systems science map, mine is a holistic view. Such a view stands in contrast to the reductionistic view of the builders of Working Group 1’s climate modèles.

        Under the reductionistic view, all systems can be reduced to relationships between effects and their efficient causes with the consequence that the evolution of a system can be computed. In the reduction to cause and effect relationships, reductionists fabricate information but this treachery cannot be observed because the claims that are made by the resulting modèles are non-falsifiable.

        Under the holistic view, rather than fabricating information one deals with the missing information by optimizing it. This view leads to statistically validated models of complex systems. Processes using these models as their procedures convey the maximum possible information about the outcomes from decisions to decision makers. Processes using reductionists’ modèles as their procedures convey no information to decision makers. However, decision makers who know little about logic are prone to thinking these processes give them information.

      • actually, the confirmation holism issue that i discussed previously is relevant to this

      • John Costigane


        After further consideration, the value of your approach seems to undermine much of the alarmist message.

        Take the 30 year period for data collection. This possibly matches the climate cycles better than the linear approach, which focusses purely on rising CO2. Scare-mongering can continue without break in the alarmist version while a 30 year break would likely reduce this to a maximum 3-year period around the data collection.

        The term ‘modeles’ itself can be used by sceptics, i.e. climate models become climate ‘modeles’. This undermines the consensus view, and unbiased readers would readily understand that there is another perspective to be had. Maybe alarmists would eventually accept the holistic version.

  12. Weather is a chaotic system (not random) and the law of large numbers does not apply. You cannot average chaos over time and expect convergence. Only the noise (random) will converge.

    As Laplace first demonstrated, the orbits of the planets are stable as a result of resonance, not because their chaotic orbits converge on long-term averages. Without this resonance the orbits would be unstable, and thus there would be no convergence.
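Sensitive dependence on initial conditions, the hallmark of the chaos invoked above, is easy to see in the logistic map, a standard toy system (a sketch for illustration only; it is not a weather model, and the starting values are mine):

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is chaotic at r = 4.
# Two runs whose starting points differ by one part in ten billion
# end up completely decorrelated: sensitive dependence on initial conditions.

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-10)  # perturb by 1e-10
gap = [abs(x - y) for x, y in zip(a, b)]

print(f"initial gap: {gap[0]:.1e}")
print(f"largest gap over 50 steps: {max(gap):.2f}")
```

After a few dozen iterations the two trajectories bear no resemblance to each other, which is the sense in which pointwise prediction of a chaotic system is hopeless even with a perfect rule.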

  13. Excellent treatment of the subject. Some of the severe limitations of work to date are exposed.

  14. Neoclassical economics, the sort one finds in standard microeconomics textbooks, is almost entirely about equilibrium situations brought about by rational economic agents. At prices p, every supplier chooses to produce her most convenient amount of output, and every consumer chooses to purchase his most convenient amount. As the aggregate supply and aggregate demand are not likely to coincide for a given arbitrary price, the market is supposed to adjust price and quantities until an equilibrium is reached, whereby supply equals demand at an equilibrium price p*. But, alas, neoclassical microeconomic theory does not even have a precise theory of how economic agents behave in a non-equilibrium situation. It can only do static comparisons of different equilibria, all of them theoretical: as supply and demand conditions change continuously, markets are never at equilibrium, so nobody has actually observed a market equilibrium.
    This, in fact, has led neoclassical economics into a cul-de-sac, from which several new schools of economic thought are now trying to extricate it, such as, yes, experimental economics among others. It formulates propositions about economics based on observation and experiment, rather than abstract reasoning about rational agents.
    Just a possible parable about climate models.

  15. “In the course of his service to the IPCC as an expert reviewer, Vincent Gray…”

    Vincent Gray, coal chemist, is a self-described ‘expert reviewer’ for the IPCC. He was not asked to review and was not ‘in service’ to the IPCC — he asked for and was provided with a draft copy. It’s not hard to get a draft copy, one only needs to agree not to publish it. No qualifications or experience in climate research were needed to appoint himself an ‘expert reviewer’.

    If you are going to appeal to authority, it is worth making it an appeal to an actual authority. Appeals to authority are not in and of themselves an error in reasoning; and practically speaking, we can’t do without appeals to expert opinion, a lot of the time in modern life. But I think we should try to make some kind of pragmatic assessment about the quality of information, and sources.

    “This mistake is to confuse a model built by scientists with a scientific model. A scientific model makes predictions. A model that makes no predictions is not a scientific model even when built by scientists.”

    It’s puzzling to me why, despite the availability of so many science papers that illuminate how scientists actually view the models and why they have confidence in aspects of the models, you have cited none, and instead choose to reference not only the discredited nonsense of non-authority Vincent Gray, but also the debunked Lomborg and Monckton (icecap).

    Anyway, there are many different types of models that, taken together, attempt to address many different questions about physical systems and processes. Presently, they are good enough to be used to estimate trends and to build an understanding of the climate system. They have been found able to accurately represent current and past climate. In other words, contrary to your claims, the models are based on a theory that is falsifiable (testable) against observation. Prediction is not just about the future but also about the past, when gathering evidence of the strength of climate models. The literature widely discusses uncertainties and the need to build the computational ability of models, especially in relation to issues of local adaptation. However, in addition to predicting trends, many ‘events’ are occurring as anticipated by models, e.g. warming of ocean surface waters. The variables that continue to be refined may well be those that policymakers come to identify as the ones they want refined.

    Popper did not promote a naive falsifiability. Unfortunately, you have narrowed the role and concepts of prediction and theory to suit yourself. You are also wrongly assuming that nothing is being learned from prediction.

    I agree, however, that psychology might help us consider questions of decision-making and interpretation of risks, and is beginning to be more widely cited to explain some features and implications for communication to address climate change denial.

    “whether regulation of carbon dioxide emissions would have the desired effect of controlling global temperatures is also unknown.”

    You claim the models do not predict a warming trend caused by human activity, yet you want to argue that reduction of CO2 emissions would not have the desired effect of controlling this warming trend – the very trend you say is not predicted?

    Your essay is impressive in its technical strivings, but you make too many basic factual, conceptual, research and reasoning errors (there were many more) and I don’t think it is wise to rely on the opinion of someone who demonstrates such problems.

    I hope things can improve.

    • Martha,

      you make a very strong pleading except for one important FACT. The IPCC themselves have chosen to call the outputs of their models PROJECTIONS. Due to this, and the apparent logic of the presentation, I think there is much validity here.

    • Martha:

      I wish you would address the substance of Gray’s remarks (and mine) rather than wasting our time with ad hominem arguments. Your characterization of Gray as a “coal chemist” is irrelevant to the topic under discussion.

    • Vincent Gray, coal chemist, is a self-described ‘expert reviewer’ for the IPCC. He was not asked to review and was not ‘in service’ to the IPCC — he asked for and was provided with a draft copy. It’s not hard to get a draft copy, one only needs to agree not to publish it. No qualifications or experience in climate research were needed to appoint himself an ‘expert reviewer’.

      Really?! That must be why Dr. Gray’s reviewer comments were included with all the other reviewer comments on the Second Order Draft of AR4.

      Furthermore, to the best of my recollection, there was no scarlet letter attached to his reviewer comments, nor to those of anyone else. The “Chapter Team” responses to his comments are, of course, another story – although very much part of a disconcerting pattern of behaviour on the part of the “Chapter Team” in several chapters of AR4.

      As a consequence, of course, it is impossible to ascertain – from the thousands of Reviewer Comments by those the IPCC designated as “Expert Reviewers” – which comments (you falsely claim) were uninvited and which were invited.

      Some might say that such a task is as impossible as ascertaining the effects and influence of human generated CO2 vs that which occurs naturally.

      Btw, Martha, you might want to take a look at:

      Annex III: Reviewers of the IPCC WGI Fourth Assessment Report

      Expert reviewers are listed by country. Experts from international organizations are listed at the end.

      [scroll down to New Zealand where you will find (inter alia):]

      GRAY, Vincent
      Climate Consultant

      I really don’t think it’s wise to rely on the opinion of someone who has demonstrated an inability to comprehend the inordinate shortcomings of the appeal-to-authority fallacy – particularly her own appeal when, as above, she has been demonstrated to be incorrect.

      But … speaking of “appeal to authority”, scientists, CO2 and “communication” of climate science … while I’m here …

      Dr. Curry:

      I wonder if I might ask you to take a look at a brief survey I’m conducting in which I seek the responses of scientists (of any and all persuasions). I’m interested in learning about the views of scientists regarding the role of human generated CO2 – and in how scientists would communicate their views on climate change to the general public. Background and link to the survey can be found at:

      Calling all scientists – an invitation to speak for yourself!

      If you believe it would be of interest to your readers, perhaps you would be kind enough to mention it in your next non-technical general discussion thread. Thanks.

  16. @Martha

    ‘Prediction is not just about the future, but also about the past, when gathering evidence of the strength of climate models’

    This is a joke, right? Your American sense of humour sometimes catches me out.

    I guess predicting the past must be a real real tough thing to be able to do for climate modellers. And I’m sure that they really need lots of gold stars and encouragement for being able to do so.

    I’m a bit rusty on programming but I think even I could manage to input some data that said ‘it was 27C last Thursday’, and retrieve the data to answer the question ‘what was the temperature last Thursday?’ Wow. Gosh. What a staggering insight!

    With my super duper past predictor I could even tell you who won the FA Cup last year. Or the Epsom Derby.

    The more I learn about these fantastic climate models, the more impressed I am by their amazing capabilities. Predicting the past – who’d ever have thought it?

    • Latimer,

      what makes it tough for Martha and the modelers is that even KNOWING the cyclicity and the volcanoes, etc., they have still done a poor job of backcasting. The only thing we are shown is temperature, which somewhat follows the squiggles. We are NOT generally shown the precipitation, which is not even as close as the temperature. Funny thing is, the same problem is seen in the projections. They aren’t completely out of the ballpark on the temps, but the precip isn’t close. Without clouds and precipitation it is simply not possible to model, as opposed to modele, our climate.

      • kuhnkat,

        I think this picture tells all there is to tell about climate model ‘hindcasting’:
        (picture is from Bob Tisdale)
        There are no multidecadal dynamics in the output: no 1940 blip, some slight cooling episodes, but that’s it. And the comparison is bad even when it is done against GISTEMP (which is far less bumpy than HadCRUT3 – I wonder why…).

    • Thanks Latimer! You made me laugh :-).

    • And next from the climate modellers – exciting news!

      ‘In a massive advance for climate modelling we plan to move from our Phase 1 (predicting the past) into our next challenging assignment. Brave and intrepid modellers around the world will be attempting the next step on our road to full prediction of the weather future.

      Yes folks – we are going to try Phase 2. Looking out of the window and seeing what the weather is doing! and in real time!

      Brave (but anonymous) modeller ATFC told us

      ‘It was tough all that predicting the past stuff, but now we have actually managed to write the program to tell us it was 27C last Thursday, we feel that we are on top of our game and are pushing for the next level.

      ‘It’ll be hard for the young lads who maybe haven’t seen a window before and have really only seen weather on their commute to work, but I know they’ll give it their best shot. First we need some structural modifications to the programming bunker. And we’ll get those outside-awareness facilities put in as soon as we can, then just take it all one day at a time.

      Hopefully, if the gaffer has got it right we’ll be starting to see some results flow pretty early. We can only improve by getting on with the job and with time gazing out of the window. There is huge optimism in the Team and we ain’t scared of nobody’

      (With thanks to the management, staff and players of Aldershot Town FC, whose prose is always an inspiration :-) )

  17. Latimer, believe it or not, some climate models are not quite able to predict the past (retrodicting, in the jargon).

    • Hector, see my reply to Latimer.

    • Not much frigging good at anything then.

    • The most basic of polynomials can predict the past. There is no skill in this whatsoever. As soon as you try to use them to predict the future, they do no better than chance.

      This is a very well explored field for such topics as stock market prediction. For years the Dow followed the length of women’s dresses, which was great when the market was up. Even better market correlation was found with the depth of winter snowfall in a small town in the Midwest.
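The hindcast-versus-forecast point is easy to demonstrate in a few lines. In the sketch below (toy numbers of my own, not anyone's actual data), a degree-9 polynomial threads ten wiggly "past" observations exactly, a flawless hindcast, and then produces an absurd value a few steps beyond the record:

```python
import numpy as np
from numpy.polynomial import Polynomial

# A toy "past record": a small trend plus year-to-year wiggle.
# The true process never leaves roughly the range -0.3 to 0.6.
t_past = np.arange(10)
y_past = 0.02 * t_past + 0.3 * (-1.0) ** t_past

# A degree-9 polynomial passes through all ten points: a flawless hindcast.
p = Polynomial.fit(t_past, y_past, deg=9)
hindcast_err = float(np.max(np.abs(p(t_past) - y_past)))

# Extrapolated to "year" 15, it bears no relation to the process it fit.
forecast_15 = float(p(15.0))

print(f"max hindcast error: {hindcast_err:.2e}")
print(f"'forecast' for year 15: {forecast_15:.0f}")
```

Perfect agreement with the past carries no forecast skill at all, which is the commenter's point.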

  18. To all

    Is the following a “LOGICAL” statement?

    In science, if recent observation is similar to past observation, we don’t need a new theory to explain the recent observation.

  19. In constructing models, people commonly fabricate information that is not available to them. The penalty for fabrication is for the model to fail in service or when it is tested.

    Example: IPCC

  20. Why was this series of posts not titled “Slaying the Induction Sky Dragon”?

    What’s next to promote, Judith? The phrenology of IPCC lead authors? The climatological ramifications of the recent increase in phlogiston production?

    Post Normal Science? Conditional Entropy Minimization?

    Good gawd. What about science and logic?

  21. In constructing models, model builders commonly waste information that is available to them by failing to discover patterns that are present in their observational data.

    Global Mean Temperature Pattern:

    Nothing in my post constitutes an ad hominem. There was not even a reference to your hominem, let alone any argument based thereon. If I were going to make an ad hominem argument against your ideas, I would probably do something along the lines of “Don’t accept philosophy promoted by a man that doesn’t know what simple philosophic concepts like ‘ad hominem’ mean.” But I wouldn’t do that. Promise.

    Judith – Hume dealt with induction a couple hundred years ago, taking it off the table. Several others have expanded on that since. Goodman’s is fun – lots of pretty colors. Others have attempted to rescue induction, as by appealing to premises, but that just circles back on the premises. Some others, most notably Popper, have shown us how to function around the problem of induction, but not how to ‘solve’ it.

    The claim to have ‘solved the problem of induction’, if true, would revolutionize the field of philosophy. It would certainly make quite a celebrity of whoever actually pulled it off. Way bigger than, say, proving Fermat’s last theorem was to mathematics – that one was at least considered solvable.

    Solving the problem of induction would certainly be no less a revolution in philosophy (and science) than disproving IR backradiation would be to radiative physics. Arguments based on such claims are not helpful to the global warming debate, for they require that the person to be persuaded not only accept the climate related argument, but they must also first accept some revolutionary change in non-climate specific knowledge.

    “The god Bobo says that the IPCC is full of crap!” is an excellent argument, provided that you first convert the world to Boboism. How many things ya trying to do here?

    Whatever the merits of the above arguments, they would be more productive if framed in terms of regular science and philosophy, without reference to revolutionary concepts that do not appear to have any acceptance outside of their authors.

    If Claes can genuinely demonstrate that the current formulation of radiative physics is wrong, or if Oldberg can vet “Christensen’s theory of knowledge”, then use those concepts for arguing climate issues.

    • Why does the thread break so easily?

      The above response is to Oldberg.

    • JJ:

      The following quote from you qualifies as an ad hominem argument:

      “What’s next to promote, Judith? The phrenology of IPCC lead authors? The climatilogical ramifications of the recent increase of phlogiston production?”

      As you may know, one avoids making an ad hominem argument by attacking the allegedly bad argument rather than attacking the person who makes this argument. Here, you’ve attacked Dr. Curry personally.

      • Sorry, but no. That quote is not an ad hominem argument. It is a complaint.

        A complaint to Judith is not an attack on Judith. Even if it were an attack on Judith, that could not possibly be an ad hominem against your presentation. As you correctly state but apparently do not understand, an ad hominem argument is one that attacks the person making the argument. Judith is not the person making your case; you are.

        Once again, to make an ad hominem argument against you, I might suggest that others not believe your rather complicated and convoluted logic presented above, because you can’t even manage to get the reasoning straight on very simple concepts such as what constitutes an ad hominem argument. Please note that I have resisted that temptation.

        Instead, I chose to address the utility of your argument, in light of the fact that you are approaching it from a philosophical position that is novel, well outside accepted understanding, and untested within the field of philosophy. I note that you ignored that point of mine, in favor of round two in your failed attempt to accuse me of ad hominem. Please appreciate the hypocrisy of that act, as well as the irony.

        It may very well be that you have a solution to the centuries-old problem of induction. That is an extraordinary claim – one that, if true, would probably have the folks over at the Nobel foundation scrambling to figure out how to award you a prize.

        That extraordinary claim needs to be demonstrated, before you move on to using it and associated logic to address any other issue, such as global warming. The analogy to the revisionist physics of the ‘Sky Dragon’ team is appropriate. Are they correct in their claim that all of the physics schools have got radiative physics wrong? Are you correct in your claim that all of the philosophy schools have got reasoning wrong?

        Maybe on both counts. But also, maybe one or both should be on the sale rack at CrockPottery Barn. Prove up the method before you apply it to working problems.

  23. “The cost from abatement of CO2 emissions would be about 40 trillion US$ per year [15].” Sourced to an opinion piece in the Wall Street Journal by a spokesman for a think tank.


    So. As a skeptic, I generally require an argument to work symmetrically.

    “..the model that is constructed by this inquiry expresses all of the available information but no more.”

    Where’d that $40 trillion figure really come from? Is it real information, from a rigorous and validated framework, or is it puff designed to scare people into a panic?

    Why do you, “waste information that is available [to them] by failing to discover patterns that are present in [their] observational data,” such as the example of countless experiences we have with the efficiency of the marketplace finding correct solutions by the individual action of participants paying the open real price for scarce resources?

    You know fossil fuels are steeply subsidized and government-supported; you know (if you use your own methods evenhandedly) there’s a limit to how much CO2 the air can absorb; you waste this information to come to a contrary conclusion, self-falsifying your own claims.

    So, can anyone apply the above principles to the above WSJ claim of $40 TRILLION costs per year for simply, y’know, sealing cracks in drafty houses, insulating attics, switching to more efficient vehicles, telecommuting and where conservation and brains won’t do it, going with Anything But Carbon?

    I guess you can get to that estimate if you fudge a few factors, make some stuff up, and pretend there’s no free market mechanism to address the present subsidies to coal burning and inefficiencies of excess CO2 emission from poorly planned and poorly priced energy markets. Which would be sort of typical of how shoddy WSJ’s gotten on economic topics lately.

    Speaking of shoddy, why are you preaching about logic, again?

    • I didn’t know air absorbs CO2. I thought CO2 was a constituent of air. What is the upper limit to how much CO2 air can absorb?

      • Steeptown

        What a great question!

        However, we don’t need to answer that question; we can let the marketplace provide it for us.

        We just need to know if there is a limit, to meet the principle that the price mechanism of a market free of subsidy and bias is the most efficient allocator of a limited resource.

        We know there must be an upper limit, or CO2 budget, before the amount of CO2 in the air is too much. Whether the maximum incursion into that CO2 budget comes at 400 ppmv, or 4,000 ppmv (at which level we’re sure of negative biological effects), or 40,000 ppmv (at which level we’d all be dead), it’s got to be there somewhere.

        We don’t need to know if this ceiling is going to be reached at our current rate of CO2 use, we know we have no market mechanism to prevent it and we know human nature with regard to the word free in the market.

        All we need to know to know it is not an infinite resource is that the limit is there somewhere.

        So, when people ask what that limit is, I ask what do you have against free market capitalism?

      • BartR,
        Do you have any scenario that shows CO2 going to 4,000ppm?
        1,000 ppm?
        do you have any evidence that 4000 ppm is harmful?
        not there
        not there
        not there, either.
        There, they use burners to increase plant yields by raising CO2 to 4000ppm.
        Please post links that show adverse biological impacts at 4000 ppm.
        More significantly, for the market process you claim to want to protect, show the credible scenarios of how high CO2 may go and how it gets there.

      • hunter

        To apply the logical method as recommended by Terry Oldberg, let’s look at the minimal information necessary and what information we do not wish to waste.

        For instance, I didn’t claim 4000 ppmv lethal (and, by implication of the 40,000 ppmv figure, it is not in the range where lethality – i.e. toxicity – ought to be considered), nor did I mention its effects on humans.

        So let’s limit ourselves to plants, the subject of your last link.

        We can take the very facts you present – not wasting therefore the information we have that the facts you offer are by definition acceptable to you – to demonstrate that CO2 has specific biological effects on plants.

        Now, the premise that a mere 4000 ppmv counts as a fertilizer or nutrient for plants is a bit suspicious, so when we look it up, we find CO2, like ethylene, is categorized in plants as a hormone analogue or hormone inhibitor.

        Why does this distinction matter?

        Contrary to the situation with fertilizers, where Liebig’s Law of the Minimum applies, plant hormones are both more potent in small quantities and more varying in effect depending on circumstances and species.

        So, your own link itself proves my contention, unless you think unregulated and uncontrolled application of hormone/hormone-inhibitors globally could be construed as anything but biologically negative. Try that argument with testosterone someplace.

        More significantly, every CO2 level could credibly be reached by the free market mechanism.

        The point being, that it would be the democracy of the market that determines the outcome, not the subsidy of governments or the corruption of corporate charities.

        Please, by all means show the failings of free market capitalism, as you see them.

    • Bart R:
      If I understand you correctly, it is shoddy of me to preach about logic when I cited Lomborg and he is guilty of overlooking information that, if recognized, would result in a cost for carbon dioxide abatement that would be lower than his WSJ figure. If this is the gist of your message, this message erects a strawman and knocks it down, for whether Lomborg’s figure is accurate or inaccurate is immaterial to the conclusions that I make in my article.

      • Terry

        Isn’t your whole message a string of strawmen?

        There’s a difference between what I’m setting out to do, and what you are. This holds us to two different standards, conventionally, though I will do my best not to stoop to the lowest expectation of the standard expected of an interlocutor.

        See, I’m not setting out to preach logic in a series of articles claiming a revolutionary and definitive approach to the subject upending almost all that has come before and is being done by almost all others currently.

        You, in brief, pretty much characterize your message that way.

        This places a heavy onus on you by Sagan’s principle that for extraordinary claims, extraordinary evidence is required.

        A claim of extraordinary merit for one’s logic demands, at a minimum, the ordinary habits of commonplace logicians, such as input checking and sanity estimation of input data (roughly $5,700 for every man, woman and child on the planet every year is what $40 trillion amounts to, no?).

        You use this figure as an input to your logical argument.

        Regardless of the benefits of your logical method, you do not show how your method prevents the well known effect of Garbage In, Garbage Out.

        In this way, your logical method as you demonstrate its application lacks transitivity, ie a logical decomposition of inputs.

        Further, you do not address the problem of agenda management, ie of what order to take information in when synthesizing the problem and the solutions.

        Let’s look at your rejection of the thermometer temperature record. You begin with the WMO’s (again, not logically decomposed) figure of a 30-year average as the definition of climate.

        From there, with the completely non-explicit underlying assumptions behind that number — some of which may falsify your particular logical application of it when taken out of context — you construct a case by arguing that 5 climates are insufficient for statistical purposes, and that therefore the temperature record of the past 160 years must be dismissed.

        Take the same facts in another order, decompose the definition of climate only slightly, and you could come up with a definition that treats the 365 days a year distinctly, giving not five, but over 1800 distinct climate inputs for statistical comparison purposes.

        The validity or invalidity of this basis of comparison aside (it is no more or less supported in this brief space than your 5-climate claim) for the moment, as 1800 >> 130, your reason for dismissing the thermometer record in place of some proxy records falls apart, and the contrary logical conclusion can be easily made.

        So your logic fails on this test.

        Whatever the merits of the tool, if it can’t be used even by its chief (only?) advocate, how is it to be used by others?

        Further, your logical presentation lacks both the clarity and the associated brevity that most formal logical developments acknowledge as necessary.

        You seem to prefer the ambiguity-ridden approach of English prose over the generally more concise symbolic-logic presentation some favor, which makes what you say both longer and less clear.

        When technical experts on this blog repeatedly comment on having to re-read what you say, it isn’t a compliment to how advanced your idea is, but a strong indication of something amiss with your messaging.

        So, yes, darn right I’m saying shoddy.

        As you are resting the entire weight of your opinion and conclusions on the merit of your method, and your method has such serious deficiencies, your unquestioned and uncommented inclusion of Lomborg is fatal to your outcome even if Lomborg is ultimately immaterial.

  24. To use a more common form for climate sensitivity:

    dT = λ*dF

    Where ‘dT’ is the change in the Earth’s average surface temperature, ‘λ’ is the climate sensitivity, usually with units of kelvin or degrees Celsius per watt per square metre (°C/(W·m⁻²)), and ‘dF’ is the radiative forcing.
    dF is a net estimate from a dozen or so factors – not just carbon dioxide. dT is determined by an ‘ensemble’ of numerical models to lie in the range of 2.1 to 4.4 °C. The formula can thus be solved for λ.
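As a worked sketch of that last step: taking the quoted ensemble range for dT and a forcing for a doubling of CO2 of dF ≈ 3.7 W/m² (my assumption here — the comment gives no dF figure), λ falls out by simple division:

```python
# Back out lambda from dT = lambda * dF, using the ensemble warming range
# quoted above (2.1 to 4.4 degrees C) and an assumed forcing for a CO2
# doubling of 3.7 W/m^2 -- a commonly cited figure, not one given in
# the comment itself.

DF_2XCO2 = 3.7  # W/m^2 (assumed)

def climate_sensitivity(dT, dF=DF_2XCO2):
    """lambda in degrees C per W/m^2, from dT = lambda * dF."""
    return dT / dF

lam_lo = climate_sensitivity(2.1)
lam_hi = climate_sensitivity(4.4)
print(f"lambda: {lam_lo:.2f} to {lam_hi:.2f} degC per W/m^2")
```

Under that assumed dF, the ensemble range corresponds to λ of roughly 0.6 to 1.2 °C per W/m².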

    Numerical climate models are by their nature non-falsifiable – to simulate or to reproduce is the operative concept. They are tuned to hindcast past temperatures and are then projected into the future. Mind you, the tunings all differ, and hence the projections diverge – and they are at any rate chaotic – but that was covered elsewhere.

    ‘The search for patterns might be unsuccessful. In this case, the maximum possible information that could be conveyed to a policy maker would be nil. The chance of success at discovering patterns would grow as the size of the statistical population grew.’

    I will return to this – but pattern recognition is largely visual and not logical. The reverse is true as a result of intuitive leaps of the educated variety. The task is to winnow to exclude the chaff.

    ‘Thermometers were invented only 400 years ago. A lesson that can be taken from these facts is that temperature has to be abandoned as an independent variable and a proxy for temperature substituted for it.’

    Where is Samuel Johnson when you need him? You would reject 400 years of data based on a definition? And replace it with an impossibly inaccurate proxy? You don’t have the sense you were born with.

    ‘A consequence from this inquiry was for the significance to be discovered of the El Niño Southern Oscillation (ENSO) for mid-to long-range weather forecasting. Exploitation of information theory made it possible to filter out the vast amount of noise in the underlying time series and to tune into the signal being sent by the ENSO about the precipitation that was to come in the Sierra Nevada.’

    Er…ummm…A true history of ENSO –

    They were looking at a known cause and a known effect – something very clear in the text and the references. They trained their model and managed a trifle better than tossing a coin. We haven’t done any better yet – we don’t have any basis for predicting the intensity or duration of ENSO events.

    And yes of course they do better than persistence – it is a 4-mode system. They would do better than persistence with a random walk. In fact I have read a paper where a random walk did better than the models after three months.

    In North America the 4 patterns are:

    Warm PDO + El Niño – high rainfall
    Warm PDO + La Niña – moderate rainfall
    Cool PDO + El Niño – moderate rainfall
    Cool PDO + La Niña – low rainfall

    The ENSO patterns were first observed by Peruvian fishermen – and this pattern recognition ability is a very special attribute of the human brain. It has little to do with logic.

    Overall – I have never seen a post and replies so lacking in any semblance of a reality check.

    ‘After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — “I refute it thus.”‘

    • Tomas Milanovic


      I will return to this – but pattern recognition is largely visual and not logical. The reverse is true as a result of intuitive leaps of the educated variety. The task is to winnow to exclude the chaff.

      One of the most complex and difficult questions in AI (Artificial Intelligence) research is pattern recognition.
      There have been megatons of studies done, with slow and painful results.
      While computers are world champions in matters of logic and in the speed of adding numbers, they are unable to do what a 4-year-old child does – recognize a nail when seen head on, or understand a word regardless of its spectral properties.
      This shows that the brain is operating in a mode which can’t be emulated by logic and by adding numbers, even if they are added extremely fast.

      Actually even a cat will recognize the pattern that makes a bird, regardless of how much of it is hidden by leaves and of the observation angle.
      And its brain will send correcting orders to micromanage dozens of muscles so that the angle is optimized and the probability of a successful jump maximized.

      Computer models are much dumber than a cat’s brain in this respect.

      So in a sense you are right. Pattern recognition is a systemic (emergent) property which probably can’t be understood by the reductionist approach.
      Logic and arithmetic alone won’t help much.

      It looks like synchronization emerging in large systems in spatio-temporal chaos despite the number of oscillators and despite their different chaotic oscillation modes.
      Rings a bell, doesn’t it ;)

      • Tomas

        The brain is the ultimate non-linear system. It takes us to the periphery of the consciousness problem.


      • Look at what insects can do to avoid/evade you when you try and catch/kill them. We may some day get there in computer science, but we are not even close.

      • ferd berple

        There’s a (now deprecated, even by its originator — therefore I feel quite free to cite without source) argument that computers will never get there.

        It goes something like this.

        Human language decomposes into three dimensions – Utterance, Context-1 and Context-2.

        Each dimension is on the Order of Natural Numbers (or N).

        Per Turing, a computer is a Turing machine; its processes decompose to the Order of Natural Numbers.

        As it can be shown one cannot map a lower order onto a higher order (so the originator said), no computer can ever fully apprehend human language processing to the same degree as a human.

        This brief excursion I hope not too far from our technical topic.

        Bonus points for the first to tell me why the argument was subsequently deprecated.

      • >>As it can be shown one cannot map a lower order onto a higher order (so the originator said), no computer can ever fully apprehend human language processing to the same degree as a human.<<

        Nature solves it by trying all possibilities using exponential population growth, and weeding out sub-optimal solutions.

      • ferd

        So, we should build fleets of computer-controlled self-replicating robots and wait for Nature to generate a satisficing solution?

        (I say satisficing, as clearly Nature frequently settles for sub-optimal, and even demonstrates levels of preference for inferior outcomes.)

    • Chief Hydrologist writes “To use a more common form for climate sensitivity:

      dT = λ*dF

      Where ‘dT’ is the change in the Earth’s average surface temperature, ‘λ’ is the climate sensitivity, usually with units in Kelvin or degrees Celsius per Watts per square meter (°C/Wm-2), and ‘dF’ is the radiative forcing.
      dF is a net estimate from a dozen or so factors – not just carbon dioxide. dT is determined by an ‘ensemble’ of numerical models in the range of 2.1 to 4.4 degrees C. The formula can thus be solved for lambda. ”

      I am confused by this. My understanding is that the formula
      dT = λ*dF
      refers to the no-feedback sensitivity. The value for this is normally quoted as being 1.2C for a doubling of CO2. The values from 2.1 to 4.4 C normally refer to sensitivity including feedbacks. Is there some confusion between no feedback sensitivity and sensitivity with feedbacks?

      • ummm… dF is not fussy – you can plug in any ‘forcing’ you like.

        The point is that delta T is the outcome of numerical models – from which sensitivity is calculated. Now there is a world of problems with this – but the non-observability of delta T is not relevant.

      • Here is the assumption inherent in mainstream climate science:


        The assumption is that if we average weather over time, this should return a constant average, and any unexplained deviation from that average must be due to CO2.

        This is a fundamental flaw, because the average of chaos cannot be a constant. It makes no sense mathematically. The average of chaos must be chaos, or it is not chaotic.

      • ferd berple


        You have disproved Thermodynamics.

        Well done sir.

      • It’s about time. Next week quantum gravity.

      • Woot!

        But can you beat your own record for brevity?

    • Chief Hydrologist:

      On the replacement of temperature by a proxy, I didn’t mean to suggest that a proxy was a desirable alternative to temperature, but rather to highlight the fact that a proxy would be a necessity for the construction of a model with a 30-year event period. In the construction of a model, it would be necessary to have far more than the five independent observed statistical events that are afforded to us under a 30-year event period when the temperature is a dependent variable of the associated model. Thus, if a model were to be constructed, either the event period would have to be shortened or a proxy for temperature would have to be substituted for the temperature.

      On the remainder of your remarks, I gather from them that you’re not up to speed on the content of Parts I and II. For example, the logical issue that I’ve raised is not of how to recognize patterns but of how to discover them. In Parts I and II I lay out a process for discovery of them that is entirely logical; also, I point out that when a model is built in this way, it consistently excels. How about reading or re-reading Parts I and II and reporting back with questions or comments?

      By the way, you’ve worked some ad hominem arguments into your comments. If for the future you were to refrain from this practice, our exchange of ideas would be more pleasant and productive.

      • I suggest that there is a difference between robust expression and ad hominem argument. My considered opinion is that throwing out the temperature record on the basis of an idea about 30 year long climate events is literally nonsensical.

        The definition of a 30 year period as a climate ‘event’ is both arbitrary and nonsensical. Climate is a continuum – rather like a river or space and time. People will, and quite reasonably, continue to use many different series in the instrumental record with quite useful results.

        My point with ENSO is that it was not a logical ‘discovery’ of pattern – like modelling clouds, that would require vastly more computing resources than are available. The ENSO pattern was known already, and they selected areas of the Pacific and areas of the continental US with a known association between Pacific SST and rainfall. What we had was a very much more modest exercise than that – a simple association of a very limited number of factors – the practical usefulness of which remains uncertain.

        I was thinking more of this other dimension of people where pattern recognition (or discovery?) is entirely illogical. Humans are adept at discovering pattern – unfortunately we are also adept at recognising false patterns.

        The antidote to false patterns, Scientific American suggests, is the scientific method. Without data – logic is a fool’s playground.

        You proceed from a misunderstanding of how the concept of sensitivity is derived to inflated claims for information theory. You are far from alone in pursuing an inflated and unprovable thesis. I can only advise you to follow Dr Johnson’s prescription of a painful, self-inflicted reality check.

        However, if you wish not to pursue an exchange of ideas further – I will quite understand.

      • I should add that a fool’s playground is not necessarily a bad place to hang out – but it must be tempered with humour and humility.

      • Chief Hydrologist:
        I believe you misunderstand the idea of a “statistically independent statistical event” (SISE) and the import of this idea for the statistical significance of a study’s findings. While you state that “Climate is a continuum,” the SISEs of a climatological study are discrete. As they are discrete, the SISEs which, in such a study, are observed can be counted. This count is among the few factors that determine the level of statistical significance of a climatological study’s findings.

        If the period of each SISE is set at 30 years, we have only 5 observed SISEs. In contrast, many epidemiological studies have more than 100,000 observed SISEs. Epidemiological studies have more than 20,000 times as many SISEs as such a climatological study because the designers of these studies seek statistical significance for their findings. The findings of a climatological study that features 5 observed SISEs have a vanishingly small statistical significance. Your argument neglects the discreteness of the SISEs and the consequences for the statistical significance of the findings from the associated study.
        Currently, the count of observed SISEs supporting WG1’s conclusions is nil. Thus, the level of statistical significance of WG1’s findings is nil. Nonetheless, WG1 expresses a high level of confidence in its conclusions. WG1’s expression of a high level of confidence is contradicted by the nil level of statistical significance.
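        The point about event counts can be made concrete with a back-of-the-envelope sketch (the unit per-event standard deviation and the 1.96 normal multiplier are my illustrative assumptions, not figures from WG1):

        ```python
        import math

        # Approximate 95% confidence-interval half-width for a sample mean,
        # which shrinks like 1/sqrt(n) with the number of independent events.
        sigma = 1.0  # assumed per-event standard deviation, arbitrary units

        for n in (5, 100_000):
            half_width = 1.96 * sigma / math.sqrt(n)
            print(f"n = {n:>7}: 95% CI half-width ~ {half_width:.4f} sigma")
        ```

        With 5 events the interval spans nearly ±0.88 sigma; with 100,000 it is roughly ±0.006 sigma – the gap in statistical power that the comparison with epidemiology describes.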

  25. Tomas Milanovic

    Regardless of whether the induction problem has been solved, the work on disambiguation is not only useful but necessary.
    It is quite clear, or at least should be, that a process that doesn’t formulate its inferences in terms of observables can’t follow any rules of logic, be they “revolutionary” or “classical”.

    I would stress this quote which lies at heart of what brought me to look at the climate science from a point of view of dynamical systems.

    A “prediction” is an assignment of a numerical value to the probability of each of the several possible outcomes of a statistical event; each such outcome is an example of a state of nature and is an observable feature of the real world.

    When thinking about the real Earth system, one has to ask the question: what are the STATES of nature, and how are they to be measured?
    As the thread about spatio-temporal chaos has shown, many people don’t even understand what the question means.
    Yet it is paramount for every scientific theory to know, without any ambiguity, how many INDEPENDENT numbers one has to know to identify a state at a given time.
    This is clearly the most important question because, as T. Oldberg explains, if one uses too few numbers then the dynamics of the system will be wrong, and if one uses too many then the process will be inconsistent by assuming independence.
    Only after that may one ask how those numbers vary in time.

    So now ask the question: “How many numbers are necessary to know whether 2 climate states are equal or not?”
    You may wade through dozens of peer-reviewed papers about climatology, talking about “climate states” on every page but never answering the question.

    Oldberg is right when he writes, in discussion about Equation (1):

    This question has no answer, for the equilibrium temperature ΔT is not an observable. As ΔT is not an observable, one could not observationally determine whether Equation (1) was true or false.

    Equilibrium deltaT is indeed trivially not an observable, because it is impossible to observe the Earth system in equilibrium, hence it is impossible to observe any equilibrium value.
    If one drops the word equilibrium, then deltaT may become a strange observable, but Equation (1) is tautologically true – TECS now being renamed TCS, it is not a constant, and there always exists a number A such that Y = A.X for any non-zero X.

    Why strange?
    Well, an observable is generally understood as being a parameter to which a single number is associated after a single measurement. In QM this is rigorously defined as the eigenvalue of some operator.
    In the case of deltaT a single measurement is not enough; theoretically an infinite number of measurements is necessary, which makes the parameter non-observable.
    In the case of a finite approximation, we may substitute for the true non-observable deltaT a deltaT resulting from a finite but large number of measurements, which raises further questions about the validity of the approximation.

    Clearly, e.g. in the case of ice cores, when one substitutes a single measurement for the true deltaT, which would necessitate an infinity of measurements, the error is maximal.
    An additional model is necessary to evaluate this error with the help of other parameters, which are by definition again … non-observables.

    But there are more problems beyond the observability question.
    If Equation (1) is to be “dominating” in describing the dynamics of the system, then it means that only 2 numbers are necessary to describe the states of the system.
    But this is in contradiction with observation of the system, which shows that a very large set of numbers is necessary.
    If one considers that the numerical models correctly simulate the dynamics – which is an unproven conjecture – then typically millions of numbers are needed to define a climatic state at every single moment.
    From that it follows that for millions of very different states with different dynamics there will be a single deltaT.
    Hence Equation (1), with its 2 variables, contains dramatically too little information to be relevant in any way for the observed dynamics.

    • I believe the delta T may be a distraction. It is nonsensical to start with – it is not possible in a chaotic system to have linear (or even predictable) sensitivity – even one between limits. And then we need to keep in mind how those limits are derived. They are the outliers of the results of multiple models. The result presented for each of the models is a single run of a chaotic system that has many plausible outcomes. There is such an element of the arbitrary and qualitative in all this – pick a result that meets expectations and is not too far from those of your peers.

      On another note – we can, however, completely define the planet’s dynamic energy disequilibrium in three terms.

      All planetary warming or cooling in any period occurs because there is a difference between incoming and outgoing energy, an energy imbalance. The imbalance results in changes to the amount of energy stored, mostly as heat in the atmosphere and oceans, in Earth’s climate system. If more energy enters the atmosphere from the Sun than is reradiated back out into space – the planet warms. Conversely, if less energy enters the atmosphere than leaves – the planet cools. Thus Earth’s energy budget can be completely defined in three terms. In any period, energy in is equal to energy out plus the change in the amount of stored energy.

      This can be expressed as:

      Ein/s – Eout/s = d(GES)/dt.

      By the law of conservation of energy – the average energy in (Ein/s) at the top of atmosphere (TOA) in a period, less the average energy out (Eout/s), is equal to the rate of change (d(GES)/dt) in global energy storage (GES). The most commonly used unit of energy is the Joule. Energy in and energy out are most commonly reported in Watts (or W/m²) – and are more properly understood as a radiative flux, or a flow of energy. A flux of one Watt for one second is one Joule – which is known as unit energy. Most of the stored energy is stored as heat in the oceans, which is measured in Joules (or J/m²).
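      The bookkeeping in that identity can be sketched directly (the flux numbers below are invented for illustration; real TOA fluxes are in the neighbourhood of 240 W/m²):

      ```python
      # Ein/s - Eout/s = d(GES)/dt, integrated over one year.
      E_in = 240.0    # average energy in at TOA, W/m^2 (illustrative)
      E_out = 239.5   # average energy out at TOA, W/m^2 (illustrative)

      seconds_per_year = 365.25 * 24 * 3600

      # A flux of one Watt for one second is one Joule, so a sustained
      # imbalance times elapsed time gives the change in stored energy.
      dGES = (E_in - E_out) * seconds_per_year  # J/m^2 added to storage
      print(f"imbalance: {E_in - E_out:.1f} W/m^2")
      print(f"stored over one year: {dGES:.3e} J/m^2")
      ```

      A sustained half-Watt-per-square-metre imbalance thus stores on the order of 10^7 Joules per square metre per year, mostly as ocean heat.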

      The God’s eye perspective is sometimes very useful and it may help to know whether the planet is warming or cooling – and why – before diving into the maelstrom below.

  26. To me the value of this approach is not determined by the formal correctness of what Terry Oldberg writes. Going through the long text better than cursorily but less than carefully, I didn’t notice reason to suspect logical errors. The real question is: has this revealed something that we didn’t already know well enough for practical purposes, or will the text be used only as an additional excuse for perpetuating erroneous prejudices?

    It is certainly true that many arguments used in interpreting results of climate science have logical weaknesses, with real influence on the conclusions. On the other hand, many strong and completely valid arguments may involve imprecise formulations. It is also clear that logical proof is in general impossible and that this universal fact is often used against valid arguments. I do not claim that Terry Oldberg would make contrary claims, but I can see in this thread applause from people who are happy to misuse the arguments.

    One example is the discussion on the observability of ΔT and in particular of equilibrium ΔT. It is formally true that these values are not observable; their determination from actual observations requires the use of models, and what we then get is defined more precisely as the result of the model used than as a real property of the Earth system. But is this new? Who did not already know that? Does it make the number any less or more useful than it was without this analysis?

    I cannot really see that this text clarifies any issue. The limits of logic are too far from the practical situations where real problems occur. The logic has the same validity in most of those situations where no significant problems exist. Logic cannot help much in separating clear cases from problematic ones. The separation is done more efficiently by common sense and additional quantitative analysis, where needed.

    • Tomas Milanovic


      One example is the discussion on the observability of ΔT and in particular of equilibrium ΔT. It is formally true that these values are not observable; their determination from actual observations requires the use of models, and what we then get is defined more precisely as the result of the model used than as a real property of the Earth system.

      But this is precisely the point!
      You should have read the whole post more carefully.
      ΔT being non-observable (useless to add “formally”; it is non-observable in any sense), Equation (1) cannot be falsified.
      It is here that the logic kicks in.

      What you are saying is something else altogether and I am not sure at all that any orthodox climate scientists would admit this interpretation.

      You are saying that there is some metatheory, by definition correct, and ΔT is a pure mathematical construct which happens to have a temperature dimension and is defined such that ΔT = k.X, with k constant and X whatever you want.
      You say that of course ΔT cannot be observed and nobody actually knows what it is, but it is proportional to X because that is how it is defined in the metatheory.
      Don’t you see the circularity of this statement?
      ΔT = k.X is correct because that is how ΔT is defined.

      Of course in reality it works exactly the other way round – Equation (1) is the basis of the metatheory per se. So either one considers it an axiom or one tries to validate it by observation. This post eliminates the latter possibility, because no observation can show ΔT.
      What remains is to admit it as an axiom, and the metatheory will vitally depend on the validity of the axiom.

      Is that new? Well, it depends. I am sure that for people like Hansen or Pierrehumbert it would certainly be new, because it implies that the AGW theory as it is formulated today may still be either right or wrong.

      • Tomas,
        I read carefully enough to understand the point made and to disagree totally about its value.

        It is really annoying that valid scientific argumentation is attacked by irrelevant formalism – should I say sophistry.

  27. Terry,
    “Observable Science” can play tricks on your eyes. Climate science has a flawed mindset, not even understanding logic and physical evidence.

    So by climate-science logic… we observe the sun to be larger in the morning and evening, so the sun must be closer to the planet in that time period.
    I can see that actually being published as absolute science, as the current logic is set up this way.

  28. Terry – Just a question and a thought – could the term “fabricate information” be replaced with extrapolate or emulate? I ask as there are excellent statistical methods for emulating or extrapolating, but the use of the word “fabricate” seems to conflate good statistical methods with the dishonest invention of data unrelated to the data sets that exist. Do you link the two (a “trick” is always a trick, as it were) or do you use different terms for these two approaches (i.e. fabrication used for the unjustified creation of data passed off as real data)?

    One final thought, you seem to have a Popperian approach to falsification, do you think there is any value in counterfactual projections (scenario development)?

    Thanks a lot for any clarification or thoughts you might have on this, and best wishes with the project.

    • In information theory, “information” is the measure of a certain kind of relationship between a pair of state-spaces. As a measure, information is a real-valued mathematical function. To misrepresent this value on the high side is the practice that I referenced by the phrase “fabricating information.” You should understand that “to fabricate information” is not the equivalent of “to fabricate empirical data,” for data and information are different concepts.
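      For concreteness, here is a minimal sketch of a measure of that kind, assuming the familiar mutual-information form I(X;Y); the joint probability table is invented purely for illustration:

      ```python
      import math

      # Mutual information between two discrete state-spaces, computed
      # from an invented joint probability table P(condition, outcome).
      joint = {
          ("cloudy", "rain"): 0.3, ("cloudy", "dry"): 0.2,
          ("clear", "rain"): 0.1, ("clear", "dry"): 0.4,
      }

      # Marginal distribution of each state-space.
      px, py = {}, {}
      for (x, y), p in joint.items():
          px[x] = px.get(x, 0.0) + p
          py[y] = py.get(y, 0.0) + p

      # I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))
      mi = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())
      print(f"information conveyed: {mi:.3f} bits")
      ```

      Misreporting that value on the high side would be “fabricating information” in the sense described; fabricating empirical data would instead mean inventing the entries of the table.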

      Regarding your question on the value of counterfactual projections, the cognitive psychologists Daniel Kahneman and Amos Tversky describe these projections as a consequence from the heuristic which they call the “simulation heuristic.” There is a summary at .

      A “heuristic” is an intuitive rule of thumb that people use in deciding which of the several possible inferences that are candidates for being made by a model is the one correct inference. The simulation heuristic assigns a numerical value to the probability of an event based on how easy it is to mentally picture this event.

      The method of heuristics suffers from the logical shortcoming that it fails to uniquely identify the one correct inference. Thus, for example, under the simulation heuristic, different people may assign different values to the probability of a particular projection.

      In identifying more than one inference as the one correct inference, the method of heuristics violates the law of non-contradiction; the law of non-contradiction is a principle of reasoning. Under a logical method for the construction of a model, the principles of reasoning replace the method of heuristics with the consequence that the law of non-contradiction is not violated.

      • Terry,
        thanks for this. I wasn’t aware of Daniel Kahneman and Amos Tversky’s work, which I will have a look at.
        I work in the area of economic modelling (your notion of a model would be very alien to the social sciences) and, in this sense, our models are of necessity heuristic (we have the expression “all models are wrong” for the reason that a model is used to draw out something or illustrate relationship(s) that are lost in the complexity of reality).

        Thanks once again and best wishes.

      • Paul:
        Three points in response:

        1) Back in the 1960s, Ron Christensen and Tom Reichert built some economic models under the principles of reasoning and used them in the successful day trading of commodities futures contracts. (See “Commodity Futures Prices Prediction” in Entropy Minimax Sourcebook, Vol. IV: Applications, ISBN 0-938-87607-4, 1981, p. 643.)
        2) People who build models under the method of heuristics tend to fabricate information. When they fabricate it, their models are wrong. Perhaps this phenomenon inspires the expression that “all models are wrong.”
        3) One of the ways in which scientists fabricate information is by making distributional assumptions. Financial economists are prone to fabrication of information through the assumption that the data are normally distributed. As I understand it, prior to the financial meltdown, banks made this assumption in assessing the risks of their loan portfolios. In the meltdown, they discovered that the tails of the distribution were “fatter” than those of a normal distribution function. In this sense, the financial meltdown can be viewed as a consequence of the fabrication of information by financial economists. The way out of this quagmire is to avoid making distributional assumptions. Currently, academic economists don’t know how to do this, and I’ve found no interest among economics professors in learning how. They seem to be quite happy to continue in the practice of fabricating information by distributional assumptions. Thus, we may be in for many more avoidable financial meltdowns.
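        The “fat tails” point can be illustrated deterministically by comparing two closed-form tail probabilities; the Cauchy (Student-t with 1 degree of freedom) is my choice of an extreme fat-tailed stand-in, not a claim about actual loan portfolios:

        ```python
        import math

        def normal_tail(k):
            # Two-sided P(|Z| > k) for a standard normal distribution.
            return math.erfc(k / math.sqrt(2))

        def cauchy_tail(k):
            # Two-sided P(|X| > k) for a standard Cauchy (fat-tailed) distribution.
            return 1.0 - (2.0 / math.pi) * math.atan(k)

        for k in (2, 4, 6):
            print(f"k = {k}: normal {normal_tail(k):.2e}  cauchy {cauchy_tail(k):.2e}")
        ```

        At k = 4 the normal model puts the event near 6 in 100,000, while the fat-tailed model puts it above 15 in 100 – the kind of underestimate attributed to the banks above.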

      • “Thus, we may be in for many more avoidable financial meltdowns”

        Or a rise in temps of +7C.

  29. Thank you for demonstrating that the IPCC and its work products are not about logic. That explains a lot of the problem.

    • In many ways the IPCC has just been reviewing and assessing scientific, technical and socio-economic information related to the understanding of climate change.

      • paul,
        Do you imply that the larger climate science enterprise is lacking in logical processes?

      • Hunter,
        thanks for your comment. The IPCC are involved in a complex assessment of existing science, which involves selection, emphasis, judgement, expertise and experience that are NOT neutral, but disinterested.

        The climate science enterprise is not a unity, but involves a wide range of methodologies, methods, models and data sets.
        We must be careful about “logical processes,” as this expression is ambiguous. I studied formal logic as an undergrad AND work (at a distance) with climate scientists – so I probably have some insight on this. If you mean does it use explicit syllogisms as part of its analysis – then no. Does climate science use induction and deduction? The answer is yes. Do their models use formal equations? Then yes, of course. If you mean a technical process for how models are developed, as understood in mechanical engineering (and I’m reading this blog by Terry to learn more – so excuse my ignorance on the topic), then the answer is probably no. But models come in all types, and as long as there is transparency about what the model is doing, what its unit of analysis is, and how it relates to actual data/analysis, then that is fine by me (and contemporary philosophy of science!). So, to sum up, in the sense of mechanical engineering, much of climate science does indeed lack specific logical processes. There are, though, more serious problems with climate models.

  30. Tomas Milanovic


    This is not really in the right place on this thread, but I can’t find where I read the short remark of yours that I wanted to comment on.
    You wrote somewhere (concerning SST, I think) a scathing comment about the use of EOF.
    I enthusiastically agree and applaud!

    During an important project many years ago, I was studying the behaviour of a complex, probably chaotic system.
    After having spent a long time trying to tackle the system analytically, I had to give up. There was no useful theoretical framework, and all the approximations I came up with were horribly unstable.

    So guess where I finished? Yes EOF.
    I followed the advice of a professor: “If you don’t know what to do, do EOF.”
    This had a negative result: I learned EOF.
    Fortunately it also had a positive result: I learned that in 90% of problems (an optimistic evaluation) EOF results are spurious and/or impossible to interpret clearly.

    I am glad that I have found another person who is wary and distrustful towards indiscriminate use of EOF.
    I am not saying that it is useless, but it takes a REAL, experienced expert to avoid the traps and to be able to interpret the results in at least a semi-physical way.
    I guess you don’t tell your students what my professor once told me, and that’s good for them.

  31. A non-technical thank-you for your technical Part III. Parts I and II now mean much more. You have a knack for building bridges. Thanks again!

  32. I have forgotten much of my logic lectures, but one thing I seem to remember is that for an answer to be a logical answer it must be necessary and sufficient. CO2 fails that logic test. It is neither necessary nor sufficient to change climate. Observe the ice core samples, which show an 800-year offset between temperature and CO2 level. At the point where temperature started going back down, CO2 was still on the rise: it was not sufficient. At the bottom, CO2 was low but the temperature went up for 800 or more years: it was not necessary.

    Gravity is both necessary and sufficient to explain a falling apple.

  33. By 1963, the problem of induction had been solved. Solving it revealed three principles of reasoning…. Few scientists, philosophers or statisticians noticed what had happened.

    Terry Oldberg: So you say, but count me in with JJ in doubting that the problem of induction has indeed been solved.

    If “few scientists, philosophers, or statisticians” have noticed that this problem of induction has been solved, then perhaps it has not been solved.

    Your claim sounds to me on par with saying that the “P versus NP Problem” was solved back in 1963 but few mathematicians or computer scientists noticed.

    The “Problem of Induction” is a classic, fundamental challenge. I have great difficulty believing it was solved or close to being solved decades ago and few noticed.

    Your article is interesting and pertinent; it is also long and technical. I’m sure I could spend a couple weeks or more mulling it over. However, when you start out — bang! — with such a dubious claim, I’ve got to wonder whether it would be worth the effort.

    • huxley:
      Actually, though you and JJ are unfamiliar with them, the ideas were published in the peer-reviewed literature more than 25 years ago and have been thoroughly vetted in the interim without being refuted. Successful applications of these ideas have been made on a score or so of occasions.
      The ideas were not published in the philosophical literature but rather in the literature of systems science and cybernetics. In this literature, entropy maximization and conditional entropy minimization have been described as “principles of reasoning under uncertainty.”
      If you’re surprised that within a period of 25 years philosophers did not pick up on these colossally important ideas even though they were published in the literature of a different field, I’m with you.

      • Terry Oldberg: Thanks for your gracious response.

        I believe you believe that the problem of induction has been solved outside philosophy. However I don’t know what I can do with that information without spending a great deal of time assessing your claim for myself.

      • huxley:

        You can get a grasp of the fundamentals by repeating the mantra that logic is about the ideas of “measures, inferences, optimization and missing information.” Repeating this mantra reminds one that, in the probabilistic logic, each inference has a unique measure; this measure is the missing information in the inference for a deductive conclusion, per statistical event: the so-called “conditional entropy” or “entropy.” In view of the existence and uniqueness of the measure of an inference, the problem of induction can be solved by optimization. In particular, that inference is correct which either minimizes the conditional entropy or maximizes the entropy under constraints expressing the available information. What, in building models, one can do with the grasp provided by repetition of the mantra is to: a) call in a firm that knows how to build models in accordance with logic, b) devote the necessary time and expense to learning to build them in accordance with logic, or c) elect to build them in accordance with illogic. Faced with this choice, researchers overwhelmingly choose c). I don’t know why they choose c), but observe that they do.
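As a minimal sketch of what entropy maximization means as a selection principle (the three-outcome setup below is invented for illustration, not taken from Oldberg's publications): among all distributions consistent with the available information, the principle selects the one of greatest entropy.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Outcomes 0, 1, 2 with the single constraint E[X] = 1.0.
# Every distribution of the form p = (q, 1 - 2q, q) satisfies the
# constraint, so the available information leaves one free parameter q.
# Entropy maximization selects the member of this family with the
# largest entropy -- the uniform distribution, q = 1/3.
best_q, best_h = max(
    ((q / 1000, entropy((q / 1000, 1 - 2 * q / 1000, q / 1000)))
     for q in range(1, 500)),
    key=lambda t: t[1],
)
print(round(best_q, 3), round(best_h, 3))
```

The grid search stands in for the Lagrange-multiplier solution one would use in practice; the point is only that "the one correct inference" is picked by optimizing a unique measure rather than by a heuristic.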

  34. Steve Milesworthy

    The equilibrium temperature is a rule-of-thumb figure derived from things such as studies of palaeo and current climate, and from models. It’s a device to simplify the discussion: “My model is high sensitivity, 4.5C; your model is low sensitivity, 1C.”

    The thing that *can* be measured is temperature (and other climate variables) of various parts of the earth and modelled temperature of various parts of the model earth.

    The modelled temperatures are called projections because we do not know what will happen to emissions and so forth. That does not mean that the projections cannot be tested.

    Putting it simply, in three different ways:

    1) Policy makers and people could choose a particular scenario to follow, and the projection that goes with it becomes a prediction.
    2) Assuming the scenarios have a wide spread, the future reality will follow one or two of the scenarios reasonably closely. This will allow you to compare the future scenario with the closest guess.
    3) In the future, you can rerun your model with forcings equal to the real observed forcings to see if your model matches real temperature/climate.

    • Shock horror. Hold the front page. Alarmist suggests testing climate models against reality!!!

      Many programmers fall down with convulsions and shortness of breath. Revolution in climate modelling predicted. ‘Doing Experiments’ is the new buzzword. Retired engineers, chemists and physicists recruited to teach this arcane and nearly forgotten art. Climate Etc website becomes key resource in future of the planet.

      Closer examination shows Alarmist suggested this revolutionary idea should be adopted ‘in the future’. Kicked into the long grass until a suitable opportunity arises. Climate modellers recover their composure, but what a nasty fright for them!

    • Steve:
      Based upon your remarks, I gather that you and I agree on the following propositions:
      1) the “models” which you reference in your remarks are examples of modèles, and
      2) the traits of the WG1 inquiries that led to the claims referenced by my article are those of the illogical “left-hand” list.
      In short, the methodology by which WG1 reached its conclusions was illogical. If you agree, please signify same. Otherwise, please share your grounds for disagreement.

      • Steve Milesworthy


        I think you are saying that because there is no probability associated with each scenario, the set of projections fails to be a “predictive inference”. But (in an “ideal” world) we choose (via our “policy makers”) the scenario to take, so the probability of the chosen scenario becomes 1 and the particular scenario run becomes our “model”. The falsifiability of this approach depends on past success of using the same method.

        Also, the claims will be falsifiable in the future (when one of the scenarios turns out to be closer to reality than all the others). A bit like the Higgs boson: no one has seen it yet, but people have spent billions looking for it.

        Would you agree with me that your quoted figure of $40 trillion is, at best, likely obtained through an analysis of modèles?

      • Steve:
        To assign a numerical value of 1 to the probability of a projection is insufficient to create a predictive inference. In the context of a longitudinal study, the idea of a “predictive inference” references a time sequence of independent statistical events. Each event in this sequence has a start time and an end time. At the start time, the state of nature called the “condition” is susceptible to being observed. At the end time, the state of nature called the “outcome” is susceptible to being observed. In a prediction, observation of the condition assigns a numerical value to the probability of the outcome.

        A climatological “projection” is a function that maps the time to the spatially and temporally averaged temperature at Earth’s surface. Assignment of a numerical value to the probability of a projection does not convert this projection to a prediction. The two entities have differing descriptions.

        A projection could be converted to a sequence of predictions by dividing the time-line into segments of equal length (let us say 30 years), associating an independent event with each segment and, at the start time of each event, assigning a probability of 1 to the projected average temperature at the end time. If this were to be done, the resulting predictions would surely be falsified by the evidence.

        The predictions would be falsified in 2 ways. First, the projected condition of each event would fail to match the observed condition. Second, the projected outcome of each event would fail to match the measured outcome. If one were to iron out these and other wrinkles in the methodology that I’ve characterized as “illogical,” I believe one would arrive at the methodology that I’ve characterized as “logical.”
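The condition/outcome event structure of a predictive inference can be made concrete with a toy sketch (the conditions, outcomes and probabilities below are invented for illustration, not drawn from any climate study):

```python
import random

random.seed(1)

# A predictive inference references a sequence of independent statistical
# events.  Each event has a condition observable at its start and an
# outcome observable at its end; the model assigns a probability to each
# outcome conditional on the observed condition.
model = {
    "warm-start": {"warmer": 0.7, "cooler": 0.3},
    "cool-start": {"warmer": 0.4, "cooler": 0.6},
}

# Validation: over many observed events, the relative frequency of each
# outcome, given each condition, should match the predicted probability.
counts = {c: 0 for c in model}
hits = {c: 0 for c in model}
for _ in range(100_000):
    condition = random.choice(list(model))
    # Stand-in for nature: outcomes are drawn from the model itself, so
    # validation is bound to succeed here; a real test uses observed
    # outcomes that the model builder does not control.
    outcome = "warmer" if random.random() < model[condition]["warmer"] else "cooler"
    counts[condition] += 1
    hits[condition] += outcome == "warmer"

freqs = {c: hits[c] / counts[c] for c in model}
for c in model:
    print(c, "observed", round(freqs[c], 2), "predicted", model[c]["warmer"])
```

A bare projection, by contrast, supplies only a curve of temperature against time; it defines no such events and so offers nothing for relative frequencies to be checked against.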

        Regarding the origins of Lomborg’s estimate, I’ve not looked into the matter but suspect it to be the product of a modèle. Like climatologists, economists are prone to failing to make the model/modèle disambiguation, thus confusing non-falsifiable modèles with falsifiable models. Economists cover their tracks by muttering the erudite-sounding but unfamiliar Latin phrase ceteris paribus (other things being equal) at appropriate points in their arguments. By muttering this phrase, economists fabricate information, thus reducing complex economic systems to cause-and-effect relationships. By muttering phrases such as “the equilibrium climate sensitivity,” climatologists accomplish the same end.

      • Steve Milesworthy

        I do not understand your *logical* basis for saying that the prediction will fail in two ways. You say the prediction would “surely” be falsified by the evidence. “surely” does not cut it for me. Why must the projected condition fail to match the observed condition?

        In respect of your initial apparent acceptance of the modèles used in economic forecasting, do you not see that we humans are able to judge these projections ourselves? Your logic seems to have gone haywire somewhere. Many of your logical arguments appear to be equally applicable to everyday experiences, such as “Should I buy insurance?”, and they certainly apply to such decisions as “Should we allow this oil field to be drilled?” or “Should we go to war in Iraq to defend our Middle East interests?”.

        This thread certainly does seem to be an example of what another thread claimed; that climate science is being held to a higher standard.

  35. Steve Milesworthy

    Being British (I assume) you should know that sarcasm is the lowest form of wit. Since this is an article looking at the logicality of arguments, my post is over-simplified – though perhaps not simple enough for you? If you don’t understand what an article is about you don’t *have* to comment.

    • ‘Sarcasm is the lowest form of wit’.

      We have a long tradition in UK of ‘taking the p**s’ of self-righteousness and pomposity. No respecter of persons, we. (A cultural artefact and tradition taken to even greater heights by our cousins in Oz)

      And climate modellers with their stubborn belief in the rightness of their models – but absolutely no experimental proof – are ideal targets. I name no names but regular contributors here amaze me with their arrogance and self-importance.

      I make no apology for exercising the rights of my heritage. Since ‘global warming ‘ is, by definition, a global problem , you can reasonably expect contributions from all over the world. Expressed in a way that comes naturally to the writer.

      • To prevent any doubt, I do not mean to include Steve Milesworthy in my remark:

        ‘I name no names but regular contributors here amaze me with their arrogance and self-importance’

      • “And climate modellers with their stubborn belief in the rightness of their models – but absolutely no experimental proof – are ideal targets.”

      • Steve Milesworthy

        Why do you criticise others for wasting time with ad hominem comments, and then advocate this sort of vague, but inaccurate, generalisation?

        Climate modellers are well aware of many shortcomings of their models. And I bet you can’t find a climate modeller who believes that his or her own model is definitely giving better answers than all of the other competitor models.

      • Steve:
        Thank you for pointing out my error.

  36. One of the great minds of the 20th century on the problems of logic in probability theory was Kolmogorov’s assistant director, Nalimov. In his somewhat prescient essay on the Eschatological Problem, he summarizes the arguments very well in his introduction (Nalimov, 1980):

    In recent years, the ecological problem has acquired apocalyptic overtones. To solve this problem, science must not only study a phenomenon, but must also learn to predict its evolution on a large time-scale. Furthermore, the solution to the ecological problem cannot but change the direction of our cultural progress. Never before has science been faced with problems of such global significance. Is it ready to solve them? To answer this question we must understand whether scientific forecasting is possible, whether scientific ideas can influence social behavior, whether the historical origin of this crisis can be scientifically analyzed, whether a scientific approach to setting a global goal is possible. Below I shall not try to answer these questions but only to discuss them. The problem is so serious that it should be discussed freely and objectively.

    My sole aim is to demonstrate that there may be another approach to the problem, different from the existing one. This chapter is written in an axiomatic-narrative style. Illustrations and arguments serve to elucidate the axiomatic statements. It is not my purpose to prove anything or to convince the reader. It is up to the reader to regard my approach as legitimate or not on the basis of his own experience and on the facts as he knows them.

    We have suddenly become aware of the threatening aspect of the ecological crisis as a result of the forecasting of its future development. Therefore, it seems natural to start our discussion with a logical analysis of forecasting. Formally, forecasting is nothing more than extrapolation. However, we do hope to get precise and definite ideas of the future on the basis of fairly vague notions of the mechanisms which have been operating in the past.

    Is scientific forecasting of this type possible? Strictly speaking, the answer is no. In the natural sciences, only those constructions are considered scientific which can be verified by experiment. Here lies the demarcation line between scientific knowledge and non-scientific constructions. This is not “positivism” but the standpoint of a naturalist, which determines his everyday life. All of us are well aware of the difficulties connected with a formal definition of what constitutes the experimental testing of a hypothesis. Verification is logically inept. Falsification, in the terms of Popper, is logically clear but far from universally applicable. One could describe numerous natural-scientific constructions which have not been tested by falsification. The American study mentioned earlier is an ecological illustration. An extensive study of five large ecosystems (the steppe, the tundra, etc.) was carried out, and mathematical models were built that included up to 1,000 parameters. In building the models, experimental data were used, but the model as a whole could not be subjected to direct experimental testing. Thus, as mentioned above in the context of statistical inference and its problems, these models do not inspire confidence (Mitchell et al., 1976).

    In forecasting, the comparison with reality can be made only at the moment when the prediction comes true. At the time of its formulation, it cannot be tested and, therefore, in its most general form, it has no scientific status.

    • Maksimovich,
      The writing of Nalimov appears very good, but it ends in the middle of the argument. Thus I am not at all certain that it will be widely interpreted in the way Nalimov intended. Either I misinterpret his intentions or many others almost certainly will (basing this guess on earlier observations of the discussion on this site).

      For me it is impossible to believe that the earlier paragraphs of Nalimov’s text could lead to the short conclusion of the last paragraph as anything more than one side of the answer.


  37. In its Fourth Assessment Report the IPCC states: “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.”

    The IPCC explains this projection with a chart in which Global Mean Temperature Anomaly (GMTA) trend lines were drawn for four periods ending in 2005 and beginning in 1856, 1906, 1956 and 1981. These trend lines give an increasing warming rate: from a low of 0.045 deg C per decade for the RED trend line (1856–2005), to 0.074 deg C per decade for the PURPLE trend line (1906–2005), to 0.128 deg C per decade for the ORANGE trend line (1956–2005), up to a maximum of 0.177 deg C per decade for the YELLOW trend line (1981–2005).

    IPCC then concludes, “Note that for shorter recent periods, the slope is greater, indicating accelerated warming”.

    Is this climatology logic of the IPCC valid?
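A toy calculation (using synthetic data, not the HadCRUT record) shows why nested trend lines alone cannot establish acceleration: a series generated by a constant trend plus a natural cycle also yields steeper slopes over shorter recent windows.

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic anomaly series: a constant trend of 0.005 deg C/yr plus a
# 60-year cycle peaking in 2005.  Nothing in the generating process
# accelerates, yet the windows ending on the cycle's rising limb show
# progressively steeper least-squares trends.
years = list(range(1856, 2006))
temps = [0.005 * (y - 1856) + 0.1 * math.cos(2 * math.pi * (y - 2005) / 60)
         for y in years]

slopes = {}
for start in (1856, 1906, 1956, 1981):
    idx = years.index(start)
    slopes[start] = ols_slope(years[idx:], temps[idx:])
    print(start, round(10 * slopes[start], 3), "deg C/decade")
```

The steepening slopes here are an artifact of window choice against a cycle, so steepening slopes by themselves do not discriminate between "accelerated warming" and "constant trend plus cycle".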

    • John Costigane


      Using a known fact about moving averages (shorter periods have steeper slopes in a long-term rising trend; the same holds for falling values, as in ice ages) to pretend unprecedented events are occurring shows the IPCC in a bad light. This organisation carries much of the blame, and should face investigation.

    • Girma:
      Though the IPCC makes projections, it sounds as though it makes no predictive inference. It would follow from the lack of a predictive inference that the IPCC’s claims were illogical.

  38. The problem with making any argument about basic logic in a scientific context is that basic logic is only relevant to the hypothesis-building process; after the hypothesis is built, one must depend solely on the data and the statistics. The argument that the outcome is logical is a clearly non-scientific approach, as it cannot be falsified, and thus makes climate science a “soft science” like philosophy and not a hard science like physics. This entire argument, and the desire to participate in it, means we should not trust climate-science models much, logically.

  39. BTW Terry, no need to mark with copyright anymore, unless you are just trying to scare folks off.

    “Copyright law is different from country to country, and a copyright notice is required in about 20 countries for a work to be protected under copyright.[32] Before 1989, all published works in the US had to contain a copyright notice, the © symbol followed by the publication date and copyright owner’s name, to be protected by copyright. This is no longer the case and use of a copyright notice is now optional in the US, though they are still used.[33]”

  40. ΔT = TECS * log2(C/C0), where TECS is the equilibrium climate sensitivity.

    Ceri Reid:
    This equation is the basis for global warming and was demonstrated in the laboratory nearly a century ago. Simply expose two gas samples with different levels of CO2 to an infrared source and follow the temperature of each gas until it equilibrates. The sample with the higher CO2 concentration will come to a slightly higher equilibrium temperature.

    This relates to the Stefan–Boltzmann law governing the absorption and emission of radiation:
    j* = σT^4. Again, at equilibrium, the amount of electromagnetic energy radiated is a function of absolute temperature to the fourth power.

    The problem is that the “climate” is not an equilibrium process. The conditions in the atmosphere and in and on the earth’s surface are never in equilibrium, so it is impossible to measure ΔT. Measuring an “average” ΔT does not work because of the Stefan–Boltzmann equation: the amount of radiation varies drastically (as T^4) with temperature, making it impossible to calculate an accurate temperature for every square inch of the earth’s surface. Add in all the heat-transfer processes going on in the atmosphere (clouds, wind, moisture transfer, evaporation, condensation, ocean heat, CO2 absorption and desorption, ocean currents, etc.) and the interactions between all the known and unknown heat-transfer mechanisms, and ΔT simply is meaningless.

    In any process more complicated than a thermometer in a glass of water or combusting some chemicals in a calorimeter, the rate of change is as important as, or more important than, the equilibrium value. The equilibrium value sets a boundary, but it doesn’t say anything about how the process may eventually get to that boundary. In a complex system like the climate, which does not have fixed boundary conditions (the solar flux is not constant) and whose internal processes can change the boundaries (top of atmosphere), you have a problem that simply can’t even be approximated except through some kind of Monte Carlo method.
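For concreteness, the sensitivity relation at the head of this comment can be evaluated numerically, along with the T^4 nonlinearity noted above (the concentrations chosen are illustrative round numbers, not a claim about actual forcing history):

```python
import math

def delta_t(tecs, c, c0):
    """Equilibrium warming for a CO2 change from c0 to c, per the
    logarithmic relation dT = TECS * log2(C/C0)."""
    return tecs * math.log2(c / c0)

# A doubling of CO2 recovers the sensitivity itself, by construction.
print(delta_t(3.0, 560.0, 280.0))   # 3.0

# The WG1 "likely" range applied to an illustrative 280 -> 390 ppm rise:
for tecs in (2.0, 4.5):
    print(tecs, round(delta_t(tecs, 390.0, 280.0), 2))

# Stefan-Boltzmann nonlinearity: the flux radiated at the average of two
# temperatures is less than the average of the two radiated fluxes, so
# averaging temperatures does not commute with computing radiation.
sigma = 5.67e-8                     # W m^-2 K^-4
t1, t2 = 250.0, 300.0
flux_of_mean = sigma * ((t1 + t2) / 2) ** 4
mean_of_flux = (sigma * t1 ** 4 + sigma * t2 ** 4) / 2
print(round(flux_of_mean, 1), round(mean_of_flux, 1))
```

The last two printed numbers differ by several W/m², which is the quantitative content of the commenter's point that an “average” ΔT hides the T^4 variation.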

  41. I do not believe one can so easily dismiss a parameter as unobservable. There are many scientific quantities which must be calculated from theory plus obs rather than by direct measurement. This does not mean that climate sensitivity or delta T are currently measurable, and Spencer makes good arguments that current attempts are erroneous. But that does not mean they are in principle unmeasurable.

    • Craig:
      How would one measure deltaT?

      • Steve Milesworthy

        “How would one measure deltaT?”

        Step 1, deltaT would be defined as the difference in a particular temperature metric (such as the area averaged temperature based on a wide network of sensors, or the satellite obs of lower troposphere).

        Step 2, one constructs and validates an accurate model of the climate.

        Step 3, one runs it out to equilibrium with a certain amount of CO2. DeltaT is T at end minus T at start.

        (obviously Step 2 is particularly hard but not *logically* impossible).

      • Steve:
        In terms of the disambiguated language, are you thinking of a “model” or a “modèle”?

      • Steve Milesworthy

        Forgot to click the Reply button:

        Since we are talking about a climate model that has been validated by checking that it successfully reproduces past climate, this is a model.

  43. Steve:
    As I understand it, you contend that a statistically validated model could be built that extrapolates from the current, non-equilibrium state to the future equilibrium state of the Earth. There are some barriers to the validation of such a model. I address some of these barriers below.

    The non-equilibrium state is an example of a condition. The model is deterministic with the consequence that the number of possible conditions is very large.

    The equilibrium state is an example of an outcome. There are an infinite number of possible outcomes, each corresponding to a different equilibrium temperature.

    The number of condition-outcome pairs is infinite. For statistical significance of the validation, many observed events must correspond to each condition-outcome pair; in each of these events, the state of the Earth must have gone from non-equilibrium to equilibrium. Thus, observed events of infinite number are required for the validation. However, prospects for observing even one such event are dismal, as the state of the Earth has never been observed to go to equilibrium.

    The equilibrium temperature lies an infinite period of time in advance of the non-equilibrium condition. Thus, the event period is infinite. In a finite period of time, the number of events that may be observed is nil. Thus, there is a mismatch in which the requirement is for observed events of infinite number but the number that can be observed is no more than nil.

    In order for the model to be statistically validated, the numbers of conditions and outcomes must be made finite and small. The event period must be made finite and small. However, when these steps are taken the ability of the model to extrapolate to the equilibrium temperature is lost.

    • Steve Milesworthy

      “As I understand it, you contend that a statistically validated model could be built that extrapolates from the current, non-equilibrium state to the future equilibrium state of the Earth.”

      No. A statistically validated model could be built that extrapolates from the past state to the current state. This validates it for providing projections based on scenarios (your modèle idea). Policy makers and people choose a scenario to follow.

      Calculating equilibrium climate sensitivity is not the aim of the process – since the climate will likely never reach an equilibrium state. The equilibrium sensitivity number is simply a useful number to know.

      Your argument seems to be that it is difficult to statistically validate a model, rather than it being logically impossible.

      • Steve:
        It sounds as though I failed adequately to explain what I meant by a model. Thanks for giving me the opportunity to clarify. In the following remarks, when the word model is placed in quotes, it ambiguously references a model and a modèle.

        From your response of February 18, 2011 at 11:54 am, I gather that our discussion references a model. From your response of February 19, 2011 at 11:51 am, I gather that this discussion references a modèle. Your idea seems to be of a kind of hybrid that is a model in reference to observed states of the past and a modèle in reference to unobserved states of the future. I’ll call this hybrid a “model.”

        Your “model” is consistent with the idea of a “model” as a parametric form with a value assigned to each parameter by reference to a portion of the observational data. In reference to states of the observed past, the “model” is a model. In reference to the states of the unobserved future, the “model” is a modèle.

        However, the idea that I reference by the word model has no parametric form and only a single parameter, representing the noise level. To assume a parametric form has the logical shortcoming of fabricating information. A consequence from this fabrication is for the “model” to work well enough in the vicinity of the data on which it was trained but to be statistically invalidated in service.

        Models rarely fail in service. The rarity of failure is a consequence from the lack of fabrication of information.

        A model references a long sequence of independent statistical events. Each such event has a definite time period. Each prediction is associated with a particular event and extends precisely over the period of this event. While a projection has no period, a prediction has one.

        As the period of each event increases, model builders observe that the noise level increases. Finally, a period is reached at which the noise overwhelms the signal making it impossible to predict. While beyond this period a model cannot predict, a parametric “model” seems to the user of this model to make perfect predictions; it seems to make perfect predictions because, unknown to the user, the builder of this “model” has fabricated all of the missing information.

        From the research in meteorology that I referenced in Part III, we know that if the number of outcomes exceeds 2 or the event period exceeds about 1 year, we cannot predict, because the noise has overwhelmed the signal. With outcomes of infinite number and an infinite event period, if your “model” seems to predict, this is because you, the model builder, have fabricated information that is entirely missing.
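A minimal sketch of the signal-being-overwhelmed idea, using an invented AR(1) process rather than any real meteorological data (the persistence parameter 0.9 is illustrative):

```python
import math
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

# An AR(1) "weather" series: strong persistence step to step, but no
# long-range signal.
n = 200_000
x = [0.0]
for _ in range(n - 1):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))

# Condition = the state at an event's start; outcome = the state one
# event period later.  As the period grows, the condition carries less
# and less information about the outcome.
signal = {}
for period in (1, 10, 50):
    starts = range(0, n - period, period)
    signal[period] = corr([x[i] for i in starts],
                          [x[i + period] for i in starts])
    print(period, round(signal[period], 2))
```

The condition-outcome correlation decays geometrically with the event period, which is one concrete sense in which lengthening the period lets noise overwhelm the signal.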

      • Steve Milesworthy

        Terry, when planning for the future it is reasonable to consider a set of realistic scenarios over which one has some degree of control. The fact that there are a number of scenarios does not invalidate the usefulness of the projection.

        Sure if you add in the future unpredictable events, the projection will be incorrect. But it is not unreasonable to assume that these events will not change the equilibrium sensitivity inherent in the climate (and the validated model).

        In short, you are overstating the importance of future uncertainty and overstating the impact of likely future events.

        Real world choices clarify this problem. When does a person decide to start investing in a pension, for example? What proportion of the money goes in “speculative” investments, and what proportion goes in apparently safer investments (not even the safest investments are guaranteed)? How does the age, health and outlook of the person change the choice?

        People make such decisions without the benefit, even, of a model that has shown to be correct in the past (Unlike physical laws past performance is not a guarantee of future.)

        The point is we *have* to make choices. Doing nothing is a choice. In short you appear to be arguing that all choices are equally (ir)rational. But I think you are basing that belief on the overstatement of uncertainty and not on logic.

      • Steve:

        Thanks for taking the time to respond!

        It sounds as though you are promoting the use by policy makers of the heuristic which the cognitive psychologists Daniel Kahneman and Amos Tversky call the “simulation heuristic.” There is a description at .

        A heuristic offers an approach to deciding which of several inferences that are candidates for being made by a model or modèle is the one correct inference. While the decision that is made under the principles of reasoning is optimal, the decision that is made under a heuristic is sub-optimal.

        The degree of sub-optimality depends upon the model or modèle builder’s luck in the selection of the heuristic. Thus, the degree might be maximally slight or maximally great.

        In reference to the simulation heuristic and its assignment of values to the probabilities of the various projections from a modèle, it is possible to determine the degree of sub-optimality. In assigning values, the policy maker conveys no information to himself/herself about the outcome of his/her policy decision. This conclusion follows from the fact that the observable states of nature called “outcomes” are not a property of a modèle, whereas the information conveyed by a model to its user is the measure of the relationship between the model’s conditions and outcomes. Thus, the degree of sub-optimality is maximally great, with the consequence that the projection is useless for policy making. When climatologists promote this informationless approach to policy making on their authority as “scientists,” and “scientists” from other disciplines join them, this promotion maximizes the disutility of the approach for the people who bear the costs of the resulting policies. It maximizes the disutility by making it seem to the policy makers that they have information when they have none.

        Rather than depriving policy makers of information and maximizing the disutility from this deprivation, climatologists have the option of adopting the policy of building models under the principles of reasoning. A consequence would be for the maximum possible information to be conveyed to policy makers about the outcomes from their policy decisions.

      • Steve Milesworthy

        Interesting concept.

        The Simulation heuristic is not what I’m talking about, but the concept is useful for expanding my point. As the Simulation Heuristic is described, people make decisions and have opinions based on their own, sometimes limited, common sense and experience. The idea of running a “model” for a range of scenarios is, however, intended to extend policy makers’ experience and knowledge beyond their own preconceptions.

      • Steve Milesworthy

        PS. I think many arguments against AGW and CAGW (catastrophic AGW) come from the basis of the Simulation Heuristic you discuss. For example, concerns about the idea of spending money and resources to fix a problem that is not yet apparent or cannot be comprehended, or the idea that disasters will happen anyway so trying to stop CAGW is just setting us up for a fall.

  44. Having read some recent comments in this thread I have a strong feeling that this quote from Richard Feynman must be repeated here:

    “Philosophers say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong.”

    • Pekka:

      It sounds as though you are accusing people who have commented (possibly including me) of empty philosophizing. You should understand that the theory of logic which I have espoused is observationally falsifiable and thus belongs to the sciences. This theory has been tested in an enormous number of events without being falsified. Thus, the theory can be said to have been validated.

      • Yes, I do indeed claim that you strongly overstate the power of philosophy and logic.

        I am certainly not a philosopher, but I have a very longstanding interest in the philosophy of science. I know that Feynman liked joking, but this joke is not an empty one.

  45. Fabulous paper Terry.

    I was wondering if you could proffer an opinion on the following, correct any linguistic errors and critique the logic.

    A unit model of any patch of earth, say 100 square meters for illustration, together with its projection to the edge of the atmosphere, can be said to exhibit all the characteristics of a climate – sunlight, temperature, humidity, clouds, precipitation, wind, cycles (daily, seasonal, annual), etc. A model of the behavior of this unit of climate would track all of the measurable parameters of climate over time and relate them to previous parameters in a manner that may make prediction possible, for at least some parameters. It is plausible that a unit model, given enough observations from sites all over the earth, could predict the behavior of climate for any given patch of earth.

    The proposition is that a unit model has greater predictive power than a global circulation model. This remains to be tested….
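The unit-model proposal above can be sketched, under generous assumptions, as an “analog” lookup: pool observed (local state, next local state) pairs from many patches, then predict by finding the most similar past state. The state variables, numbers, and pooling scheme below are hypothetical illustrations, not a tested model:

```python
import math

def dist(a, b):
    """Euclidean distance between two local-state vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def analog_predict(library, state):
    """Predict the next local state by finding the most similar
    previously observed state in the pooled library of unit-model
    observations (a nearest-neighbour 'analog' lookup)."""
    best = min(library, key=lambda pair: dist(pair[0], state))
    return best[1]

# Toy library: (temperature C, humidity %) -> next-day temperature,
# pooled from many hypothetical 100-square-meter patches.
library = [
    ((30.0, 80.0), 31.0),
    ((10.0, 40.0), 9.0),
    ((20.0, 60.0), 20.5),
]

print(analog_predict(library, (29.0, 78.0)))  # nearest analog (30, 80) -> 31.0
```

Whether such a scheme actually outperforms a global circulation model is exactly the untested proposition stated above; the sketch only shows what “predicting from pooled unit observations” would mean operationally.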

    • BLouis79:

      Thanks for taking the time to respond. You seem to suggest that the whole of the climate system can be reduced to the sum of its parts, each part being associated with a different 100 square meter patch of ground. The notion that the whole can be reduced to the sum of its parts is the doctrine called “reductionism.” This doctrine is generally false, though there is an exception: systems that can be described by linear equations. That exception does not arise in the case of the climate system.

      • What I am trying to suggest is that the atmosphere/earth system behaves according to the laws of physics. Does observing the variables, and the changes in them, within a unit provide enough information to infer what will happen next within that unit, without needing to know what is happening in surrounding units? (It may be complex patterns of change rather than simple variables.) Certain combinations of temperature, humidity and wind will result in the presence of rain clouds and thunderstorms. The thought is to study properly what humans, both lay and professional, observe all over the earth. Perhaps a set of units along a line of longitude would be more appropriate.

        I understand that reductionism is the core of modern science, so I would beg to differ with the statement that reductionism is generally false. Certainly I agree that reductionism is a long way from understanding the behaviour of complex biological organisms like humans. Complexity science is required to explain what reductionism cannot, but reductionism has gotten us a very long way.

      • blouis79

        While Terry Oldberg is not doing such a terrible job, I’d like to step in to offer an observation extending his reference to systems of linear equations.

        There is a paper that I’ve referenced in past threads that discusses the requirements for nonlinear systems to demonstrate predictive power. Your intuition about the ‘granularity’ of measurement down to the 100 meter square level with many ‘dynamical’ measures of climate is in line with the conclusions of that paper.

        With sufficient well-located dynamical observation points, even some chaotic systems can become predictive.

        This is a special, rather than a general, case. In the case of the Earth climate system, the largest granularity that might allow this utility is approximately 10 km square samples of multiple dynamically relevant measurements (though it may be that 1 km or 100 m is needed, or that there is no solution at all).

        It would take many times the space-based observation platforms we now have to provide such data.

        The computational power required to process this data in realtime, much less for look-ahead prediction, would be expensive bordering on not yet feasible.
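The general point in this comment, that a chaotic system can become short-term predictable given a dense enough observational record, can be illustrated with a toy system. The sketch below uses the logistic map as a stand-in (my choice, not the referenced paper’s); the ~10 km granularity figure for Earth’s climate is the commenter’s estimate and is not tested here:

```python
def logistic(x, r=3.9):
    """One step of the logistic map; chaotic at r = 3.9."""
    return r * x * (1.0 - x)

# Build a dense observation record of the chaotic trajectory.
x, record = 0.3, []
for _ in range(5000):
    record.append(x)
    x = logistic(x)

def predict_next(obs):
    """Forecast the next value by finding the closest past observation
    (an 'analog') and returning what followed it in the record."""
    i = min(range(len(record) - 1), key=lambda k: abs(record[k] - obs))
    return record[i + 1]

# Start from a state off the recorded trajectory and compare the
# analog forecast with the true next value: the one-step error is
# small despite the chaos, because the record is dense.
x0 = 0.512345
print(abs(predict_next(x0) - logistic(x0)))
```

The skill here is only short-range; errors still grow exponentially over many steps, which is why the comment’s caveat that there may be no solution at all for the real climate system remains open.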

  46. Incredible! Never have I seen an argument that was so unreadable. Modèle vs model? Give us a break!

    Earth calling Terry – where are you?

    • DCC:
      You argue that there is not a meaningful difference between a modèle and a model, but the form of your argument is ad hominem and hence logically illegitimate. Do you have an argument that you believe to be logically legitimate? If so, what is it?
