by Terry Oldberg
copyright by Terry Oldberg 2011
As originally planned, this essay was to end after Part II. However, Dr. Curry has asked me to address the topic of logic and climatology in a Part III. By the following remarks I respond to her request.
I focus upon the methodologies of the pair of inquiries that were conducted by IPCC Working Group 1 (WG1) in reaching the conclusions, in its year 2007 report, that:
- “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change…”  and
- the equilibrium climate sensitivity (TECS) is “likely” to lie in the range 2 °C to 4.5 °C.
I address the question of whether these methodologies were logical.
This work is a continuation from Parts I and II. For the convenience of readers, I provide the following synopsis of Parts I and II.
A model (aka theory) is a procedure for making inferences. Each time an inference is made, there are many candidates (often infinitely many) for the inference to be made, of which only one is correct. Thus, the builder of a model persistently faces the task of identifying the one correct inference. The builder must make this identification, but how?
Logic is the science of the principles by which the one correct inference may be identified. These principles are called “the principles of reasoning.”
While Aristotle left us the principles of reasoning for the deductive logic, he failed to leave us the principles of reasoning for the inductive logic. Over the centuries, model builders coped with this lack through the intuitive rules of thumb, which I’ve called “heuristics,” for identifying the one correct inference. Among these heuristics were maximum parsimony (Occam’s razor) and maximum beauty. However, each time one heuristic identified a particular inference as the one correct inference, a different heuristic identified a different inference as the one correct inference. In this way, the method of heuristics violated the law of non-contradiction. Non-contradiction was the defining principle of logic.
The problem of extending logic from its deductive branch through its inductive branch was known as the “problem of induction.” Four centuries ago, logicians began to make headway on this problem. Over time, they learned that an inference had a unique measure; the measure of an inference was the missing information in it for a deductive conclusion per event, the so-called “entropy” or “conditional entropy.” In view of this existence and uniqueness, the problem of induction could be solved by optimization. In particular, that inference was correct which minimized the conditional entropy, or which maximized the entropy, under constraints expressing the available information.
By 1963, the problem of induction had been solved. Solving it revealed three principles of reasoning. They were:
- the principle of entropy maximization,
- the principle of maximum entropy expectation and,
- the principle of conditional entropy minimization.
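The first of these principles can be illustrated with a toy computation. The sketch below (Python, with made-up numbers) shows that, with no constraint beyond normalization, the entropy-maximizing assignment of probabilities over four outcomes is the uniform one.

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# With no constraints beyond normalization, the entropy-maximizing
# assignment over four outcomes is the uniform distribution.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform))  # 2.0 bits, the maximum for four outcomes
print(entropy(skewed))   # less than 2.0 bits
```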
Few scientists, philosophers or statisticians noticed what had happened. The vast majority of model builders continued in the tradition of identifying the one correct inference by the method of heuristics.
When an inquiry is conducted under the principles of reasoning, the model that is constructed by this inquiry expresses all of the available information but no more. However, when people construct a model, they rarely conform to the principles of reasoning. In constructing a model, people exhibit “bounded rationality.”
A consequence from an illogical methodology can be for the constructed model to express more than the available information, or less. When it expresses more than the available information, the model fails through its falsification by the evidence. When it expresses less than the available information, the model deprives its user of information.
In constructing models, people commonly fabricate information that is not available to them. The penalty for fabrication is for the model to fail in service or when it is tested. If the model is tested, its failure is observable as a disparity between the predicted and observed relative frequencies of an outcome. In particular, the observed relative frequency lies closer to the base rate than is predicted by the model; the “base rate” of an outcome is the unconditional probability of this outcome.
In intuitive assignment of numerical values to the probabilities of outcomes, people commonly fabricate information through neglect of the base rate. Casino gamblers fabricate information by thinking they have discovered patterns in observational data that the casino owners have eliminated by the designs of their gambling devices. Physicians fabricate it by neglecting the base rate of a disease in estimating the probability that a patient has this disease given a positive result from a test for this disease. Physical scientists fabricate information by assuming that a complex system can be reduced to cause-and-effect relationships, the false proposition that is called “reductionism.” Statisticians fabricate it by assuming the existence in nature of probability density functions that are not present in nature. Dishonest researchers fabricate information by fabricating observed events from observations never made, a practice that is called “dry labbing the experiment.”
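The physician example can be made concrete with Bayes’ rule. The numbers below are hypothetical, chosen only to show how strongly the base rate dominates when a disease is rare:

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    """P(disease | positive test), by Bayes' rule."""
    p_positive = (sensitivity * base_rate
                  + false_positive_rate * (1 - base_rate))
    return sensitivity * base_rate / p_positive

# Hypothetical test: 99% sensitive, 5% false positives, disease base
# rate 1 in 1000. Neglecting the base rate suggests "about 99%"; the
# base rate pulls the answer below 2%.
print(posterior(base_rate=0.001, sensitivity=0.99, false_positive_rate=0.05))
```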
In constructing models, model builders commonly waste information that is available to them by failing to discover patterns that are present in their observational data. Statisticians waste information by assuming the categories on which their probabilities are defined rather than discovering these categories by minimization of the conditional entropy; to waste information in this way is a characteristic of the Bayesian method for construction of a model.
Inquiries and logic
If an inquiry was conducted, then how can one tell whether its methodology was logical? Using tools developed in Parts I and II, it can be said that a logical methodology would leave a mark on an inquiry in which:
- the claims that came from the inquiry would be based upon models (aka theories) and,
- processes using these models as their procedures would make inferences and,
- each such inference would have a unique measure and,
and so on and so forth.
Seemingly, by following this line of reasoning to its endpoint, one could compile a list of traits of an inquiry conducted under a logical methodology, for use in determining whether the methodology of any particular inquiry was logical.
Need for disambiguation
In reality, the task is not so simple. Ambiguity of reference by terms in the language of climatology to the associated ideas may obscure the truth. Thus, this language may need to be disambiguated before the truth is exposed.
In going forward, then, I’ll work toward three complementary ends. First, as needed, I’ll disambiguate the language of climatology. Second, using the disambiguated language, I’ll identify traits of an inquiry that was conducted under a logical methodology. Third, I’ll compare these traits to the traits of WG1’s inquiries; from this comparison, I’ll reach a conclusion on the logicality of each inquiry.
Once I’ve judged the logicality of each inquiry, I’ll reach some conclusions about the claims that came from the two inquiries. The evidence on which I shall base these conclusions will include the text of the 2007 report of WG1 and the reports of the several researchers who have written on ambiguities of reference by terms in the language of climatology.
In the language of climatology, the word “model” ambiguously references: a) the procedure of a process that makes a predictive inference and b) the procedure of a process that makes no predictive inference. As making judgments about the correctness of inferences is what the principles of reasoning do, this ambiguity muddies the waters that surround the issue of the logicality of a methodology.
To resolve this ambiguity while preserving the semantics of “model” that I established in Part I, I’ll make “model” my term of reference to the procedure of a process that makes a predictive inference. I’ll make the French word modèle my term of reference to the procedure of a process that makes no predictive inference.
By my definition of the respective terms, a model provides the procedure for making “predictions” but not “projections.” A modèle provides the procedure for making “projections” but not “predictions.” Later, I’ll disambiguate the terms “predictive inference,” “prediction” and “projection” among other terms.
First, however, I’ll summarize the findings of past studies. The linguistic ground that I am about to cover was previously covered by earlier studies, including that of Green and Armstrong.
The language in which the findings of these studies were presented left the ambiguity of reference by the word “model” unresolved. This ambiguity left the findings unclear. In the following review of these findings, I clarify the findings by making the model/modèle disambiguation. By the evidence that is about to be revealed, WG1’s claims were based entirely upon climate modèles and not at all upon climate models, for the latter did not exist for WG1.
In the course of his service to the IPCC as an expert reviewer, Vincent Gray raised with the IPCC the issue of how the IPCC’s climate modèles could be statistically validated. No modèle could have been validated, because no modèle was susceptible to validation. However, at the time Gray raised this question, the waters surrounding the issue of validation were muddied by the failure of the language of climatology to make the model/modèle disambiguation.
According to Gray, the IPCC un-muddied the waters by establishing an editorial policy for subsequent assessment reports. Under this policy: 1) the word “prediction” was changed to the word “projection” and 2) the word “validation” was changed to the word “evaluation”; the distinction between “prediction” and “projection” that was made by the IPCC under this policy was identical to the distinction that I make in this article.
The IPCC failed to consistently enforce its own policy. A consequence was for the words “prediction,” “forecast” and their derivatives to be mixed with the word “projection” in the subsequent IPCC assessment reports. In Chapter 8 of the 2007 report of WG1, Green and Armstrong found 37 occurrences of the word “forecast” or its derivatives and 90 occurrences of the word “predict” or its derivatives. Of 240 climatologists polled by Green and Armstrong (70% of whom were IPCC authors or reviewers), a majority nominated the IPCC 2007 report as the most credible source of predictions/forecasts (not “projections”), though the IPCC’s climate modèles made no predictions/forecasts.
For the reader who assumed the IPCC’s pronouncements to be authoritative, it sounded as though “prediction,” “forecast” and “projection” were synonymous. If made synonymous, though, the three words made ambiguous reference to the associated ideas. By failing to consistently enforce its policy, the IPCC muddied the very waters that it had clarified by its editorial policy.
In establishing the policy that the word “validation” should be changed to the word “evaluation” the IPCC tacitly admitted that its modèles were insusceptible to statistical validation. That they were insusceptible had the significance that the claims of the modèles were not falsifiable. That they were not falsifiable meant that the disparity between the predicted and the observed relative frequency of an outcome that would be a consequence from fabrication of information was not an observable.
In retrospect, it can be seen that no modèle could have been statistically validated, because no modèle provided the set of instructions for making predictions. Every modèle could be statistically evaluated, but while “statistical validation” and “statistical evaluation” sounded similar, they referenced ideas that were logically contradictory. They were contradictory in the sense that a model’s claims could be falsified by the evidence but a modèle’s claims could not be falsified by the evidence.
On the evidence that most polled climatologists understood the IPCC’s climate modèles to make predictions, Green and Armstrong concluded (incorrectly) that these modèles were examples of models. They went on to audit the methods of construction of the modèles under rules of construction for models. The modèles violated 72 of 89 rules. In view of the numerous violations, Green and Armstrong concluded that the modèles were not models hence were unsuitable for making policy.
In response to this finding of Green and Armstrong, the climatologist Kevin Trenberth pointed out (correctly) that, unlike models, the IPCC modèles made no predictions. According to Trenberth, they made only “projections.”
Disambiguating “prediction”
A “prediction” is an assignment of a numerical value to the probability of each of the several possible outcomes of a statistical event; each such outcome is an example of a state of nature and is an observable feature of the real world.
Disambiguating “predictive inference”
A “predictive inference” is a conditional prediction. Like an outcome, a condition is a state of nature and an observable feature of the real world.
In a predictive inference, a numerical value is assigned to the probability of each condition and to the probability of each outcome AND condition, where by “AND” I mean the logical operator of the same name. By these assignments, a numerical value is assigned to the conditional entropy of the predictive inference. The conditional entropy of this inference is its unique measure in the probabilistic logic.
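As a sketch of this measure, the snippet below assigns hypothetical probabilities to two conditions crossed with two outcomes and computes the conditional entropy as H(outcome AND condition) minus H(condition):

```python
import math

def H(probs):
    """Shannon entropy, in bits, of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint probabilities p(condition AND outcome);
# rows are conditions, columns are outcomes.
joint = [[0.30, 0.10],
         [0.15, 0.45]]

p_condition = [sum(row) for row in joint]
h_joint = H([p for row in joint for p in row])
h_condition = H(p_condition)

# Conditional entropy H(outcome | condition): the missing information,
# per event, for a deductive conclusion. Here it comes to about 0.81 bits.
print(h_joint - h_condition)
```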
The “predictive inference”/”NOT predictive inference” pair
In the disambiguated language, a “model” is a procedure for making inferences, one of which is a predictive inference. While a modèle is a procedure, it is not a procedure for making a predictive inference. Thus, the idea that is referenced by “predictive inference” and the idea that is referenced by “NOT predictive inference” form a pair.
Disambiguating “projection”
A “projection” is a response function that maps the time to the value of a dependent variable of a modèle.
The “predictions”/”projections” pair
Using a model as its procedure, a process makes “predictions.” Using a modèle as its procedure, a process makes “projections.” Thus, the idea that is referenced by “predictions” and the idea that is referenced by “projections” form a pair.
Disambiguating “statistical population”
The idea of a “statistical inference” references a time sequence of independent statistical events; the description of each such event pairs a condition with an outcome. An event for which the condition and outcome were both observed is called an “observed event.” A “statistical population” is a collection of observed events.
Disambiguating “statistical ensemble”
A “statistical ensemble” is a collection of projections. This collection is formed by variation of the values that are assigned to the parameters of the associated modèle within the ranges of these parameters.
The “statistical population”/”statistical ensemble” pair
The idea of a “statistical population” is associated with the idea of a model. The idea of a “statistical ensemble” is associated with the idea of a modèle. Thus, the idea that is referenced by “statistical population” and the idea that is referenced by “statistical ensemble” form a pair.
Disambiguating “statistical validation”
“Statistical validation” is a process that is defined on a statistical population of a model and on predictions of the outcomes in a sample of the observed events from this population. Under this process, the predictions collectively state falsifiable claims regarding the relative frequencies of the outcomes in the sample. A model for which these claims are not falsified by the evidence is said to be “statistically validated.”
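A minimal sketch of such a check, with invented numbers: a model assigns probability 0.59 to an outcome, and the check asks whether the relative frequency observed in a sample of events is consistent with that claim.

```python
import math

def consistent(predicted_p, successes, n, z=1.96):
    """Crude validation check: is the observed relative frequency within
    an approximate 95% binomial band around the predicted probability?"""
    observed = successes / n
    se = math.sqrt(predicted_p * (1 - predicted_p) / n)
    return abs(observed - predicted_p) <= z * se

# Hypothetical: the model predicts 0.59; the outcome occurs in 70 of
# 126 observed events (relative frequency ~0.556). The claim survives.
print(consistent(0.59, 70, 126))   # True
# Had it occurred in only 30 of 126 events, the claim would be falsified.
print(consistent(0.59, 30, 126))   # False
```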
Disambiguating “statistical evaluation”
“Statistical evaluation” is a process that is defined on a statistical ensemble and on a related observed time series. In a published example, projections that map the time to the global average surface temperature are displayed together with a selected observed global average surface temperature time series.
The projections belonging to a statistical ensemble state no falsifiable claim with respect to the time series. Thus, a modèle is insusceptible to being falsified by the evidence that is produced by its evaluation.
The statistical validation/statistical evaluation pair
Statistical validation is a process on the predictions of a model. Statistical evaluation is a process on the projections of a modèle. Thus, the ideas that are referenced by “statistical validation” and “statistical evaluation” form a pair.
Disambiguating “science”
In common English, the word “science” makes ambiguous reference to two ideas. One of these references is to the idea of “demonstrable knowledge.” The other is to the idea of “the process that is operated by people calling themselves ‘scientists’.”
Under the Daubert rule, testimony is inadmissible as scientific testimony in the federal courts of the U.S. if the claims that are made by this testimony are non-falsifiable. In this way, Daubert disambiguates “science” to “demonstrable knowledge.”
The “satisfies Daubert”/”does not satisfy Daubert” pair
A model is among the traits of an inquiry that satisfies Daubert. A modèle is among the traits of an inquiry that does not satisfy Daubert. Thus, “satisfies Daubert” and “does not satisfy Daubert” form a pair.
The “falsifiable claims”/“non-falsifiable claims” pair
A model makes falsifiable claims. A modèle makes non-falsifiable claims. Thus, “falsifiable claims” and “non-falsifiable claims” form a pair.
Grouping the elements of the pairs
I’ve identified a number of pairs of traits of an inquiry. Each trait can be sorted into one of two lists of traits. The traits in one of these lists are associated with a model. The traits in the other list are associated with a modèle.
By this sorting process, I produce a “left-side list” and a “right-side list.” They are presented immediately below.
| left-hand list | right-hand list |
| --- | --- |
| predictive inference | NOT predictive inference |
| statistical population | statistical ensemble |
| statistical validation | statistical evaluation |
| falsifiable claims | non-falsifiable claims |
| satisfies Daubert | does not satisfy Daubert |
The traits of an inquiry under a logical methodology
A remarkable feature of the two lists is that each member of the left-hand list is associated with making a predictive inference while each member of the right-hand list is associated with NOT making a predictive inference. That predictive inferences are made is a necessary (but not sufficient) condition for the probabilistic logic to come to bear on an inquiry. Thus, I conclude that elements of the left-hand list are traits of an inquiry under a logical methodology while the elements of the right-hand list are traits of an inquiry under an illogical methodology.
The logicality of the first WG1 claim
An inquiry by WG1 produced the claim that “There is considerable confidence that Atmosphere-Ocean General Circulation Models (AOGCMs) provide credible quantitative estimates of future climate change…” An AOGCM is an example of a modèle. The traits of the associated inquiry are those of the right-hand list. Thus, the methodology of this inquiry was illogical.
The logicality of the second WG1 claim
WG1 defines the equilibrium climate sensitivity (TECS) as “the global annual mean surface air temperature change experienced by the climate system after it has attained a new equilibrium in response to a doubling of atmospheric CO2.” In climatology and in this context, the word “equilibrium” references the idea that engineering heat transfer references by “steady state”: the idea that temperatures are unchanging.
From readings in the literature of climatology, I gather that TECS is a constant; this constant links a change in the CO2 level to a change in the equilibrium temperature by the relation
ΔT = TECS · log₂(C/C₀)    (1)
where ΔT represents the change in the equilibrium temperature, C represents the CO2 level, and C₀ represents the CO2 level at which ΔT is nil.
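Equation (1) is easy to evaluate. The sketch below does so for a doubling of the CO2 level, with an arbitrary illustrative value of TECS (not a value endorsed by any source):

```python
import math

def delta_T(TECS, C, C0):
    """Equation (1): change in equilibrium temperature for CO2 level C,
    relative to the level C0 at which the change is nil."""
    return TECS * math.log2(C / C0)

# For a doubling (C = 2 * C0), log2(C/C0) = 1, so delta_T equals TECS.
print(delta_T(TECS=3.0, C=560.0, C0=280.0))  # 3.0
```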
Using Equation (1) as a premise, WG1 claims it is “likely” that, for a doubling of C, ΔT lies in the range of 2 °C to 4.5 °C. That it is “likely” signifies that WG1 assigns a value exceeding 66% to the probability that ΔT lies in this range.
If Equation (1) is false, WG1’s claim is baseless. Is it false? This question has no answer, for the equilibrium temperature change ΔT is not an observable. As ΔT is not an observable, one could not observationally determine whether Equation (1) was true or false. As the claims that are made by Equation (1) are non-falsifiable, Equation (1) must be an example of a modèle.
As ΔT is not an observable, it would not be observationally possible for one to determine whether ΔT lay in the range of 2 °C to 4.5 °C. Thus, the proposition that “ΔT lies in the range of 2 °C to 4.5 °C” is not an observable. However, an outcome of an event is an observable. Thus, the proposition that “ΔT lies in the range of 2 °C to 4.5 °C” is not an outcome. WG1’s claim that it is “likely” that, for a doubling of C, ΔT lies in this range references no outcome. As the claim references no outcome, it conveys no information about an outcome from a policy decision to a policy maker.
As the “consensus” of climatologists plus some of the world’s most prestigious scientific societies seem to concur with WG1’s claim, a policy maker could reach the conclusion that this claim conveyed information to him/her about the outcome from his/her policy decision. The signers of the Kyoto accord would seem to have reached this conclusion. The issuers of the U.S. Environmental Protection Agency’s “endangerment” finding would seem to have reached the same conclusion. However, as I just proved, this conclusion is false.
Moving to a logical methodology
The cost from abatement of CO2 emissions would be about 40 trillion US$ per year. Before spending that kind of money, policy makers should have information about the outcomes from their policy decisions: the more information, the better the decisions. By moving to a logical methodology for their inquiries, climatologists would convey the maximum possible information about the outcome from a policy decision to a policy maker.
It might be helpful for me to sketch out a path by which such a move can be accomplished. Under a logical methodology, one trait of an inquiry would be a statistical population. The data of Green and Armstrong suggest that the majority of WG1 climatologists confuse the idea of a statistical population with the idea of a statistical ensemble. This confusion, perhaps, accounts for my inability to find the idea of a population in WG1’s report.
On the path toward a logical methodology for a climatological inquiry, a task would be to create this inquiry’s statistical population. Each element of this population would be an observed independent statistical event.
A statistical population is a subset of a larger set of independent statistical events. On the path toward a logical methodology, this set must be described. In describing it, it is pertinent that, in a climatological inquiry, the Earth is the sole object that is under observation. Thus, a climatological inquiry must be of the longitudinal variety.
That an inquiry is longitudinal has the significance that its independent statistical events are sequential in time. That these events are statistically independent has the significance that they do not overlap in time. Thus, the stopping point for one event must be the starting point for the next event in the sequence. The state that I’ve called a “condition” is defined at the starting point of each event. The state that I’ve called an “outcome” is defined at the stopping point.
I understand that the World Meteorological Organization (WMO) sanctions the policy of defining a climatological observable as a weather observable when the latter is averaged over a period of 30 years. I’ll follow the WMO’s lead by specifying that each independent statistical event in the sequence of these events has a period of 30 years. For the sake of illustration, I stipulate that the times of the starts and stops of the various events in the sequence are at 0 hours Greenwich Mean Time on January 1 in years of the Gregorian Calendar that are evenly divisible by 30; thus, they fall in the time series …, 1980, 2010,…
A model has one or more dependent variables, each of which is an observable. At the end of each observed event, a value is assigned to each of these variables. For the sake of illustration, I stipulate that the referenced inquiry’s model has one dependent variable: the HADCRUT3 global temperature, averaged over the 30-year period of an event. There is a problem with this choice; later, I’ll return to this problem.
For the sake of illustration, I stipulate that the model has two outcomes. The first is: the average of the HADCRUT3 over the period of an event is less than the median in the model’s statistical population. The second is its negation: the average of the HADCRUT3 over the period of the event is NOT less than that median.
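The construction of events and outcomes just described can be sketched as follows. The data are made up, and the names are illustrative only; the real inquiry would use the HADCRUT3 series.

```python
from statistics import mean, median

def make_events(annual_values, start_year, period=30):
    """Partition an annual series into non-overlapping events of
    `period` years; each event carries the period average of the
    dependent variable (a stand-in for a 30-year HADCRUT3 average)."""
    events = []
    for i in range(0, len(annual_values) - period + 1, period):
        events.append((start_year + i, mean(annual_values[i:i + period])))
    return events

# Toy data: 90 "years" of invented anomalies, giving three 30-year events.
series = [0.01 * year for year in range(90)]
events = make_events(series, start_year=1930)
med = median(avg for _, avg in events)

# Outcome of each event: whether its period average is below the median
# of the statistical population (here, the population of three events).
outcomes = [(year, avg < med) for year, avg in events]
print(outcomes)
```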
A model has one or more independent variables, each of which is an observable. At the start of each observed event, a value is assigned to each of these variables. Practical considerations prevent a model from having more than about 100 independent variables. In the selection of these 100, one looks for time series or functions of several time series that singly or pair-wise provide a maximum of information about the outcome. Some possible time series are:
- CO2 level at Mauna Loa Observatory,
- time rate of change of CO2 level,
- HADCRUT3 global temperature,
- HADCRUT3, lagged by 1 year,
- HADCET (central England temperature),
- precipitation at Placerville, California,
- time rate of change of sea surface temperature off Darwin, Australia,
- Zurich sunspot number,
- Jeffrey pine tree ring index, Truckee, California.
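One way to score candidate independent variables for “information about the outcome” is mutual information. The sketch below, with invented discretized data, prefers a candidate that tracks the outcome over one that is unrelated to it:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired discrete observations."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical discretized observations: candidate A mostly tracks the
# outcome; candidate B is statistically independent of it.
outcome = [0, 1, 0, 1, 0, 1, 0, 1]
cand_a  = [0, 1, 0, 1, 0, 1, 1, 1]
cand_b  = [0, 0, 1, 1, 0, 0, 1, 1]

print(mutual_information(cand_a, outcome) > mutual_information(cand_b, outcome))  # True
```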
If one or more of the AOGCMs were to be modified to predict the outcomes of events, the predicted outcome would be a possible independent variable. Internal variables of the AOGCM, such as the predicted spatially averaged temperature at the tropopause, would be possible independent variables. If an AOGCM were adept at predicting outcomes, it would generate theoretical or “virtual” events, the count of which would add to the count of the observed events in the assignment of numerical values to probabilities of outcomes.
If an independent variable space (perhaps containing some of the above referenced time series) were to be identified then this space could be searched for patterns. Under a logical methodology, this search would be constrained by the principles of reasoning. If successful, the search for patterns would create the maximum possible information about the outcomes from policy decisions for conveyance to policy makers.
The search for patterns might be unsuccessful. In this case, the maximum possible information that could be conveyed to a policy maker would be nil. The chance of success at discovering patterns would grow as the size of the statistical population grew.
The need for more observed events
Earlier, I flagged the 30-year period of the events in my example as a problem. The problem is that, with a period this long, the temperature record supports the existence of very few observed events. The HADCRUT3 extends backward only to the year 1850, thus supporting the existence of only 5 observed events of the type I’ve described. That’s way too few.
In long-range weather forecasting research, it has been found that a population of 130 observed events is close to the minimum size for pattern discovery to be successful. To supply at least 130 events of 30 years each, the HADCRUT3 would have to extend back at least 3,900 years. In reality, it extends back only 160 years.
Thermometers were invented only 400 years ago. A lesson that can be taken from these facts is that temperature has to be abandoned as an independent variable and a proxy for temperature substituted for it.
Logic and meteorology
A logical methodology has been employed in at least seven meteorological inquiries. Experience gained in this research provides a bit of guidance on the designs of logical climatological inquiries. The degree of this guidance should not be overblown, for the problems that are confronted by meteorological and climatological research differ. In particular, the number of observed events that are required for pattern discovery to be successful might be quite different.
The results from seven meteorological inquiries that were conducted under a logical methodology are summarized by Christensen. In brief, the aim of the research was a breakthrough that would facilitate forecasting of seasonally or annually averaged surface air temperatures and precipitation in the far western states of the U.S. at mid- to long range. This aim was achieved.
In the planning of the inquiries, the original intent was to exploit the complementary strengths of mechanistic and statistical models by integrating an AOGCM with a statistical pattern matching model. The AOGCM would contribute independent variables to the pattern matching model and possibly would contribute to the counts of the statistical events that were the source of information for the projects. However, a role for the AOGCM was abandoned when it was discovered that the AOGCM grew numerically unstable over forecasting periods longer than a few weeks. As the goal was to forecast with statistical significance over periods of as much as a year, the instability rendered the AOGCM useless as an information source.
In one of the inquiries, 126 observed statistical events were available for the construction and statistical validation of the model. In the construction, 100,000 weather-related time series were examined for evidence of information in them about the specified weather outcomes. Additional time series were generated by running the 100,000 time series through filters of various kinds; for example, one kind of filter computed the year-to-year difference in the value of an observable. From these data, the 43 independent variables of the model were selected for their relatively high information content about the outcome and relatively high degree of statistical independence.
Using a pattern discovery algorithm as its procedure, a process searched for patterns in the Cartesian product space of the sets of values that were taken on by the 43 independent variables. This process discovered three patterns.
One of these patterns predicted precipitation 36 months in advance, a factor of 36 improvement over the prior state of the weather forecasting art. This pattern was:
- normal or high Pacific Ocean surface temperatures 2 summers ago in the western portion of the ±10° equatorial belt AND,
- normal or high sea surface temperatures 3 springs ago in the northeastern portion of the equatorial belt AND,
- moderate or low precipitation at Nevada City 2 years ago.
A match to this pattern assigned 0.59±0.11 to the probability of above median precipitation in the following year at a collection of precipitation gauges in the Sierra Nevada East of Sacramento.
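A figure of the form 0.59 ± 0.11 can be read as a relative-frequency estimate with an uncertainty band. The sketch below computes such an estimate from invented counts (not the counts of the actual inquiry):

```python
import math

def frequency_estimate(successes, n):
    """Relative-frequency estimate of an outcome's probability, with a
    crude one-standard-error band."""
    p = successes / n
    return p, math.sqrt(p * (1 - p) / n)

# Hypothetical: 12 matches to a pattern, 7 followed by above-median
# precipitation.
p, se = frequency_estimate(7, 12)
print(round(p, 2), round(se, 2))  # 0.58 0.14
```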
A consequence from this inquiry was for the significance to be discovered of the El Niño Southern Oscillation (ENSO) for mid-to long-range weather forecasting. Exploitation of information theory made it possible to filter out the vast amount of noise in the underlying time series and to tune into the signal being sent by the ENSO about the precipitation that was to come in the Sierra Nevada.
A pair of WG1's inquiries were examined for evidence of whether or not their methodologies were logical. It was found that the traits which are the mark of a logical methodology were absent. Their absence, however, was obscured by the ambiguity with which climatology's language references the associated ideas. One consequence of this obscurity could be to make it appear to a policy maker that the IPCC had conveyed information to him or her about the outcomes of his or her policy decisions. It was shown that, under WG1's illogical methodology, the IPCC conveyed no such information.
With a move to a logical methodology, it would become conceivable for the IPCC to convey information to policy makers about the outcomes from their policy decisions. A path toward such a methodology was sketched.
Moderation note: this is a technical thread, will be moderated for relevance.
Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Chapter 8: Executive Summary.” URL = http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-es.html
 Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Chapter 10. Mean temperature.” URL = http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-es-1-mean-temperature.html
 Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Frequently Asked Question 8.1: How Reliable Are the Models Used to Make Projections of Future Climate Change?” URL = http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-8-1.html
Kahneman, Daniel, “Maps of Bounded Rationality.” Nobel Prize lecture, December 8, 2002. URL = http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahneman-lecture.html
 Kahneman, Daniel and Amos Tversky, “On the reality of cognitive illusions.” Psychological Review 1996 Vol 103 No. 3, 582-591 URL = http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kahnemantversky1996.pdf
 Ayton, Peter and Ilan Fischer, “The hot hand fallacy and the gambler’s fallacy: Two faces of subjective randomness?” Memory & Cognition 2004, 32 (8), 1369-1378. URL = http://mc.psychonomic-journals.org/content/32/8/1369.full.pdf
Casscells, W, A Schoenberger and TB Graboys, “Interpretation by physicians of clinical laboratory results.” N Engl J Med, Nov 2, 1978, 299:999-1001.
 Scott, Alwin, “Reductionism revisited.” Journal of Consciousness Studies, 11, No. 2, 2004, pp. 51–68. URL = http://redwood.berkeley.edu/w/images/5/5e/Scott2004-reductionism_revisited.pdf
 Capra, Fritjof, The Turning Point: Science, Society and the Rising Culture, 1982.
 Gray, Vincent: Spinning the Climate. URL = http://icecap.us/images/uploads/SPINNING_THE_CLIMATE08.pdf
Green, Kesten C. and J. Scott Armstrong, “Global Warming: Forecasts by Scientists vs. Scientific Forecasts.” Energy and Environment, Vol 18, No. 7+8, 2007. URL = http://www.forecastingprinciples.com/files/WarmAudit31.pdf
 Trenberth, Kevin. URL = http://blogs.nature.com/climatefeedback/recent_contributors/kevin_trenberth
 “Daubert Standard.” Wikipedia 18 January 2011. URL = http://en.wikipedia.org/wiki/Daubert_standard
Solomon, Susan et al, Climate Change 2007: Working Group I: The Physical Science Basis, “Chapter 8.6.2: Definition of climate sensitivity.” URL = http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-6-2.html
 Lomborg, Bjorn, “Time for a smarter approach to global warming.” Wall Street Journal, Dec. 15, 2009. URL = http://online.wsj.com/article/SB10001424052748704517504574589952331068322.html
 Christensen, Ronald, “Entropy Minimax Multivariate Statistical Modeling-II: Applications,” Int J General Systems, 1986, Vol. 12, 227-305
Christensen, R., R. Eilbert, O. Lindgren and L. Rans, “An Exploratory Application of Entropy Minimax to Weather Prediction: Estimating the Likelihood of Multi-Year Droughts in California.” Entropy Minimax Sourcebook, Vol 4: Applications, 1981, pp. 495-544. ISBN 0-938-87607-4.
 Christensen, R. A., R. F. Eilbert, O. H. Lindgren, L. L. Rans, 1981: Successful Hydrologic Forecasting for California Using an Information Theoretic Model. J. Appl. Meteor., 20, 706–713. URL = http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281981%29020%3C0706%3ASHFFCU%3E2.0.CO%3B2