# Principles of Reasoning. Part I: Abstraction

By Terry Oldberg

Introduction

In building climate models, climatologists generalize. Can the means by which they generalize be improved?

Yes, they can. The means can be improved by replacing the intuitive rules of thumb called “heuristics” with the principles of reasoning.

A model is a procedure for making inferences. In each instance in which an inference is made, there are many candidate inferences; often these candidates are infinite in number. Which inference among the candidates is the one correct inference? The model builder must decide, but how?

In theory the one correct inference is identified by the principles of reasoning. Logic is the science of these principles.

In reality there is a stumbling block. While Aristotle left us the principles of reasoning for the deductive branch of logic, he failed to leave us the principles of reasoning for the inductive branch of logic.

The inductive branch contains the principles that govern generalization. In building a model, the model builder must generalize.

The problem of extending logic from its deductive branch into its inductive branch is called “the problem of induction.” Many scientists and philosophers believe this problem to be unsolved. In 2005 the professor who taught Logic I at M.I.T. told his students that the problem was unsolved.

Model builders who hold this belief cope with the apparent absence of the principles of reasoning; they cope through the use of intuitive rules of thumb called “heuristics” in discriminating the one correct inference from the many incorrect ones. Maximum parsimony (Ockham’s razor) is an example of a heuristic. Maximum beauty is another example. However, in each instance in which a particular heuristic identifies a particular inference as the one correct inference, a different heuristic identifies a different inference as the one correct inference. In this way, the method of heuristics violates the law of non-contradiction. Non-contradiction is a principle of reasoning. Thus, the method of heuristics is inconsistent with the principles of reasoning.

I’m going to argue that those scientists and philosophers who believe the problem of induction to be unsolved are wrong in this belief. I’ll argue that they are among the victims of a long-running lapse in communications, for the engineer-physicist Ronald Christensen solved the problem 47 years ago.

After solving the problem, Christensen published his findings but few people recognized their significance. When he submitted a paper to a philosophical journal, it was rejected. When he submitted a book for review to a journal of mathematical statistics, the reviewer stated that he couldn’t think of a single person who would profit from reading it.

A few years ago, while in the library of a renowned research university, I stumbled upon the set of books by which Christensen had tried to communicate some of his findings. In the period of 25 years over which the books had lain on a shelf, each of them had been checked out only a few times.

Later, I offered to help most of this university’s scientists replace the method of heuristics with the principles of reasoning; 100% of the recipients spurned my offer. When I offered to address the university’s philosophy department, this department spurned my offer. A professor at the same university spurned my offer of help by stating that he could conceive of no use for my ideas in his field of research. Another professor explained the behavior of his colleagues by suggesting that, like mules, his colleagues were “set in their ways.”

I’ve gotten similar responses from the faculties of many additional universities. The evidence seems clear that, like other scientists and philosophers, academic scientists and philosophers have not yet received Christensen’s message.

In meteorology, replacement of the method of heuristics by the principles of reasoning has produced great advances in predictability. In climatology, advances of a similar magnitude may be available.

With this essay, I begin a two-part series on the principles of reasoning. In Part I, I set up the problem of induction for solution. In Part II, I solve this problem. Solving it identifies the principles of reasoning.

Abstraction

Part I features the idea that is implied by the semantics of the word “model.” This is that a model is not reality itself but rather is an abstraction from reality.

Abstraction is a logical process. It operates on two or more of the states of nature that I’m going to call “constituent states” to produce the state of nature which I’m going to call an “abstracted state.” Abstraction produces an abstracted state by placing the constituent states in inclusive disjunction. An illustrative example follows.

In the example, the constituent states are male and female. Placement of these states in inclusive disjunction produces the abstracted state male OR female where “OR” designates the inclusive disjunction operator.

A “state” is a description of an entity in the real world. To attach the adjective “abstracted” to a state that is formed by abstraction is apt because this state is abstracted (removed) from the differences among the constituent states; for example, the abstracted state male OR female is removed from the gender differences between its constituent state male and its constituent state female. In this way, abstraction removes detail from the description.
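The abstraction described above can be sketched in code. The following is a minimal illustration (mine, not from the essay) that models a state as a predicate over descriptions, and abstraction as the inclusive disjunction (OR) of constituent states:

```python
# A sketch of abstraction as inclusive disjunction. A "state" is modeled
# as a predicate that tests whether a description of an entity matches it.

def male(person):
    return person["sex"] == "male"

def female(person):
    return person["sex"] == "female"

def abstract(*states):
    """Form the abstracted state: the inclusive disjunction (OR)
    of the constituent states."""
    return lambda person: any(state(person) for state in states)

male_or_female = abstract(male, female)

alice = {"sex": "female"}
bob = {"sex": "male"}

# The abstracted state matches anyone who matches either constituent
# state; the gender difference has been removed from the description.
print(male_or_female(alice))  # True
print(male_or_female(bob))    # True
```

Any description matching a constituent state also matches the abstracted state, which is the sense in which abstraction discards detail.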

In the construction and testing of a model, abstraction has the merit of increasing the level of statistical significance. It increases this level because the descriptions of more observed statistical events are apt to match an abstracted state than to match either of its constituent states. For example, the descriptions of about twice as many people match the abstracted state male OR female as match the constituent state male or the constituent state female.

To continue, I need to reveal what I mean by a “model.” Associated with every such model is the set of this model’s dependent variables and the set of this model’s independent variables. Each such variable is an observable feature of nature or can be computed from one or more observable features.

In making the following description, I use the words tuple, Cartesian product space and partition as they are used in mathematics. Each dependent variable takes on values in a specified set. A set that contains one value of each dependent variable is an example of a tuple. The complete set of these tuples is an example of a Cartesian product space.

This space can be divided into parts. The complete set of parts is an example of a partition. Each element of this partition contains a number of tuples. By placement of these tuples in inclusive disjunction, one forms an abstracted state. I’ll call this state an “outcome.”

A complete set of outcomes can be formed by abstraction from the tuples belonging to each element of the partition. This set is a kind of state-space. An example is {rain in Milwaukee in the next 24 hours, no rain in Milwaukee in the next 24 hours}.
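The construction of outcomes from a partition can be made concrete. In this sketch (the variables and value sets are invented for illustration, not taken from the essay), the tuple space of two dependent variables is partitioned into two parts, and each part, read as the inclusive disjunction of its tuples, is an outcome:

```python
from itertools import product

# Hypothetical dependent variables: precipitation and wind,
# each taking values in a small specified set.
precip = ["none", "light", "heavy"]
wind = ["calm", "windy"]

# The Cartesian product space: every tuple containing one value
# of each dependent variable.
tuples = set(product(precip, wind))        # 3 * 2 = 6 tuples

# A partition of this space: disjoint parts whose union is the whole
# space. Each part, read as the inclusive disjunction of its tuples,
# is an abstracted state: an "outcome".
no_rain = {t for t in tuples if t[0] == "none"}
some_rain = tuples - no_rain               # everything else
outcomes = [no_rain, some_rain]            # the state-space {no rain, rain}

print(len(tuples))    # 6
print(len(outcomes))  # 2
```

A different partition of the same six tuples would yield a different set of outcomes, which is the freedom the essay goes on to exploit.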

By a process that is similar to the one described previously, a complete set of abstracted states may be formed from a model’s independent variables. I’ll call this set the “conditions.”

The set of conditions is a kind of state space. An example of one is {cloudy in Milwaukee, not cloudy in Milwaukee}.

The complete set of descriptions of the statistical events for the model is generated by taking the Cartesian product of the set of conditions and the set of outcomes. For example, taking the Cartesian product of the set of conditions {cloudy in Milwaukee, not cloudy in Milwaukee} with the set of outcomes {rain in Milwaukee in the next 24 hours, no rain in Milwaukee in the next 24 hours} generates 4 event descriptions; one of them is {cloudy in Milwaukee, rain in the next 24 hours}.
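The Milwaukee example can be reproduced in a few lines of Python; this is a sketch of the construction just described, using the essay's own condition and outcome labels:

```python
from itertools import product

conditions = ["cloudy in Milwaukee", "not cloudy in Milwaukee"]
outcomes = ["rain in Milwaukee in the next 24 hours",
            "no rain in Milwaukee in the next 24 hours"]

# The Cartesian product of the set of conditions and the set of outcomes
# yields the complete set of event descriptions: 2 * 2 = 4 of them.
event_descriptions = list(product(conditions, outcomes))

for description in event_descriptions:
    print(description)
```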

An “observed” event is a consequence of an observation made in nature. The complete set of a study’s observed events is an example of a statistical population.

From a population of this kind, a sample may be drawn and the observed events in this sample which match a particular event description may be counted; for example, observed events matching the description {cloudy in Milwaukee, rain in the next 24 hours} may be counted. This count is called the “frequency” of the description. The ratio of the frequency of a particular event description to the sum of the frequencies of all of the various event descriptions is called the “relative frequency” of the event description. The relative frequency in the limit of an infinite number of observations is called the “limiting relative frequency.”
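The counting just described can be sketched with Python's standard library. The sample below is invented data for illustration; the shortened condition and outcome labels are mine:

```python
from collections import Counter

# A hypothetical sample of observed events: each event pairs an
# observed condition with an observed outcome.
sample = [
    ("cloudy", "rain"), ("cloudy", "rain"), ("cloudy", "no rain"),
    ("not cloudy", "no rain"), ("not cloudy", "no rain"), ("cloudy", "rain"),
]

# The "frequency" of an event description is the count of observed
# events in the sample matching that description.
frequency = Counter(sample)

# The "relative frequency" is that frequency divided by the sum of the
# frequencies of all event descriptions.
total = sum(frequency.values())
relative_frequency = {desc: count / total for desc, count in frequency.items()}

print(frequency[("cloudy", "rain")])           # 3
print(relative_frequency[("cloudy", "rain")])  # 0.5
```

The limiting relative frequency is what these ratios would converge to as the number of observations grows without bound; it cannot, of course, be computed from a finite sample.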

A “prediction” is an extrapolation from the known condition of an event to the uncertain outcome of this event, in which the outcome lies in the future of the condition. A prediction states a falsifiable claim about the numerical value of the limiting relative frequency of each outcome given each condition.

Posing the problem of induction

I’ve completed my description of the mathematical entity which I call a “model.” The completion places me in a position to pose the problem of induction.

The problem is a consequence of the fact that partitions of the Cartesian product space of the model’s independent variables are of infinite number. Each of these partitions is associated with a different set of conditions. Among the various sets of conditions, a single set is associated with a model that makes no incorrect inferences. Conditions belonging to this uniquely determined set are called “patterns.” The problem of induction is to discover the rules under which the patterns are discovered. These rules are the principles of reasoning.
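The scale of this search space can be illustrated numerically. For continuous-valued independent variables the partitions are infinite in number, as the essay states; even for a finite space of n tuples, the number of distinct partitions is the n-th Bell number, which grows faster than exponentially. A sketch (mine, not the essay's):

```python
def bell(n):
    """n-th Bell number: the number of distinct partitions of an
    n-element set, computed with the Bell triangle."""
    row = [1]
    for _ in range(n - 1):
        nxt = [row[-1]]          # each row starts with the previous row's end
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[-1]

# Even a tiny tuple space admits a rapidly growing number of candidate
# partitions, each inducing a different set of conditions.
for n in [1, 2, 3, 4, 10]:
    print(n, bell(n))
```

For a space of just 10 tuples there are already 115,975 candidate partitions, each inducing a different set of conditions among which the model builder must choose.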

### 161 responses to “Principles of Reasoning. Part I: Abstraction”

2. Mac

EH!

3. A good start, but part two will have some serious ground to cover if we are going to get to an accurate method of modelling climate that really does cover all the variables and integrate them in a way which correctly mimics nature.

Getting the right answers isn’t just a matter of applying the right method. It also relies on asking the right questions to start with. Does Terry have a method for determining which are the right questions? In his scheme, is this equivalent to selecting the right pairs to make abstractions from?

Does the scheme deal with cause and effect in any way, or just use the statistical relationships in the relative frequencies of the outcomes? If the former, how does it deal with the behaviour of systems, where feedbacks alter the inputs which alter the feedbacks?

“wait for part two” is a legitimate answer. :-)

• Rog:

Excellent questions!

To mimic nature is the aim of reductionistic modeling. While this kind of modeling can play a role, the overall aim is to create the algorithm for an optimal decoder of a “message” from the future. This “message” is the sequence of outcomes of statistical events that have not yet happened. The model is this algorithm. The principles of its construction are the principles of reasoning.

4. Mac

Here is question. Why does the UK Met Office’s Unified Model (weather + climate) continually fail to predict seasonal weather and extreme weather events even though it has publicly stated it has the computational power to do so?

Here is another. What inferences can we deduce from that public failure?

• Predictions are tough. Especially ones about the future.

• Mac, they have now stopped doing seasonal predictions (after they were widely ridiculed for predicting a ‘barbecue summer’ in 2009 which turned out to be a washout). The Met Office have at last realised that since weather is a chaotic system, any prediction beyond a few weeks is worthless.

• Andrew Dodds

Here is the actual forecast:

http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20090430.html

Does not look particularly bad if you read the caveats. It’s only when the media paints it as a certainty that you have problems.

• hunter

andrew,
If all it took was getting the media to report accurately, they would not have bailed on the forecasts.
After all, the AGW academic community runs whole seminars on how to reach the great unwashed. Certainly the UK Met, devoted to promoting AGW, would not wither in the face of a bad seasonal forecast, would they?

• HR

If you’re going to give more value to the caveats than the actual prediction then I think we can probably all go do something more worthwhile now.

5. Latimer Alder

Is it even theoretically possible to come up with accurate predictions more than a few days away?

Click the name link and a map will appear showing today’s forecast for the USA that I generated 35 months ago; if you click the calendar icon in the blue search bar you may look forward or back 3 whole years at other daily forecasts, generated in November and posted in December of 2007.

On the web site you will find complete instructions on how it can be done in your locale; if you wish to replicate my efforts, I will be glad to support your local forecast attempt with anything you need, including the simple programs I use to sort the data and generate the maps.

It depends on what you are predicting. The predictability of various things decays at different rates. That’s why folks have hope in making skillful seasonal climate predictions even though it is firmly established that the predictability of weather decays to negligible on the order of days and weeks.

• Latimer:
Through replacement of the method of heuristics by the principles of reasoning in the construction of a weather forecasting model it has become possible to predict whether precipitation will be above or below average in the Sierra Nevada East of Sacramento as much as 3 years in advance. I discuss this advance in Part II.

• Actually you do not discuss it Terry. You just claim it. Nor do I believe it. Show us the successful results. By the way, Will Alexander also claims to be able to do this for local rainfall, based on time series analysis. Is your method similar or different?

6. Joe Lalonde

Welcome Terry!
Models should not be based on temperature. Temperatures are after-effects of what has happened in the general region.
Our two hemispheres can be quite opposite in many areas and factors, from the different population densities and landmass differences to cloud cover and disaster occurrences.
Many theories are traditionally taught in school and whatever a “scientist says” is absolute fact.
The whole structure does not allow anyone who is a “free-thinker” past the walls to generate a greater knowledge base.

• Joe Lalonde

Of course, you would notice the word opposite should have been “different”.

7. Brian H

Latimer;
Sure. I predict it will rain somewhere on this date in 2011, 2012, 2013, and 2222. I also predict it will not rain on those dates in other locales.
:)

8. Terry, as someone who works on the nature of reasoning I find that I do not understand what you are saying, not on first read anyway. Your key concepts like model, heuristic and reasoning in ways that are very different from my own. Nor do you explain these terms in useful ways. My impression is therefore that you have coined a technical language, which is common in science, but you need to explain it in formal detail. You seem to be working on a specific problem in statistical reasoning, not reasoning (or modeling) per se. More on this later.

• Apologies for the typo. The second sentence should be “Your key concepts like model, heuristic and reasoning are very different from my own.”

For example in science a model is usually a set of equations, not a procedure. I use the SIR disease model to study the diffusion of knowledge, but there are several other well known models, with different equations.

In problem solving a heuristic is a simplification that reduces the computational complexity of a problem. For example, in a chess-playing algorithm, just looking at the next 5 moves (or 10, etc.) instead of all possible sequences. Occam’s razor is not a heuristic. Nor are heuristics in general alternatives to rules of reasoning, if the latter even exist. More generally, there is a great deal more to reasoning than inference. The vast bulk of reasoning is concerned with understanding, which generally precedes inference.

• Dave and Steve:

Thanks for the feedback. My use of language is the result of a project which I undertook several years ago to reduce dense mathematically described concepts to common English. I undertook this project after forming a focus group and finding that neither of the two PhD level physicists who were members of the group could understand these concepts when they were described mathematically. I also found that there was an inverse relationship between the level of technical erudition of a member of the group and his/her ease in grasping the concepts. It appeared that the less erudite members were skipping the math while the more erudite were trying to understand it and getting lost. One of the physicists, a retired professor, advised me to avoid the use of mathematical symbolism and terminology as much as possible and tell my story in common English.

• That would be fine Terry if you actually used common English. But if you use English sounding words that mean something in math, not what they mean in English, then what you say is simply incoherent. I think you would have been better off skipping the whole lengthy 2-part philosophical and historical derivation and just explaining your forecasting procedure. An engineering approach perhaps. As it is I can’t see that you have explained it at all, much less given an example.

I take it you are talking about some kind of time series analysis, is that true? If so then “messages from the future” is not helpful. Many of the people on this list have a basic understanding of statistical analysis, but this is barely mentioned.

• David:
If there are members of this audience who have statistical expertise, I’m surprised. A couple of years ago I addressed a group of 30 statisticians with a message that was similar in language and content to the one which I’ve delivered to this audience. In the course of my remarks, I exploited the fact that precisely defined mathematical ideas were referenced with relatively little ambiguity by the common English words “measure,” “inferences,” “missing information” and “optimization.” Using these terms, I argued that the unique measure of an inference was the missing information in this inference for a deductive conclusion. I went on to argue that the existence and uniqueness of the measure of an inference had the significance that the problem of induction could be solved by optimization. I got the impression that every member of the audience understood and agreed with this argument. Here, the audience seems baffled by the very same use of language and argument. From this experience and from reading documents such as AR4, I’ve gotten the impression that the vast majority of climatologists are babes in the woods when it comes to statistics. One piece of evidence for this naivety is that I’m unable to locate the statistical population that underlies the IPCC’s claims. In a scientific study, the statistical population provides the evidentiary basis for the claims that are made by the model. Here, there seems to be no such population.

• Steven Mosher

I second that. I read knowledgetothemax as well a bit ago. That essay and this one seem to be struggling to invent a new set of terminology for concepts for which we already have a functioning lexicon. There also appears to be a fundamental failure to understand the problem of induction:

“The problem is a consequence of the fact that partitions of the Cartesian product space of the model’s independent variables are of infinite number. Each of these partitions is associated with a different set of conditions. Among the various sets of conditions, a single set is associated with a model that makes no incorrect inferences. Conditions belonging to this uniquely determined set are called “patterns.” The problem of induction is to discover the rules under which the patterns are discovered. These rules are the principles of reasoning.”

Well, that’s not a very clear or accurate statement of the problem of induction. Further, he really confuses two different problems. The problem he starts out with is really the problem of underdetermination, which, while related to the problem of induction, is decidedly different.

9. Welcome to Judith’s community. Your essay has some very interesting points, and from a very quick read-through it will generate some interesting results. It does not, however, describe a way to solve the biggest stumbling block in climate science: the fact that we still don’t know what we don’t know about the systems that drive the climate.
Until that happens, any real discussion regarding the inferences from the modeling systems is moot.

Please don’t get me wrong, I look forward to reading the follow-up to this essay and discussing it with you and the others on the blog. However, the bottom line is still: you cannot make accurate models for discussion and forecasting until you know what all the variables for the algorithms are.

Once again, interesting essay; looking forward to the next instalment.

• Jim

From what I have read on his web site so far, his method is one that should reveal patterns that may illuminate causes of climate. I’m not done reading yet, however. I also wonder how you know if you have all the “observed” and “non-observed” states; it may not matter in the final analysis.

Knowing when you have all the bits and pieces in order is difficult, especially with something as chaotic as climate; small things (like CO2 for instance) may have effects far beyond what you would think they should have. The problem we are having at the moment is that a group of people have decided that a specific thing is responsible for the final effect, i.e., temperature rise. That deduction is based on the models that they have produced, without the benefit of a full knowledge of the system. I would be far more enthusiastic regarding the results of the models if they could input historical data and reproduce what has happened based on the observed data. If they can produce those results, at that point I think they have most of their variables in order. The inconsistencies between the actual historical records, e.g., river fairs on the Thames and Washington dragging cannon across rivers in the recent past, which we do have historical records of if not the actual temperatures, and what their models have produced lead me to believe that they have some work to do.

• Bob Ludwick

@ Glen Shevlin

“However, the bottom line is still: you cannot make accurate models for discussion and forecasting until you know what all the variables for the algorithms are.”

Not only that, Climate Science still can’t agree upon the SIGN to apply to some of the variables that are KNOWN to affect the climate.

Yet the models continue and their outputs are used to justify political action.

10. Mac

We can test assumptions that models are based on, but how can we test a human construct, i.e. inference?

Heuristic modelling tells us more about the human mind than the complex physics and chemistry of this planet.

11. Michael

Maybe the 2nd part will clarify the point.

It seemed promising, starting with talk about inductive reasoning, but then veered off into examples of abstraction, finishing with some general points about models. Yes, they are heuristic devices. And?

Maybe someone could help me out here?

• Michael:
Part I of my essay, which you’ve read, sets up the problem of induction for solution. Part II, which you’ve not read, solves the problem. Solving the problem reveals the principles of reasoning.

12. Labmunkey

Welcome.
A very interesting read; however, I feel I’m left on tenterhooks waiting for the next part.

Don’t get me wrong, I appreciate and thoroughly understand the need for a post to lay the ground rules as it were before delving into the more complicated aspects- and think of it as a compliment to your post- but I am now quite impatient for the second part!!

re: “Maximum parsimony (Ockham’s razor) is an example of a heuristic. Maximum beauty is another example”

In the instance of climate models, Ockham’s razor would suggest that all things being equal (and until otherwise proved), the recent changes are natural and nothing to do with CO2- especially given the number of ‘fudge’ factors required to get them to ‘work’.

A general question to the learned audience. Are models being explored to look into natural variability? Are they being given as much ‘time’ to develop as the CO2-derived ones? Or has the establishment just decided that it’s CO2 and is now setting out to prove it??

It may be a very simple question but it never occurred to me to ask it before….

It is a simple and good question and, as far as I know, the answer is no. This seems to me to be gross negligence on the part of the climate modeling community: a failure to test the null hypothesis (or at least a failure to share the results if they have).

Words fail…

• Labmunkey

Surely this can not be the case?? If they haven’t tried to model the natural variability, how the dickens can they claim that it cannot explain the recent trends?

This would make a mockery of any claims that they even have the first clue to what’s going on. They simply cannot be that stupid- they MUST have done some work on this. Surely???

Dr Curry, do you have any info on this perchance?

• Craig Loehle

Some earlier threads at this site discussed this issue. One line of argument has been circular: since climate models do not show long climate cycles (past a couple of years of ENSO-type oscillations), longer cycles cannot exist. It is then asserted that the warm period in the 1940s did not exist; what existed was really the cooling in the 1950s-1970s, caused by aerosols. Thus the PDO 60 yr cycle does not exist, even if past records seem to show it existing. Unbelievable but true. My experience with reviewers is that they get nasty when you suggest there could possibly be any 60 yr cycle of climate. Nasty.

• Craig, there is peer reviewed stuff out there dealing with 60 year cycles in climate. Nicola Scafetta has a couple of papers which deal with it for example.

• Craig Loehle

The climate modelers do not recognize Scafetta’s work or allow for such cycles in their attribution studies, as Judith has noted here. I have a paper submitted with Scafetta, and we have had a hard time with reviewers so far.

• I’m starting to take a serious look at Scafetta’s work.

• ianl8888

Good

I posted several questions based on his work in previous threads … went unanswered, of course

• In their defense the modelers are looking for mechanisms, not statistical cycles. There are all kinds of cycles in the statistical climate literature. 1500 years for example. It is an entire genre of climate research. How do you put cycles into a model? Just have it oscillate for no reason? Might be fun! But if we put in emerging from the little ice age simply as an unexplained oscillation there is nothing for AGW to explain.

• David L. Hagen

David Wojick
However, are not statistical cycles clues that there may be underlying mechanisms? They give a reason to look for phenomena with similar cycles, which may then give clues to underlying mechanisms; e.g., weather cycles matching Jovian planet cycles may suggest gravity affecting the sun, which affects the heliosphere, which affects galactic cosmic rays, which affect clouds, which affect weather. Then one tests for statistical significance and causation.

• Raving

Why resort to extraterrestrial causes of entrainment? There are plenty of regular terrestrial mega events.

Examples: Great earthquakes and tsunamis caused by the Cascadia Subduction Zone. Flank failure of the volcanoes of Hawaii creating megatsunamis. These events are quasi-periodic and stir up huge amounts of ocean sediments. There are major geological upheavals that repeat themselves with periodicities of 100s, 1,000s and perhaps 100,000s of years. What’s wrong with this sort of activity entraining climatic and biotic responses?

• Labmunkey

Wow, so they actually DO seem to have just dismissed the natural cycles without trying to model them with any level of accuracy. Words fail me.

“But if we put in emerging from the little ice age simply as an unexplained oscillation there is nothing for AGW to explain.”

Which is kind of my point.

• David L. Hagen

Raving
I was not excluding terrestrial events nor “resorting to” astrology. Just highlighting extra-terrestrial causes as exhibiting cycles, some of which are more predictable: earth’s orbital parameters, Jovian planetary orbits, Milky Way galactic rotation rate, etc., and whose influence on earth’s climate is now being modeled and quantified.

Volcanoes and earthquakes are nominally “predictable.” Some global temperature models do account for volcanic aerosols.
For one EIS, I was told by the geologist at the local volcano monitoring station that the local volcano erupted about every 125 years and there was about another 90 years or so before it would erupt again (approximate numbers from memory and to protect the innocent). On duly reporting this “fact”, the volcano promptly blew within a few years!
Burnt once, twice shy.

• Brian H

Their argument from ignorance is that since the understanding of aerosols and clouds and volcanoes etc. is low, they are unable to explain the temperature rise using them. Therefore the explanatory power of CO2 is all that’s left, and must handle it all.

See, there was this drunk looking around at night under a streetlamp, and a passerby asked what he was looking for …

• David Wojick | November 22, 2010 at 12:45 pm |
“But if we put in emerging from the little ice age simply as an unexplained oscillation there is nothing for AGW to explain.”

And we couldn’t have that could we?

It seems faintly ridiculous to me. We have no explanation for this ~900 year cycle through the medieval warm period, Roman warm period, Minoan warm period, and now, so we’ll use this hockey stick to knock it into the long grass and put it on CO2 instead…

Is this scientific?

• Derecho64

Funny how certain folks consider proxies unassailable when they interpret them to mean what they want them to mean (Minoan warm period, Roman warm period, MWP, etc), but when proxies show hockey sticks, they’re garbage, and/or the scientists who show hockey sticks are frauds and criminals. Amazing doublethink.

• I’ll take it ‘certain folks’ applies to me since you are replying to my comment.

The evidence for those warm periods doesn’t come only from proxies, but also from archeology. Go read a bit before you accuse others of doublethink. Fool.

Now show where I have used the words fraud or criminal or retract your words please. Idiot.

• Excellent news Craig! All the best with it.

• TomFP

My – no doubt imperfect – understanding is that “they” have created a number of models which hindcast from various points in the past to a recent “present” with superficially impressive accuracy, but that thereafter their predictions diverge. This divergence comes like the thirteenth stroke of the clock, casting doubt on all its earlier chimes. I understand that the hindcasting “success” is due to “parametrisation”, and their subsequent disagreement is attributable to the fact that different modelers chose different pairs of parameters to vary, leaving no way to determine which, if any, of the models have genuine skill. Have I got this right?

• Brian H

Yes, and of course different models do better with different aspects or time periods. Averaging them (!) produces a kind of spaghetti chart of so-so half-arsed projections that isn’t glaringly wrong up to a particular date, after which they ride off madly in all directions.

• Jim

It appears Mr. Oldberg is proposing a superior method for elucidating the workings of climate. I think he deserves to be heard. After all, it seems this method has worked well for weather, so logically it could extend to climate. The problem the method might run into is a lack of input data, both in terms of quality and quantity. But even with that restriction, the method might illuminate aspects of climate that are not highlighted by the current climate establishment.

• Derecho64

Climate models are indeed used to explore the internal variability of the climate system.

Some argue that such investigations are all that climate models should be used for – process discovery and analysis.

I see a lot of folks making statements without knowledge of the literature. Trust me, there’s lots of it out there. Try Google Scholar.

• I have been independently studying the cyclic patterns of natural variability in the weather for the past 28+ years. The complete results of what I have found are posted to my website, along with forecasts derived by applying the methods to archived data, as a test of the analog forecast method.

http://research.aerology.com/aerology-analog-weather-forecasting-method/

More recent conversations in blogs have also been newly added, to further update the progress of the “extended peer review prototype testing” already going on.

13. Craig Loehle

There are fundamental difficulties in science that this presentation glosses over. The identification of objects is not so simple. Even for simple classes like male or female, there are individuals who by chromosome count are neither or both, or whose appearance is indeterminate. In the other example, rain or cloudy in Milwaukee, it is entirely ambiguous on any given day whether it is cloudy. How much cloud counts as cloudy? If I feel one drop, did it rain? I am not just picking nits, because in climate science the inability to model clouds turns on this problem of the vagueness of the object. Clouds are not discrete the way a bear is a discrete object. Just because we can use the word “clouds” does not mean we know how to abstract it.

In physics, simplifications are often made, but then they are tested, not assumed. For example, Newton posited that one can treat the planets as point masses for calculating planetary motion; when this is tried it works quite well. If you want to understand tides and the length of day on Earth, however, you must abandon this simplistic model. In climate models all sorts of simplifications/abstractions are made, but how do we test them? The land heterogeneity (for albedo calculations) is represented by a single number, even though the albedo of a forest changes when it is wet or dry, or when it is damaged by insects or fire. The initial abstractions can help us discover or hinder us, and cannot be taken for granted. The same goes for how we represent processes like heat transport or forces like “feedback”. The climate models do not represent individual thunderstorms; does that matter? If the abstractions are wrong, the proper use of inference is nice but will not lead to correct results.

• Labmunkey

My feeling is that these will be addressed in part two. I’d certainly caution evaluating his entire thought process on the first half of a two-part post.

• Great summation of the problem IMO.

Just one example. I recently looked at the longwave emissivities used by the satellite data modelers. Soils vary in emissivity enormously between being wet and dry, and differ from unity by up to 20%. Do the modelers integrate rainfall models to try to overcome this issue?

The stunning complexity of the surface-atmosphere coupled system is what makes climatology the most fascinating knowledge puzzle of all time. To claim we have it nailed after a couple of decades is hubris, we still have learner plates tied to the climate cart bumper.

• Luis Dias

So the question is, are the models suffering from bad abstraction quality or not? Do they suffer from not resolving thunderstorms, or is that unproblematic? And if we have this question, how do we go ahead and resolve it? How do we measure the quality of our abstractions? Not as if they should be “True” in an absolute sense, but rather whether they “Work” or not.

I like the entropy method (or something like it) that Judith pointed us to; it’s like a perfect inference machine that resolves information as it gathers and codes it. But my taste is pointless here; I’m not convinced we can quantify “unknown unknowns”.

• Raving

OTOH it is also called sexual dimorphism. There is good reason for it to be that way. The binary complementary coupling is a functional attractor. In such functional circumstances where the functional emphasis is on the binary aspect, the gradation of possibilities is unimportant. (I.E. the impetus is towards complementary pair off)

• Steven Mosher

Quine:
“As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience. Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer . . . For my part I do, qua lay physicist, believe in physical objects and not in Homer’s gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits.”

14. alistairmcd

I use James Clerk Maxwell’s two volume treatise as inspiration. It’s the 4th edition, published in 1934. The first edition was published in 1873, 6 years before his death: http://www.archive.org/stream/electricandmagne01maxwrich#page/n21/mode/2up

So, he was inspirational for 70 years or so. His inspiration was the collected works of Michael Faraday, starting in the 1810s. I’m attracted to that as well because I’ve done a lot of work on electrochemistry. Faraday founded nanoscience in 1847 when he proved the properties of gold colloids were different from those of the bulk metal! I too worked on nanoscience when we added hidden chemistry from the nuclear programme to other stuff. That was 25 years ago. It all comes round again.

The key thing to remember is that science oscillates between theory and experiment. Maxwell provided the theory to explain the experimental work of Faraday; it took 30 years. Lorentz took Maxwell’s work further, but what really got physics going was the vacuum pump, which let Einstein prove the photoelectric effect. He went into theory as well.

The relevance to climate science is that it is heading towards experiment calling the shots. For 36 years, the theory used to make the key correction to the presently less-than-statistically-significant ‘CO2-AGW’ signal, the correction for ‘cloud albedo effect’ cooling, was essentially that derived by Van de Hulst in the 1950s. He observed that backscattered light from sols was greater than the 50:50 split you’d expect for symmetrical diffuse scattering and, as far as I can tell, developed the idea that it was all due to the high Mie asymmetry factor, g (the proportion of the energy scattered forward), biasing the process so that much of the light turned backwards.

Put in lumped parameterisation and you get a relationship that shows the correct trend between backscattered energy and apparent optical depth as determined by Beer’s law. But the fit is fortuitous because there must be two processes, the second being direct backscattering. Therefore the upper bound of the true optical depth must be obtained using [1-g] energy entering the sol. I suspect there’s more direct backscattering than this as the wave sheds photons to be diffusely scattered.
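The commenter’s point that the inferred optical depth depends on the assumed asymmetry factor can be illustrated with the standard conservative (non-absorbing) two-stream approximation. This is a generic textbook relation, not Van de Hulst’s actual derivation nor any model’s parameterisation, and the function names and example numbers are mine:

```python
def two_stream_reflectance(tau, g):
    """Reflectance (backscattered fraction) of a conservatively
    scattering layer of optical depth tau with asymmetry factor g,
    in the standard two-stream approximation: R = s / (1 + s),
    where s = (1 - g) * tau / 2."""
    s = 0.5 * (1.0 - g) * tau
    return s / (1.0 + s)

def apparent_tau(reflectance, g_assumed):
    """Invert the relation above: the optical depth one would infer
    from a measured reflectance under an assumed asymmetry factor."""
    s = reflectance / (1.0 - reflectance)
    return 2.0 * s / (1.0 - g_assumed)

# A cloud of true optical depth 10 with strongly forward-peaked
# scattering (g ~ 0.85, typical of water droplets) reflects far less
# than an isotropically scattering layer of the same depth, so a
# retrieval that assumes the wrong g gets the optical depth wrong.
R = two_stream_reflectance(10.0, 0.85)
```

The round trip recovers the true optical depth only when the assumed g matches the true one; assuming g = 0 for the same measured reflectance yields a much smaller apparent depth, which is the sense in which the fit can be “fortuitous”.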

And how did I come to this conclusion? Last February, I happened to look at two layers of small cumulus clouds. The top layer was white all around. The bottom layer was dark underneath, white at the top and the sides. I made the very simple connection: clouds about to rain have larger droplets. Larger droplets backscatter more at the upper part of the cloud, so less energy enters the cloud. This is called experimental observation!

Therefore, the theory in the climate models wrongly predicts the effect of pollution on albedo for thicker clouds. Put in the shielding idea and you get another AGW. If true, it’s a game changer and CO2-AGW must be much lower than the IPCC predicts.

I’m quite prepared to be proved wrong though and indeed I’d welcome it because that would prove present climate science was robust enough to survive. It helps that I don’t need to toe any party line.

• alistairmcd

Sorry: it’s =[1-g] being directly backscattered. Within the sol, for totally random photons, g is operationally 0.5.

It’s really more complex but the maths is hideous.

15. So far so good: models, conditions, observations, predictions.
Unfortunately, the IPCC does not make “predictions”. They have abolished the word and replaced it with “projections” (though they forgot to tell whoever wrote section 10.5). Unfortunately, they don’t say what is meant by a “projection”. Whenever skeptics point out the failure of IPCC predictions, the masters of spin and distortion like Gavin say that they are not predictions, they are “projections”, as if this is some kind of excuse.
If anyone can give a clear, precise definition of what a “projection” is and how it differs from a prediction, I would love to hear it.

• Projection: noun: The act of creating a mathematical model of the climate and believing that nature must conform to it.

• Brian H

Or: what the world would look like if it behaved the way I assume. As G&T say, video game illustrations of pre-cooked conclusions.

16. sharper00

I’m not really sure this multi-part approach to postings is working very well. Topics complex enough to require multiple parts inevitably generate a lot of questions, and people don’t know whether they’ll be dealt with in subsequent posts or not.

It might be better to write all parts together and then post them at around the same time.

• Labmunkey

Or perhaps close the comments until the final part.

17. It is now clear that you are conflating your own technical language with ordinary language, so as to seem to say what you do not in fact say. For example you say “Abstraction produces an abstracted state by placing the constituent states in inclusive disjunction.” Well, no it doesn’t, or not that we know of. Abstraction is not logical disjunction, not even close so far as I can tell, so you are coining a new technical term which you call “abstraction.”

It would be clearer if you called it “state disjunction” or some such, so we do not confuse it with actual abstraction. You would then have to present a scientific argument for state disjunction having something to do with abstraction. In short, you are trying to do semantically what can only be done scientifically. This is similar to Shannon’s unfortunate use of the term “information” to describe something that has little to do with information.

You also describe a state as a description, which is an odd kind of state. But male and female are not descriptions; they are descriptors, or “predicates” in logic. A description is usually a sentence (or a proposition in logic). Male and female are just words, not descriptions. Moreover, you seem to be confusing the description with what it describes, which is in fact a physical state.

• Michael

The whole thing needs some re-thinking, clarification of terms and consistency in their use.

• Michael:
You misunderstand my aim. My aim is not to coin new technical terms but rather to label concepts that have precise technical descriptions by common English terms for the purpose of making my description accessible to real people with all of their limitations.

In contrast to technical languages, common English makes ambiguous reference to concepts. Thus, I’m happy to respond to requests for clarification of precisely which technical concept is referenced by one of the common English words I’ve used. However, it seems to me that to argue over my choice of these words would be unproductive and time wasting.

This post seems very intriguing. I have just started reading, but I like the starting phrase: “a model is not reality itself but rather is an abstraction from reality”. It is very promising, as long as the scientific method is kept and any theory is tested against reality, not against something which is not reality.

Looking forward to this series.

• Brian H

Long, long ago General Semantics (Alfred Korzybski) observed, “The map is not the territory.” A reminder that the territory can operate with unlimited levels of detail and complexity, but maps are simple things …

• Michael

And yet they are extremely useful.

Try getting around a new city or unfamiliar place without a map on the grounds that they are not sufficiently complex to describe reality.

Maps are a great analogy for climate models.

• TomFP

Michael – maps are a terrible analogue for climate models. They are the product of direct, replicable mensuration, and are schematic representations of an invariant system (in the case of road maps, they become invalid when someone puts in a one-way system, or builds a new road, or dams a valley), while climate is a dynamic, nonlinear system. Maps are limited in accuracy only by the instrumentation available to measure and by the need to simplify (not approximate) for human usability. Such simplifications can be reversed at will, should the data they have concealed come to be needed. They provide, if anything, a great COUNTER-analogy for climate models!

19. Craig Loehle

As a further comment on fundamental objects, one unfortunate tendency of philosophers is what is known as “reification”, the tendency to ascribe reality to words. In contrast, in real science it is important that terms be operational. We define “mass” in terms of the operation of measuring it. When we fail to do this, as with terms like IQ or biodiversity, everyone measures them differently and it is a tower of babble. Just using a word does not constitute science unless you can tell me how you operationally measured it, and in many cases this also involves the statistical process of estimating it.

• What is the operational definition of ‘surface temperature’ as applied to the various elements found on the surfaces of the Earth/oceans? Or shall we cop out and call it a ‘near surface temperature’. ;-)

• Brian H

Reminds me of “The Tyranny of Testing” (Banesh Hoffman), which defined IQ tests as measurements of IQuination. IQuination is the ability to do well on IQ tests.

20. Robinson

In order to home in on a model with correct inferences (the needle in a haystack) you have to find some way of regularizing it. In a mathematical sense the problem is ill posed. Regularization involves choosing suitable priors. This is where the prejudices of the observers enter into the equation (actually I don’t mean prejudices, I mean “assumptions”!).

I’ve worked on similar modelling problems (a couple of years ago I wrote a research paper in this area concerning computer vision). As Farsiu points out in his paper (on Super Resolution):

When the fundamental assumptions of data and noise models do not faithfully describe the measured data, the estimator performance degrades. Furthermore, existence of outliers, which are defined as data points with different distributional characteristics than the assumed model, will produce erroneous estimates. A method which promises optimality for a limited class of data and noise models may not be the most effective overall approach.

(S. Farsiu et al., “Fast and Robust Multiframe Super Resolution”, IEEE Transactions on Image Processing, vol. 13, no. 10, October 2004)

I couldn’t agree more. This is a problem for ill posed problems in general, and I will be interested to see what Terry’s approach to solving them is, or at least how I should think about them in future.
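Robinson’s point about regularizing an ill-posed problem can be made concrete with a minimal Tikhonov sketch. This is a generic textbook technique, not anything from the super-resolution paper; the matrix, noise level, and regularization weight are illustrative choices of mine:

```python
import numpy as np

def hilbert(n):
    # The Hilbert matrix: a classic, severely ill-conditioned operator.
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

def tikhonov(A, b, lam):
    # Regularized least squares: minimize ||Ax - b||^2 + lam * ||x||^2,
    # solved via the normal equations (A^T A + lam I) x = A^T b.
    # The penalty acts as a prior favouring small solutions.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = hilbert(8)
x_true = np.ones(8)
b = A @ x_true + 1e-6 * rng.standard_normal(8)  # tiny measurement noise

x_naive = np.linalg.solve(A, b)    # unregularized: noise hugely amplified
x_reg = tikhonov(A, b, lam=1e-6)   # mild prior stabilizes the inversion

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

Even microscopic noise wrecks the naive inversion, while the regularized estimate stays near the true solution; the price is that the answer now depends on the chosen prior, which is exactly where the modeller’s assumptions enter.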

• Craig Loehle

Ill-posed problems are an under-appreciated difficulty. My paper on ill-posed problems in Ecology will appear in Ecological Complexity, 2011 issue #1.

• Raving

http://www.complex-systems.com/pdf/01-1-11.pdf

I have moved away from this line of reasoning because nature cannot always be ‘well formed’.

Example: Consider a class of situations which are by definition indescribable. Does one ignore the existence of such indescribable constructs because they can only be captured in an ill-formed manner?

Some people would claim that ignorance is better than blundering speculation. That’s fine. The down side to that approach is that it shuts out that part of nature which is touched in an indescribable way.

The activities of nature which can be touched in the describable manner are part of the autistic domain.

Indescribable nature (situations that are unamenable to being well formed) exists in the NT (Neuro Typical, non-autistic) domain. This alternative non-autistic perspective has yet to be rigorously developed.

• Robinson

I’m rather surprised that it’s under-appreciated. It’s been well appreciated in other domains for a long time. I suppose in something like computer vision/graphics it’s pretty obvious when your model is wrong – you can physically see it on the screen, whereas in something like weather forecasting or climatology, you send out a press release regardless.

• Brian H

Yes, when all current and recent weather and “global temperature” data are written off as “outliers”, one begins to suspect that the reach of the models and assumptions is somewhere between low and trivial.

• Robinson:
The problem isn’t ill posed but is very difficult to solve. In Part II my aim is to show that it has been solved and to reveal how this was accomplished.

21. It would be helpful if the relationship or overlap (if any) between “model” as Terry uses the term and the type of model used by climatologists to simulate the climate system was carefully and specifically identified.

As a climatologist, I am aware of models that I would call abstractions (such as the Hadley cell) and models that I would call approximations (such as GISS). They are apples and oranges. A discussion of models as abstractions seems to exclude the very models Terry seems to want to talk about.

• Richard S Courtney

John NG:

I think you make a good point.

There are many kinds of models that may each be useful for specific applications. Scientific models have many different forms, purposes and capabilities.

Climate models and weather models consist of algorithms in computer programs.

It is argued that climate models do not suffer from the same constraints on future predictions as weather models because – although similar in construction – the two types of model do different things: viz.
(a) Weather models extrapolate future weather from existing weather and, therefore, are prone to diverge from reality as a result of the present weather measurements being less than perfect.
But
(b) Climate models emulate the boundary conditions of weather events and so do not extrapolate anything.

However, if these arguments are true then it is difficult to discern what use climate models can have.

A model can have practical and/or heuristic value.

A practical model makes useful predictions that assist understanding and/or decisions. A heuristic model need have no practical use but can test the understandings built into the model by comparing predictions made by the model against observed reality.

A climate model that emulates the boundary conditions that constrain weather has little or no practical use.

For example, natural variability exists within the boundary conditions of any climate system and, therefore, a climate model cannot emulate natural climate variability. And the MWP and the LIA are within the boundaries of natural weather, so a climate model would merely show the range of possible climate conditions which includes them: the climate model would not indicate whether the MWP or the LIA existed at any specific point in time. But useful information would be an indication that the global climate was about to move towards a condition like the MWP or a condition like the LIA. Accurate discernment of boundary conditions cannot indicate that.

A heuristic model informs as to whether the understandings built into the model are correct. But a model that only indicates the boundary conditions which enclose climate can only provide such information if the real climate is observed to be outside the boundary conditions indicated by the model. In that case the model would indicate the understandings are wrong but it would not indicate how and why they are wrong.

Furthermore, validation of such a model is unlikely to be possible within a human lifetime. A change to the system (e.g. by altered GHG concentrations) may cause the boundary conditions to change and the model may indicate this. But the model’s indication would not be discernible in the real world unless and until the real world’s condition was observed to have moved beyond the previous boundary conditions.

This latter point is demonstrated by the present arguments as to whether or not the mean global temperature stopped rising about a decade ago. Some say the slowing (perhaps cessation) of recent global temperature rise indicates the models are “wrong” while others say the models cannot indicate small fluctuations in a general rising trend. And, of course, the models cannot indicate any such fluctuations if they are emulating boundary conditions.

So, in my opinion, there needs to be much more specification of
1. What the climate models are attempting to emulate
And
2. How they are attempting to do it.
And
3. How their outputs can be validated.

Because, as you say,
“It would be helpful if the relationship or overlap (if any) between “model” as Terry uses the term and the type of model used by climatologists to simulate the climate system was carefully and specifically identified. ”.

Richard

• John:

Hopefully, Part II will clarify the relationship. If this doesn’t completely lift the fog, please get back to me with a request for clarification.

22. Derecho64

Edwards (in “A Vast Machine”) makes a good argument that the “divide” between models and data is an artificial one. The data we have is the result of a model.

• If producing the data involves applying a mathematical procedure then that procedure can be called a model. As I understand it, the Jones global surface temperature data is produced by a very complex Fortran program, which embodies an elaborate statistical model. On the other hand, this data may be input to another model, in which case it is just data. Economic models are used to generate future emissions data, etc. Climate science is a mass of interacting models. In the extreme case every measurement is an estimate, with assumptions, etc., so it can be called a model result. But the distinction between data and model is still very useful.

• Derecho64

Every single analysis of the state of the climate system is the result of a model. Even the satellite stuff. There’s really no such thing as the “raw data”.

• Robinson

Yes.

The satellite data model involves various calibration curves that are determined from experiment. Even if you have a reference source (in the case of IR, a black-body source), the calibration still may not constrain the model appropriately. For example, here on Earth in order to be correct, you would have to include atmospheric transmission, emissivity and background, as well as the response of the equipment itself (it also has a temperature). The first of these is hard to measure accurately and likewise the emissivity and background can only themselves be approximated. Even so, you can reach reasonable agreement with the black-body, if only to a few tenths of a degree accuracy.
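The way an unmodelled emissivity or background biases a retrieval can be sketched with the Planck function. This is a generic radiative-transfer textbook exercise, not any instrument’s actual calibration; the wavelength, emissivity, and temperatures are illustrative values I have chosen:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(wavelength, T):
    """Blackbody spectral radiance at the given wavelength (m) and
    temperature (K), in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength**5
    return a / (math.exp(H * C / (wavelength * K * T)) - 1.0)

def brightness_temperature(wavelength, radiance):
    """Invert the Planck function: the temperature a perfect blackbody
    would need to emit the given radiance at this wavelength."""
    a = 2.0 * H * C**2 / wavelength**5
    return H * C / (wavelength * K * math.log(1.0 + a / radiance))

wl = 10e-6           # a 10 micron thermal-IR channel
emissivity = 0.9     # e.g. a dry soil (illustrative value)
T_surface = 300.0    # true surface temperature, K
T_sky = 250.0        # effective background temperature, K

# What the instrument sees: surface emission plus reflected background.
measured = (emissivity * planck(wl, T_surface)
            + (1.0 - emissivity) * planck(wl, T_sky))

# A retrieval that wrongly assumes unit emissivity underestimates T.
T_retrieved = brightness_temperature(wl, measured)
```

With these numbers the retrieved brightness temperature comes out a few kelvin below the true surface temperature, which is the kind of systematic error a calibration model has to account for.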

• That is what I said, that every measurement is in its way a modeling exercise. But it is also true that every model has its raw data, so “raw data’ really means relative to a given model. The danger is simply that data is taken as somehow absolute, which is never the case. A striking example of this is when people present a surface statistical temperature curve as an established fact.

But on the other hand every specific scientific effort always operates by assuming a great deal is true, even though someone else may be exploring and testing it. It is like rebuilding a ship at sea, you have to leave most of it alone at any given time.

• Zajko

Edwards argues that such data is not independent of modeling (“models all the way down”), but there is a distinction when it is treated as data and used as such for another model. If we keep in mind the various levels and stages of modeling involved, we can keep better track of our assumptions.

• I have a problem with model results being used as “data” for another model. The purpose of a model is to test an assumption or set of assumptions. If you are using assumptions to test other assumptions, your error creep is going to be significant.
At some point somebody needs to go outside with a thermometer and actually measure the temperature, over a period of years and decades.

I believe that a large part of the current disagreements between scientists is the result of models being treated as data, and that this information is being used as media fodder by special interest groups to further their own goals.
This is similar to temperature records being smoothed in various places: historical records are adjusted using models to account for instrument inaccuracies, based on what the models say the instruments should have read.
At some point the thermometer, tape measure, or level has to be accepted as a data point, and the models adjusted from that.

• Zajko

A single data point tells you next to nothing. At some point someone needs to read a thermometer, but without the use of models it remains just a reading from an instrument. Does the instrument provide an accurate measure of what we are interested in? How do you extrapolate the resulting time series over an area? Simply accepting the data point as accurate is not much of a solution.

• I agree; my point was that there appears to be a large amount of gospel according to the model happening.

At some point we need to actually check what the real world is telling us; pronouncing that the world as we know it is going to end because my iPad says so is not science.
Create a thesis, design an experiment, collect data, evaluate your collected data against your thesis; rinse, wash, repeat until either the thesis matches your observed data or you create a new thesis.

Models are an aid to understanding your results, not a result in themselves.

• Brian H

Yes, “error creep” sounds like a bit of an understatement, there. More like error multiplication.

• being polite

• Richard S Courtney

David Wojick:

I agree with you when you say, “If producing the data involves applying a mathematical procedure then that procedure can be called a model.”

For several years I have been attempting to get re-evaluation of the various determinations of mean global temperature (MGT).

Given the importance of MGT time series for bodies such as the UN Intergovernmental Panel on Climate Change (IPCC), their determinations warrant re-evaluation against two possible understandings of MGT; viz.
(i) MGT is a physical parameter that – at least in principle – can be measured
or
(ii) MGT is a ‘statistic’; i.e. it is an indicator derived from physical measurements.

These two understandings derive from alternative considerations of the nature of MGT.

1. If the MGT is assumed to be the mean temperature of the volume of air near the Earth’s surface over a period of time, then MGT is a physical parameter indicated by the thermometers (mostly) at weather stations that is calculated using the method of mixtures (assuming unity volume, specific heat, density etc).

Alternatively

2. If the thermometers (mostly) at weather stations are each considered to indicate the air temperature at each measurement site and time, then MGT is a statistic that is computed as being an average of the total number of thermometer indications.
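The contrast between these two understandings can be sketched with a toy calculation. The station data below are invented, and the cos-latitude weighting is only a crude stand-in for a full compensation model:

```python
import math

# Hypothetical station readings: (latitude in degrees, temperature in C).
# Stations deliberately cluster at high northern latitudes, as real
# networks often do.
stations = [(70, -10.0), (68, -12.0), (65, -8.0), (60, -5.0),
            (45, 12.0), (20, 26.0), (-10, 27.0)]

# Understanding (ii): MGT as a statistic -- simply average the
# thermometer indications.
simple_mean = sum(t for _, t in stations) / len(stations)

# Understanding (i): MGT as a physical parameter -- each reading samples
# the near-surface air, so weight each station by the area it represents
# (here a cos-latitude weight, since bands near the equator cover more
# of the globe than bands near the pole).
weights = [math.cos(math.radians(lat)) for lat, _ in stations]
area_mean = (sum(w * t for w, (_, t) in zip(weights, stations))
             / sum(weights))
```

With this clustered network the two numbers differ by several degrees, so which “MGT” one reports depends entirely on which understanding, and which compensation model, one adopts.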

In either case, the present uses of MGT (in, for example, ‘attribution studies’) are mistaken.

If MGT is a physical parameter then the measurement sites are the measurement equipment, and the non-uniform distribution of these sites is an imperfection in the measurement equipment. Some measurement sites show warming trends and others cooling trends and, therefore, the non-uniform distribution of measurement sites may provide a preponderance of measurement sites in warming (or cooling) regions. Also, large areas of the Earth’s surface contain no measurement sites, and inferring/plotting temperatures for these areas require interpolation.

Accordingly, the measurement procedure to obtain the MGT for a year requires compensation for the imperfections in the measurement equipment. A model of the imperfections is needed to enable the compensation, and the teams who provide values of MGT each use a different model for the imperfections (i.e. they make different selections of which points to use, they provide different weightings for e.g. effects over ocean and land, and so on). So, though each team provides a compensation to correct for the imperfection in the measurement equipment, each also uses a different and unique compensation model.

The large differences between the results generated by the teams demonstrate that the compensation models used by all the teams (except at most one) must contain large errors that generate
(A) spurious trends to MGT with time, and
(B) errors to MGT that are more than double the calculated 95% confidence limits.
But the fact that all the teams calculate their errors demonstrates that each of the teams thinks its particular model is correct.

MGT time series are often used to address the question,
“Is the average temperature of the Earth’s surface increasing or decreasing, and at what rate?”
If MGT is considered to be a physical parameter that is measured then these data sets cannot give a valid answer to this question because they contain errors of unknown magnitude that are generated by the imperfect compensation models.

The issues raised above might be resolved by considering MGT as a statistic (as described above) which does not have a unique value. According to this consideration MGT is not measured – it is calculated from measurements – and, therefore, it is not correct to use measurement theory when considering MGT. Thereby, the above arguments become invalid because they are based on measurement theory.

However, if MGT is considered to be a statistic then it can be computed in several ways to provide a variety of results, each of different use to climatologists. (In this consideration, MGT is similar in nature to a Retail Price Index that is a statistic which can be computed in several ways to provide a variety of results each of which has proved useful to economists). So, if MGT is considered to be a statistic of this type then MGT is a form of average. In which case, the word ‘mean’ in ‘mean global temperature’ is a misnomer because – although there are many types of average – a set of measurements can only have one mean.

Importantly, if MGT is considered to be an indicative statistic then the differences between the values and trends of the data sets from different teams indicate that the teams are monitoring different climate effects. In this case there is no reason why the data sets should agree with each other, and the 95% confidence limits applied to the MGT data sets by their compilers may be correct for each data set. Similarly, the different trends indicated by the MGT data sets and the MSU and radiosonde data sets could indicate that they are also monitoring different climate effects.

However, this consideration of MGT as an indicative statistic has serious implications. The different teams each provide a data set termed mean global temperature, MGT. But if the teams are each monitoring different climate effects then they should each provide a unique title for their data set that is indicative of what is being monitored. Also, each team should state explicitly what its data set of MGT purports to be monitoring. The data sets of MGT cannot address the question
“Is the average temperature of the Earth’s surface increasing or decreasing, and at what rate?”
until the climate effects they are monitoring are explicitly stated and understood. Finally, the application of any of these data sets in attribution studies needs to be revised in the light of knowledge of what each data set is monitoring.

So, MGT is a model result or a statistic but each team that presents an MGT time series either uses a different model or presents a different statistic, and the implications of this have yet to be addressed.

Richard

23. The deep conceptual look at the word model does not begin to address the critically important aspects of the modern usage of the word. If we look at models that are to be applied to analyses of physical phenomena and processes, these additional aspects cannot be ignored. Instead, these generally ignored aspects are the most important aspects.

Modern models comprise the following individual domains:

(1) The continuous equation domain,

(2) The discrete approximations to the continuous equations,

(3) The numerical solution methods used to solve the discrete approximations,

(4) The software into which the solution methods are implemented,

(5) The procedures for applications of the software, and

(6) The user domain.

Any fundamental aspect that is established in a given domain can be completely annihilated by the handling of that aspect in all subsequent domains. In particular, the last domain, the users, can annihilate almost all fundamental concepts that might have survived in 1 through 5.
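A minimal illustration of this annihilation, assuming a toy problem is acceptable: the continuous equations of a frictionless harmonic oscillator (domain 1) conserve energy exactly, but a naive forward-Euler discretization (domains 2 and 3) destroys that property at every step.

```python
# Harmonic oscillator: x'' = -x. The continuous system conserves
# E = 0.5*(v**2 + x**2) exactly. Forward Euler does not: the energy
# grows every step, no matter how small the step size.
def forward_euler_energy_drift(dt, steps):
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + x * x)
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x  # naive discrete update
        # Each step multiplies the energy by exactly (1 + dt**2).
    return 0.5 * (v * v + x * x) / e0  # ratio of final to initial energy

ratio = forward_euler_energy_drift(dt=0.01, steps=10000)
print(ratio)  # > 1: the discrete scheme has created energy from nothing
```

Here the loss is provable: each step multiplies the energy by (1 + dt^2), so no step size eliminates the drift; only a better scheme (for example, a symplectic integrator) restores the property that the continuous domain established.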

Maybe these will be addressed in other posts. Professor Curry can annihilate this comment at will.

• Brian H

Well spoke. As the ladder of assumptions is climbed, it’s comforting to imagine that the lower rungs are secure, but they may be attached with spit and baling wire. If too much pressure/weight is applied and any come unstuck, the ladder splits and “all fall down”.

• Craig Loehle

In my view, it is ok to climb the ladder of assumptions a few rungs to see “what if” and look around, but if the rungs of the ladder are imaginary (ie, not tested), I prefer not to go too high.

• Zajko

You can also try climbing different rungs (or the same rungs differently) and seeing if you end up in a similar place. If so, the ladder may be more robust than spit and wire (or it could be you’ve just built multiple ladders that lead to the same place).

• Yes, playing in the sandbox with research-grade models is a very necessary activity. And it’s fun; lots and lots o’ fun.

But, applications that have extremely important aspects require production-grade tools, application procedures, and users. Almost all such production-grade models have been subjected to deep and comprehensive investigations in all six domains. Comprehensive or thorough do not begin to describe the level of investigations. Exhaustive is more like it. For each and every domain: you don’t get to lump a few together and you don’t get to skip any.

• What you and many others are doing is objecting to the use of the climate models as forecasting tools to make policy. I agree but that is not a scientific use, it is applied science or engineering, which has very different standards. The people doing weather forecasts are not doing science, any more than the people building bridges or treating cancer are doing science. The fault is that these models transitioned out of science and into public operational mode when they should not have. But this happened because the environmental movement, including some scientists, picked the results up and carried them into the policy domain. Science per se is not to blame for this transition. Unfortunately it is taking a hit anyway.

• Brian H

As you will note by picking out and reading the comments and POV of engineers who get involved in this debate, their standards for verification are rather more demanding and rigorous than scientists’. They have a rather disrespectful and even contemptuous attitude towards scientists who attempt to use their abstract-reductionist models in the real world without “paying their dues” by subjecting each and every element to life-or-death fallibility testing.

• Derecho64

Are you familiar with Steve Easterbrook’s work, Brian H?

• Anything is possible, so “can be annihilated” is true. But modeling is now a central part of all the sciences. Check out the science job ads in Science for scientists who do “computational X” where X is any science. So pointing out that models can be wrong in general is not particularly relevant to the climate debate.

• You omitted the physical science which underlies the model and against which the results can be tested.

• Nope, that falls under the first domain. It’s taken as a given that none of the continuous equations have been made up from thin air. An assumption that has been proven to be wrong, in more than one case.

Equally important, the mathematical sciences are fundamental to the first three domains. Continuum mechanics, for example.

• Craig Loehle

“Physical science”, i.e. physics, can underlie a model and yet the model might not be testable – an example would be earthquakes. There is too much heterogeneity, and the scale and behavior of solids at that scale defy our lab experiments. Thus no predictions for earthquakes yet, even though it is “just” physics.

24. John Whitman

Terry Oldberg

I appreciate the topic of your post. Thank you for providing an opportunity to discuss epistemology and metaphysics.

The problem of induction should be restated as the process of induction.

The process of induction is a multi-step process. First is observation: the proper identification of the evidence to the senses via perceptions. The next step is differentiating some group of observations by their context with other observations. Then integration occurs, defining an abstraction (a concept, if you will) by selecting observations that provide the hypothetical necessities or essentials of the abstraction. The abstraction is validated by testing its implications for consistency with all aspects of reality and with other abstractions.

Statistics, if considered as a branch of epistemology (see William M. Briggs post at his blog on the relationship between epistemology and statistics) could deal with the uncertainty of an induced abstraction and relations deduced from abstractions.

Note: I feel more comfortable using the word concepts instead of abstraction.

Logic can be then used to properly identify correct identification of reasoning involving deduction of new knowledge from propositions involving combinations of the abstractions.

Terry, I do not see how modeling as you construct it presents the opportunity to apply the process of induction. I would appreciate it if you can explain how you think models can be inductive.

John

25. GaryW

My problem at this point is when folks define a set of instrument measurements as a “model” of something. They then appear to be claiming that data set is equivalent to everything else they label as a “model.” That may be an interesting mental exercise, but when applied to physical processes, the uncertainty as to the accuracy of the underlying numeric values must always be carried forward. A “model” that was not derived from physical measurements must be identified as such. A very good name for this specific type of “model” is “Guess.” I see no logical reason to obscure the difference between measurements and guesses.

26. Welcome, Terry!

Principles of Reasoning or lack of the same is not the problem.

Climatologists followed the same template used in other disciplines that succeeded in getting large portions of government funds: Report data that supports government policy. Manipulate or hide data that would contradict government policy.

That is the way to keep government grant funds flowing. Former President Eisenhower warned in 1961 that public policy might become captive to a government financed “scientific-technological elite.”

There is no lack of reasoning ability in those who manipulate or hide experimental data in exchange for government funds. They are mere pawns in the attempt to use science as a tool of propaganda.

With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo

27. E O'Connor

PaulM – “If anyone can give a clear, precise definition of what a “projection” is and how it differs from a prediction, I would love to hear it.”

The use of projection to replace prediction for model outputs happened in the ‘Supplementary Report 1992’, published between the FAR and SAR Reports.

An IPCC ‘Insider’ wrote in 2005:
“Secondly it started to dawn on the scientific community that the Convention negotiators (and the media) were interpreting the word ‘prediction’, as traditionally used by the atmospheric modelling community to describe the output of a model, far too literally and that some other word (‘projection’) had to be found to describe the output of a climate model forced by a particular ‘scenario’ of greenhouse gas concentrations. The 1992 Supplementary Report did its best to register some important messages:

– Climate varies naturally on all timescales due to both external and internal features. To distinguish man-made climate variations from those natural changes, it is necessary to identify the man-made ‘signal’ against the background ‘noise’ of natural climate variability.

– The prediction of future climate change is critically dependent on scenarios of future anthropogenic emissions of greenhouse gases and other climate forcing agents such as aerosols. These depend not only on factors which can be addressed by the natural sciences but also on factors such as population and economic growth and energy policy where there is much uncertainty and which are the concern of the social sciences. Natural and social scientists need to cooperate closely in the development of scenarios of future emissions.

– Scenario outputs are not predictions of the future and should not be used as such.”

The Insider’s first point was:

“But already by the time of the January 1992 session of the IPCC Working Group I that finalised an updated ‘Supplement’ to the FAR to inform the final stages of negotiations of the FCCC, some in the IPCC science community had become uncomfortable at the extent to which the nature of climate and the language of scientific uncertainty were being misunderstood in the policy process. There were two particular concerns. Firstly, the decision of the negotiators to set aside the naturally changing component of climate and reserve the term ‘climate change’ for only that part that is due to change in the composition of the atmosphere as a result of human activities was already beginning to set the stage for the massive confusion and misunderstanding of the uncertainties in climate prediction that have dogged the public and political debate ever since.”

http://www.assa.edu.au/publications/occasional_papers/2005_No2.php

Then in 2007, the ‘Insider’ writing about the Third Session of Working Group I in Guangzhou, China in January 1992 stated :

“…proposed (and Working Group I eventually formally adopted) the use of the term ‘projection’ rather than ‘prediction’ for the output of climate models forced by greenhouse gas scenarios – in order to try to reduce the public confusion resulting from the earlier ambiguity in the use of the concept of model ‘predictions.”

http://www.amos.org.au/documents/item/80

Who was the ‘Insider’?

John Zillman
Director of the Australian Bureau of Meteorology 1978-2003
President of the World Meteorological Organization 1995-2003
Principal Delegate of Australia to the IPCC 1994-2004

• These are still predictions as far as logic is concerned. They are called contingent predictions, because the prediction is contingent on certain future events happening. The logical form is “if A happens then B will happen” (or “if A does not happen then B will happen,” or “will not happen,” etc., all of which have the same conditional form).

Moreover, it is precisely because the contingent events are controlled by humans (energy policy, etc.) that these predictions are so important. According to some people our very survival depends on these being recognized as true. Saying they are not predictions verges on being a rhetorical hoax.

• I have been using the term scenario simulations. Contingent predictions is an interesting term, but as predictions, they are also contingent on the future solar and volcanic forcing, which are unknown. Too many contingencies for this to be a prediction? It is interesting what Wikipedia says about prediction:

“A prediction or forecast is a statement about the way things will happen in the future, often but not always based on experience or knowledge. While there is much overlap between prediction and forecast, a prediction may be a statement that some outcome is expected, while a forecast may cover a range of possible outcomes.”

Seems like forecast would be the better term, according to this definition?

• The predictions are not logically contingent on these natural factors if they are not specified. What is contingent on these factors is the truth of the prediction, that is it may prove false because of natural factors that were not considered. Nor is this the forecast of a range of outcomes, rather it is a range of predictions based on a range of input scenarios. Each contingent prediction is precise for a given model run using a given scenario.

• The natural factors are specified, to be the same as the 20th century (in fact, you can’t run the models without solar forcing in any sort of sensible way). Some semantical issues here.

• One other point: a weather forecast is very different from a simulation with a weather prediction model. The numerical weather prediction model is used to make, say, 50 simulations, slightly varying the initial conditions (to account for the effect of imperfect initial conditions under chaos). Then the simulation results are corrected for known biases. Then the ensembles are combined in some way (say, an ensemble mean, or a pdf). This constitutes the objective forecast; the forecaster can then assess the predictability (e.g., confidence in the forecast) by looking at the ensemble spread and past analogue situations.

So given the way these words are used re weather/climate, I don’t view a single simulation as a prediction (esp. with contingent forcing)?
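The ensemble procedure described above can be sketched with a toy chaotic model; the logistic map stands in for a weather model here, and the member count, noise level, and lead times are arbitrary illustrative choices:

```python
import random

# Toy stand-in for a weather model: the logistic map in a chaotic regime.
# Everything here (r, noise, member count, lead times) is illustrative.
def model(x, steps, r=3.9):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def ensemble_forecast(x_obs, n_members=50, noise=1e-4, steps=30, seed=0):
    rng = random.Random(seed)
    # Slightly vary the imperfectly known initial condition for each member.
    members = [model(x_obs + rng.gauss(0.0, noise), steps)
               for _ in range(n_members)]
    mean = sum(members) / len(members)
    spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
    return mean, spread  # large spread => low confidence in the forecast

short_mean, short_spread = ensemble_forecast(0.5, steps=5)
long_mean, long_spread = ensemble_forecast(0.5, steps=50)
print(short_spread, long_spread)  # spread grows with lead time
```

In this sketch the spread at 50 steps far exceeds the spread at 5 steps: the ensemble mean remains computable at any lead time, but the growing spread is what tells the forecaster that confidence has collapsed.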

• Indeed, the impact of intrinsic unpredictability due to chaos on the concept of prediction has yet to be fully felt. This epistemic issue is something I have done a lot of work on but it is beyond the scope of this post. It is worth noting however that this also means that chaotic phenomena are intrinsically unexplainable, because explanation is usually just retroactive prediction. Thus to the extent that climate is chaotic its oscillations may be unexplainable. This might include 20th century warming. No one seems to be taking this real possibility seriously.

It is worth noting however that this also means that chaotic phenomena are intrinsically unexplainable, because explanation is usually just retroactive prediction.

Seems awfully close to a chaos fallacy: say “chaos”, wave your hands a bit and you’re relieved of any duty to engage seriously.

All chaos means is that predictability decay is inevitable, but that doesn’t mean the rates of decay are constant (they aren’t; it’s not even monotonic), or the same for all of the “things” we want to predict (again they aren’t), or that we can’t explain a chaotic system. Though, as you rightly point out, we do face real limits in system identification. It’s tough enough in the perfect model situation, but that’s not the actual situation.
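That distinction can be made concrete with a small sketch (the logistic map as a stand-in for any chaotic system; all parameters are illustrative): pointwise trajectory prediction fails within a few dozen steps, yet a long-run statistic of the very same system stays stable, and hence remains open to explanation.

```python
# Two runs of a chaotic map from starting points differing by 1e-12.
def trajectory(x0, n, r=3.9):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.3, 100000)
b = trajectory(0.3 + 1e-12, 100000)

# Pointwise predictability is gone within ~100 steps...
pointwise_gap = max(abs(x - y) for x, y in zip(a[90:100], b[90:100]))
# ...but the long-run (climate-like) statistic barely moves.
mean_gap = abs(sum(a) / len(a) - sum(b) / len(b))
print(pointwise_gap, mean_gap)  # large vs. small
```

This is the usual weather/climate split: chaos destroys trajectory prediction at some rate of predictability decay, but it does not by itself forbid explaining or predicting the system’s statistics.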

• Richard S Courtney

David:

You say:
“Thus to the extent that climate is chaotic its oscillations may be unexplainable. This might include 20th century warming. No one seems to be taking this real possibility seriously.”

I am that “No one”. I have presented the following in several places including in a post on another thread of this blog.

The climate models are based on assumptions that may not be correct. The basic assumption used in the models is that change to climate is driven by change to radiative forcing. And it is very important to recognise that this assumption has not been demonstrated to be correct. Indeed, it is quite possible that there is no force or process causing climate to vary. I explain this as follows.

The climate system is seeking an equilibrium that it never achieves. The Earth obtains radiant energy from the Sun and radiates that energy back to space. The energy input to the system (from the Sun) may be constant (although some doubt that), but the rotation of the Earth and its orbit around the Sun ensure that the energy input/output is never in perfect equilibrium.

The climate system is an intermediary in the process of returning (most of) the energy to space (some energy is radiated from the Earth’s surface back to space). And the Northern and Southern hemispheres have different coverage by oceans. Therefore, as the year progresses the modulation of the energy input/output of the system varies. Hence, the system is always seeking equilibrium but never achieves it.

Such a varying system could be expected to exhibit oscillatory behaviour. And it does. In each year the mean global temperature rises by 3.8 deg.C from June to January then falls by 3.8 deg.C from January to June.

Importantly, the length of some oscillations could be harmonic effects which, therefore, have periodicity of several years. Of course, such harmonic oscillation would be a process that – at least in principle – is capable of evaluation.

However, there may be no process because the climate is a chaotic system. Therefore, the observed oscillations (ENSO, NAO, etc.) could be observation of the system seeking its chaotic attractor(s) in response to its seeking equilibrium in a changing situation.

Very importantly, there is an apparent ~900 year oscillation that caused the Roman Warm Period (RWP), then the Dark Age Cool Period (DACP), then the Medieval Warm Period (MWP), then the Little Ice Age (LIA), and the present warm period (PWP). All the observed rise of global temperature in the twentieth century could be recovery from the LIA that is similar to the recovery from the DACP to the MWP.

And the ~900 year oscillation could be the chaotic climate system seeking its attractor(s). If so, then all global climate models and ‘attribution studies’ utilized by IPCC and CCSP are based on the false premise that there is a force or process causing climate to change when no such force or process exists.

Richard

• David L. Hagen

Judith
Specifying ~ “constant” solar forcing in itself biases the results as solar forcing appears to have varied more than IPCC assumes. Note the arbitrary adjustment made to satellite TSI to reduce solar variation. See Scafetta
N. Scafetta and R. Willson, “ACRIM-gap and Total Solar Irradiance (TSI) trend issue resolved using a surface magnetic flux TSI proxy model”, Geophysical Research Letter 36, L05701, doi:10.1029/2008GL036307 (2009). PDF Supporting material PDF

Numerous indicators suggest that the current solar cycle 23-24 transition appears to be a major transition away from most of the cycles of the 20th century. e.g. The Aurora borealis is at a 100 year low.
This indicates that the 20th century assumptions cannot be assumed to hold for extraterrestrial influences.

Ignoring such factors will cause IPCC’s projections based on the “argument from ignorance” to fail.

• agreed, we definitely need to sort out the solar issue.

• The USGCRP/CCSP needs a solar cycle research program as big as the present carbon cycle program. NASA tried to get one off the ground a few years ago but it did not fly (haha).

• The issue of the proper use of the words prediction versus projection is entirely semantical but that does not mean it is trivial. It is actually very important. Projection is used to make clearer that what happens is up to us, hence not predictable. But this tends to suggest that these are not testable predictions, that is that the scientific method somehow does not apply to them. That is false. This is a major confusion in the debate.

But returning to the topic of this post, the logic of prediction is complex and difficult, as is the logic of modeling. Neither concept is at all clear, yet both are central to the debate.

• Ah but Judith, ‘Climate Forecast’ would be associated in the public mind with ‘Weather Forecast’, and we all know how unreliable those can be don’t we?

• yes there are two semantics issue here, one in terms of logic/science, and the other in terms of communicating to the public

• I contend that ‘climate forecast’ would communicate the quality of the output and the probability of the outcome to the public more accurately. ;)

• Ironically, regional climate forecasting is just what the modeling community is trying to sell as their next big program. People actually trust the weather forecasts quite a lot. (It is the first thing I check when I log on.) NOAA is developing a National Climate Service to parallel the NWS.

• Why “Projection” – instead of “Prediction”?

“Projection” is an officially “acceptable” way to continue deception after the public realized that the IPCC “Predictions” were wrong.

That is why “Global Warming” became “Global Climate Change.”

IPCC copied this deceptive word game from NASA.

In a 1983 peer-reviewed paper it was correctly PREDICTED that the Galileo Mission to Jupiter would find excess Xe-136 in Jupiter [“Solar abundance of the elements”, Meteoritics 18 (1983) 209-222]: http://tinyurl.com/224kz4

The Galileo Probe entered the Jovian atmosphere in Dec 1995, observed excess Xe-136, confirmed solar mass fractionation, and the Sun’s iron-rich interior [“Isotopic ratios in Jupiter confirm intra-solar diffusion”,
Meteoritics and Planetary Science 33, A97, abstract 5011 (1998)]:
http://www.lpi.usra.edu/meetings/metsoc98/pdf/5011.pdf

So a NASA spokesman went to the 1996 Lunar Science Conference to report that xenon isotope ratios in Jupiter were “Normal.”

When we finally acquired the xenon isotope data in Jan 1998, we found the following measured values and statistical errors for the Xe-136/Xe-134 ratio in Jupiter, in Air, in AVCC (average carbonaceous chondrites), and in the Solar Wind, respectively:

(1.04 +/- 0.06) in Jupiter
(0.85 +/- 0.01) in Air
(0.84 +/- 0.01) in AVCC
(0.80 +/- 0.01) in Solar Wind

Unless financial rewards are ended for deception, the scientific community will continue to be plagued by Climate-gates, NASA-gates, DOE-gates, etc.

Dr. J. Marvin Herndon describes the problem and the cure [“American science decline: The cause and cure”]: http://tinyurl.com/2uokfmk

With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo

28. Joe Lalonde

Terry,

You cannot recreate a future prediction on strictly temperature data.
Our recording time span is too limited. The recording stations are sparse in some areas and abundant in areas of the most population.
It does not include planetary changes; it just records localized events when the fluctuations are abnormal.
Does this make sense?

• TomFP

“You cannot recreate a future prediction on strictly temperature data.” Joe, he hasn’t tried to yet – give the man a chance!

• Joe Lalonde

Tom,
The system of science is too badly damaged from the current theories in science. The system needs to change drastically as we have already generated too many educated idiots who think all current science is correct and all scientists are above everyone else.

29. Where is Terry? Is he going to reply to any of our comments?

• I don’t know, that was certainly the idea. He said this timing was fine for publication, hopefully he shows up today..

• At this point it might make more sense to roll it into his second post.

30. David L. Hagen

Terry Oldberg
Thanks for resurrecting Ronald Christensen’s work.

On “Predictions”, may I encourage you to compare Christensen’s method with J. Scott Armstrong’s “Principles of Scientific Forecasting”

See: Principles of forecasting handbook , summarized in:
Standards and Practices for Forecasting

One hundred and thirty-nine principles are used to summarize knowledge about forecasting. They cover formulating a problem, obtaining information about it, selecting and applying methods, evaluating methods, and using forecasts. Each principle is described along with its purpose, the conditions under which it is relevant, and the strength and sources of evidence. A checklist of principles is provided to assist in auditing the forecasting process. An audit can help one to find ways to improve the forecasting process and to avoid legal liability for poor forecasting.

e.g. “1.3 Make sure forecasts are independent of politics.”

Are these heuristic? Or do they embody Christensen’s work? Or both?

A Public Policy Forecasting special interest group has been set up.

Kesten C. Green and J. Scott Armstrong applied these methods in:
GLOBAL WARMING: FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS ENERGY & ENVIRONMENT VOLUME 18 No. 7+8 2007

. . .We audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report to assess the extent to which they complied with forecasting principles. We found enough information to make judgments on 89 out of a total of 140 forecasting principles. The forecasting procedures that were described violated 72 principles. Many of the violations were, by themselves, critical.
The forecasts in the Report were not the outcome of scientific procedures. In effect, they were the opinions of scientists transformed by mathematics and obscured by complex writing. Research on forecasting has shown that experts’ predictions are not useful in situations involving uncertainty and complexity. We have been unable to identify any scientific forecasts of global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder. . . .

This does not engender confidence in current IPCC reports!

• Brian H

Getting down to where the rubber meets the road, the persistent feeling of being the victim of a “fast one” while reading IPCC output comes, I think, from the bit of legerdemain which absolves it from being formally testable by using the term “projections” instead of “predictions”, yet somehow investing a polling or averaging of projections with the respect and importance that a credible prediction would have. This is done semantically by sliding the terms and scenario descriptors used within the models into discussion of the real world climate.

So though they piously abjure any claim of having reality-based predictive models to go on, yet their remit is specifically to provide backup for a preferred suite of policy recommendations.

And therein lies the deal from the bottom of the deck. Climate Science = Prestidigitation. (Come to think of it, that would be a better word to describe the models’ output than ‘projections’! “The weighted average of our best models’ prestidigitations indicates that the forcing driver of climate change and disruption over the next century is anthropogenic CO2 output.” There! All fixed!)

31. John from CA

I’m attempting to follow the logic but have several questions based on the text.

Underlying premise:
1. A model is a procedure for making inferences. <– model = function of inferences to defined procedure

2. … the one correct inference is identified by the principles of reasoning. <– model inference = function of inferences to reasoned procedure / defined procedure

3. Logic is the science of these principles. <– what principles?

4. Non-contradiction is a principle of reasoning. <– please define the meaning of the double negative Non-Contradiction

5. Abstraction is a logical process. <– do you agree that it can also be the product of an illogical and or objective process?

6. … model is the set of this model’s dependent variables and the set of this model’s independent variables. <– circular statement, please define "model" and then please define what you mean by dependent variable and independent variable in this context.

7. Each such variable is an observable feature of nature or can be computed from one or more observable features. <– please define how you are using the term "nature"

8. A “state” is a description of an entity in the real world. <– state is a verb in this context? How can a verb be a description?

9. An “observed” event is a consequence from an observation that was made in nature. <– does this assume the observation is human and not mechanical?

10. A “prediction” is an extrapolation from the known condition of an event to the uncertain outcome of this event in which the outcome lies in the future of the condition. <– a “prediction” is the uncertain outcome of the model?

32. John from CA

Sorry,
My last post number 8 is a bit confusing.

8. A “state” is a description of an entity in the real world.

Did you mean a state equals the premise of an entity in the model?
e.g.: Socrates is mortal, since all men are mortal. <– "Socrates"= "mortal" and "male"

• John:

To clarify, a “state” is a description of something real, like your car. If your car is red then “red” is a state of your car.

33. John from CA

“In building climate models, climatologists generalize. Can the means by which they generalize be improved?”

To be honest, climate models are (or should be) comprised of Peer Reviewed modular sub-components that are commonly shared between all models.

The foundation from which a climate model stems is (should be) pure Peer Reviewed Science. The fun climate questions for research should inherit world-wide effort.

Related module versions within a Model could define your premise for “improvement” and the opportunity for interpolated abstraction as deviations from the control.

PS Abstraction isn’t about semantics — Abstraction is Semiotics, Perception, and a lot more.

• Derecho64

John, you have some laudable intentions, but reality often intrudes. Climate models are not readily plug-n-playable. Some aspects of them have been attempted to be modularized (see ESMF) but there are other aspects for which modularity would be extremely difficult, and would require a large increase in funding.

• John from CA

Derecho64,
If you take an honest look at what Terry is attempting to describe and the aspects of the logic, the realization is that Climate Science will never amount to anything unless it is Peer Reviewed and Modular. Makes sense.

;)

• Raving

Did someone say ‘Perception’ (and semiotics) ?

• Raving

Forgot to include ‘Cognition’ there too :-D

• John:
The kind of model you are describing is “reductionistic” in the sense of trying to reduce the phenomenon being modeled to cause and effect relationships. Regardless of whether the reduction is peer reviewed, there is no reason to believe this approach will be successful. To ensure success, climatology must take a holistic approach to model building.

34. Roger Andrews

To put this esoteric debate in terms that a simple-minded person like me can understand, would anyone care to hazard a guess as to what percentage of the IPCC’s climate predictions (or projections, or whatever they are) is based on fact, what percentage on inference and what percentage on speculation?

35. A Lacis

Logic, introspective philosophy, and rational reasoning are good. But there really is substitute for physics and mathematics.

• I assume you mean “no substitute”?

• A Lacis

Thanks for the correction. Sometimes the words you think just don’t make it into print.

• Thanks for taking the time to comment.

It may sound as though I’m engaging in empty philosophizing, but I’m not. There is a process by which theoretical results from physics and other sciences can be knitted together with observed events into the best possible model that can be built from the available information. This process has already been tried in meteorology, where it produced an astounding advance in the period over which surface temperatures and precipitation could be forecast.

This process is described in books and articles so dense with mathematical and logical ideas that few scientists have proved capable of understanding them. In this series of two essays I am attempting to describe the process in simple enough terms that climatologists can gain a degree of understanding and act on it. It happens that the route of easiest ascent up the learning curve passes through logic. Currently, climatological models are constructed illogically.

• Zajko

Could be no one is talking about substitution – ultimately we have to address philosophy and logic (is it true, does it make sense). This thread may not be the best example.

36. BLouis79

My take, from this and from what I have read about climate models (general circulation models), is that the process of model building fails to properly account for the available abstractions described by the fundamental laws of physics.

Climate science talks of radiative flux and air temperature when it should be talking about energy and mass and specific heat of the physical system which significantly includes land and water. Climate science should really understand how energy flows from the sun to the earth and how that energy is distributed amongst the masses of land and water and atmosphere resulting in temperature.

• Thanks for taking the time to comment. You can get an understanding of where this is going by reading Parts II and III.

37. This analysis is for a type of model that is wholly impractical as a weather or climate dynamics model. I note that John Nielsen-Gammon and Dan Hughes both expressed skepticism, which I share, that this has much relevance to our problem domain.

Here’s how I would explain it: the amount of information in the real world greatly exceeds the amount of information in the model. (You allude to this.) Even if we consider the real world as deterministic or nearly enough so, your test of correctness can never be satisfied, because an infinite number of real world configurations correspond to one model configuration. Those configurations diverge quickly into separate trajectories.

More to the point, though, you adopt no concept of physical relationships in your model. The likelihood that a purely statistical constraint will prove adequate, given the limitations in number and accuracy of observations, to constrain your model development is nil. You can’t actually avoid physical constraints and get anywhere at all.

• Michael Tobis:

I’m nearly 3 years delinquent in responding to your important and thoughtful comment. I was unaware of it until now and apologize for not responding earlier.

The type of model that I reference in the article has already been applied to mid- to long-range weather forecasting, with extremely great success. This success can be attributed to the use of logical principles based in information theory, rather than the method of heuristics, in selecting the inferences that will be made by the model that is constructed. Under the method of heuristics, an intuitive rule of thumb guides the selection of the inferences that will be made by the model from among the many possibilities. Currently, it is the method of heuristics that is used by climatologists in selecting the inferences that will be made by their models. As there are many heuristics, each selecting a different inference to be made, this method violates the law of non-contradiction. Non-contradiction is among the three classical laws of thought.

I would rephrase the first sentence of your second paragraph to read: “the information provided by a model is incomplete.” I’d go on to say that “to minimize the missing information is a logical principle.”

Finally, it is both possible and desirable for the model, as described by me, to extract information about the outcomes of events from natural laws. How this can be done is rather complicated. Colleagues of mine and I have done it.
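The principle of minimizing the missing information can be made concrete. In information theory, the missing information in a probability assignment is its Shannon entropy, and selecting the inference that minimizes missing information subject to the known constraints is the entropy-maximization rule: among the candidate inferences consistent with what is known, pick the one that assumes the least. The following is a minimal sketch of that selection step, not Oldberg's actual procedure; the candidate distributions and the helper name `missing_information` are illustrative assumptions:

```python
from math import log2

def missing_information(p):
    """Shannon entropy in bits: the information missing from a distribution."""
    return -sum(q * log2(q) for q in p if q > 0)

# Three candidate inferences, each assigning probabilities to the same
# three possible outcomes of an event.
candidates = {
    "biased":  [0.7, 0.2, 0.1],
    "uniform": [1/3, 1/3, 1/3],
    "certain": [1.0, 0.0, 0.0],
}

# Absent further constraints, the logical selection is the candidate
# with the most missing information, i.e. the one assuming the least.
best = max(candidates, key=lambda name: missing_information(candidates[name]))
print(best)  # → uniform (log2(3) ≈ 1.585 bits of missing information)
```

With real constraints (observed frequencies, conservation laws), the maximization runs over only those distributions satisfying the constraints, which is where physical knowledge enters.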

38. Newport Mac

I do love abstract art.

Is this a discussion regarding differential calculus and integral calculus? Calculus is a wonderful tool to determine meanders if the physics are properly defined in the variables. Once coded, it can define any river meander given any geographic location. Or can it? Rainfall input, etc. are inferred.

What is being presented reminds me of the doughnut theory related to 3D modeling and to a certain extent the need for Semiotics as an aspect of inferred logic.

Fun idea: a program that looks semiotically at the inference within each rule of an algorithm would likely deliver a Möbius band.

Sorry, having fun with this but there is something in the idea that also relates to the idea of abstraction using a chaos constant.

39. Newport Mac:

Thanks for sharing your thoughts. My article relates to the question of how best to select the inferences that are made by a scientific model of a physical system. Sometimes, selection of these inferences results in the incorporation of one or more differential or integral equations into the model. This equation is an “abstraction” from the real world in the sense of being removed from some of the details of it.

Abstraction in graphic art is similar to abstraction in scientific model building in the respect that the representation of a real object is removed from certain of the details. In model building, this can be accomplished through use of the logical connective OR in describing this object. Thus, for example, the representation of a person is removed from the gender differences of real people through use of the representation “male OR female.”

The “macrostate” of thermodynamics is a well-known example of an abstracted state. It is removed from the details that distinguish the “microstates.”
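The macrostate example can be made concrete with the same OR construction described above: a macrostate is the disjunction of the microstates it abstracts. A minimal sketch, assuming (my choice, not from the comment) three coin flips as the microstates and the head count as the abstracting variable:

```python
from itertools import product
from collections import defaultdict

# Microstates: every ordered sequence of three coin flips.
microstates = list(product("HT", repeat=3))  # 8 microstates in all

# Abstraction: a macrostate keeps only the head count and is removed
# from the detail of which flips landed heads.
macrostates = defaultdict(list)
for m in microstates:
    macrostates[m.count("H")].append(m)

# The macrostate "2 heads" is the disjunction HHT OR HTH OR THH.
print(sorted(macrostates))    # → [0, 1, 2, 3]
print(len(macrostates[2]))    # → 3 microstates collapse into one macrostate
```

The mapping is many-to-one, which is exactly why information is lost in the abstraction: knowing the macrostate does not recover the microstate.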