by Mike Zajko
This post addresses an issue that has been coming up recurrently since the start of this blog. I hope it might be a way to step back and reflect on the nature of science in general, as well as a place where we can think about the methods applied in climate science more specifically. I’ve broken the following down into sections that can be read together or individually. I’m hoping for some good discussion of these and additional approaches to the scientific method.
Summary: The scientific method, if presupposed as a core principle of science, deserves some close examination. Adherence to this method or deviation from it is often seen as a way of demarcating science, or distinguishing good science from bad. However, no agreed-upon formulation of the scientific method exists, and I argue it is more effective to consider science’s methods in terms of Hugh Gauch’s “general principles of scientific methodology”. This approach better reflects the actual practice of science, while staying attuned to the fundamental epistemological questions that scientific methodology should be designed to address. Climate science poses its own set of challenges for the design and evaluation of scientific methodology and it is worthwhile to consider how closely certain dimensions of climate science adhere to the general principles identified by Gauch, as well as other formulations. Ultimately, I am interested in the question of what it means to “know” scientifically, and how such knowledge is best obtained.
I’m a student of the sociology of science, which means that I am not a practicing scientist (unless you happen to be feeling particularly generous towards the social sciences) but do consider myself relatively informed in several of the fields relevant to the debate here. For the past two years my focus has been on climate science, particularly concerning demarcation practices, or how the borders of climate science are maintained and contested. As such I am not an expert on scientific methodology, and certainly welcome contributions from practicing scientists as well as the more philosophically-inclined. However, I do have a professional interest in definitions of science and their uses in the context of scientific controversy. It is in these circumstances that the scientific method is typically most forcefully articulated, and while I hope to eventually address other common definitions of science (as provided by Robert Merton, Thomas Kuhn, and the various movements Kuhn inspired) the scientific method seems like the best place to start.
The Myth of a Unified Science
There is a persistent idea that science is defined by its adherence to a certain method or form of reason which provides it with a special means of determining truth concerning the nature of reality. Great scientists such as Albert Einstein are sometimes cited as experts on this method, as masters of science’s winning formula. This myth is useful in a variety of ways, but has its limitations. For one, it presupposes a unity among the sciences which does not exist in practice. Typically, the method presented closely approximates the one generally used in experimental physics, and it is therefore no surprise that its cited spokespersons are often physicists. This would be fine if all sciences emulated the model of physics or were somehow reducible to it, but despite the efforts of many in the history of the sciences, this has not been achieved. Instead there exist multiple methodologies and strategies for evaluating evidence among the diverse sciences. Many of these share common elements, but often they do not. It is possible to select among them some form of the scientific method, and declare all other methodologies un-scientific. This has also been done in different ways throughout history to exclude various undesirable practices aspiring to scientific status. But any clear formulation of the scientific method would also necessarily exclude a wide range of practices that have produced a wealth of useful knowledge in their own right, and cannot easily be relegated to the bin of pseudo-science.
The problem lies in our desire to find some essential core in science that defines its nature. Adherence to the scientific method or a certain form of reasoning is not the only such essence which has been proposed, but it is the most commonly cited in public. Philosophers of science have produced countless books in their quest to define or demarcate science apart from other domains, and have been unable to reach much agreement on the answer. This should not be taken as support for an “anything goes” approach to science, where all methods (or their lack) are treated as equally valid; rather, it is to argue that any remotely accurate definition of scientific practice must somehow account for science’s disunity. It is surprising how rarely the scientific method in particular has been approached in this way, and I will be repeatedly referring to Hugh G. Gauch Jr.’s Scientific Method in Practice as a useful attempt by a practicing scientist to do just that.
For a lengthier elaboration and justification of the above, as well as a review of various “unifiers of science” I would recommend:
Ian Hacking (1996). The disunities of the sciences. In P. Galison & D. J. Stump (Eds.), The disunity of science: Boundaries, contexts, and power (pp. 37-74). Stanford, CA: Stanford University Press. [available on Google books]
The General Principles of Scientific Methodology
The conceptual shift away from a singular scientific method (usually expressed as a series of steps involving theory and hypothesis testing, connected by arrows) and towards some general principles of scientific methodology addresses many of the limitations of the myth of scientific unity given above. It allows us to go beyond asking whether a procedure has been conducted in accordance with the scientific method, and to reflect on how to best apply scientific principles to a given problem. This approach should also force us to confront some core epistemological questions concerning the nature of truth, which do not lend themselves to easy answers. Asking such questions is typically not part of a science education, and many philosophers of science have likewise had little to no scientific training. Hugh Gauch Jr.’s (2003) Scientific Method in Practice is an exception in this regard. Gauch is a common-sense realist who stridently defends the principle of scientific truth against postmodern critiques, but also gives due credit to the various disciplines which address as their object the nature of science – the philosophy, history, and sociology of science. As such he avoids the supposed pitfalls of a relativist view of truth, while presenting an account that largely accommodates the disunity of scientific practice. Although I do not agree with a number of Gauch’s positions, I would certainly recommend his book as a relatively uncontroversial take on a controversial topic, and will attempt to summarize the overall argument below. (All page citations to follow are from Gauch’s Scientific Method in Practice, 2003. Cambridge University Press).
As shown in Fig. 1 (Gauch 2003: 2), scientific methodologies for different disciplines are partly similar and partly different.
Science contains both specialized techniques which may be found only within certain scientific disciplines, as well as general methodological principles shared, to some extent, by all. Gauch does not intend to map these methods and classify the sciences, and Fig. 1 should suffice to convey his point. The general principles of scientific methodology include “hypothesis generation and testing, deductive and inductive logic, parsimony, and science’s presuppositions, domain, and limits” (19-20).
See Fig. 1.2 (Gauch 2003: 3). Science’s general principles also come in three kinds: some unique to science, some that are general principles of rationality, and others (like the principle of non-contradiction) derived from what Gauch calls the “wellsprings of common sense”. Reason is therefore not the exclusive domain of science; rather, “science is a form of rationality applied to physical objects, and science flourishes best when integrated with additional forms of rationality, including common sense and philosophy” (31). Gauch encourages all scientists to become acquainted with science’s philosophical foundations and critiques, if only so they avoid becoming unreflexive problem-solvers who are unable to address their critics.
The PEL Model of Full Disclosure
“Every conclusion of science, when fully disclosed, involves components of three kinds: presuppositions, evidence, and logic” (xv).
Gauch argues that all scientific arguments or conclusions conform to the PEL model given above, even if some conclusions leave their presuppositions undisclosed. In order to properly assess a scientific conclusion, these three components must be identified and recognized as part of an interrelated whole.
Scientists are often unaware of the presuppositions on which their work is based, and Gauch lists numerous varieties (112-155) including ontological presuppositions concerning the nature of reality, epistemological presuppositions of the reliability of sense data and human language to inform our knowledge of the world, logical presuppositions concerning the coherence of the world, and the applicability of inductive and deductive logic. Many of these can safely be considered as common sense, but they are not provable or testable. Such presuppositions also serve to limit the number of hypotheses under consideration to a finite roster of sensible or testable ones (129), since there always exist wild hypotheses which are effectively ruled out in practice. Therefore, while a set of hypotheses is never truly jointly exhaustive, we can treat it as such out of common sense considerations such as (to use a fairly uncontroversial example) “magic is not a valid explanation”.
Logic combines presuppositions and evidence as a crucial part of scientific argument, and while as far as I know, logic courses are not required for most science degree programs, scientists routinely make use of standard logical axioms and arguments (as well as logical fallacies) to make their case. Gauch reviews the fundamentals of several forms of logic, as well as common fallacies. The distinction between inductive and deductive logic is treated as fundamental, with induction acting as reasoning from data to support a model (with varying levels of support), and deduction commonly operating as reasoning from a model or set of premises to what would be expected in terms of observed data (where the truth of the premises guarantees true conclusions). Scientific arguments include both deductive and inductive elements, and “inductive problems often contain deductive subproblems” (191). Both deductive and inductive logic are further broken down into various types, and Gauch spends considerable time on probability theory and the Bayesian and frequentist paradigms in statistics. I think it might be a good idea to devote some time on this blog to inductive and statistical methods specifically, considering their strong role in many climate-related arguments (also including Christensen’s paradigm as championed by Terry Oldberg), but these are topics too dense to begin to address in this post.
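The contrast between the two statistical paradigms can be made concrete with a toy example. The sketch below is my own illustration, not an example from Gauch’s book (the coin-flip setup and all numbers are assumptions): the same evidence, 60 heads in 100 flips, is assessed first with a frequentist p-value against the fair-coin hypothesis, then with a Bayesian posterior over a small grid of candidate biases.

```python
from math import comb

# Toy data: 60 heads in 100 flips of a possibly biased coin.
heads, n = 60, 100

# Frequentist reading: p-value for the null hypothesis "the coin is fair",
# i.e. the probability of seeing 60 or more heads if the true bias is 0.5.
p_value = sum(comb(n, k) * 0.5**n for k in range(heads, n + 1))

# Bayesian reading: start from a uniform prior over a few candidate biases
# and update on the same evidence to obtain a posterior distribution.
grid = [0.4, 0.5, 0.6, 0.7]
prior = [1 / len(grid)] * len(grid)
likelihood = [comb(n, heads) * p**heads * (1 - p)**(n - heads) for p in grid]
unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]

print(f"Frequentist p-value against a fair coin: {p_value:.4f}")
for p, post in zip(grid, posterior):
    print(f"Bayesian P(bias = {p} | data) = {post:.3f}")
```

Both calculations reason inductively from data to hypotheses, but they answer different questions: the p-value measures how surprising the data would be under one fixed hypothesis, while the posterior distributes support across all the candidates under consideration.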
Parsimony, Domain and Limits
I’m going to briefly define what Gauch means by the above three terms, since they’re probably the least recognized of scientific methodology’s “general principles”.
Since there are always multiple theories that fit the data equally well, parsimony (sometimes identified as Ockham’s razor) becomes an essential and pervasive principle of scientific methodology, dictating that scientists choose the simplest theory that fits (other criteria to consider include “predictive accuracy, explanatory power, testability, fruitfulness in generating new insights and knowledge, coherence with other scientific and philosophical beliefs, and repeatability of results”). Numerous empirical examples are provided to demonstrate the crucial part parsimony has played in the history of science.
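One standard way to operationalize parsimony is an information criterion that penalizes free parameters. The sketch below is my own illustration, not an example from Gauch: it fits polynomials of increasing degree to noisy data generated from a straight line and scores each fit with BIC, so that the simplest adequately-fitting model should come out ahead.

```python
import numpy as np

# Synthetic data: a straight line plus Gaussian noise (all values here
# are illustrative assumptions).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

def bic(degree):
    """Bayesian information criterion: fit quality plus a complexity penalty."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = degree + 1  # number of fitted coefficients
    return x.size * np.log(rss / x.size) + k * np.log(x.size)

scores = {d: bic(d) for d in (1, 2, 3, 4, 5)}
best = min(scores, key=scores.get)
print("BIC by polynomial degree:", {d: round(s, 1) for d, s in scores.items()})
print("Preferred (most parsimonious adequate) degree:", best)
```

Higher-degree polynomials always reduce the raw residual error, so some explicit penalty is needed before “choose the simplest theory that fits” is more than a slogan; BIC is one conventional way of paying that bill, alongside the other criteria Gauch lists.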
Science’s domain and limits refer to what we can and can’t expect of science. Science cannot explain everything or provide its own ethical requirements, and it cannot prove the presuppositions on which it is based (376).
Falsification
Gauch doesn’t devote much of his book to falsification. However, the topic comes up often enough in the climate debate for me to spend some time on it here.
Falsification tends to be the “one bit of Popper” to which scientists are introduced and end up citing as epistemic justification. Those who cite Karl Popper are probably unaware of the radical implications of his work, his low standing among most philosophers of science (who find his solutions largely unconvincing), or the original philosophical problems that Popper felt compelled to address. The “one bit” which Popper popularized in his search for a means of demarcating science from non-science is the idea of falsification, or that “a proposed hypothesis [or theory] must make testable predictions that render the hypothesis falsifiable… a scientist should give his or her favored hypothesis a trial by fire, deliberately looking for potentially disconfirming instances” (103). Gauch thinks this is wholesome advice, but no more novel an insight than the modus tollens argument in classical logic. As for the practice of scientists seeking to disconfirm their own theories – he considers it part of honest scientific practice (although I would add that the history and sociology of science demonstrate this practice to be rather uncommon).
A recurrent misunderstanding of Popper is to assume that the outcome of a hypothesis test or an experiment designed to falsify a hypothesis could somehow confirm or support that hypothesis. This is precisely the sort of claim Popper wished to avoid, since he argued that all observations were theory-laden (requiring a theoretical basis with which to ask research questions, collect data, and interpret) and that theories could never be proven or verified. He was also firmly opposed to the suggestion that theories could be judged against one another by the weight of supporting evidence, although one might have a basis for preferring one theory over another based on its specification and the sorts of falsification tests it had passed.
However, Popper’s claim that theories could be falsified (and that through such falsification science could refute false theories) itself ran into serious difficulties and was significantly revised and weakened in Popper’s lifetime. First, all theories are underdetermined by data, that is to say “[f]or any given set of observations, it is always possible to construct infinitely many different and incompatible theories that will fit the data equally well” (83). However, according to similar logic, falsification is never conclusive either, an idea grasped by Imre Lakatos (84). Lakatos noted that many theories (to which we may add anthropogenic global warming [AGW]) do not lend themselves to making falsifiable claims or the sorts of “critical tests” that Popper was fond of using. Harry Collins made a related point in the 1970s as a sociologist, arguing that experimental results are never decisive in and of themselves (the argument applying both to replication as well as falsification). A claimed falsification of any theory might actually just be a falsification of one of its auxiliary hypotheses, such as those related to the observations being a valid measure of the main theory, or the instruments being calibrated and functioning properly. Lakatos prefers to think of evaluating theories based on comparisons among them of corroboration and effectiveness. As opposed to falsification, Gauch lists a number of lines of evidence including explanatory and predictive power, replication, increasing accuracy, and interlocking evidence as part of an argument that the sciences have an “objective grip on reality” (95-96).
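The logical skeleton of these two points can be written out explicitly (standard textbook schemas, not quotations from Gauch). Falsification is modus tollens; the auxiliary-hypothesis problem shows that in practice what gets refuted is a conjunction, leaving open which conjunct failed:

```latex
% Modus tollens: theory T entails observable prediction O; O fails; T falls.
\[
  T \rightarrow O, \quad \neg O \;\vdash\; \neg T
\]
% The Duhem--Quine complication: T only entails O together with auxiliary
% hypotheses A (instruments calibrated, observations valid), so a failed
% prediction refutes only the conjunction, not T itself:
\[
  (T \wedge A) \rightarrow O, \quad \neg O \;\vdash\; \neg (T \wedge A)
\]
```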
Meanwhile, falsification as part of hypothesis testing remains a powerful principle in scientific practice. Popper’s early “naïve falsificationism” is sometimes replaced by an updated “sophisticated” version, which addresses many of Popper’s critics but does not result in the same sorts of bold conclusions. Even when scientists do not explicitly set out to formally falsify hypotheses, we can often think of their work, in retrospect, as a form of falsification through hypothesis testing. Falsification, however, does not operate as decisively as Popper had hoped, and cannot differentiate science from non-science (Popper himself significantly revised the use of falsification as a demarcation criterion for science in his later years). Theories can never be falsified or verified definitively, but they can, ideally, be held accountable to evidence and argument.
Further reading: http://plato.stanford.edu/entries/popper/
The Methods of Climate Science
So what does all of this mean for climate science? What methods and applications of scientific principles are appropriate to answer the big questions of climate change? Unfortunately, climate science suffers from much the same sort of disunity as science in general. While it contains many specialized methods, these are the product of a very diverse set of scientific disciplines that developed in a “pre-AGW” world. I see its unification under a single form of appropriate logic as unlikely, because a research question such as “to what extent are human greenhouse gas emissions contributing to dangerous consequences?” immediately opens a whole host of complex sub-problems that would best be resolved using different methods. The use of climate models as evidence also leads to various questions regarding their epistemological standing (although I personally think such models can have their uses, as long as their limitations are recognized).
The IPCC has presented several lines of evidence along with the construction of expert consensus to make its case. Criticisms that the IPCC does not follow the scientific method are valid if this method is defined in its simplified elementary sense, because such a caricature automatically excludes the majority of what is conventionally accepted as scientific practice. However, we can also consider how well the IPCC’s case, as well as alternate conclusions put forward by its critics, conform to some of the principles of scientific methodology given above and elaborated in Gauch’s book. Again, while I do not consider Gauch’s account of scientific methodology as authoritative, it does open up some valuable avenues for discussion – in particular I’m interested in how we might consider various lines of corroborating and interlocking evidence as part of the same argument. Since climate science generally shies away from making predictions, the predictive power of AGW theory is hard to evaluate and many of its hypotheses or conclusions are difficult to test, but AGW theory does provide a degree of explanatory power that can be compared to other possible explanations for observed phenomena.
That being said, I do not think such a comparison or evaluation can ever “debunk” AGW or “settle” the science. Even Popper had a rather nuanced take of how scientific agreement results from testing, and science in general proceeds just fine with indefinite conclusions, entertaining multiple hypotheses simultaneously. If there’s anything we can learn from the philosophy of science, it’s that answers to even basic scientific questions do not come easily.
Please give a warm welcome to Climate Etc. regular Mike Zajko and his guest post. This is a topic that underlies many of the subjects that were debated on the uncertainty threads. I’ve read Popper and Kuhn and taken a freshman logic course; barely adequate, but I suspect that many scientists have even less background in the sociology and philosophy of science. Scientists learn some sort of scientific method by osmosis from the culture of the field; rarely is this reflected on by most scientists. Stimulated by some threads and comments over at collide-a-scape some months ago, I began reading much more on this general topic, and I view the issues that Mike raises as highly relevant to the debate about climate science.
And thank you Judith for hosting the post.
Relevant indeed (disclosure: I am a philosopher of science). Most of the debate is about evidence and related issues of argument, not about climate per se. However I have one major objection. Mike says “climate science generally shies away from making predictions.” If this were true we would not be here. The climate change debate is consuming millions of hours of people’s time every year precisely because of the prediction of dangerous, possibly catastrophic, warming. The prediction is uncertain, but it is certainly there.
I concur that catastrophic predictions are made, but are these predictions coming more from the media and non-science proponents of AGW, i.e. Al Gore, news media, entertainers, etc.?
I would say the predictions are largely made in the public realm by non-scientists. However, we can definitely find some cases of scientists (or representatives of scientific bodies) making irresponsible predictions, due to their understanding that this is what is going to cause us to “wake up” to what are otherwise fairly abstract facts. These predictions can be very risky, and can backfire. I think scientists in general recognize this difficulty and avoid such predictions, but the reputation of science can still suffer when others capitalize on its authority to make claims that do not come to pass.
The scientific literature is full of (1) model runs showing warming over future time frames and (2) impact studies looking at how this warming will affect sea level, ecological system, disease, and a host of other factors. How are these not predictions?
Alright, let me backtrack a bit. I was referring to various catastrophic predictions regarding extreme weather, ice sheet collapse, and so on. There’s support for such statements in the scientific literature, but it tends to be presented in fairly qualified terms, definitely more so than when these findings hit the popular press. GCM runs used by the IPCC are also explicitly not given as predictions, though are sometimes treated as such.
In my main post above what I was referring to is that AGW theory (taken in the broadest sense) makes little in the way of testable predictions. The fundamental prediction is that given continued emission of GHGs there will be an increase in temperature over the long term, with the effects of that increase being considerably less clear. Testing this prediction remains difficult, though there is certainly some evidence to support it.
However to add further, I would concede your point that various climate sciences make predictions all the time, even if they are not explicitly given as such – the issue can just be semantic. I probably should have put that paragraph in the post a little differently.
The CRU Team uses the term “scenarios” to describe runs with specific assumptions. But then they go on to AVERAGE the scenarios, and de facto present these as the best available guess about what is likely over timescales of up to 1 – 1½ centuries! This is prediction by any rational test.
Let us not engage in weasel-wording here. That the “scenarios” are in effect illustrations of the conclusions and assumptions of the model-maker is also a vital point to bear in mind; the models are not derived from first principles, and contain many WAGs about vital variables, “parametrized” so as to make the output seem plausible to the creator(s).
Disclosure. I am not a philosopher of science. I am not a psychologist.
Decades ago, I used to be what I would have called myself: A theoretical biologist who is interested in and models complex systems.
Since then I have come to understand and appreciate a great deal about things ‘perceptual’. These perceptual considerations are not so much ‘psychological’, nor ‘cognitive’. Rather, they are realizations and cognizance as to the process of forming and using description.
My appreciations are made of tangible nuts-and-bolts type stuff. They are considerations which track how a person’s focal awareness is seized and displaced. Follow the bouncing ball. I have a flair for picking out which balls bounce. I see the trajectory that those bouncing balls take as they move in time.
FOLLOWING A BOUNCING BALL …
(The momentary experience of reality over a brief interval of time – minutes to an hour in duration)
The Autistic Momentary Experience of Reality:
The NT (non-autistic) Momentary Experience of Reality:
As an NT (non autistic) person, I experience and work with fragments of reality that are limited, defective, vague and incomplete. Such fragments are isolated independent thoughts and experiences which are loosely coupled together in a great lifelong accumulating pool of all such experience.
The reason that I indulge myself with dubious heuristic gruel is because I have the NON-AUTISTIC ability to span the discontinuous void and assemble those turds into a string of pearls.
Autistic type thinkers do not span discontinuous voids. The reason they don’t do such is because the autistic method is to take care at the outset to create, establish and maintain a singular universal continuum or ‘totality’.
Autism exists within a continuous universal totality.
For the autistic type thinker, the isolated detached imperfect fragmentary experience is useless and meaningless. It is antagonistic to the autistic perspective. It is not the autistic cup-of-tea.
The autistic universe is timeless and universal. Approximations converge upon perfect truth.
The Neuro Typical (non autistic) is timely.
The NT’s experience of reality is a large ocean of limited, imperfect, incomplete, crude, defective fragments. It is a timely experience of reality. Time passes … reality changes as the NT person moves about the fragments. The nature of those fragments and the relation of those fragments, one-to-another, is inherently understood by the NT person.
Different strokes for different folks. …
For the present, the only game in town is the ‘universal timeless totality’ … a magnificent triumphant autistic accomplishment!
An alternate scheme for discontinuous + unknown agglomerates of fragments awaits introduction.
Damn the charlatan ecologists. The quislings sold out holistic process to gain credibility as being rational.
(Talking to myself. Need to escape this solitude … back another time)
If you have to make a prediction and you have most of society sympathetic to your expertise, supporting your credibility, bolstering your efforts and encouraging you to move forward, you make do with what you can. You formulate and frame the problem as best as it is in your ability to do so. You disregard the possibility of overextending yourself because a rational expert estimation is better than no opinion or uninformed inexpert conjecture.
I’ll back you up. I am strongly cognizant of what you do so well and far beyond my own individual ability.
Then again I also understand enough to appreciate that this is a complicated situation which is hard to posit in a “properly formed manner”.
I dislike that there are people who support and encourage your efforts because it lends credibility to their own world view. Either they don’t know, don’t care, don’t consider, or don’t believe you are overstepping your own cautious nature to provide such results.
It is not that they are evil or stupid or callous or ingenuous. Rather, you give credence and poignancy to their (not necessarily to be disregarded, nor dismissed) personal opinion, sensibility and expertise.
When I started out at university, I really really wanted to study physics. As with many I felt very insecure and suffered from ‘physics envy’. For some people, aptitude and ease of effort in physics (and mathematics) comes far more easily and [progressively + consequentially + effectively] than it does to others.
Personally, I ended up discovering and falling into the discipline of ‘Theoretical Biology’. In retrospect, I recognized that many theoretical biologists were failed physicists. They lacked the specific skill set that was necessary to be proficient in physics. As a theoretical biologist I felt very awkward. I possess some of the elements that are necessary to do physics but I was not very good at it. Nevertheless, speaking in a relative sense, I was more adept than my colleagues to work with physical/mathematical concepts. I was keenly aware of my own limitations and ineptitude in a topic for which I was recognized and acknowledged as possessing expertise.
I have my own strengths and weaknesses. I am aware of what I know, what I do and what I myself understand well. I am confident and arrogant in that regard.
One day at a meeting partly made up of theoretical physicists with an interest in ‘Theoretical Biology’ a noted physicist took me aside. He told me that he admired what I did. He told me that he could not possibly bring himself to consider the open, ill defined, unconstructable challenges that I was casually prepared to assume.
Was I being foolish? I don’t think so. I recognized, realized and finally appreciated that I could work with and be competent in problems where more rationally adept practitioners were poorly equipped to proceed in an efficient manner.
Years later, in retrospect, I came to understand that physicists themselves suffered the most from the ravages of “physics envy”. I recall being at a conference where another graduate student commented on the woes of having Feigenbaum ( … or whoever (?) I’ve forgotten … ) as a thesis supervisor. That person said (paraphrased) “If (Feigenbaum?) took an interest in your project, it would be a disaster! Either he would solve it in 5 minutes or else it was intractable!!” Those hardest hit with insecurity regarding the lack of aptitude are the physicists themselves.
It took me decades and decades to realize, trust and respect that there are some things that I do well and have good reason to feel confident about. I know to be confident in regard to what I am spewing at this very moment. A lifetime of experience has taught me to trust that aspect of myself.
I have immense appreciative respectful regard for theoreticians and simulation modelers. My own ability is to recognize, appreciate and respect their ability. I also recognize, appreciate and respect the skills, abilities and expertise of ecologists, environmentalists and evolutionary biologists. I understand and appreciate their expertise and point of view. I recognize and empathize with their desire and decision to compromise their position to gain credibility for what they consider to be an urgent and worthy emergency predicament. If I didn’t respect my former PhD supervisor, Prof. R. Hansell, I would not impulsively slime him. Conversely, I’m not sure that at the immediate present, an alternative path is accessible to me. Such a decision speaks more to his worthiness than my own ineptitude.
My job is to do the best that I can with what I know and then disengage and walk away.
I am struggling to disengage, walk away and accommodate the consequences of doing such. I continue to do so. That is my responsibility and what I must do at this moment.
I care about super rational people (autistic). I recognize, value and appreciate what they are doing. It hurts me to see them encouraged for opportunistic motive. I understand and empathize with why this is occurring.
Speaking personally, I would like to see development beyond the rational autistic perspective. Humanity is severely hamstrung by the inability to accomplish such. There is more to nature than strict ‘rationality’. Perhaps unreasonably, egotistically, ignorantly and stupidly, I perceive a route out of such subjective ineptitude and entombment.
I declare and acknowledge that I am a raving loon who is unskilled, ignorant and impatient.
I do what I can and screw the apologies. And vanity. I’ve paid my dues and it is the end of the line. I have already lost all that I imagined I possessed.
Finally, please ignore the rhetoric in all this should such exist. I attempt to express myself as best as I can. Caveat emptor. End of story.
Sincerely and cordially,
David B. Brown (I henceforth revert back to “Raving”)
Please see my comment below. The IPCC adamantly opposes the implication that its models make predictions. According to the IPCC, they make “projections.” A “projection” differs from a “prediction” in that the former does not support the falsifiability of claims while the latter does.
Yes, I can see that and agree with that.
Also had a quick look at your web site. Interesting stuff.
I could follow your abstract and appreciated what you touched upon in your development. Entropy and the statistics of ensembles are important concepts.
‘Knowledge’ (as information) is especially important. It’s a pity that the first usage of ‘information’ was in electrical engineering. ‘Information’ in the role it plays in biological processes stands in stark complementary contrast to ‘information’ as meant in the context of entropy.
(More comments to follow.)
Use and misuse of the scientific method is indeed the core of the debate.
Thanks for addressing this subject.
I disagree completely. Climate research is just as good as any other discipline’s research (including the bad apples). The debate is about how to interpret the results from all this research, thousands of journal articles a year, which is not about the scientific method at all.
People debating AGW, as on this blog, are not doing science, they are debating science. There are plenty of fallacies to go around in the debate, but these are not violations of scientific method because the activity involved is not science, it is debate. The basic fact is that reasonable people of good will can look at the same evidence and draw opposite conclusions. That is not a misuse of scientific method.
No, David. If they did the components to the best of available art and ability, I’d go partway along with you. But they not only do not, they vociferously and adamantly refuse to collaborate with specialists in numerous sub-disciplines essential to their task: physics, statistics, modelling, program design and coding, etc. They insist on strictly taking the DIY route, and the results, according to those who know better, blow big time.
How can you assemble ‘science’ out of that?
Duns Scotus is my hero.
In my book methodology counts for nothing without a result. Give me a reproducible result and I don’t give a twopence for methodology.
running a model twice and getting the same answer? Right answer, wrong reason?
Models are just numerology, not scientific discoveries.
yeah sure, running calculations based on equations and laws developed to describe the world cannot have any possible scientific value…
Time-varying 3-D large-scale (high Reynolds number) N-S equations with limited-resolution initial and boundary conditions are not solvable exactly, and as time marches on they become impossible to solve numerically (these are nonlinear complex equations and in general become chaotic). Add the unknown physics of clouds, aerosols, large-scale ocean currents, and possibly other factors (volcanoes, solar variation affecting cosmic rays, etc.), and you get GIGO. Models for this, at most, are supposed to show the relative behavior of different parameters, but they depend totally on a reasonable physical representation of the items mentioned (even if exact details are not known), and that has not yet been demonstrated, and may never be.
You are describing important issues of uncertainty, including a likely chaotic limit to predictability. These are good reasons not to take models too seriously, but they are not reasons not to build and study models. Model building is now a central part of all the sciences. Just look at the job ads in Science for “computational” scientists.
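The chaotic limit to predictability mentioned in this exchange can be illustrated with a minimal sketch. This uses the Lorenz-63 system as a toy stand-in for atmospheric dynamics (it is not taken from any actual GCM, and the step size and perturbation are chosen only for illustration): two trajectories started one part in a billion apart end up completely decorrelated.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one explicit-Euler step of the Lorenz-63 system
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(a, b):
    # Euclidean distance between two states
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)       # perturbed by one part in a billion

seps = [separation(a, b)]
for step in range(1, 10001):      # 50 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        seps.append(separation(a, b))
        print(f"t = {step * 0.005:4.0f}  separation = {seps[-1]:.3e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor, which is exactly why a deterministic model can still be unpredictable beyond some horizon.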
Good old Wittgenstein … “What we cannot speak about we must pass over in silence.”
You are referring to the ‘Demarcation problem”.
“Demarcation problem” = “chaotic limit to predictability”
“Demarcation problem” = “No words, no speak”
Another “Demarcation problem” = Sapir-Whorf hypothesis
Worthless blundering raving can disassemble the postulated limitation.
1) “What we cannot speak about we must pass over in silence.”
2) Fortunately, if a person happens to be a blundering blathering raving loon it is utterly feasible to stumble and lurch back and forth irregularly and imprecisely across the precise phase transition between coherence-and-chaos or between signal-and-noise-floor.
3) By the means of such brutal inelegant meander it is utterly feasible OVER TIME & IN A NON-AUTISTIC MANNER to construct new and novel words, processes, landmarks, and not to mention foolish fanciful and inelegant abstracts which subsequently permit access to heretofore unspeakable places.
Wittgenstein is wrong. The theory of ‘linguistic relativity’ is wrong.
When it comes to uncertainty, error and blunder the sky is the limit and it is open ended. One need only construct beyond the discerned perimeter. One need only set sail and steer a course by dead reckoning (extrapolation of prediction) beyond the perceived limit of discernibility.
There is no fundamental impediment. Sorry.
Of course those who have strong ecological, evolutionary and environmental sensibilities already know about such reality inherently. Regrettably they choose to debase themselves and swear allegiance to rational analysis because it suits their immediate interests.
Fragmented, limited, labile, localized, distributed reality which is experienced over a prolonged accumulating interval of exposure? … Autistic-type thinkers have no knowledge of, nor interest in, such multi-foci, discontinuous, temporally extended experience. The autistic’s lifelong obsession, knowledge, experience and aptitude is for a timeless, universal, certain totality.
If it isn’t science, it’s impulsive, erroneous, unreliable pseudoscientific crappola, … right?
Yes maybe it is so. It is also, nevertheless how the irrational heuristic dumb majority understands and perceives reality.
Each way of seeing things has advantages and limitations. There is no single best method. Every method has strengths and weaknesses.
Currently autism scores 100%. The rest of humanity scores 0%.
Damn my former PhD supervisor for deserting and marooning me. I cease to care. I’ll happily drop dead or go to hell right now.
I know nothing. He’s but one of numerous quisling sell out sensibilities.
I do what I can. It means jack squat. No problem. End of story, albeit from a personal perspective I say what I must (i.e. I have little choice but to express what I feel compelled to say).
PHILOSOPHY = An autistic universe.
Up to this point, as best as I know in my own lazy, subjective personal ignorance, that is all there is to “know”. Perhaps that is all there is that is recognized as being credible.
I love autistic people. I am amazed and in awe of all that they have collectively accomplished in aggregate. My desire has been to take the development of autistic understanding beyond the realm of their own experience and expertise.
I do that now alone, marginalized and in a solitary manner. That’s good enough. Onwards to my next posting … :-)
You wish to talk of philosophy and the scientific method? You had better acknowledge, accept, understand and respect this thing called ‘autism’, because the autistic perspective is predominantly all there is in regard to objectivity, rationality and the scientific method.
Does another reality exist? You bet it does! Regrettably humanity has yet to move beyond the stage of recognizing, appreciating, understanding and respecting the predominant paradigm.
They are good reasons to build very limited models with clearly advertised boundaries of ignorance. Ignorance, you know, is not simply reducible to some numerically estimated level of “uncertainty”, and its consequences, when papered-over in a model or theory, are drastic. Generally fatal.
The problem with ‘ignorance’ is that it is very forgettable.
No, seriously. …
Well they design airplanes and rockets with models, but they test them rather rigorously. Surprises are still frequent, by the way.
A telling comparison; we are being urged to refuel and “launch” the planet’s economy, according to never-tested untestable designs, in a direction with marginal prospect of improving on or avoiding events which are highly uncertain.
This is engineering at the level of pre-teen bottle rockets.
Thanks, but no thanks.
‘running calculations based on equations and laws developed to describe the world cannot have any possible scientific value…’
Exactly! They have no value until they make some predictions that are verified by experiment/observation. A technique that seems to be little used (if not completely unknown) in climatology.
Before one gets into all that complexity, what is needed is solid proof, tested under laboratory conditions, that CO2 at its concentration (0.0386%) can do what the IPCC claims it does, to the extent claimed.
Then repeat the test with the other atmospheric gases, including water vapour (3-4%). It is well known that water vapour is also a greenhouse gas: a small rise in natural warming increases evaporation, thus amplifying the GW through positive feedback. Water vapour above a certain concentration will increase cloud formation, increasing albedo and reducing GW through a negative feedback. Water vapour thus has a unique self-regulating bidirectional feedback. All this can be tested in laboratory conditions along the lines of the Svensmark – CERN CLOUD experiment.
Next step is to find if there is an initial natural force initiating periodic changes in either warming or cooling direction, correlating to a wider regional or the global temperature.
I think that the geomagnetic field (GMF) might be, if not a good candidate, then a good proxy, which, by way of a hobby, I’ve been looking into for the last 12-14 months.
(see also my previous post )
Climate models do not solve equations representing physical laws.
How often has one to repeat it?
They do not solve these equations and they cannot do it!
Not yesterday, not today, not tomorrow.
Never ever. It is a complete impossibility, because the low resolution simply prevents numerically solving anything more complicated than a straight line.
So no, it has no scientific value, because a set of algebraic equations on a 100 km grid can’t be shown to converge to anything that could be even approximately a solution of the equations giving the dynamics.
The best one could hope for, is that the video game conserves energy and momentum but even that is subject to doubts.
I am still impatiently waiting for a mathematical proof that the GCMs converge, in some exotic sense, to the states of the system constrained by real physics, e.g. Navier-Stokes.
But I won’t hold my breath, it doesn’t seem to interest many people and they seem to prefer just handwaving.
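The resolution argument in this comment can be illustrated with the standard aliasing example (a generic sketch, not tied to any particular GCM; the grid spacing and wavelengths are chosen only for illustration): on a 100 km grid, a 150 km wave is numerically indistinguishable from an inverted 300 km wave, so features finer than twice the grid spacing cannot even be represented, let alone solved.

```python
import math

grid_km = 100.0
xs = [i * grid_km for i in range(12)]   # grid points along a 1200 km line

# a 150 km wave, finer than this grid can resolve (the Nyquist limit is 200 km)
wave_150 = [math.sin(2 * math.pi * x / 150) for x in xs]

# a completely different, well-resolved 300 km wave with opposite sign
alias_300 = [-math.sin(2 * math.pi * x / 300) for x in xs]

# on this grid the two waves are numerically identical: the model cannot tell them apart
identical = all(abs(a - b) < 1e-9 for a, b in zip(wave_150, alias_300))
print(identical)  # prints True
```

This is the sense in which sub-grid dynamics are not "solved" on a coarse grid; they must instead be parameterized, which is a modeling choice rather than a numerical solution.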
yeah sure, running calculations based on equations and laws developed to describe the world cannot have any possible scientific value…
You mean incomplete equations and an incomplete understanding of how those laws work in the climate, including chaos theory? You are getting all those perfect in those models? Then how come, when one part of a climate model matches reality, another part is forced not to?
“Now is my way clear, now is the meaning plain: Temptation shall not come in this kind again. The last temptation is the greatest treason: To do the right deed for the wrong reason.”
“The martyrdom of Thomas Becket was a martyrdom which he had repeatedly gone out of his way to seek…one cannot but feel sympathy towards Henry”
It suggested that Henry got off lightly because Becket … uhm … ‘deserved what he had coming to him’. The point here being that the consensus of public opinion ran counter to logical reasoning.
Maybe Becket had framed a problem (or few dozen) badly?
While improved reproducibility for an individual method is desirable, the approach you condone with the above statement completely ignores the possibility of systematic errors.
Systematic errors are present in all measurement systems and models. They can be difficult to root out, and comparison with a different method/model isn’t a surefire way to rule them out, as the two can share a systematic problem, or have two (or more) different systematic errors that induce roughly the same amount of error into the final value, thus giving the appearance of agreement.
Thus, methodology can be very important, and just because a method gives the right answer doesn’t mean it’s the right method or even valid at all.
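The agreement-without-accuracy failure mode described in this comment is easy to simulate (a toy sketch with invented numbers, nothing measured): two "methods" that share a common bias agree with each other closely while both missing the true value.

```python
import random

random.seed(1)
truth = 20.0          # the true value of the quantity being measured (invented)
shared_bias = 1.5     # a systematic error common to both methods (invented)

def measure(noise_sd):
    # each measurement = truth + shared systematic bias + method-specific noise
    return truth + shared_bias + random.gauss(0, noise_sd)

method_a = [measure(0.1) for _ in range(1000)]
method_b = [measure(0.2) for _ in range(1000)]

mean_a = sum(method_a) / len(method_a)
mean_b = sum(method_b) / len(method_b)
print(f"method A: {mean_a:.2f}, method B: {mean_b:.2f}, truth: {truth:.2f}")
```

The two methods "validate" each other to within hundredths, yet both are off by the full bias, which is why agreement between independent methods is necessary but not sufficient evidence of correctness.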
vukcevic is right, that’s why the source code of GISTEMP and HadCRUT were irrelevant all along.
Including the details of all the fudge factors? Riiiigggghht.
vuk, did you mean to say “Give me a reproducible result”? Surely you meant “give me a reproducible experiment leading to a physically observed result…”? As JC points out, one of the problems with models is that their runs are axiomatically “reproducible”. It’s only their congruence with the observed world that gives them their value – or want of it.
Give me an experiment without reproducible results, and I don’t give a twopence for the methodology.
Unless it’s the Journal of Irreproducible Results. ;)
In the previous thread, commenter ‘snide’ made a perceptive observation (Nov 9 @ 3:33pm) —
For ‘snide,’ the question comes down to, “Can I trust the scientists?” (His/her answer is ‘yes.’) Or, for the purposes of this post, “Can I trust that climatologists are correctly engaging in the scientific method (however defined)?”
For most citizens, and many citizen-scientists, this may be the most important question.
A repeated theme in the comments contributed by (seemingly) scientifically-literate “skeptics” is the recounting of episodes where climate scientists exhibited behavior that did not appear to be consistent with the application of the scientific method.
This set off a common-sense train of logic:
— In the narrow scientific issue at hand, “A” does not seem to be a trustworthy reporter.
— Yet “A” seems to be supported by the field’s consensus in his view of this narrow issue.
— “A” also makes broad assertions about the strength of the science behind AGW.
— “A” is also supported by the field’s consensus in this broader issue.
— Based on my understanding of the narrow issue, I withdraw trust from “A’s” view of that narrow issue, and also from A’s supporters in the field.
— Based on my new view of “A” and his supporters, I no longer trust their broad assertions about the science of AGW.
Is this a common story of how “interested party” turns into “skeptic”? I think so.
Is it correct to move from opinions about a narrow issue to doubts about the field of climatology as a whole? Can and should it be applied to other areas, e.g. medicine or structural engineering? If it is unwarranted, is skepticism towards “scientific expertise” on the part of Citizens ever warranted?
My skepticism isn’t born out of a single ‘bad encounter’ extrapolated to a general mistrust, as your caricature portrays; it follows long examinations of area after area of climate science, and the discovery of serious flaws in all of them, some so serious that they are fatal to the consensus enterprise of validating the AGW hypothesis.
Medicine and structural engineering have a longer track record, better oversight mechanisms, legal frameworks, formalised training practices, better record-keeping systems, less of an agenda, and more of a clue about what they are talking about.
And, as has recently become a hot topic in the field, fewer than 10% of medical tests and experiments performed are reliably tested and reproduced, and even when disproven, attractive and “consensus” conclusions and procedures continue to be used for up to a decade or more.
It is somehow taken as a given that all journal-published hypotheses are then diligently reproduced and verified, and the results given equal or adequate play. This appears to be false, in all sciences.
The consequences are very serious.
Is this related to your comment? Lies, Damned Lies, and Medical Science
I found that one to be most interesting.
Or are you referring to something else?
Thanks for a very thought-provoking comment. I was happily agreeing with your lucid line of reasoning until the punchline “…is skepticism towards ‘scientific expertise’ on the part of Citizens ever warranted?”, which stopped me cold. (One can always shout Yes!, but then under what circumstances?)
So: How do we make judgments about complex and important issues in which we are not expert?
Well, we do it all the time by not unreasonable but quite fallible means: preconceptions, confidence in our own incomplete knowledge, limited personal experience, reliance on a trusted friend or source, perceived consequences of a given conclusion. Consultation with multiple sources of responsible journalism – does anyone remember that? And reliance on experts.
One might suppose that removing oneself from the ‘Citizen’ group to the ‘expert’ group, difficult as that might be, would be the best way to go. But then along comes tallbloke, who claims to have done that, and concludes something quite different from almost all the other ‘experts’. I think he’s blowing smoke, but I don’t know.
Perhaps the best we can do is learn as much as we can and keep our antenna out for anything that disturbs our tentative conclusions, being honest with ourselves about how we’ve reached those conclusions. And proceed in the face of uncertainty, while minding the consequences of being wrong. Which, of course, involves another round of ‘judgments about complex and important issues in which we are not expert’. Ha.
I’m going to feed the chickens.
I am, like tallbloke, also a very qualified person to examine the AGW issue (ScD in fluid mechanics and heat transfer, and many years and numerous awards). I even went in accepting the IPCC position. I then read all that I could on the subject and concluded it was a terrible mistake. I know of many Physicists and Engineers who had the same experience. I think the loud clamor of so called experts (who had a head start), along with politicians and the news media bias encouraged most “skeptical scientists” to keep quiet (those who tried to comment were called names or ignored). At some point they could not keep out of the issue and I think there are now almost as many “skeptical scientists” as those scientists pushing the CAGW issue. This does not mean we totally reject AGW, just that we think the human effect is small and not a threat. See my analysis:
Thanks for the link to your analysis, which I took the time to read thoroughly last night. I appreciate that you lay out your objections clearly and (with some exceptions) in a well-referenced manner that allows one to follow up on your ideas. I was, in fact, introduced to some work with which I was unfamiliar. Nevertheless, I find your conclusions far from compelling.
This is not the place for a detailed ‘rebuttal’, but among my criticisms are that you greatly understate the case for AGW by a selective choice of observational evidence (e.g., ‘These two pieces of information are the basis for the present “Anthropogenic Global Warming issue”’); the neglect of important physical arguments; a narrow reading of the paleo and ocean acidification literatures; and misunderstandings of the import of modeling efforts and other issues (e.g., the stomatal frequency analyses in relation to the CO2-temperature record).
Nor do you make the slightest attempt to account for specific studies that contradict your selected ones. Have you ever tested your conclusions by presenting them to an author with whom you disagree? You are a pro, you know how to do this.
I keep hearing about all these scientists who, like yourself, are skeptical of ‘mainstream’ conclusions, but I wonder where they are. They are not publishing papers, they do not present papers at meetings, they are invisible except here on the web, talking to each other and arguing with the likes of me. To be sure, there are plenty of papers being published which examine discrepancies, improve or sometimes replace earlier results, etc., as in the normal course of research. But, with perhaps a very few exceptions, they do not challenge AGW in any fundamental way. Neither does your analysis.
So, where do the claims of broad consensus originate? Is it all bluff and bluster, or have that many scientists been hoodwinked — or are many just too timid to speak up? Or is there something else at work?
(Or, are those who support the consensus ‘right’?)
I have worked in many organizations as a software developer, and know all too well how people know the truth yet dare not speak it.
As an aside Bruce Webster’s observations on an observed “The Thermocline of Truth” in IT organizations may be worth a read. I’m not quite sure how this effect might apply in the climate science field with regards to the stated consensus.
Skepticism is always an option, but I’ll be damned if I can work out the radiation calculations.
Bertrand Russell (Problems of Philosophy?) had the following to say, closely paraphrased:
“When the experts are in agreement, it is intellectually unsafe to be certain of a contrary opinion.
When the experts disagree, it is intellectually unsafe to be certain of any opinion.”
AMac – I think the logical course you describe is known to logicians as “falsus in uno, falsus in omnibus”. It provides a reason for doubt, but doesn’t of itself prove or disprove anything. That doesn’t make it illogical, or of no value in understanding the debate. Indeed, as you say, it is one of the logical tools available to anyone who can’t follow the science itself – the vast majority of us – to distinguish between good and bad scientists.
I became a sceptic when I found that I had a bit of free time and had kept hearing that the ‘Science is Settled’. Being a bit of a scientist a long time ago, I decided to spend a little time investigating…as it seemed like a fairly ambitious claim.
I remembered from my academic days just how hard it was to really understand the chemistry of the high atmosphere and to build mathematical models of it. So that somebody could now confidently predict exactly what would happen to the whole Earth’s atmosphere 50 or 100 years from now with exact predictions of temperature and sea level and the number of hurricanes was quite mind-boggling. Obviously there had been some major theoretical advances in the last 30 years that I knew nothing about, and was keen to learn.
So I went off to learn. The initial encounters with the gatekeepers of the sacred flame at the Guardian were not a good omen, and I was banned for asking once too often why anybody thought that – in maritime cities with big tidal variations – a 3 ft sea-level rise in 100 years would be a big problem, not a minor annoyance… Then I looked for the evidence – not just the theory – that CO2 causes warming. All I could find was the famous graph with the CO2 lagging temperature rises by 800 years. Not good evidence at all… it actually leads me to think the opposite of what people tell me it does. Tried RC, but after two brushes with Gav’s full-on unpleasantness I gave up. Not a guy who suffers from a lot of self-doubt… nor one who wishes to win friends and influence people :-(
But I kind of assumed that there must still be compelling experimental evidence or some giant theoretical breakthrough. Kept on looking…didn’t find one.
I guess I am not alone in that Climategate and its aftermath was a watershed for me. It is immaterial how the mails were obtained. The mails showed that the senior people in the field were acting with the morals of a street fighter and ‘professional standards’ that really weren’t fit for purpose.
In real life I have managed a lot of IT shops, and understand the primacy of data. And of keeping it safe and secure, backed up in multiple places and with reliable metadata. And yet I saw and heard the Director of CRU admit (with little sense of shame) that he hadn’t kept a lot of stuff, that they may have lost it in an office move, and that anyway it was his data and he wasn’t going to let anyone else see it. Very very very bad! Harry_Read_Me showed that they had very little clue at all about what data they had, what it meant or how it had been processed. In many commercial environments this could lead to jail time, not a shy shrug in front of a Parliamentary committee.
So I was just about to conclude that there really was no sound foundation to AGW – if the raw data was so badly organised, what chance was there of drawing any sound conclusions at all? – when I came across the Hockey Stick. If anybody deliberately set out to act in a way that cast doubt upon their own work, they couldn’t do better than to emulate MB and H. Their documented behaviour comes straight from episodes of investigative journalism where our gallant reporter confronts the dodgy geezer who has defrauded the old ladies and brings him to task.
And then I read about the theory of ‘teleconnections’ among trees: the idea that some trees in widely differing parts of the world respond to the same long-term global climate changes, but are immune from local weather and may show a completely different response than a similar tree a few feet away.
Once I had picked myself up from the floor, dabbed my eyes, dried my trousers where the uncontrollable laughter had caused a minor accident, I concluded that though there might be some areas of AGW theory that are soundly based on experiment and observation, most of it is complete bollocks.
And that is why I became a sceptic.
They don’t pretend to be able to tell exactly what is going to happen. You are putting words in their mouths.
You do the same with the temperature proxies. Please try to understand what they are saying before you criticise them.
OK – maybe I’m just too stupid to understand. Or maybe I don’t find that Mikey passed ‘Writing for Understanding’ 101.
Please explain teleconnections to me in a way that I do understand. You clearly must understand it yourself in great detail as you are so certain that I have it wrong.
‘Once upon a time there was a tree standing on a tree line in Siberia…..
Over to you
The ‘teleconnection’ is the global temperature. While local temperatures do rise and fall independently, when the global temperature rises, most local temperatures will rise. If the sun were to increase its energy output by 50%, would the global mean temperature rise, and most likely all of the local temperatures? The scientists are looking at ‘forcings’ upon the whole system.
And the telecommunicated signal has unique, say, harmonics and overtones such that one can actually discriminate it from the noise? (And lay off with the 50% increase in insolation comparison; quantities matter, and an overwhelming change in baseline is not in the slightest way analogous to what is being discussed here.)
I’m guessing it has a signal that goes up and down with the global temperature. The 50% is just to state the principle: there is a global mean temperature, and the proxies are responding to it as well as to the local variations, just as thermometers have been recording the present-day temperature trends. The proxies are not as accurate as thermometers. The NAS agreed that the temperature now is most likely warmer than it has been for the past millennium.
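The idea debated here, that proxies track a shared global signal through heavy local noise, can be sketched with synthetic data (all numbers invented for illustration; this is the general signal-averaging argument, not any actual proxy reconstruction): individually each proxy correlates only weakly with the global trend, but averaging many of them recovers it.

```python
import random

random.seed(0)
years = 100
global_temp = [0.01 * t for t in range(years)]   # a slow synthetic global trend

def proxy():
    # each proxy tracks the global signal plus large local noise
    return [g + random.gauss(0, 0.5) for g in global_temp]

proxies = [proxy() for _ in range(50)]
stack = [sum(p[t] for p in proxies) / len(proxies) for t in range(years)]

def corr(x, y):
    # Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"single proxy vs global trend: r = {corr(proxies[0], global_temp):.2f}")
print(f"50-proxy stack vs global trend: r = {corr(stack, global_temp):.2f}")
```

The stack correlates far better than any single series because the independent local noise averages out. Whether real tree-ring noise is actually independent, rather than sharing its own systematic signal, is of course exactly what the argument above is about.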
excellent post. I am sure you echo the sentiments of many a non-believer
Another way of looking at this is to ask the question ‘is this the best that climate science can do?’ If it is, then they can count me out.
The reason I visit this blog (and I hope this is also representative of many non-believers) is that I think climate science can do better and that is the basic goal that Judith has set herself on this blog.
I would really love to know just what the strength of the AGW hypothesis is without all the bollocks (as you so eloquently put it)
Actually, it can’t. As others have described above, the non-linear variability of the system is such as to overwhelm any conceivable “advances” in physics and modelling. Honest. The best you can get out of chaos theory is low-to-moderate likelihood ballpark clumpings of possible outcomes.
The above is in ref. to ” I think climate science can do better”.
They aren’t predicting the weather, or even the local climate, just the global climate, which is predictable to an extent, since it is a physical system responding to known physical inputs with known physical responses. To date, the global climate has been responding as predicted, within the error margins.
Thank you. I needed that.
As with others, it was not a single incident that started my skepticism.
– too many catastrophic claims
– spin by non-science based politicians
– focus on a single factor for climate change (CO2)
– results that didn’t mathematically add up (quoting doubling for CO2 vs. temperature while failing to realize it was a log function)
– data smoothing routines and faulty input data
– near religious zeal for the movement instead of defining the problem and weighing options
I fully believe mankind can affect the climate, but I don’t think adequate research has been performed to justify the IPCC’s conclusions.
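The "log function" item in the list above refers to the commonly cited simplified expression for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m² (the Myhre et al. approximation; a simplification, not the full radiative-transfer calculation). A quick sketch shows why doublings, not absolute increments, are the natural unit:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # simplified logarithmic forcing approximation, in W/m^2,
    # relative to a pre-industrial baseline of 280 ppm
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 560, 1120):
    print(f"{c:5d} ppm -> forcing {co2_forcing(c):5.2f} W/m^2")
```

Each doubling adds the same ~3.7 W/m², so going from 280 to 560 ppm and from 560 to 1120 ppm contribute equal forcing, which is why extrapolating temperature linearly with concentration "doesn't add up".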
They do not focus on a single factor, they consider all forcings. Analysis of the current climate change reveals that CO2 is the strongest forcing. It has not always been so, it will not always be so. That is just how things are at present.
The issue is global climate change. This is a global phenomenon, it touches everyone, and everything that lives in or responds to changes in that climate.
“Is it correct to move from opinions about a narrow issue to doubts about the field of climatology as a whole? Can and should it be applied to other areas, e.g. medicine or structural engineering?”
I’m not sure this is the way people approach the issue. On a simple level I’ve never listened to a weather forecast and wondered if there is any hidden agenda in it. I trust the weather report more now than I did pre-AGW (because they are more accurate). On a wider level I enjoy reading about climate scientists work, dragging whatnots across the arctic ice in an attempt to understand it better seems like a great thing to be doing.
I specifically have trouble with misanthropes who use climate science to push the idea that all human activity is essentially problematic. I dislike the IPCC process and I have little regard for the scientists that choose to be the mouthpiece for it.
AMac I am perfectly capable of throwing out the bathwater without harming the baby.
Yes skepticism is warranted.
The dragging effort was a doofus PR stunt, considering a single overflight by a DC3 with ice radar achieved far more in 1/200 the time at zero risk. The ignominious collapse of the dragger’s expotition (h/t Winnie the Pooh) was inevitable poetic justice. A blizzard or two almost made the lesson permanent.
Share the data, share the code…
Who knows a sceptical scientist might find an ERROR or an assumption that demonstrates that AGW is worse than we thought..
But because they have not done so for years we have less time to adapt, and millions will die unnecessarily, because they would not share with their critics.
Unlikely, of course, probably…
Archive everything with the journals (at least from now on), as they should anyway.
That is an ethical and moral reason: not sharing, or not letting your work be critically reviewed, is unsupportable.
Sir John Houghton didn’t have a response for that either.
Does anyone doubt that Doug Keenan or Steve McIntyre would tell the world if they had found an error that demonstrates CAGW? Or do the usual suspects think they are oil shills, and that no sceptic cares for their grandchildren?
Sorry for the abruptness… I explain at Watts Up, in the comments section of Tom Fuller’s farewell, which at this moment seems like a good idea.
Barry, most of the data and code has been out there for years and years.
Most of the ‘skeptics’ are more interested in bleating than working.
Reminds me of a little incident over at RC (I think it was earlier this year).
A bunch of source code and data was made easily available for interested individuals to play with – the usual suspects were conspicuous by their absence.
Somewhat more complete (and contradictory) version on CA. Typical RC trollism.
As the person who coined the term “free the data, free the code”, I have to say that the important code was not available. Neither was the important data. There are still key elements missing.
Now, as I said at the time I asked for the code, I did not expect to find anything that would change the results in any significant way. What we basically wanted was to run the GISS code and change parameters that were subjectively chosen to see how sensitive the code was to those parameters.
Several of us who downloaded the initial code release attempted to get it running. After a couple of weeks of trying (see Climate Audit to see how we tried to coordinate the effort) we gave up. Some people working on it went down the route of trying to find an old AIX machine at Weird Stuff in Sunnyvale. Others went on to try to emulate GISS code in R. Later CCC would get a port working. Then EM Smith and I talked (we were both looking for an AIX platform) and he succeeded in getting it running. Peter O’Neill would take the code and do his own version in C++. He added capability to the code and uses it to test various alternative settings (it’s a really slick piece of work); you can see his contributions by visiting the GISS update page. Personally, I decided to look at both GISS code and CRU code and do my own version in R, focusing on the metadata issues and the lack of adequate traceability in that area. Nothing earthshattering. The science won’t change. All the more reason to share things.
Gavin has also run up against problems trying to get code from skeptics, and those of us who support open code have joined him when he complained about not getting code.
A lot of code and data is available. In certain places, however, key pieces are missing. For example, see magicjava’s attempt to reconstruct satellite processing. He hit the ITAR wall. I’d suggest that you not speak in generalities when there are people in the house who have command of the specifics. Or we can talk about ICOADS if you like. Or station histories. Or the code for doing TOBS adjustments. Or CRU’s adjustment code. Or where do you want to start? The lack of the code and data doesn’t make me disbelieve in AGW; I just think everyone would be better off with open code and open data. But I’m open to your proof that open code and open data make for worse science.
I’m curious, Steve; how did you deal with and replicate the fudge-factors? You know, the hand-inserted substitute data hard-coded into the programs?
Share the data, share the code…
Who knows, a sceptical scientist might find an ERROR or an assumption that demonstrates that AGW is worse than we thought…
There were skeptics reviewing the AR4, but they didn’t find the Himalayan glacier error, either.
BTW, the code is shared. Unless it’s Spencer’s code. Or Corbyn’s code.
Those who noticed the Himalayan glacier error were ignored. Many other errors were pointed out by reviewers and ignored.
I can’t see anything in the review comments about the glaciers. As for ‘many other errors’, I saw a lot of reviewer comments, such as those by Vincent Gray, that amounted to nothing more than pointless nitpicking.
Thank you for giving the stage to Mike Zajko, the article is well written and raises interesting points.
However, Mike Zajko is not a scientist, especially regarding the Natural Sciences. His claim that there is no one scientific method is like saying that there is no room for mathematics in Physics.
His attempt to bind together political and social sciences with natural sciences is exactly what you are trying to avoid in the relationship between science and politics.
If there are no rules, then there is no right and wrong, and we are getting into the dangerous zone of moral equivalency, which should be strictly kept out of science.
You cannot make the rules as you go, there should be and there is a clear way of determining if a scientific theory is right or false, not according to the results you want to achieve.
I made no claims regarding the social sciences; in fact I explicitly excluded them, since I don’t care to defend their scientific status. This is precisely why I presented an account given by a natural scientist, along with the philosophical backing. And I explicitly made clear that having no single scientific method should not lead us to assume there are no rules – there are plenty of rules.
If you have a clear way of determining if a scientific theory is right or false, by all means propose it. Just be aware that many have tried to do the same, and the results have been… inconsistent.
Zajko – I agree you took care not to conflate the Natural with the Social Sciences – but I do feel that you failed adequately to distinguish between The Scientific Method, on the one hand, and the “methodology/ies” prevailing in the various Fields. They are not the same thing. As your first link indicates, the various scientific fields overlap an area labelled “Principles of Scientific Method”. Outside that overlapping area, Fields develop their own functional methodologies – to enable them to, er, function. Indeed it’s hard to see this diagram except as an illustration that the Scientific Method is that body of doctrine which all scientific fields must share if they are to be worthy of the name. The field-specific methodologies, then, must surely not VIOLATE the “principles” contained in the overlapping area. In other words, the practical obstacles to experiment inherent in a field must not be used to excuse violation of the core principles of the Scientific Method (which, for this reason, are few in number).
You may agree or disagree, but in any event I think it’s a point that needs clarifying, and I don’t think it’s quite clear from your post.
Gauch occasionally lapses into talking about “the scientific method”, but he never treats it as a singular, unified entity, and never spells it out as doctrine.
He is also definitely not concerned with demarcating science (though I imagine many of his readers will wonder just where he would draw such a line), and while he argues that all sciences make use of his general principles I do agree that the diagram provided lends itself to confusion. The field-specific methodologies should not violate his general principles, yet they seem to exist outside of them in the diagram. It seems to be designed just to illustrate visually that the general principles of scientific methodology (shared) need to be combined with a field’s more specialized techniques.
And note there is nothing about experimentation being a prerequisite for science. We can imagine his general principles, including hypothesis generation and testing, existing without need for experimentation.
Zajko, firstly, I’d like to thank you for the post – another viewpoint is always welcome.
Now I’ll mercilessly deconstruct your argument… :-)
Like Tom above, I think your lack of direct scientific experience MAY mean you are unable to see that the common parts of the sciences are the most important. I also do not agree that the ‘unique’ areas/techniques of each field are somehow immune from comparison. Granted, there can be very specific, technical and cutting-edge techniques used, but the core mechanisms (to my mind at least) stay the same.
For me, some of the core parts of the scientific method are:
- Good experimental design (including a good knowledge of what you are ACTUALLY testing vs what you THINK you are testing – a huge problem in climate science).
- A good idea of equipment limitations/methods/conditions.
- Reproducibility of method (not necessarily results).
- Records. You record EVERYTHING, no matter how insignificant it may seem at the time (for example, if I’m performing a study on something that is moisture-sensitive, I record whether it’s raining outside).
These steps (though of course non-exhaustive and they assume basic scientific competence) can be applied across ANY field in science.
I do not accept that because climate science uses separate techniques and difficult data (proxies, multiple disparate records, etc.) it can somehow claim to be exempt from what PRACTICAL scientists see as the scientific method.
Or to put it another way: if you want to see how the scientific method SHOULD be performed, I’d heartily recommend that you go visit a cGMP facility with a research department – then you’ll see what should be done, and how that CAN apply across the fields.
Not that I don’t find your viewpoint and post exceptionally interesting; I do, and I thank you again for it. I just think that this sort of analysis is divorced from the practical aspects of the scientific process and allows climate science to hold itself to different standards than the ‘other’ mainstream sciences.
Re: the multiple lines of evidence – again, I think you’ve slightly missed the point on the scientific method. Having multiple lines of supporting evidence is brilliant, but they do not help with the main theory unless they directly interact.
For example, the following are used to prove CO2 causes cAGW, but none, if you examine them carefully, actually does that:
- rising sea levels
- basic physics surrounding GHGs
- IR absorption measurements in the atmosphere
These can ALL be used to improve the picture, but on the KEY issue – does CO2 cause cAGW? – they don’t help (for that we need climate sensitivity and the mechanisms involved, though interestingly a few of the above CAN be used to answer this).
Though this is not to say that further analysis of the scientific method and of procedures is not useful; I’d just suggest that the inclusion of practical scientists would be imperative.
You list some core methodological parts that “can be applied across ANY field in science” and “should” be.
Alright, but as in my reply to George Crews below, I’ll put forth that there are sciences (or what are commonly referred to as such) that have made do without “good experimental design”.
To say we need good experimental design in order to test climate hypotheses is one thing, it’s another to say experiment is fundamental across the sciences.
Mike, 2 days ago I asked, in response to a similar statement, for an example of “immensely valuable” science being produced without experiment. I note that you are preparing a post on the topic, but in the meantime surely you could spare us just one example of this exciting new phenomenon?
Actually, I wasn’t planning on it, but let me throw some ideas out there. Biology, ecology, and environmental science have accumulated a large body of knowledge through observational methods. So have medicine and epidemiology – while good experimental designs are increasingly valued there, sometimes experimentation on human subjects is just out of the question. This hasn’t stopped plenty of disease causes from being identified through inference. Likewise, astronomy and geology (especially in its early days) deal with objects largely precluded from experimentation, and I’d say we understand our place in the universe a whole lot better than we used to because of them. Hypothesis testing still occurs in these sciences, but it would be a stretch to call these methods experimentation.
Some specific instances would have helped. Taking your medical instance, “being identified through inference” – surely from legitimate (ethically and scientifically) experiments performed on analogues, whose fitness to stand as analogues, as well as their limitations, have been exhaustively tested by experiment? And the difficulty of experimenting on humans is much lamented in medicine – no pretense is made that it is not a problem, as is the case with climate “science”. And to supply a specific instance of my own, don’t forget Barry Marshall, who contradicted the consensus, obtained by the inference you seem to favour, that gastric ulcers were attributable to stress, by infecting himself with Helicobacter pylori. How “immensely valuable” was that earlier, inferred consensus? As a sufferer myself, I can tell you – not very.
You talk about the “early days” of astronomy, and yes, experiments were difficult to design, but the point is, astronomers understood the importance of designing them if their work was ultimately to have value. They did not say “experiments are hard to devise in our field, therefore our field must do without them” – they persevered until they HAD designed a good experiment. Of course many good hypotheses may yet await the design of a good experiment. So long as those proposing them acknowledge this, they are not behaving counter-scientifically. Climate scientists DO NOT acknowledge the want of real experiment in their field, and from that we may infer that they are less interested in supplying that want than they ought to be. That is counter-scientific.
In fact, if you think about it, the greatest accolades in science often, and rightly, go not to the guy who conceived a hypothesis, but to the guy who designed the experiment that confirmed it.
One verification of Einstein’s theory was the measurement of the precession of Mercury’s perihelion. It was as General Relativity predicted. I would categorize this as an experiment: the apparatus was already set up, but that does not disqualify it. Much of the knowledge of astronomy has come from observation – not so much from computer models of the Universe, though.
I suppose it depends on what you would class as experimentation, it is not so easily defined.
It does not always have to involve laboratory work; for example, following observations of the migratory patterns of a certain animal, you could hypothesise that if a particularly dry season occurred then they’d change their route. The experiment would then be the observation and recording of this effect.
The ‘design’ comes down to making sure that, in this instance, it is only the dry season and nothing else that is affecting the migration – not some other, potentially unknown factor.
And I would argue, strongly, that this sort of methodology or experiment IS fundamental. You MUST be sure that what you are observing is what you THINK you are observing.
I suppose I would, in this instance, suggest that an experiment is any ‘test’ (be it observational or practical) that seeks to challenge or reinforce a theory on a given, tightly defined criterion. If that makes sense :-)
This can be applied to any core science. Again, however, this would depend on just what you count as a science.
Welcome to my parlor….:-)
I cracked the officially stamped pseudo-science of centrifugal force. It does not fall into any of the categories of physics as it compresses mass, stores energy, changes density and releases stored energy.
This is quite an essential tool in understanding rotation and the balance shifts of motion.
Turbine technology is where I was able to crack the understanding of centrifugal force. This is where the claim of 92% efficiency in power generation just did not add up. No one knew exactly how this was arrived at, but the formula they created stated it was. Doing some digging in the past (150 years) I found that it came about because water needed to pass by the housing, water that “did not touch the blades”. So if 8% was passing by the housing, then it had to be 92% efficient??? I noticed ALL turbines were harnessing energy the same way. I inverted a turbine, as the speed of a turbine was exerting energy back onto itself. A lever (lift a stone with a stick and a pivot point) is also part of a whole circle. This means that on the radius of a circle it takes more energy to turn the wheel as you go towards the axis/pivot point. The optimum would be the circumference of a circle, for maximum energy and torque to work together. I created a concave stationary cone that has blades going on a slight angle to split the energy and slope it in line at all 360 degrees (it looks similar to the center of a wringer washer). Next, the turbine is a drum turbine, as seen in furnaces, with the energy hitting all the blades at once. This took away the problem of centrifugal force being against turbines.
I cringe at the current wind turbines that use the space of a full circle but take energy from only three small points.
It has been seen by engineers and will run, but NO ONE knows how powerful it is.
Here is the dilemma. Engineers can only say if the mechanics are sound, but are not qualified in physics. Physicists will not touch it, as it is mechanics.
So I had to work out all the science, from angles of deflection to friction, etc. Physicists are also totally stuck on the LAWS and will not allow these to be broken.
I decline to define science as whatever scientists practice. I do not accept that climate science must be whatever you have found that climate scientists choose it to be. IMHO, such an empty definition destroys the great utility of the scientific method.
Consider an analogy. The mathematical concept of a circle does not exist in nature. In the real world, we are stuck with tires, dinner plates, etc. A perfect circle has never been observed. Our confidence in our measurements is never absolutely perfect. Error is everywhere. But that does not negate the practical utility of such a fundamentally simple, abstract concept. How could we understand nature without such simplifications? We could not. We can only understand simplifications/abstractions of nature.
To say there is some fundamental difference in the very concept of roundness for a tire versus a dinner plate is to make it impossible to have rational discourse about tires and dinner plates.
This extends, self-referentially from its assumptions, to the scientific method itself. That the method does not map exactly to anything that any scientist actually practices is an advantage and not a defect.
Science is following the scientific method. The scientific method is an abstract process (think mathematical circle) that can be used to describe certain activities (think tires and dinner plates). I outline the method here.
Let me note that there are only four assumptions behind the method:
1. All scientific theories must be logically consistent. (For example, the interpretation of experimental evidence as Bayesian. This assumption is necessary for rational discourse in science.)
2. Scientific theories and experiments must be parsimonious. (For example, Occam’s Razor. This assumption is necessary for error management.)
3. The sole test of all scientific knowledge must be experiment. (Feynman’s ‘almost’ definition of science. Nature is our only source of scientific truth.)
4. All experimental processes and evidence shall be independently verified and validated. (This assumption is also necessary for error management.)
The scientific method’s emphasis on error management is because the method contains two logical flaws. The reliance on experiment means that we must reason from the particular to the general — abduction rather than deduction. And verification and validation rests on an appeal to authority.
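The Bayesian interpretation of experimental evidence mentioned in assumption 1 can be sketched in a few lines. This is a toy calculation only; the prior and likelihoods below are invented numbers, not drawn from any real experiment:

```python
# Toy Bayes-rule update: how a prior degree of belief in a theory
# is revised by an experimental outcome. All numbers are invented
# for illustration.

prior = 0.5               # initial belief that the theory is true
p_obs_if_true = 0.9       # probability of the observed outcome if the theory holds
p_obs_if_false = 0.2      # probability of the same outcome if it does not

posterior = (p_obs_if_true * prior) / (
    p_obs_if_true * prior + p_obs_if_false * (1 - prior)
)
print(round(posterior, 3))  # 0.818: one consistent observation raises belief
```

Repeating the update with further independent observations drives the posterior toward 0 or 1, which is one formal reading of "the sole test of scientific knowledge is experiment."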
Defining science as “what scientists practice” is indeed a rather straightforward, and very inclusive way of going about it, which seems to make you and a whole lot of other people uncomfortable since it doesn’t leave much room for clear standards.
You propose a clear set of standards for the scientific method, which many scientists wouldn’t be able to meet. Fair enough, and the result would be a rather severe re-definition of what we consider science.
There are two basic ways to approach defining science/the scientific method.
1. What scientists do in practice.
2. What scientists should do in practice if we want to consider their actions as scientific.
You propose #2, which was Popper’s position as well, though his standards were somewhat different. Both yours and Popper’s prescriptions would exclude vast areas of science as unscientific – for example, those fields that have made do without experiments and still produced immensely valuable theories.
That does seem to be the point however, and is necessary if we are ever going to have a clearly defensible argument for what we can call science.
I wish you luck re-drawing the borders of science however.
How can my or Popper’s definition of science “exclude vast areas of science as unscientific”? Are you claiming logically inconsistent definitions? Don’t think so. So better to say something like a vast number of people define science more loosely than I do. That is something we could debate.
In that spirit, why is it important that a definition for science be adopted that a large number of people’s activities can meet? Why must, by definition, knowledge about nature be easy to come by? Just one obvious risk is that since the labels scientist and scientific still have a lot of prestige associated with them (IMHO, because of physics) many people simply could be wanting the labels. How do you weed these guys out? By asking them?
And yes, the point for many people may be that the scientific method can not be used to rank a theory’s value. All it has to rank theories with is Occam’s razor. The value I place on the scientific method and the theories it produces uses a value system other than a scientific one. What’s wrong with me doing that? I can still believe social science, political science, etc. are immensely valuable fields even if I think they should take the word science from their titles.
BTW, no need to wish me luck in re-drawing the borders of science. Definitions are easy to make. You don’t need any luck doing it either. That, of course, is the issue. Everyone, it seems, can just define what they want to do as being scientific.
Okay, I’ll admit I was a little loose with the definition of science above, but I was referring to less controversial examples than the social sciences (Gauch doesn’t mention them at all, and while I can see how his argument could be extended to cover some of them, particularly the statistically-informed ones, that’s not my point).
Popper’s persistent thorn was biology, paleontology etc. and the theory of evolution. He famously declared evolution “metaphysics” because it so clearly violated his demarcation of science. Later on he softened his stance, even tried to help biologists out by designing an experimental test that the theory of evolution could be subjected to (1970 – Objective Knowledge) but I don’t think biologists took much notice. Evolution did just fine as a theory before Popper, had great explanatory value and generated lots of productive avenues for further research, and had no need to validate itself as a science after he came along.
It sounds as though your thinking, or the thinking of some of Popper’s critics, is that in fields of inquiry like paleontology, where a predictive model is inapplicable, it would be impossible to satisfy falsifiability. This is untrue. In these fields falsifiability can be satisfied by the construction of a retrodictive model. A “retrodictive” model is one for which the outcomes of statistical events lie in the past. Such a model stands in contrast to a “predictive” model, for which the outcomes lie in the future.
A retrodictive model is extracted from observational data by the same process by which a predictive model is extracted. Also, a retrodictive model is statistically validated by the same process by which a predictive model is statistically validated; this is by drawing a sample that is independent of the one that was used in the model’s construction and observing the outcomes of the events that are in it, with subsequent determination of whether the model is falsified by the new evidence.
As falsifiable retrodictive models are buildable, there is not the need for the falsifiability criterion to be loosened in order for retrodictive fields like paleontology to be included among the sciences. Loosening the criterion has the logical shortcoming of violating the law of non-contradiction. In view of this shortcoming, the idea that is referenced by the word “scientific” is illogical.
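The construct-then-independently-validate process described above can be sketched in code. This is a minimal toy with synthetic "past event" data and an invented error tolerance; the model form and threshold are assumptions for the sketch, not any field's actual procedure:

```python
import random

random.seed(0)  # make the synthetic data reproducible

def make_sample(n):
    # Synthetic past events: outcome = 2*x plus Gaussian noise.
    return [(x, 2.0 * x + random.gauss(0, 0.5))
            for x in (random.uniform(0, 10) for _ in range(n))]

def fit_slope(sample):
    # Least-squares slope through the origin: sum(x*y) / sum(x*x).
    sxy = sum(x * y for x, y in sample)
    sxx = sum(x * x for x, _ in sample)
    return sxy / sxx

construction_sample = make_sample(200)  # used to build the model
validation_sample = make_sample(200)    # independent sample of other past events

slope = fit_slope(construction_sample)

# The model is "falsified" if its retrodictions miss the independent
# outcomes by more than an assumed tolerance, on average.
mean_abs_error = sum(abs(y - slope * x) for x, y in validation_sample) / len(validation_sample)
falsified = mean_abs_error > 1.0
print(slope, round(mean_abs_error, 3), falsified)  # this toy model should survive validation
```

The key point mirrors the text: the validation sample is drawn independently of the construction sample, so a bad model has a genuine opportunity to fail.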
In deciding issues such as whether or not creation “science” is a science, the courts of the United States have had to decide what the adjective “scientific” means. Their decision is embodied in the Daubert standard. Under this standard, a theory is not a “scientific” theory unless it makes falsifiable claims. Interestingly, under the Daubert standard the “endangerment finding” of the U.S. Environmental Protection Agency is illegal for it is based upon claims which the EPA alleges to be “scientific” claims but which are not falsifiable.
Here’s my quickie widest possible definition/characterization:
It’s scientific knowledge if a) we know how we learned it, and b) we can rely on it (so far).
That encompasses experimentation, observation, and (valid) logical deductions from known events. And nothing that violates it is scientific. E.g., climatology meets neither test.
“those fields that have made do without experiments and still produced immensely valuable theories.” This is news to me – I would be grateful of some instances.
The interiors of black holes seem remote from experiments. However, some astronomical data on black-hole jets and the gravitational effects of black holes allow their existence to be inferred. I consider black-hole theory reasonably within science.
Macro-evolution is more questionable, as the quantified accumulation of mutations and quantified mutation rates “differ” from “prescribed mutations” (handwaving).
From what I have heard of multiverses (multiple universes) they a priori cannot be observed or tested, and thus cannot be “science”.
I don’t think any of these is “immensely valuable.”
The “projections” (“predictions”) of climate science appear fundamentally limited by chaos, etc. (See Weinstein above.) I am particularly concerned over global warming models not using principles of scientific forecasting. Contrast the stringent double- and triple-blind testing set up in medical research.
Thanks David, I had in mind black holes, and similar conjectural phenomena.
Interestingly, I know about these largely because those working in the field loudly lament the want of an experiment to test their hypothesis. I conclude that they are earnestly trying to design one. Even though they have not yet succeeded, I suggest this makes their efforts science – if not we would need another word to describe the stage between conjecture and the performing of an experiment. Climate “science”, by contrast, seems to feel no obligation to design “a better experiment” (Rutherford), but takes refuge in increasingly meaningless declarations of uncertainty. It is not science.
But Mike has promised us a post showing “immensely valuable” science conducted without experiment, so maybe we’d better wait and see.
I’m still agog.
So, is geology a science? Given that, with some exceptions, you cannot do an experiment (much as it would be interesting to reassemble Pangaea and see how it broke up, the NIMBYs would probably complain about the resultant complete destruction of the atmosphere and oceans).
The idea of observational science (geology, astronomy, paleontology, etc.) is well established; it is still possible to make testable predictions, simulate processes and run models. The idea that the only science that can be called science is what you do in the lab is a bit naive, to be honest.
I never said anything about doing things in labs, did I? The idea that an experiment is something necessarily done in a lab is a bit naive, to be honest.
And I’ll let a geologist answer for geology.
Do I have the multiverse argument for you! [huge grin]
Watch out for the Raving channel.
You might like to have a quick run through an article by a physicist who disputes Big Bang and Black Hole theory:
There appear to be egregious ongoing violations of the Feynman injunction to give the data veto authority over the theory!
Not sure how true it is but I have heard it rumored that Statistical mechanics remains to be verified.
I personally expect that it is incomplete. Bayesian a priori probabilities and ensemble concepts don’t play nicely together.
Parsimony is a perceptual thing.
Does ‘simple’ mean….
Simple to describe?
Simple to use?
Simple to choose?
Simple to comprehend?
Simple because of the absence of parameters?
Simple because it is random?
Beauty is in the eye of the beholder and thus parsimony can be extremely misleading.
Nature is an ugly hag.
Raving, I’m not sure “parsimony” doesn’t have a well-defined scientific meaning, in addition to its literary one. I may be wrong, but I thought it was a pretty straight substitute for Occam’s Razor. That seems to be what Wiki
But maybe a scientist can help?
Law of succinctness, law of economy. Those will suit me as well as any of them… Simple, succinct, economical.
How about throwing a few more into the hat? … immediate, direct, proximal.
The problem with these notions is that they are subjective. What is simple, direct or economical to you or me is not necessarily so in the natural inherent sense.
Example: which of these is simplest – sqrt(2), 2, pi, 14?
Each of those values is variously simple or complex, contingent upon the perspective from which one views them. Just because it appears simple, is easy to comprehend or can be constructed with minimal ancillary consideration… well, those are personal human preferences. It’s not clear that there is an inherent natural justification.
When I used to construct simulation models, I would try to make them as complex as possible. I was modeling an evolutionary process and I was ‘shaping’ (engineering) it in the hope that the history of past events would be retained, would be selectively active and would tend to accumulate as iteration of the whole system progressed.
All my efforts at boosting the complexity/intricacy/richness of the model more or less failed. Invariably a very small handful of parameters .. 1, 2 or occasionally 3 parameters set and limited the overall evolution of the system.
The obvious criticism was “Why build such a complicated model? It is difficult to estimate just one or two parameters from reality. How can you possibly estimate 20 of them? !”
My response at that time was to say … “What sense is there in throwing away all the excess parameters and dimensions? Does setting them to ‘null’ or ‘nonexistent’ genuinely create a simpler (more parsimonious) process”?
What I appreciated was that ditching the complicated fluff amounted to little more than hiding the intricate granularity of a rich reality from myself so that the situation would appear to be simple.
Most of those extraneous parameters, for most of their arbitrarily set values had no obviously discernible influence on how the simulation evolved.
Occasionally one or two of those extraneous factors would exert a commanding influence over how things transpired. They became hinge variables. The simulation was steered and limited specifically by their influence.
Thus most of the extra parameters and detailed adornments were of null influence, except for those occasions where they were influential. In such anomalous scenarios the influential ancillary factors served as the dominant factors.
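The "hinge variable" pattern described above can be illustrated with a toy model. Everything here is invented for the sketch: a made-up model with 20 parameters where, by construction, only two actually steer the output, and a crude one-at-a-time scan that recovers them:

```python
# Hypothetical toy: a model with many parameters, only a couple of which
# ("hinge" parameters) actually steer the output. The model form and the
# choice of hinge indices are assumptions made up for this sketch.

def model(params):
    # Output hinges on params[3] and params[7]; the remaining 18
    # parameters contribute only tiny perturbations ("fluff").
    hinge = params[3] ** 2 - 5.0 * params[7]
    fluff = sum(p * 1e-4 for i, p in enumerate(params) if i not in (3, 7))
    return hinge + fluff

baseline = [1.0] * 20
base_out = model(baseline)

# One-at-a-time sensitivity scan: perturb each parameter by +1 and
# record how far the output moves from the baseline.
influence = []
for i in range(20):
    perturbed = list(baseline)
    perturbed[i] += 1.0
    influence.append(abs(model(perturbed) - base_out))

hinges = [i for i, d in enumerate(influence) if d > 0.01]
print(hinges)  # the scan recovers only the two hinge parameters: [3, 7]
```

The caveat in the text applies to the scan itself: a one-at-a-time perturbation around one baseline can miss a "fluff" parameter that becomes a hinge in some other region of the parameter space, which is exactly the unforeseen-eventuality problem being described.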
The lesson for climate modeling is this: things work sweetly as long as they turn out as anticipated. When things start to go wonky, they tend to get wonkier quickly. It's unlikely that the unexpected destabilizing parameter can be anticipated beforehand.
Settling on a simple model at the outset and calling it truth because it performs and predicts as desired is only as good as nature's willingness to cooperate with the status quo.
As the climate changes, so too will nature and reality. All bets are off and the prevailing model will be inappropriate.
How will nature and reality change? I haven’t got a clue.
To my own mind, given my own experience, any individual or committee of experts who believes that the models they use will continue to track reality in the face of a changing climate has fallen victim to their own PR job.
Making a model simple amounts to hiding irrelevant aspects: out of sight, out of mind. It is not that such aspects don't exist; rather, they are not influential until the occasion arises on which they make their presence unexpectedly felt. When that happens, it's called "a regrettable unforeseen eventuality. The good news is that we now know what's wrong and can revise history!"
(until the next regrettable and unforeseen eventuality crops up.)
Complexity never bothered me. When it is without influence or consequence it is neither here nor there. It is irrelevant.
When complexity does matter and you are blithely unaware of it, because you constructed your model a while back, pre-purged of apparently spurious and inconsequential details ... for the current IPCC committee that strategy is pure FISHDO. :-)
Adage from Polish Soviet days:
“The future is certain. Only the past is in doubt!”
… parsimony by any other name.
Raving, you may understand this better than I, but in answer to "Does setting them to 'null' or 'nonexistent' genuinely create a simpler (more parsimonious) process?", I would have answered: no, each parameter you neglect or fudge counts as an instance of counter-parsimony. You seem to be treating parsimony as a preference for the SIMPLEST explanation, whereas I understand it to mean a preference for the explanation which requires the fewest assumptions, regardless of its or its rivals' complexity. An explanation may be far more complex than its rival yet contain fewer assumptions; in fact this is intuitively (but not axiomatically) the case, as your example implies. And since instances of assumption or fudging can be counted, I don't see how it is a subjective property.
Yes I agree with you and thank you for reminding me (it’s been decades since I considered it.)
Let's assume one is provided with a set of causal explanations which are consistent with the observed situation. Those causal mechanisms which are simplest, most economical, most direct, and most proximal are the more likely explanation.
Notice how this is similar to entropy.
Standard simulation perspective:
The parsimony argument refers to the over-parameterization issue. Construct a model with a sufficient number of control parameters and one can twiddle and tweak those parameters to approximate whatever one likes. (Example: the Fourier series expansion of a function.)
Hence it is desirable to construct a model which describes an anticipated result with as few control parameters as feasible.
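The Fourier example above can be made concrete with a small sketch (the data and basis size are arbitrary choices for illustration): give a model as many free coefficients as there are observations and it will reproduce even pure noise exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary "data": 16 samples of pure noise, no signal at all
n = 16
t = np.linspace(0.0, 1.0, n, endpoint=False)
y = rng.normal(size=n)

# A real Fourier basis with as many free coefficients as data points
kc = np.arange(0, n // 2 + 1)                   # cosine terms k = 0..n/2
ks = np.arange(1, n // 2)                       # sine terms   k = 1..n/2-1
A = np.hstack([np.cos(2 * np.pi * np.outer(t, kc)),
               np.sin(2 * np.pi * np.outer(t, ks))])

# Least-squares "tuning" of the 16 control parameters
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
misfit = np.max(np.abs(A @ coeffs - y))
print(misfit)   # essentially zero: the noise is "explained" perfectly
```

A perfect fit here tells us nothing about the system, only about the number of knobs, which is exactly why fewer control parameters are preferable.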
I have no idea what sort of model is used to predict weather. Nevertheless, last year's bad predictions by the U.K. Met Office hinted at the possibility of over-parameterized results.
It was claimed (paraphrased): "Our model(s) performed quite well in the short term and in the long term. The model performed poorly in the midterm."
When I heard this explanation, a plausible cause for such a signature became apparent.
Plausible explanation: two different simulation models are used. One model is optimized for the short-term time scale, the other for the long-range time scale. As time into the future progresses, a coupling, steering, or empirical transfer function offloads from the short-term model and hands off to the long-range model.
Both models quickly lose predictive accuracy as time-in-the-future moves away from their optimal range. The mid-term dip in performance occurs because the short-range prediction has run out of steam and the longer-range one has not yet reached optimal performance.
Such a scheme as I posit is a neat idea but a bit of a cheat. The perception is that the model predicts well over an extended interval of time. The reality is that one is looking at the co-alignment of more than one model. Double your parameters and double your fun!
So why not shoot for the big money and co-align 10 simulations? That's 10 times the parameters to twiddle!
[ … musical interlude … Why is this cheating? .. If it works then how can it be a cheat? A better forecast prediction ensues .. or maybe not! ?]
Regarding the role of entropy, Christensen ("Multivariate Statistical Modeling", 1983) argues that the principle of maximum parsimony (aka Occam's razor) is akin to the principle of minimum conditional entropy. An additional principle is necessary for the construction of a model: the principle of entropy maximization under constraints expressing the available information. These two principles are sufficient for identification of that unique model for which each inference made by the model is correct.
Like Occam's razor, maximum parsimony is an example of an intuitive rule of thumb called a "heuristic." By tradition, scientists use heuristics in deciding upon the inferences that will be made by their models. There is a logical shortcoming in this practice: the method of heuristics violates the logical principle of "non-contradiction." For example, different people will find different inferences the most parsimonious; thus maximum parsimony violates non-contradiction and is illogical. Some physicists favor maximum beauty but, alas, beauty is in the eye of the beholder. Thus, like maximum parsimony, maximum beauty violates non-contradiction.
I believe that the truth in science (like beauty) is in the eye of the beholder, but, IMHO, parsimony about scientific theories can always be assumed to NOT be misleading. Thus, parsimony can consistently be an assumption of the scientific method.
Note that there is always a large (indeed infinite) number of theories that can explain any collection of experimental results, whatever its size. As an analogy, consider that a polynomial of sufficient order can approximate any set of experimental data to any degree of precision required. As a practical matter, low-order polynomials are to be preferred over high-order ones.
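A minimal illustration of the polynomial analogy (the underlying line, noise level, and sample size are invented for the example): a high-order polynomial threads the noisy data almost exactly, but the parsimonious low-order fit predicts new points far better.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy observations of a simple underlying "law": y = 2x + 1
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# An 11th-order polynomial has enough coefficients to chase every point
hi = np.polyfit(x, y, deg=11)
# A 1st-order fit is the parsimonious choice
lo = np.polyfit(x, y, deg=1)

# Compare predictions just outside the data, where the law gives 2*1.2 + 1
x_new, y_true = 1.2, 2.0 * 1.2 + 1.0
err_hi = abs(np.polyval(hi, x_new) - y_true)
err_lo = abs(np.polyval(lo, x_new) - y_true)
print(err_lo, err_hi)
```

The high-order fit has merely memorized the noise, so its extrapolation error dwarfs that of the simple line.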
Then consider that Einstein's law of gravitation is a superset of Newton's law of gravitation. All of the data that confirm Newton's law confirm Einstein's law. But it does not stop with Einstein. If some experiment ever falsifies Einstein's law, it will not falsify ANY previous experimental results. Any new theory of gravity MUST have Einstein's and Newton's laws as approximations within their ranges of applicability. This is what it means for the sole test of knowledge to be experiment. So as a practical matter, we always use the simpler theories wherever we can.
The goal of science is not truth, but making predictions about the results of future experiments and then having those predictions confirmed. Parsimony is used to control complexity and is thus an error-management tool; it helps maximize the predictiveness of science.
If not an assumption that scientific theories be parsimonious, what would be a more agreeable mechanism for ranking the utility of one scientific theory against another? Would not anything else be even more controversial?
IMHO, there is no requirement that the scientific method produce theories that are true. The scientific method is a process for the management of our ignorance about Nature – our inability to make reliable predictions. Thus, the scientific method is subjective. It is about our ignorance, our states of mind. Inherently subjective. An endless self-correcting looping process. Nature herself has no notion of our science. Do you think the weather has any knowledge of the Navier-Stokes equations? Do the planets understand Hamiltonians? The scientific method is anti-Platonic.
What is subjectively parsimonious for one person may not be for another. That is fine. The very idea that there could or should be a consensus about science goes against the method. Certainly there is no inconsistency in acknowledging people have differing subjective beliefs.
“An endlessly self-correcting loop process” reminds me of Douglas Hofstadter’s I am a Strange Loop, which muses on the nature of consciousness and thought.
“Self-leading” … tautological by any other name. (I am a big supporter of Tautology. [Groan. The puns are inescapable at this level of preoccupied focus])
Interesting singular categorization of science. I like it!
a large (infinite) number of theories … Agreed.
“… within their range of applicability”
…Those two fatal words, Mine and Thine (these two words, mine and yours). "The law of non-contradiction" … is it ultimately unreasonable? At some level(s) of scaling, the concept of entropy seems to rot away. Perhaps that could be called "the law of contracting subjectivities".
…Your singular categorization again. Moving in the direction of a more absolute truth. (Read this as singular, dense, reduced set of truths)
Exactly! That is the reason that ‘science’ is the autistic perspective.
It is the choice of:
… that means the current prevailing option is to like it or lump it.
IMHO, there is no requirement that the scientific method produce theories that are true. The scientific method is a process for the management of our ignorance about Nature – our inability to make reliable predictions. Thus, the scientific method is subjective. … Agreed.
Problem is that ‘objectivity’ is universal and timeless.
'Objectivity' beats out 'subjective experience' in a clock-stopped, timeless scorecard analysis every time.
The lack of a viable alternative to timeless universal circumspection blocks the view beyond such an appreciation.
Regrettably the autistic (objective) perspective is a universal timeless entirety. That is the compromise made and the advantage gained.
As for differing subjective beliefs? Sure, why not. … “You can choose any color that you like so long as it is white.”
I would put it this way. The fact that the scientific method assumes Nature to be (in your terms) autistic does not place any actual limitation on Nature Herself or how She behaves, but only on our descriptions of Her. If our theories of Nature are not rational and our observations of Her not objective, a library of knowledge or overall consensus about Nature would be impossible. It is hard enough as it is with all the error management fallible humans have to perform. How would we communicate or verify information otherwise? As has been said, there seems to be an unreasonable effectiveness of mathematics in the natural sciences. Lucky us.
It would be interesting to hear more about climate models. Climate models are developed using the laws of physics plus parameters obtained from observations of complex materials (the absorption spectra of GHGs, condensation of water vapor). As Stainforth has shown with ensembles of climate models using different sets of parameters randomly selected from the range defined by observation, reasonable models can be developed with climate sensitivities ranging as high as 11 degK. The IPCC works with a small "ensemble of opportunity" drawn from an even larger universe of possible models. Those models have been tweaked to reproduce the historical record of 20th-century warming and some features of current climate. The historical record could be contaminated by a significant amount of UHI, especially if UHI varies mostly with the log of the population rather than the population itself. That record could also be contaminated by significant natural variation (such as the current "pause" in warming or the 1980s "spike" in warming) that current models may not show. Is there any real science here?
But those models do not take into consideration:
Planetary motion, rotation, the sun's angle of deflection off a moving planet, atmospheric pressure increases, salinity changes, etc.
This single-time view of science fails when the parameters are put back into deep planetary history, when evaporation was not occurring, the planet was rotating faster, and centrifugal force was stronger.
Frank states, ” models have been tweaked to reproduce the historical record of 20th century warming and some feature of current climate. ”
A common misconception among individuals not engaged in climate science is that models are adjusted to reproduce observed trends they are designed to test. That is incorrect. In general, models are tuned to fit an existing climate, either the one current in a particular modern era or in an earlier time – they must reproduce seasonality, latitudinal gradients, wind characteristics, etc. Once tuned, they are then “forced” with the variable of interest – e.g., a progressive increase in CO2 is introduced and the models are asked to generate temperature change as an outcome. Whatever the results, the models are not adjusted to make their results fit the observations. If they perform well, fine. If not, that is unfortunate but must be accepted.
It turns out that GCMs have done a good to excellent job with some variables, such as long-term global temperature trends in response to increases in CO2, but a poor job in other areas – regional changes, short-term effects, ENSO events, etc. Even in areas of good performance, models are improving, and so current GCMs, utilizing more accurate inputs, match temperature observations better than, for example, Hansen's 1988 models, although the predictions made by the latter overestimated trends only to a modest extent.
Because of the complexities of climate and the need for parametrizations of phenomena that can’t be precisely characterized, models will never be perfect, but they are already quite useful when employed judiciously. It is also important to realize that the principal conclusions underlying modern climate science as it relates to the relationship between greenhouse gases and warming can be derived from basic principles of physics without recourse to complicated models. The models add quantitation, but Arrhenius in 1896 was able to provide a fairly accurate assessment of the CO2/temperature relationship without GCMs, overestimating climate sensitivity by only a modest amount.
I take what you say as correct.
My problem is that, with regard to global temperatures, the fit of the ensemble mean to the temperature record is "breathtaking" (my reaction), particularly 1950-2005.
By that I mean that the unexplained variance is very small, which indicates that the ensemble mean "knows" more about the climate than the individual models do, and in a sense more than the world does: either the climate is almost completely lacking in natural variability (which the models aren't), or the instance of the climate as revealed in the record just happens to be very close to the ensemble mean.
All I can say is that people such as I are suspicious that the models have by some means "learned" something from the record ("learned" does not imply deliberate teaching). The models are all different, have different sensitivities, and use different forcings, but when averaged they seem to match the record with great accuracy, even though the record could be considered just one instance of many possible, somewhat different 20th centuries, and doubtless the record has its faults. If they had all "learned" something from the record, they might well each disagree with the record in a random way, but their average (the ensemble mean) would tend to reveal what they had learned, averaging favouring the learnt signal over the model noise.
Now you know better than I, but can you see my difficulty, particularly as the match 1920-1950 has much more unexplained variance (can it all be instrumental error, or a magnitude of natural variation absent prior to 1950?). Also, after 2005 (admittedly a short period) the variance seems to have come back again (is all this due to a difference between real and scenario forcings, or an emergence of enhanced natural variation?).
I really would be grateful if you could give guidance on how these apparent anomalies should be interpreted.
Correction: the parenthesis above should read "(can it all be instrumental error, or a magnitude of natural variation absent POST 1950?)".
Alex – Not being a climate modeller, I don’t feel qualified to answer all your questions other than to confirm your astute observation that the ensemble means simulate the climate trend better than individual models. Clearly, there seems to be some type of averaging effect, whereby errors within individual models cancel out. However, as you point out, the “skill” of the models varies over different time intervals, and so to some extent, precise concordance with observations is coincidental and can’t be guaranteed as an inherent attribute.
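The averaging effect Fred describes is easy to demonstrate with a toy ensemble (the trend, error scale, and model count here are invented, not drawn from any GCM): if each "model" errs independently around a common signal, the error of the ensemble mean shrinks roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(3)

# A hypothetical underlying climate signal: a slow linear trend
years = np.arange(100)
truth = 0.01 * years

# 20 imperfect "models": each sees the trend plus its own independent error
n_models = 20
models = truth + rng.normal(scale=0.3, size=(n_models, years.size))

def rmse(sim):
    """Root-mean-square error of a simulated series against the truth."""
    return np.sqrt(np.mean((sim - truth) ** 2))

individual = np.mean([rmse(m) for m in models])  # typical single-model error
ensemble = rmse(models.mean(axis=0))             # error of the ensemble mean

print(individual, ensemble)  # independent errors largely cancel
```

The catch, relevant to Alex's suspicion, is that real model errors are not fully independent, so the cancellation cannot be guaranteed as an inherent attribute of the ensemble.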
For more details, I refer you, with some trepidation, to Gavin Schmidt’s two-part FAQ on models. As you know, Gavin is a favored target of skeptics because his defense of current mainstream thinking sometimes tends to be intolerant of differing opinions. Nevertheless, he is an expert modeller, and regardless of how his overall perspective is viewed, he can be trusted to provide an accurate description of how models operate. You’ll notice he does address the issue of how models have improved recently. The link is
Thanks, I have read it before and unfortunately the thread is now closed to comments. Never mind.
I could have cut it all a lot shorter anyway.
The crux is that if the models are fair representations of the world and possess natural variability, and the world shares that degree of natural variability, then the variance of each model from the ensemble mean should be of the same order as the variance of the world from the ensemble mean. Which, post-Agung, doesn't "visually" appear to be the case.
Strictly speaking the Earth should be added to the ensemble but I can’t do that.
Here is the link to the IPCC AR4 image:
My case is not clear, but in the period prior to Agung the temperature record looks as though it might have similar variance (natural variability) to the models. Post-Agung it seems to be very tight to the ensemble mean, even though that part of the record is blighted by volcanoes and hence presents a greater modelling challenge. It may be that they lucked into a rare occurrence.
I shall remain puzzled for a little while longer I expect.
After Agung it’s a gong show:
"The agung is a large, heavy, wide-rimmed gong shaped like a kettle gong. The agung produces a bass sound in the kulintang orchestra and weighs between 11 and 15 pounds, but it is possible to find agungs weighing as low as 5 pounds or as high as 20 or 30 pounds each, depending on the metal (bronze, brass or iron) used to produce them."
Sorry. Couldn’t resist! :) ;) :D
Stainforth's systematic exploration of "parameter space" by an ensemble of climate models shows that the range of possible models is far greater than the IPCC's "ensemble of opportunity". (And even Stainforth didn't change parameters controlling heat transport in the oceans.) Stainforth's work suggests to me that one could find models that reproduce current climate reasonably well but give a different answer for the 20th century. Do you believe that all of the IPCC models converged on a similar answer merely by chance? Don't models with high climate sensitivity seem to compensate with stronger aerosol cooling? See Feynman's history of the charge of the electron in "Cargo Cult Science" and then ask yourself whether your colleagues "bend over backwards" not to be influenced by knowledge of the correct answer. Stainforth's failure to identify an optimum set of parameters from random exploration of parameter space suggests that the refinement of the IPCC models hasn't led to more accurate representations. And you presumably know that "model democracy" is interfering with developing "useful" regional projections, creating further pressure for consensus.
If climate models “did what they are supposed to do”, they would be able to reproduce the climate patterns that supported a green Sahara desert about 6000 years ago. The only(?) major difference between then and (pre-industrial) now is that the sun was closest to earth during the northern hemisphere summer. But models don’t show monsoon rains penetrating the Sahara the way they presumably did back then, IMO because they have been adjusted to reproduce today’s climate. If they can’t reproduce that climate change (the kind of disaster we should be worried about), are they of any real value for avoiding future climate disasters?
My favorite book in this field is Henry H. Bauer’s Scientific Literacy and the Myth of the Scientific Method. Bauer suggests that the scientific method is more an ideal, never actually realized, than a process to be strictly followed by scientists.
One of the more useful ideas Bauer has is to divide scientific activity and outputs into different stages. The first stage of distilling our experience of the world into a science framework he calls frontier science, which he describes as characterized by bright ideas, silly ideas, luck, ruthlessness, cutting corners, trial and error, quick and dirty experiments, generosity, hunches, ingenuity, idiosyncrasy, mixed motives, stubbornness and conflicts of interest. (Note that many of these characteristics are mutually exclusive; obviously they do not all apply at once.)
The ideas thrown up by frontier science are then, he suggests, filtered into the primary literature, which he describes as putative science: mostly not obviously wrong, and possibly right.
As time goes by, the primary literature is further filtered by testing and use by others, replication of results, modification and extension of results, and the citation of useful work, all of which results in the rejection of mistakes, uninteresting results, and fraud, producing a secondary literature (review articles and monographs) which is mostly reliable.
The secondary literature is again filtered by the passage of time, use by others and the recognition of concordance with other fields to further eliminate mistakes and any obsolescence that may have crept in. This results in what he calls textbook science (i.e. appearing in undergraduate textbooks), which is mostly very reliable.
I have found this a very useful framework to analyze climate science. Anything I can find in my copy of F. W. Taylor’s Elementary Climate Physics I take to be very reliable. Anything that goes much beyond this I regard as possibly right.
(BTW, I recall that on an earlier thread someone posted that they weren’t aware of any undergraduate texts on climate science. I recommend Taylor.)
Dr Curry, consider this SPPI document:
This is an insight into the skeptic mindset on the matter of trust in climate science.
All the talk about the IPCC, ideologues, dogmas, and engaging the skeptics misses the elephant in the room, which is that the skeptics are the largest part of their own trust problem. For all the justified criticisms you can make of climate science and the IPCC, the skeptics are generating a whole load more unjustified criticisms, such that the total loss of trust is mostly of their making.
Consider page 13 of the SPPI document:
“There are lots of ways to find “faster warming”. More tricks from the climate establishment:
They removed inconvenient thermometers. There were nearly 6,000 thermometers in the official global network in the 1980s, but there are now just 1,079. The removals increased the proportion of thermometers:
At airports (which are warmer than surroundings).
Nearer the equator (it is hotter at the equator).
At lower altitudes (it is colder in the mountains).”
This entire passage makes both unsubstantiated and false claims which reduce trust in climate science for any reading layperson.
Of the three points at the end, the last two are basic errors and identified as such with just a little understanding of how temperature records work. The first point can be tested (remove the airports and see how that changes the global trend).
So it’s now 2010. We’ve had years of blogs going over the temperature records. There are loads of papers out there and information on websites. GISTEMP source code and input data has been available for years. These errors in the SPPI report, at such a basic level, are unacceptable.
This isn’t some random skeptic blog releasing this. This is the SPPI. With a published report that is aimed at policy makers.
If such errors were in the IPCC report rather than an SPPI report, skeptics would pounce all over them with criticisms, including, dare I say, going a little overboard on the ramifications and the actions needed to correct the situation. Instead, with regard to the SPPI report, we get silence. Well, not quite silence, because they are at least spreading the SPPI report all over the internet!
If such errors were in the IPCC report skeptics would talk about losing trust in climate science. When it happens in the SPPI report shouldn’t skeptics lose trust in climate skepticism? Apparently not. Why not? Why does it only work in one direction?
Maybe climate scientists stopped engaging with skeptics because they lost trust in climate skepticism.
Cthulu raises a good point at 7:53pm.
The report s/he links appears to contain errors of fact and misunderstandings of basic principles in a quasi-authoritative format. What should be the response of the skeptical community? Especially considering that there is no "skeptical community", but rather a heterogeneous, unorganized, disunited population of individuals*.
I generally don’t contribute “this is wrong” comments, as it becomes nearly impossible to hold a conversation on a topic of interest, in the midst of a food fight (examples abound).
– – – – – – – – – –
(*) Excepting the coordinated actors of the Fossil Fuel Conspiracy :-)
Considering the mountains of alarmist nonsense coming out from the climate establishment, a little bit of, what do you call it, skeptical ‘misinformation’ is really not the problem and indeed welcome, seeing as it might stimulate critical thinking.
On the other hand, tremulous palpitations and feigned outrage that ‘some wrong stuff has been printed’ and ‘the public are being led astray’ can actually do more damage to science’s position.
And what exactly is wrong with the ‘claim’ you point out above? The document provides references – so it should be easy to sort that out.
I look forward to Dr Curry’s reply
First, having gone and actually looked at the SPPI report, I followed the links for the two claims made to the report they were taken from. In that report, Ross McKitrick lays out the reasons why the network changes could affect the overall record. The SPPI report did, in reducing the reasoning to bullet points, oversimplify to the point of creating a false impression of the reasoning, but this does NOT invalidate the reasoning itself and so does not necessarily invalidate the claims. Pretty much the same point the non-skeptics are making about the IPCC report – yes, we might have had to oversimplify for the policy makers, but the errors found do not invalidate the science.
Apparently, once the policy makers and advocates get involved everyone quickly loses trust in the motives of their opposition, bringing political attitudes into the scientific debate. This is why I would wish for scientists to remove themselves from advocacy – or at least identify which role they are taking. Step away from the political method, please, and return to the scientific method. Well, whatever that is – I thought I knew but now I’m not so sure…
Secondly, your statement “maybe climate scientists stopped engaging with skeptics because they lost trust in climate skepticism” presupposes that some of the most prominent climate scientists (eg. the hockey team) ever did have trust in, or engaged with, the skeptics.
They always refused to engage because, famously, they’d “just try to find something to criticize” in the received canon! The nerve!
I think that there may be a whole section of science that is not being considered that may play by another set of rules.
I will call these the elucidators, people not involved in theory and hypothesis, but in discovery.
From what I know, the rules are somewhat different, as they are in the business of fact-building by some form of experiment or visualising technique. Provided that their results are not "unlikely", they may never be fully checked, as replication may be considered simply too expensive.
Provided that their results help rather than hinder progress in the field of interest, they may never be challenged.
They do not work in isolation and they are open to scrutiny, but their work may not be challenged for the sake of scientific perfection, only for general adherence to recognised technique, note-taking, etc.
In another area, mathematics, the difference between best and effective practice is large. Mathematics is based on theorems rather than theory, and this may make a difference. The effective standard of proof (that a theorem is a statement of truth) is that a descriptive proof is convincing; it is a mix of equations and natural language, but a long way from a formal logical proof. This rarely ever fails, and proofs are not widely examined after they have been accepted by a small number of reviewers.
I am not sure how many proofs are produced per annum; it is a function of the number of mathematicians and the availability of coffee. But I think it is of the order of 250,000 per annum (a wiki-fact). Subjecting all these to best practice (rigorous proof) would be enormously time-consuming.
I may be rather off topic but I will say that scientific practice is well tailored to the needs of science but AGW is a different animal altogether and what may be good enough for the science may not be convincing to the laity.
Very perceptive observation!
Current companies want to generate profit by producing many products.
If one, say, power-generating turbine can replace 18, where is the incentive for profit? Cheaper power, yes, but not for the manufacturer; he is just interested in the articles he is producing for a profit.
So the answer, my friend, is blowing in the wind?
Not quite … too inconsistent.
Puff the magic dragon may have the right energy though…. :-)
What happens if the science is too advanced, too complex, for the current scientists?
The person doing the research and making discoveries then has every right to generate their own scientific language and time barriers.
Today it is .80007563 B.E. (Before Evaporation). Why not? Religion has done it. He could also rename the whole field of study that is advanced, as in … Welcome to Joe's World … unfortunately, you're in it. :-)
The scientists could look extremely incompetent and stupid.
Ugh, philosophy. :-)
I’ve always had a deep interest in philosophy but each of the handful of times I started taking philosophy or history&philosophy of science courses I would hit this barrier where I felt like I was a duck out of water, a philosopher who didn’t fit in as a philosopher.
Since then I have come to understand the essence of the immiscibility. Philosophers are obsessed with definitions. They can never be too careful (pedantic) with the details. Details as in ‘detail focus’. Details as in hyper-focused ‘descriptive discrimination. As my autistic friends would say, “There is a devil in details.”
Just to be clear about it here … I am using the term ‘autism’ as a functional and effective ability rather than as a dysfunctional, pathological limitation, which is the manner in which the term is overwhelmingly used. :-(
Honestly, how can one discuss logic or rational thought without acknowledging and appealing to autism? The autistic person is so strongly, if not overly, rational. It is both the autistic strength and what limits the autistic perspective.
There is a whole other universe of critical thinking out there that is “Neuro Typical” (NT), non-autistic. The trouble is that almost everything we call ‘science’ and ‘philosophy’ is formed and embedded within the autistic perspective.
The whole world isn’t functionally autistic, and probably not even half of the world is operatively autistic. The problem is that both autistic and NT people are convergence-of-reality junkies. We are all drawn to that forward-looking detail bias.
One might suggest that a considerable percentage of mathematicians and physicists have a functionally autistic rigor to their style of thinking. But however that might be so, relatively speaking the representation or influence of autism on philosophy is much more pervasive. This can be seen in the intense emphasis afforded to getting the details of the description spelled out precisely. That is the autistic perspective! It is very hard to participate in the philosophical discourse without abiding by such pedantic rigor.
I’m interested in abstraction but I’m not autistic. [frustration]
When I think of the Demarcation problem, Wittgenstein comes to mind …
In other words, a discontinuous, fragmented, distributed, multidirected reality. Welcome to the non-autistic universe!
Forgive me, but are you saying we should allow climate scientists to use the Chewbacca Defense for the validity of their climate models? :-)
Chewbacca defense or multiple and reiterated “ignoratio elenchi or red herring”? You betcha! :-)
Things appear to be as they seem, .. | time passes | … unless they happen to turn out being otherwise … | time passes | … in which case we had the problem misconstrued! … | time passes | … but the good news is that the problem has been properly reformed …. | time passes | … so things are normal and appear as they seem to be.
I think I sympathize with you to a large extent, but will likely be spending a long time yet (probably the rest of my life) figuring out where I sit in regards to philosophy.
It’s quite possible to “over-think” the philosophy of science. Even the basic assumption of scientific realism – that there is a real world of phenomena and objects we can understand scientifically – takes a lot of philosophical work to defend. Thick books have been written trying to justify the existence of a real, objective world that we can understand, and some criticisms of this view refuse to go away.
Gauch prefers to consider realism as an assumption or presupposition, and not requiring philosophical justification. This does leave a bunch of epistemological questions open, especially for the more abstract & theoretical sciences, but it’s probably the most practical way to proceed.
My former PhD supervisor was a scientist who transmuted himself into an anti-science ‘green man’. He retired and maintains involvement in climate change advocacy.
Meanwhile I struggle alone without interaction to construct a substantive concrete foundation to holistic process. I have done the lifelong work. I understand what has happened. I know what must be stated and how to state it so that considerations might move beyond the ‘virtualized entirety’ (autistic perspective)
Regrettably I am alone and at the end of the line.
The topic is perceptual. It is much harder to ‘describe’ than it is to understand or appreciate. It is utter hell to be marginalized and converse only with one’s self.
I am betrayed by the person I trusted to provide me with interaction and lend credence to my intellectual efforts.
I damn ecologists. They are the greatest of hypocrites and charlatans in this business of climate change science.
As a retired scientist and one interested in the methods of science, I offer the following comments. The word science comes from Latin, “scientia”, from “scire” : to know. It is a method of knowing and depends on careful observation and experimentation as well as inductive and deductive reasoning. It is not the only way of knowing but it has the benefit of being subject to “falsification”. Other ways of “knowing” such as personal beliefs generally cannot be falsified.
It would seem useful to distinguish observations derived from passive observing from those derived from interventions.
A word missing from the discussion so far is “causality”. (It seems to be termed “attribution” by some climate bloggers). Yet, causality is important in science generally and particularly in Climate Science. I recognise the nebulous nature of the concept of causality but it must be examined.
Passive observation alone cannot demonstrate causality unless models are constructed based on the observations. If the models have predictive accuracy, then some causal factor that was part of the model can be imputed to be causal; cf. gravity in tides and solar systems.
In interventional experiments, selective alteration, interference with or removal of a putative causal agent can give insight as to whether that agent played a causal role in the phenomenon being examined.
I suggest that climate science is an observational, non-interventional science and therefore highly dependent on accurate observation and future verification of models based on those observations. Its integrity depends on the accuracy of the observations and accuracy of its models. The latter can be judged only in the future.
Accuracy means following that line of thinking or science to its conclusion (even if it is outside of the person’s restricted areas of LAWS or science).
Learn that area of science knowledge and continue on that trail.
The first of two things I have learned is that our current science has generated its own ‘do not pass’ walls, as these LAWS are designed for boundaries.
Secondly, our science is generated around this time period. And if you use the exact same parameters, the whole thing falls apart, because the planet’s evolution is based on the planet slowing down. It has to do this to evolve and change. If the planet never changed its rotational speed and the complex systems around it, there would be absolutely no changes from one year to another. Only slight fluctuations.
Well, it gets real causal real fast with its ‘observation’ that “CO2 must be the forcing driver because we can’t think of anything else that would cause the changes we’ve seen. Therefore we must seize the BS by the horns and use it as a magic thermostat to guide the climate!”
It was a very long day and I haven’t had a chance to read over all the comments so forgive me if this has been previously stated.
All models related to the Scientific Method begin with a question. The question is human or human defined.
Given the inability of IPCC Climate Science to define their Method and their inability to confirm the checks and balances that must occur to maintain validity over time, is the issue Method or foolishness?
When Climate Science finally defines the proper Method and ensures validity, we can restart the clock and will someday conclude they got a portion of it right.
Water generated a magnificent defence system that involves it bonding with salt to prevent mass evaporation and atmospheric pressure. Any sudden disruption sets the defence for waters survival by increasing pressure and bringing salt to the surface to combat evaporation.
This was the trade off when water was evolving from a chemical mixture. The speed of the planet was much faster and so was centrifugal force. So, it was natural for water to bond with salt as not to “fly-off” this planet.
This planet was a dry planet up until 800 million years ago when evaporation started. The planet slowing allowed water to detach from salt and settle on the oceans floor.
So in essence, as our planet ages and slows, ocean water is becoming fresher and fresher.
Being a round planet, water vapour does not cross the equator.
Good stuff. Thanks for the post.
I, like Alex Heyworth, am a fan of Bauer’s book. I’m curious what Mike Zajko thinks of it. In his field, I suspect that it’s too imprecise to be useful, but it rings true for me.
One useful thing Bauer does is to give concrete examples of scientific work that do not fit the classical scientific method. For example, a geologist might study some rocks and hypothesize the circumstances under which the rocks were formed hundreds of millions of years ago. That’s generally as far as it goes: observation and hypothesis, but no experiment.
Or consider the chemist who sets up an apparatus to detect and identify the intermediate molecules that form as part of a chemical reaction. Careful observation and experiment, but no hypothesis other than the circular one that the experiment is worth doing.
Bauer notes that the fields of study that most closely and explicitly follow the scientific method tend to be those fields of study that most want to be perceived as scientific.
My wife once took a psychology class in which the entire first lecture was devoted to explaining why psychology was a science. None of my chemistry or geophysics classes ever paused to justify themselves in this way.
I haven’t read Bauer, but came across it when doing a general search of books on the scientific method – Gauch seems the most popular and most recent, and since Bauer claims to be based on the insights of STS (closer to my field) I assumed it would be more controversial.
Alex Heyworth’s recap above is interesting, though I’m skeptical about applying a progressive model for the development of scientific disciplines in general. I’d have to see the argument in detail to have more to say on that.
However I agree with you that it’s important to conceptualize sciences that do not follow the standard model of experimentation and hypothesis testing, such as the more observation-oriented ones. We could probably come up with different ways of classifying these different sciences, but I don’t really see the point.
As for Bauer’s observation: “the fields of study that most closely and explicitly follow the scientific method tend to be those fields of study that most want to be perceived as scientific” – I agree to a large extent, but this really depends on which version of the scientific method is cited. Not everyone can measure up to the model of physics, so the scientific method in these other cases is usually just cited as some generalized comparison of theory and observation. But it is true that seeing “scientific” as a qualifier (scientific materialism, scientific astrology) tends to be a tip-off that the body of knowledge really wants to be treated as a science, and is probably a little insecure about its status.
To me, much of this post is really academic navel pondering. The issue of potential harm from ‘anthropogenic climate change’ is not solely a scientific issue. Nor is it political or sociological. It is an engineering issue. Quite frankly, it is false to assume that a scientist specializing in climate science is qualified to answer the question of whether climate change is a problem.
The job of a climate scientist in human society is to examine factors affecting climate and predict what weather we will experience in the future. Regardless of any protestations about projections versus scenarios versus predictions, that is the function they are attempting to perform. One simply cannot claim that the future will be warming while at the same time claiming not to make a prediction.
The problem begins when climate scientists, with expertise in areas such as temperature history from tree rings, or programming algorithms into computer programs to simulate long term weather conditions, or other such narrow fields, claim their expertise entitles them to evaluate and make pronouncements about what engineering methods should be employed to deal with future weather extremes.
Ultimately, the role of science in human society is to provide tools. A scientist examines the universe, asks a question, theorizes an answer, and sets about determining the accuracy of that answer. Many answers are simply intermediate steps to more questions and answers. Eventually though, the end result must be a concept, method, or prescription to improve something real. The function of making use of that knowledge is the role of the engineer or technician. I see a great reluctance on the part of climate scientists to get out of the way and turn their predictions over to the engineers and technicians so that their expertise can be employed.
Of course, I doubt that many engineers, presented with the output of climate science so far would conclude that the potential cost of attempting to control future weather patterns by adjusting CO2 production was justifiable.
But Gary, don’t ya jest ITCH to fiddle with the Magic Thermostat?
That should read: One simply cannot claim that the future will be warming while at the same time claiming to not make a prediction.
My thanks to Judith and to Mike Zajko for this excellent thread, which I suggest, with a little trimming, be given to all students interested in the philosophy of science.
FWIW, I think that most of what appears above applies to any honest research endeavour conducted by human beings, not just to ‘science’. We want to find out something, and it is important that we get it right. We know what ‘right’ ought to mean. We don’t want to look foolish, so we are (usually) careful and patient in what we do. We don’t like criticism, and react (usually) badly to it, especially if the criticism is well founded (projection: we are cross with ourselves and attack the messenger).
We try to do a better job next time because we learn from our mistakes.
It’s a while since I read them, but Miliband (the father, not either son) and Lakatos were helpful on the way in which we often build defences around our prized theory/hypothesis — an AGW example, to me, is the possibility that aerosols explain a gap between theory and observation — because the theory is ‘right’, and the data can’t be ignored, so there has to be an explanation.
I don’t think that deficiencies in scientific method are the problem with AGW so much as an apparent failure to be open with data, code, methodology and so on, as though climate science is a private domain. I haven’t encountered a similar attitude anywhere else in the natural sciences over the last thirty years.
I found your post informative and thought-provoking.
I like Gauch’s Fig. 1, but his Fig. 2 appears to be a bit of a plum pudding, or perhaps an inside-out cake.
Isn’t there a good deal of common sense well outside of scientific principles, and outside of the conventions of reason too? (I expect when I take the time to interpret the models as the author intends, I’ll see more clearly. My way is funnier, though.)
Keep up the excellent analyses, and one day you may be a commendable chemist. Or possibly even a physicist. ;)
Yes, there is indeed much irrational common sense that could be located outside. The point just seems to be that certain forms of common sense are a key part of science.
Vukcevic wrote: Models are just a numerology not scientific discoveries.
That is bad. Newton’s model (called Newton’s Laws) is useful for, among other things, predicting planetary motions and guiding interplanetary explorations. A complex probabilistic model lies behind the sequencing of each genome.
Most automated physical measurements are based on a mathematical model relating the desired (and estimated) quantity to some other, more directly measurable attribute. The instruments are validated before deployment, meaning that a regression equation is fit through known values of the desired quantity (which are fixed during validation) and the resultant measurable attribute. The regression equation is then inverted in practice to yield the measured quantity. For a basic thermometer, the desired quantity is the temperature and the measured quantity is the expansion of mercury. Without models, practically nothing could be known.
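The calibrate-then-invert procedure just described can be sketched in a few lines; the calibration numbers below are invented purely for illustration:

```python
import numpy as np

# Hypothetical calibration data for a mercury thermometer:
# known reference temperatures (deg C) and the column expansion
# (mm) observed at each one during validation.
known_temps = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
expansion_mm = np.array([0.0, 3.6, 7.3, 10.9, 14.6, 18.2])

# Validation step: fit a regression of expansion on temperature
# (here temperature is the fixed, known quantity).
slope, intercept = np.polyfit(known_temps, expansion_mm, 1)

def measure_temperature(observed_mm):
    """Invert the calibration line: recover temperature from expansion."""
    return (observed_mm - intercept) / slope

# In deployment we observe only the expansion and invert the model.
print(round(measure_temperature(9.1), 1))  # prints 50.0
```

The point of the sketch is simply that even a "direct" reading from an instrument is a model inversion; the number on the dial is only as good as the fitted calibration.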
Those are just a few examples demonstrating the badness of Vukcevic’s comment. Whatever problems there may be in the models of AGW, a blanket dismissal of models is wrong. (This is not a criticism of Vukcevic in toto; some of his posts are stimulating.)
Zajko’s main point can be written more succinctly: No short list of principles that absolutely distinguishes scientific methods from non-scientific methods can be presented; A list of the 10 most important principles (experiment, measurement (operational definitions), use of logic, use of mathematics, debate, public presentation of methods, public presentation of data, hypothetico-deductive method, etc) shows that each is used in non-science sometimes, and violated by particular scientists sometimes. Each scientist has usually been wrong most of the time; we respect the great ones like Newton, Einstein and von Frisch (append long list here) because they got some important things right when their predecessors and contemporaries could not, and because they invented methods. To trust particular scientists, therefore, is a bad bet.
It should be noted parenthetically that Newton was an avid numerologist, with a special interest in Biblical numeric fantasies. Make of that what you will.
It’s easy to debunk IPCC science because they are making a prediction on global temp. If you test their climate models against historical data, the errors are huge. They can’t even ‘predict’ the known past, much less the unknown future. The problem with IPCC science is claiming that it is settled when the evidence is weak. Their theory has no predictive power. It has explanatory power, but that’s weak. All theories have explanatory power, even the phlogiston theory and the bible creation story.
IPCC is making a positive claim – anthropogenic CO2 is the cause of global warming. The burden of proof lies on those making a positive claim. As a skeptic, if I doubt CO2 is the cause, I don’t have to prove my doubt. They have to convince me that their positive claim is true. Instead, IPCC is ignoring all those who doubt. There is no need to debunk AGW. There is a need to prove it.
Another problem with IPCC science is it is being misrepresented with the certainty of physical science. In fact, it has the uncertainty of social science. They give the misimpression that their forecast is like celestial mechanics where astronomers can predict the position of the planets 100 yrs. into the future. In reality, it is more like an economic forecast. Climate models are no better than econometric models. Everybody knows economic forecasts are guesswork. But economists are more cautious. They usually forecast just one year ahead. IPCC forecasts 100 yrs. ahead.
IPCC scientists have more in common with economists than with physicists. Like some economists, they are engaged in advocacy. Supply siders preach that all economic problems will be solved by cutting taxes. IPCC preach all climate problems will be solved by cutting anthropogenic CO2. We’ve seen what happened when supply siders took control of economic policy. It might come to a bad end when IPCC takes control of intergovernmental policy.
Hear, hear! The Team’s propensity for talking about such things as 1-sigma confidence levels, even “90% certainty”, only emphasizes the point. Such talk is suitable for manipulating uninformed opinion, nothing more.
Though I Laffer at your gratuitous put-down of supply side economics!
I would like to thank Judith for this blog. I am glad to read an open-minded assessment of what went wrong with climate science, not by so-called “skeptics”, but by someone who has been on the IPCC side of the trenches. I thank you for your courage.
I am not a climate scientist, just an engineer interested in science. I actually became interested in the subject just after climategate. I didn’t want to accept what the press was saying without doing some research into what the scientists were saying, and what the facts were. I’m like that – a skeptic.
What I found shocked me, and upset me: a totally unscientific information war. Claims of tampering with data, which were only countered with ad hominem attacks; hiding of data from others who wanted to check the facts; claiming only IPCC inner-circle scientists are capable of understanding the science; claiming “the science is settled” when several scientists were openly questioning key points of the “consensus view”. I actually found a website for PR people in the sector that only gave advice on how to make ad hominem attacks on skeptical scientists, with dirt on all the prominent skeptics. People asking questions were labeled as “flat earthers” and compared to “holocaust deniers”. All this with a religious fury.
Even without looking at the science one could not avoid seeing something is seriously wrong. Science does not work like this. So I am glad to see you working to get climate science back to a respectable science, not religion and dogma.
One thing that should go, is the classification of climate scientist into:
1. Climate scientist
That alone tells me something is wrong. Skepticism is, and should be a virtue in science!
The Wikipedia says:
“Contemporary skepticism (or scepticism) is loosely used to denote any questioning attitude, or some degree of doubt regarding claims that are elsewhere taken for granted.
The word skepticism can characterise a position on a single claim, but in scholastic circles more frequently describes a lasting mind-set. Skepticism is an approach to accepting, rejecting, or suspending judgment on new information that requires the new information to be well supported by evidence”.
…a denier is something else.
I have noticed how the scientists who, through their research, came to support a different view than the IPCC very quickly became labeled as “skeptics”. A very handy way of keeping the “non-skeptic scientists” unanimous. This is the case at least with Dr Roy Spencer. I guess it is just a matter of time before you will be labeled a skeptic too ;). (Another telltale of non-science: label critics heretics, thus making what they say meaningless.)
So how about some new labels:
1. Natural warming deniers – The political movement (bordering on religion)
2. Skeptical Climate Scientists – The ones who don’t claim the science is settled, and want to find out more using the scientific path. Who can have differing opinions as part of the process.
3. Climate change deniers – Are there any left?
I am also very skeptical of Homeopathy, bending spoons with the power of the mind, electrical allergy, the 9-11 conspiracy, and many other things. I’d like to take Climate Science off that list one day.
You raise good points, but why not step away from labeling people, and focusing on what can be proved or falsified?
I think the labeling is the major problem, as it’s often used as a shorthand way to dismiss opposing viewpoints
A statement is either accurate or it is not, and I care not where the accurate statements originate. I can judge for myself whether someone has an ‘agenda’, and take that into account if it’s relevant to the discussion.
vukcevic | November 9, 2010 at 5:10 pm |
Give me a reproducible result and I don’t give a twopence for methodology
Yes, the word ‘experiment’ should have been in there; as an engineer I consider a physical demonstration not only a basic requirement but reproducibility essential.
As an example I would mention Svensmark’s Cosmic Rays hypothesis and CERN Cloud experiment.
This is what climate science should be about; CO2 hypothesis needs similar rigorous testing in the laboratory conditions.
Result then reproducibility!
I have an the above experiment, the effects of the Earth’s magnetic field (rather than heliospheric) on climatic events:
Well its been a good read, thanks and good night.
Not so. You are playing the game of trying to force questioners right back to First Causes; the ultimate epistemology. Forget that. This is about the basic “consensus” view of what makes adequate and respectable science. The IPCC’s lab-rats-by-proxy and editorial summarizers seem to have a long hike to get to that standard.
Sorry, somehow this got attached to the wrong post; it relates to John Whitman’s entry, below.
Thank you for your considerable post. Judith thanks for hosting a very relevant topic due to the significant and escalating controversies surrounding IPCC supported climate science.
What I found in the post was a sort of cultural survey of the usefulness of various interpretations of ‘scientific method’; indirectly implying focus on the IPCC supported climate science.
More to the point, I would expect discussion of objective knowledge as being science and the comprehensive process to obtain objective knowledge being the comprehensive scientific method. I would expect to see discussion of demonstrating the very possibility of objective knowledge by the human mind; then how to prove it is objective; then how to properly apply objective knowledge to reality; and then how to test its proper applications in reality.
That is the ‘first’ science to be addressed to be used to measure the controversies in IPCC climate science.
Can you provide pointers to this based on your considerable research? I hope other commenters here can assist in pointing some direction.
To me, without that discussion, then what can we say about the controversies in IPCC climate science? Without that then we are just socializing in a blog cocktail party. Don’t get me wrong, I like cocktail parties and Judith has a nice place, but . . . . . sometimes we need to attend to business before the cocktail party starts. : )
Another thought – if climate “science’s” claim to use methodology which disregards the traditional Scientific Method is as reasonable, or as “culturally acceptable” as you seem to me to imply, one would expect either that other fields would recognise and respect this, or that interdisciplinary bickering, based on disagreement over methodological doctrine (another word with endless possibilities for dispute) of whole fields, by practitioners in other fields, would be widespread. I am aware that such disagreements rage WITHIN disciplines, particularly over hypotheses which await the design of a suitable experiment, but I’m not aware of another instance of an entire field whose methodology attracts so much adverse comment from such a wide range of its putative peers. This reinforces the impression of special pleading on the part of climate “science” for an indulgence not available to the rest of science, and increases the determination on the part of sceptics to refuse it.
Another way of defining climate “science” might be to say that it comprises individuals from other fields who have granted the suspension of the Scientific Method the field demands. Those who sniffed the methodological air and found that their noses wrinkled remained climatologists or glaciologists or whatever their true field was before all this nonsense started, and were vilified as deniers for their trouble. Since in this analysis, suspension of the Method becomes a defining characteristic of the field, defending this shortcoming by saying, in effect, “that’s just what climate scientists do”, seems like circular reasoning to me. The point, which you raise briefly but fail, IMO, to deal with satisfactorily, is that they shouldn’t.
And if suspending the Scientific Method were such a good way for climate “scientists” to go about the business of predicting future climate, how has it borne so little fruit? How come models constructed along the privileged lines claimed for climate “science” fail disastrously when the UK Met Office tries to use them for the prediction of weather, while Piers Corbyn, who claims no such privilege, enjoys statistically significant success?
Sorry, the “you” in the first sentence is Mike Z!
No problema. : )
I think methodological disagreements between the sciences largely don’t erupt because there is usually little reason for such disagreement, except in cases (like creationism) where some claim to scientificity is seen as a general threat to all of science. Scientists don’t usually have a good grasp of the methods used in other disciplines, and even if they did and found them disagreeable, there wouldn’t be much reason to argue.
Climate science is different, because climate change and the policies intended to address it affect us all. The result is a lot of people looking at the climate sciences through their own lens of what they consider methodologically appropriate, and many of them do not like what they see because there are different standards at play than what they are used to or were taught in school.
I disagree that climate science is defined by a “suspension of the Method” because I think the Method is a bit of a myth that never really applied to some sciences in the first place. However, the point you raise about whether avoiding this Method has been fruitful is a good one. The models have poor predictive power, which is why they are often treated as projections. Testing them through an application of the Method is therefore not entirely appropriate, but this should not leave them unaccountable to evidence.
But let’s leave the GCMs aside – how does the rest of the evidence hold up? There is a fairly robust case for the physics of radiative transfer etc., which has some explanatory power for what (arguably) reliable temperature records we possess. Hypothesis testing is hard in this regard, because we simply can’t wait long enough, and even then the test will never be definitive – but we do have some observations and a hypothesis to explain them.
Are the observations good enough? Is the hypothesis based on solid ground? Many people would say yes, and many would disagree. No application of the Method is going to be convincing for either side in this case. If this were just a case of regular, policy-irrelevant science, we wouldn’t need to take it any further, but it isn’t.
My personal take is that it is best to craft policy that isn’t based on the outcome of a definitive test for the hypothesis, one that would be a good idea even if many of the skeptics’ criticisms have merit.
No, CS can’t have it both ways. Either it makes “projections” which are explicitly labelled as the elaborations of some “expert’s” opinions, and are not treated as robust models that encompass and can be challenged with all available hard data, or they are actual forecasts, predictions, which can be “falsified” by testing against real world outcomes (a long-term prospect, to be sure).
The Team seems to want the respect and clout associated with the latter, while playing around with the former–with complete gatekeeper control of the inputs. Poor show.
BS Footprint and Tom:
I maintain that these methodological disagreements arise from the misimpression among most scientists and philosophers that the so-called “problem of induction” remains unsolved. A consequence is that rather than build their models under the principles of logic, scientists build them under intuitive rules of thumb called “heuristics.” The heuristics vary by discipline and by individual within a discipline, hence the disagreements over methodology. These disagreements are symptomatic of illogic in the thinking of scientists.
We have a 47 year old solution to the problem of induction, hence we don’t need to wait for the solution to the disagreements because we’ve already got it. I’ve drafted the first in a series of essays on this and related topics and will publish the series on Climate, Etc. if Judy approves.
Yes, climate models give projections. But these projections are dysfunctional even compared to the inaccurate economic projections of econometric models. In economic projections, the models are based on multiple regression, meaning there are many independent variables. In order to forecast the dependent variable, you have to make a forecast also of the independent variables. The model, in effect, describes the cause and effect between the independent variables and the dependent variable. Of course if your forecasts of independent variables are inaccurate, the forecast of the dependent variable is also inaccurate. This is the source of uncertainty in econometric models. It is nearly impossible to accurately forecast all the independent variables.
In climate projections, the models are based on simple regression, meaning there is only one independent variable: CO2. All other variables are held constant, so the dependent variable (temperature) depends solely on CO2 and nothing else. This never happens in reality. Climate models do not represent reality; they do not properly represent cause and effect. If you put other independent variables into the model, their effect would be so great that CO2 would become insignificant. But these variables are nearly impossible to forecast, so what do climate modelers do? They either exclude them from the model, or include them but hold them constant so that only CO2 matters. This is not science. This is manipulating the model to get the result you want.
But let’s leave the GCMs aside: how does the rest of the evidence hold up? There is a fairly robust case for the physics of radiative transfer etc., which has some explanatory power for what (arguably) reliable temperature records we possess. Hypothesis testing is hard in this regard, because we simply can’t wait long enough, and even then the test will never be definitive, but we do have some observations and a hypothesis to explain them.
The rest of the evidence does not hold up. The greenhouse effect can be described by the equations of quantum mechanics. If you double the CO2 concentration from 280 ppm to 560 ppm, the increase in temperature is 0.3 C, more or less. How did the IPCC get a sensitivity of 2 C to 6 C? By adding positive feedbacks in the dubious climate models. These positive feedbacks are over and above the greenhouse effect, and they are the result of the complex interaction of various independent variables which the models cannot actually predict. In short, they are guesswork.
Are the observations good enough? Is the hypothesis based on solid ground? Many people would say yes, and many would disagree. No application of the Method is going to be convincing for either side in this case.
The observations are inconclusive and do not prove the AGW hypothesis. Look at the temperature and CO2 data from 1900-2000. CO2 is steadily increasing, but temperature fluctuates, warming and cooling. If CO2 were the only cause, their relationship would be directly proportional: either both increasing or both decreasing. The fact that their relationship is sometimes inversely proportional means that other variables influence temperature.
But there is correlation between the two. Does it mean CO2 is the dominant variable? No, because correlation is not causation. Do a regression analysis of consumer price index (inflation) vs. CO2. You will see CO2 has a higher correlation with inflation than with global temp. Does it mean CO2 is the cause of inflation? Further, I did a regression analysis of global temp. vs. a random walk function. Guess what. Global temp. has higher correlation with the random walk function than with CO2. Does it mean temp. is random? Not necessarily. A function (temp.) would look random if you are using a variable (CO2) that is not a true cause.
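The spurious-correlation point is easy to demonstrate without any climate data. The sketch below is my own illustration, not the commenter’s actual analysis: it shows that two completely independent random walks routinely correlate strongly, while independent white noise almost never does — which is why a high correlation between two trending series, on its own, says nothing about causation.

```python
import numpy as np

def corr(x, y):
    """Pearson correlation coefficient between two series."""
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(42)
n, trials = 200, 1000

# Independent white-noise pairs: |r| > 0.5 essentially never happens
noise_hits = sum(
    abs(corr(rng.standard_normal(n), rng.standard_normal(n))) > 0.5
    for _ in range(trials)
)

# Independent random-walk pairs (cumulative sums of noise):
# |r| > 0.5 happens surprisingly often, despite zero causal connection
walk_hits = sum(
    abs(corr(rng.standard_normal(n).cumsum(),
             rng.standard_normal(n).cumsum())) > 0.5
    for _ in range(trials)
)

print(f"white noise: {noise_hits/trials:.1%}, random walks: {walk_hits/trials:.1%}")
```

This is the classic “spurious regression” effect: trending or integrated series violate the independence assumptions behind ordinary correlation, so a large r between two such series carries little evidential weight by itself.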
As somebody without any knowledge of GCMs, I found your remarks to be eminently logical. I would be very interested to hear a rebuttal from the other side.
Marvellous posting. One of the best I’ve seen. I will be linking to it and quoting it freely — that’s what you get for making it public! >:)
It is a well-established scientific fact that weather and climate form a chaotic system, which is inherently uncertain and unpredictable. To demonstrate the uncertainty of climate, suppose global temperature is a product of 5 independent variables (in reality it could be more). In order to correctly predict global temperature, you have to correctly predict the values of all 5 variables. Suppose you can predict each variable with 90% certainty. The probability that you correctly predict temperature is (9/10)^5, or about 59%. This is a little better than a coin toss (50%).
However, you don’t want to predict just one year ahead. You want 10 years or more. Predicting each successive year depends on correctly predicting the previous year. The probability of predicting the 10th year is (0.9^5)^10 = 0.9^50, which is about 0.5%, or roughly 1 in 200. Your chance of correctly predicting the temperature is the same as your chance of winning a roulette game with 200 slots on a single throw of the ball. Note that 90% certainty is reduced to 0.5% certainty in just 10 years with only 5 variables. More variables and longer times reduce the certainty further. Such is the nature of chaotic systems.
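The arithmetic in the comment above is easy to check. A minimal sketch, using the commenter’s assumed numbers (five variables, 90% certainty each, compounded over ten years):

```python
# Compounding the assumed per-variable certainty (illustrative numbers only)
p_var = 0.9       # assumed certainty of predicting one variable for one year
n_vars = 5        # assumed number of independent variables
years = 10

p_one_year = p_var ** n_vars        # probability of getting one year right
p_ten_years = p_one_year ** years   # probability of ten successive years right

print(f"one year: {p_one_year:.2%}, ten years: {p_ten_years:.2%}")
# one year: 59.05%, ten years: 0.52%
```

Whether prediction errors actually compound multiplicatively like this is itself an assumption; modelers would argue it applies to year-by-year weather trajectories, not to long-run climate averages.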
Ironically, the IPCC predicts 100 years ahead and claims 90% certainty. (They must be modeling a non-chaotic system, unlike the climate.) The IPCC argues the climate is chaotic but predictable (an oxymoron). To justify this absurd claim, it used actuarial mortality rates as an example of a chaotic system that is predictable. This is pseudoscience, since death may be random but it is not chaotic. The uncertainty in random systems is fairly constant, while the uncertainty in chaotic systems increases over time. Random systems in aggregate may be predictable using statistics: the graph of mortality against population is linear and hence predictable. The graph of global temperature over time is a wave (fluctuating) and cannot be described by single-variable linear or quadratic equations.
The 90% certainty of IPCC is of course false. Dr. Patrick Frank tested the climate models of IPCC by comparing how well they matched known historical data. The errors are over 2,000%. They have no predictive power. Their predictions are no better than random guesses. This simply proves the well-established fact that the climate is unpredictable and disproves IPCC’s absurd claim that chaotic systems are predictable.
If we accept the fact that the climate is chaotic, it is very easy to dismiss the IPCC’s obsession with CO2. If CO2 controls global temperature, it necessarily means that temperature is described by a single-variable CO2 equation. Without such an equation, there is no cause and effect to speak of. The effect of CO2 can never be determined, and is hence indistinguishable from pure fantasy. In effect, the IPCC is saying, “CO2 is the cause but we cannot prove it.” We can never attribute a cause by mere pronouncement, unless this is a matter of faith, not science.
On the other hand, if such an equation exists, it follows that the climate is deterministic and predictable. Anybody who can show that global temperature can be described by a single-variable equation has essentially proven that the climate is deterministic and predictable. Where is the equation? Until we find it, we should accept that the climate is chaotic and unpredictable, and cannot be described by a single-variable CO2 equation.
In regards to John: “I would expect to see discussion of demonstrating the very possibility of objective knowledge by human mind; then how to prove it is objective; then how to properly apply objective knowledge to reality; and then how test its proper applications in reality.”
This is very tricky to answer. Epistemologically, objectivity is rather hard to defend, and is closely connected to scientific realism. As such it can be further divided/specified into other types (there is more than one way of thinking about objectivity).
I found Ian Hacking’s “Representing and Intervening” a good read in this regard; it’s very well written, and covers the basic post-Kuhnian philosophical disputes (Kuhn didn’t leave much room for objectivity). Hacking distinguishes being a realist about the existence of objects (electrons etc.) from being a realist concerning theories, which is a much trickier position to hold.
Overall, I don’t think the question of objectivity is one we need to fully resolve before we can get on with the science. It lingers in the background, raising questions that refuse to go away, but the sciences have ways of approaching the ideal of objectivity that are “good enough” if not epistemologically solid.
Gauch spends a few pages on objectivity, along with realism, truth, and all that good stuff most of us take for granted but which tends to deteriorate under close philosophical inspection. His take (and one I largely share) is that objective knowledge is largely maintained by certain social aspects of science. Testing, replication, explanatory power, prediction, agreement on basic scientific facts among various people with their own subjectivities (consensus), and the generally complementary nature of various scientific truths are good indicators that we have an objective grasp of reality. Objectivity can never be guaranteed, and is in many ways related to the ethical underpinnings of science (which are also very open to dispute), such as openness, honesty, humility, etc. But there is a practical argument to be made that, considering what the sciences have achieved and the levels of agreement among people with diverse mindsets, they have developed rather successful techniques for developing objective knowledge. This is partly why I think exploring the social nature of science is so important, because ultimately it is only in the broader context of science that something can be recognized as objective. You can take this approach too far, and treat all of science as a social game disconnected from the physical world, but to think of the physical world as simply bestowing objective knowledge upon us is an error in the other direction.
I’m also currently, slowly, getting through Brian Ellis’s “Metaphysics of Scientific Realism”. Ellis has essentially been trying to defend objectivity and realism for a long time, and been forced to revise his position especially in regards to quantum mechanics, but he seems to be largely able to accommodate it.
You said, “I’m also currently, slowly, getting through Brian Ellis’s “Metaphysics of Scientific Realism”. Ellis has essentially been trying to defend objectivity and realism for a long time, and been forced to revise his position especially in regards to quantum mechanics, but he seems to be largely able to accommodate it.”
Thanks for your reply.
I think the long term advances in science are advances in the achievement of objectivity; the evidential buildup speaks toward it. However, we cannot say that what we know from science is objective without the ‘tricky’ epistemological and metaphysical analysis. To not look closely at the basis of objectivity is to leave the scientific door wide open to inimical views. A tree without strong roots cannot withstand much stress.
Thanks for the pointer. I shall look up Brian Ellis’s “Metaphysics of Scientific Realism”.
Looking at Gauch’s fig 1, right in the middle, in the general principles of scientific method, should be honesty. Without honesty, the scientific method is useless.
I think what has happened in climate science is, more than anything, a loss of honesty. Were the various people involved in the hockey stick construction, really unaware that they were either abusing statistics, or were badly in need of statistical help? Even if they can plead ignorance, wouldn’t honest scientists have obtained some statistical expertise as soon as questions of that nature were raised?
Likewise, those who supply global temperature data in which most data points are calculated by interpolation (without an explanation and appropriate caveats), can’t really be considered honest scientists, can they?
Whenever the methodology of a climate science paper is questioned, a common response is, “Never mind those details, there is other evidence pointing in the same direction!”. This sounds less like science, more like an alchemist struggling to convert lead into gold, who asks for a little gold to start the process off!
Which hockey stick construction? There are dozens of the things.
Well how about the original one – the one analysed by Steve McIntyre!
Since he showed that the algorithm could extract a hockey stick from low pass filtered noise, it would be easy to produce extra hockey stick graphs – but what would they show?
As I tried to explain, it really is awful to try to justify one paper by claiming there is lots of other evidence pointing the same way! Science just doesn’t work that way!
Skeptical should mean “open to confirmatory evidence” as much as “open to refutation.” The calculation of the “global temperature anomaly” on the basis of the instrumental temperature record is quite robust, in my opinion. But statements of naked opinion should carry little weight. See, instead, the work of the “Clear Climate Code” or Zeke Hausfather’s posts at Lucia’s Blackboard. Both are easily googled.
Orkneygal, for hockey sticks with big problems, you could look at Mann08. This is recent, authored by heavyweights, published in a highest-impact peer-reviewed journal, heavily cited, and adamantly defended by leading lights of climatology.
If it is true that the number of land measuring stations has been reduced from about 6000 to about 1000, and the remaining 5000 data points are computed by interpolation (is this in dispute?), then nothing will persuade me that the process is robust!
The real problem about “being open to confirmatory evidence”, is that once you realise how part of the science has been performed, you can’t tell without deep involvement, just what is valid! If there is a significant AGW effect, we won’t know before some reliable data has been recorded and analysed in a competent fashion!
David Bailey (11:39am) —
The context and meaning of the reduction of land stations is in dispute. Further, efforts are underway to bring more-recent records of “dropped” stations back into the historical networks. (They weren’t so much dropped, as not digitized and included.)
1. The effects of station number and station selection can be tested, and have been. These are subjects of fruitful ongoing inquiry. See e.g. CCC and Zeke’s post.
2. Sometime soon, GISS and other historical collections will not show the dropoff in station numbers (in the 1990s IIRC). Will analysis of the expanded, improved record influence your opinion, if the results continue to provide evidence for recent warming? If they do not?
First, you might ask yourself why, if climate research was so important, these raw sources of data were considered so unimportant!
Secondly, it is important to remember that there is great puzzlement as to why raw temperatures have been adjusted upwards in recent years. On average, corrections for UHI effects would need to be negative, even if a few measuring stations ended up in watered parks and needed a positive adjustment!
I didn’t start off with any particular views on this subject before the CRU emails were released, but I am simply staggered by how the experts treat their data. Some of it wasn’t digitised (digitisation must have become progressively easier over the period of this change), and then it would seem the CRU managed to lose some of it!
Please look at the following account, to see how compromised the land temperature has become:
Thanks, David. I’m aware of this and other reports of significant errors in the instrumental record. I hope you have a chance to look at the work that I cited earlier: such errors are not being ignored.
You seem to have made up your mind about this. If it’s indeed true that “nothing will persuade [you] that the process is robust,” then perhaps this is not a dialogue quite so much as interleaved monologues.
Nevertheless, perhaps others will be interested in learning more about this, if you are not. Zeke Hausfather and Steven Mosher did an analysis of global temperature records at Watts Up With That. There has, in fact, been “deep involvement” by people who were highly skeptical of the data.
I am viewing this as an outsider. When you look at a theory which is being used to justify a total upheaval of the world’s economy, you expect the science to be done competently.
When you discover that much of the data got discarded or lost, that it was also adjusted in various dubious ways, that some of the published results seem to be based on dodgy statistics, etc. it is very hard to start to believe that process again – to believe that climate scientists are suddenly committed to real science, and not to just stuffing the evidence for their worst goofs back under the carpet!
I would agree with you (I expect) that this is a tragedy if there really is an important signal buried in that data, but the only way to restore trust in the process would be to remove all those responsible for the current mess (which was, BTW, only exposed by the efforts of the “climate deniers”). Dodgy used car salesmen don’t get much repeat business.
Exactly; which is why, IMO, Jones goes around these days looking like a dog whose dump under the corner furniture has just been tracked down as the source of (much of) the bad smell in the room. The rest seems to come from a large wet spot near the center of the carpet which Mikey is barking furiously to protect …
The worst thing man created was the thermometer.
Climate science has gone to great lengths to use this one instrument for models and predictions. The worst possible thing to do is use it as a barometer for the world’s climate health. Temperatures are regional events, changed by factors outside of the thermometer measurements.
Directional changes of high pressure and low pressure give a few days’ warning of an event that is coming or being generated.
Most of Nature uses changes in sunlight as the signal for when to shed leaves and hibernate. Is this a temperature event?
Fish spawning, is that a temperature event?
The proxy of tree rings for global climate temperature prediction proved to be inaccurate.
So, why is climate science so focused on temperatures when there is a great deal more involved?
Another head scratcher on human stupidity.
Ice measurements are used to see if the ice sheets are expanding or contracting. What use is the thermometer?
To tell regions when you need to put on a coat or wear shorts.
Migration, is it a temperature event?
No, it is a food driven survival event.
The events of trees and plants getting ready for winter mean food supplies are also hibernating.
So, where is the sense in using thermometers as models?
I think I understand your frame of reference, but I must disagree most vigorously that the worst thing man ever invented was the thermometer.
Here are some human inventions that I think are much, much worse than thermometers, in no particular order-
-Crimes against children
I’ll end there. I think my point is made.
I stand corrected!
Dr. Curry & Zajko
A couple of sentences I posted provoked some criticism; a few additional words (here in bold), I think, may help to clarify what is meant:
In my book methodology counts for nothing without a result. Give me a reproducible (experimental) result and I don’t give a twopence for methodology.
Models are just numerology (unless experimentally confirmed), not scientific discoveries.
Vukcevic, your terse writing style is not conducive to understanding. In any case your first sentence is certainly true, but only because if there is no result there is no science. Your second sentence is about your feelings so irrelevant. But I would point out that if an experiment is reproducible it can only be because it has a clear methodology.
As for models I have no idea what you mean by numerology. At a minimum models tell us if an observation might be explained by a given mechanism. That is, they establish physical possibility. Models are a fancy form of theory building. Confirmation comes later, if at all. Note too that in climate science, as in several other observational sciences, experiments are not generally possible.
My working definition of science is the mathematical explanation of nature based on observation. Experiments are a special case of observation. Models are a special case of explanation. Your view is unduly narrow, to say the least.
The ‘terse writing style’ is a consequence of what the IPCC and its promoters are asking (or even insisting) of humanity without a single solid proof that they are correct.
Any model has no physical reality until proven by an experiment; it is numerology, playing with numbers without knowing whether the result is meaningful. Consider one of the elementary laws of physics, which was so spectacularly proven by the most devastating experiment in human history. Of course it is E=mc².
I am afraid I really can’t understand what you are saying. Nor do you seem to be responding to what I said. First of all, there are observational sciences where experiment is not possible. Astronomy, plate tectonics, paleontology, and climate are a few obvious examples. But models are now commonly used throughout these and all the sciences. If you divide science into theory and observation models are on the theoretical side, basically working with equations. The validity of computational science, as modeling is now called, is well established. Also, science is about evidence, not proof. Proof only happens in math. Perhaps you should take the time to say what you are trying to say clearly.
Models, as employed in the predictive, projective IPCC sense are not explanations at all.
Models, as they would have existed, but for the IPCC, might have qualified.
I am afraid I do not understand what you are saying. Can you elaborate? A climate model is a possible explanation for climate behavior, or at least some aspect of climate behavior. I do not think any of the existing models are acceptable, but that is for specific reasons. I have no idea what you mean by “Models, as they would have existed, but for the IPCC” and can’t guess.
If a range of models allows for a range of observations, parameters, and sensitivities to be accommodated – as does the IPCC – we are already out of the realm of scientific prediction.
Scientific prediction, in the form of climate models, predicts that temperatures could rise by anywhere from 1 degree to 8 degrees in the next 100 years. Big deal – I could say such things without any of this model business.
I am not immune to numerology either. Here is an example: some years ago I defined the equation demonstrated here:
All the numbers are astronomical values (look them up on Wikipedia etc.); the future may or may not prove whether the equation holds. But is it numerology?
Possibly, since there is no known physical law to establish a mechanism, and experiment is not possible.
Is it a ‘model’ of the behaviour of the solar polar magnetic fields?
Most likely. But does it represent reality? It may not, and it would never be possible to prove, even if the future confirms the equation’s prediction.
Could one on basis of it build a theory?
No, it is only a hypothesis (there is a difference).
Should one follow the IPCC’s example and ask humanity to prepare for the onset of another Little Ice Age?
Possibly, but it would achieve zero result.
I see this post as a prime example of too much time, too little challenge. Most Westerners simply have too little to truly worry about, and too much time to be doing it. AGW is meaningless to someone struggling to feed a family in Venezuela. We who are well housed, clothed and fed, however, must invent artificial challenges to keep us stimulated, boredom being worse than poverty, although as my father might have said, better to be rich and bored than poor and bored.
Discussing what science is philosophically, and whether or not climate science is part of a ‘true science’, is an academic waste of time. AGW theory states clearly that the climate is warming due to the activities of industrial mankind, period. This must be proven, or shown to be highly probable. The methods by which this can be done are many. It is not, as frequently stated, too complex for most people to understand. In fact, it is exceedingly simple. Making it some complex statistical regression of questionable instantaneous temperature readings is probably the height of scientific myopia. I can do a far better job by keeping a record of my fuel oil and air conditioning expenses, which actually reflect the true sum and total of the outside temperature energy throughout the year.
Regarding the climate debate, I think it is clearly the case that “science in general proceeds just fine with indefinite conclusions, entertaining multiple hypotheses simultaneously” (which Mike mentions but does not explicitly endorse).
The problem is that public policy does not do this and climate change is fundamentally a policy issue, not a scientific one. In fact the IPCC is part of the policy community, not the science community. Its members are countries and its job is assessment, not science. The IPCC does no science.
This is a great source of confusion. Climate science has seen a proliferation of alternative explanatory hypotheses, not a narrowing. The scientific literature itself is full of conflicting ideas. This is normal scientific progress. Policy cannot accept this uncertainty, hence the debate as to what it all adds up to, as though it must. But science is not like that.
The IPCC includes parts of both the science and policy communities, including many working scientists, as well as national/government representatives. There is always a challenge translating scientific conclusions, which typically lend themselves to alternate explanations, into policy. Science can afford to take its time and see where progress will lead, with alternate hypotheses sometimes hanging around for decades, whereas policy typically cannot.
Sorry. When the IPCC’s science is cited by the EPA as justification for classifying CO2 as a pollutant worthy of draconian controls, the line has been crossed. The IPCC de facto must take the hit (ALL the hits) for the failures and inadequacies of the underlying science. That it has suppressed caveats, qualifications, and internal questioning to come up with its final reports just makes the situation worse.
Real governments, and the shambalooza called the UN, are making and attempting to vastly expand, very serious decisions about suppressing private activity and welfare for the “common good” under the cover of these reports and studies.
‘No holds barred’ time.
It strikes me that there’s only one field that grants the possibility of absolute certainty to the enquirer, and that’s mathematics. And yet, in its purest form, it’s completely abstract, though that is not to say it can’t have practical application.
I think the scientific method is at least in part to do with trying to replicate the possibility of a mathematical degree of certainty. Physics comes closest, because it concentrates to a large degree on fundamental phenomena. These manifest nature in its simplest forms. Simple enough so that maths can quite often very accurately model reality (accuracy implying that phenomena can actually be observed and measured, either in nature or in laboratory experiments).
But then, one thinks of cosmology, where people are working in largely mathematical terms on pure abstractions, leading them to develop concepts of things that are by definition unobservable and unmeasurable (e.g. dark matter/energy), so that “accuracy” has no meaning other than in the limited sense that the mathematics may be correct. Is that science, I wonder? Or is it simply the reification of abstractions in the mind of the practitioner, which are then projected onto the universe?
I chose cosmology because it’s the most extreme example I can think of that illustrates the tendency to project meaning onto reality. This enables one to fend off the natural human abhorrence of a lack of explanation. We will fill the void with something, and successful, useful science fills it with something that approaches the truth and, at best, turns out at some stage to be useful.
Because human beings are involved, and because they want to find explanations for the world and its phenomena, it’s inevitable that human factors will come into play. There’s kudos in finding explanations, especially when our peers applaud us for that. And there’s protectionism when it comes to our own preferred theories, and defensiveness if they’re challenged. These kinds of considerations apply in everyday life, not just science; and it’s a mistake to think that at the rarefied level of science, they are somehow always overcome.
If there is a scientific method, I’d say its business is to eliminate, as far as possible, the influence of human failings. This is routinely possible in mathematics, because its methodology is as rigorous as it gets. Anything new one comes up with has to be scrutinised and accepted, with no possibility for wiggle room. And anyone who comes up with something new, or manages to spot a flaw in something hitherto deemed wiggle-free, can’t be gainsaid if the mathematical rules have been followed.
Physics approaches nearest this ideal. Then as one gets further and further from simple, fundamental phenomena, it gets less and less possible to be wiggle-free.
At the root of AGW, there’s some basic physics which can’t really be argued with. I say that not because I myself know it to be so, but because I accept it: and that’s because it seems that even sceptical scientists endorse it. It’s better than consensus; it’s virtual unanimity.
Once you go a certain way past the relevant basic physics, the phenomena become more complex, involving the interaction of many variables; some well known, some less so, and some, very probably not yet even thought of.
The question is, does the scientific method employed eliminate the possibility of human failing? First of all, in terms of the nuts and bolts of procedures and procedural approaches, but second, in terms of forestalling the possibility of bias over and above those?
This isn’t, I daresay, something unique to climate scientists involved in investigating this particular field. A host of factors come into play, and they’ve been discussed here ad nauseam – the possibility of confirmation bias, tribalism, secretiveness, etc.
There are two pretty polarised views on this. One says that a few influential players, consciously or unconsciously, have let their biases run rampant, and that the systems in place that should in theory help prevent this, which I think are really all part and parcel of sound methodology, have been open to abuse. I’m thinking of such things as peer review and informal networks of scientists with instincts to circle wagons. The second says that if there has been any impropriety, it is minor and doesn’t affect the validity of the field.
How is a poor non-expert to come to a sound judgement? S/he may not know enough of the science to be able to judge its overall validity; not enough to come down on either side of the fence.
In such circumstances, non-experts may be wont to apply their own methodology, which is itself not free from all possibility of being influenced by the same human failings as sometimes happen in science. My response is one of agnosticism with a leaning to scepticism. Another person’s may be to reject the whole thing out of hand, and yet another’s, to give scientists the benefit of the doubt. All of these responses are, I think, in some degree justifiable. Indeed, they are responses which are found not only in non-experts, but experts.
An excellent comment, Michael. However, I don’t agree with this aspect:
In the sense that it’s coupled with your amplification:
And my comments are not meant to reflect on solely your summary above; it’s seen all over the place.
The ‘basic physics’ for which there is virtual unanimity is that addition of a so-called GHG, CO2 for example, into a homogeneous mixture of other gases changes the radiative-energy transport properties of the original mixture. So long as no other physical phenomena and processes are associated with the mixture and the radiative transport of energy.
This characterization is not the root of the AGW physical phenomena and processes. It’s orders of magnitude less than even the Spherical Cow version. The most important aspect of this situation is that it is the antithesis of the scientific method. The ‘public face’ of the problem is simply wrong.
Your additional statement above comes closer to the root of AGW, especially the part about ” . . . many variables; some well known, some less so, and some, very probably not yet even thought of.” It is well known that some of the parameterizations that are critical to accurate representation of the important phenomena and processes are at best correlations of empirical data, and at worst ad hoc EWAGs employed to close the model equations. By accurate representation I mean high fidelity with respect to the real world.
It is these extremely overly simplified summaries, overly simplified to the point of being useless, that have, in my opinion, attracted well-informed individuals to the issues. We see them all the time; mathematical models based on the fundamental principles of mass, momentum, and energy conservation, these equations are solved, all the software is in excellent condition and qualified for production-grade applications, and the list goes on. While at the same time, the real issues are never mentioned. Plus, when well-informed individuals attempt to note the lack of attention to especially important issues, those individuals are given a label and summarily dismissed as being cranks, among other conservation-stopping antics.
oops, should be conversation-stopping.
I do that a lot. Comes from dealing with mass, momentum, and energy. Especially the first and last.
Thank you for your kind appreciation of my post.
The “basic physics” I referred to concerns such things as the GHG effect and radiative physics, which, fundamental as they are, I freely admit to not understanding well. I have to defer to expert opinion, and am inclined to accept these basics, in isolation, as it were, as being facts on the table that I don’t see many experts in either camp rejecting.
You evidently see the basic physics at a higher level than I do; at a level at which it’s already tending to complexity because other factors have been brought into play. It’s quite likely that there’s no meaningful way to project what can be demonstrated in, say, a carefully controlled laboratory experiment into even the simplest of real-world scenarios.
It may be that we are just at cross-purposes as to where we draw the line, but in essence, agree. I certainly feel that once the complexity has set in, which I can readily accept occurs once we start talking about “CO2 in a homogeneous mixture of other gases”, it becomes a different ball game. And I’d agree that there’s a kind of blurring of the shift from simplicity to complexity.
AGW proponents constantly refer to basic scientific phenomena that have been known for a century or more, and make the conceptual leap from that to inferred certainty about how they will operate in a more complex system. And let’s face it, although scientists on the pro-side may be aware of the leap, they seem quite happy not to interfere with the over-simplified PR directed at lay people.
That, I admit, both annoys and rings an alarm bell. Such scientists – and Dr. Curry is a noble exception – aren’t being completely principled. They should be all over such simplifications like a rash; but it’s convenient to let them stand because, though inadequate, they allow the juggernaut to rumble on regardless.
Finally, I agree that the real issues struggle to emerge. They would never have impinged on the consciousness of a layman such as myself were it not for the more knowledgeable of the sceptics on blogs. I’m lucky that I at least have a science degree, albeit in a not very numerate discipline (zoology). Hence, even without detailed understanding, I can grasp the nature of some of the issues. But as for laypeople without even this level of awareness, well, things must be even more difficult to evaluate.
The point you make about complexity is key. Complexity is not just a matter of more difficult computations to perform; it means that in most cases it is IMPOSSIBLE to predict the behaviour of a system of variables from their ‘basic’ characteristics. Such systems have their own rules, which must be learned and applied on their own merits.
An excellent e.g. is the famous “laboratory” behaviour of CO2 when impinged on by IR. When you actually get to the point of applying this in gas mixtures, the actual result, as summarized by the ‘concrete thermodynamics engineering textbook author’ Schack, is “that the radiative component of heat transfer of CO2, though relevant at the temperatures in combustion chambers, can be neglected at atmospheric temperatures. The influence of carbonic acid on the Earth’s climates is definitively unmeasurable.”
Thermodynamics engineers, you see, have to operate at the complex system level.
I agree with both Redbone and David Wojick.
Climatologists are not to blame for copying the techniques that worked well for decades for the space science community.
The deep roots of the climate scandal date back to events of 1969, when two important sets of extra-terrestrial material (Allende primitive carbonaceous meteorite on 8 February 1969 and lunar samples from the Apollo Mission to the Moon on 24 July 1969) became available for detailed analysis in our best equipped laboratories.
Data from these two samples were unpalatable to influential NAS members with control – by budget review – over the US space science program:
1. Data from the Allende meteorite showed that the entire solar system came from one supernova [“Xenon in carbonaceous chondrites”, Nature 240, 99-101 (1972); “Elemental and isotopic inhomogeneities in noble gases: The case for local synthesis of the chemical elements”, Transactions Missouri Academy Sciences 9, 104-122 (1975); “Strange xenon, extinct super-heavy elements, and the solar neutrino puzzle”, Science 195, 208-209 (1977); “Isotopes of tellurium, xenon and krypton in the Allende meteorite retain record of nucleosynthesis”, Nature 277, 615-620 (1979)].
2. Data from Apollo samples showed that the Sun formed on the collapsed SN core and is composed mostly of iron (Fe) [“Solar abundances of the elements”, Meteoritics 18, 209-222 (1983)].
Another $1,000,000,000 of public funds was spent on the Galileo Mission to Jupiter. The results, when finally released on 7 January 1998, confirmed solar mass fractionation and an Fe-rich Sun [“Isotopic ratios in Jupiter confirm intra-solar diffusion”, Meteoritics and Planetary Science 33, A97, abstract 5011 (1998)].
A new book, scheduled for publication by Thanksgiving, and a new series of videos expose these deep roots of the climate scandal.
Professor Manuel, would you be kind enough to let us know about the new book to be out around Thanksgiving?
Bob, I will respond to a private e-mail, but I do not want to use Professor Curry’s blog to promote the book.
My chapter is “Deep Roots of the Climate Scandal”.
With kind regards,
Oliver K. Manuel
Hmm… I am trying to justify the concept that two stars of equal diameter and equal brightness, one with a hydrogen/helium core and one with an iron core, could have the same mass. That would be a requirement for simple Newtonian physics to be unable to tell the difference.
We have a pretty good guess at the mass of the earth. We have a pretty good measurement of the distance between the earth and the sun. It seems like a calculation based upon those numbers would give us a pretty good guess at the mass of the sun. Does that mass match up with both an iron and a hydrogen/helium core models of the sun’s interior?
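The back-of-envelope calculation Gary describes can be sketched in a few lines (a minimal sketch, using rounded textbook values for G, the astronomical unit, and the year):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
a = 1.496e11           # mean Earth-Sun distance, m (about 1 AU)
T = 365.25 * 86400     # Earth's orbital period, s

# Newtonian form of Kepler's third law: M_sun ~ 4*pi^2*a^3 / (G*T^2).
# Earth's mass is negligible beside the Sun's, so it drops out of the
# two-body formula to a very good approximation.
M_sun = 4 * math.pi**2 * a**3 / (G * T**2)

print(f"Solar mass estimate: {M_sun:.3e} kg")  # roughly 2e30 kg
```

Note that this constrains only the Sun’s total mass, not its composition: to first order, orbital dynamics cannot distinguish an iron core from a hydrogen/helium one at the same total mass, so composition arguments have to rest on other evidence (spectroscopy, density profiles, helioseismology).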
Gary, I will be happy to respond to a private e-mail.
The average density of stars and atoms sets no limits on the density of the core nucleus.
With kind regards,
Oliver K. Manuel
Just as an aside, a very enlightening article. Didn’t see much of this in my engineering or science classes.
I’d like to highlight a comment that Mike Zajko made (Nov 9 at 7:22pm) —
By such a definition, what would terms like “questionable,” “elegant,” or “mistaken” mean? To say nothing of “misconduct”. It’s a vampire concept, sucking the meaning out of words.
“Hey, this is how other scientists in my area practice their craft” becomes a powerful deflector of criticism — irrespective of whatever “this” may refer to.
Good point, but I don’t see that as much of a defense for poor conduct or weak argument. Is it possible to do bad science? Unethical science? Mistaken science? Science makes plenty of mistakes, and scientists have participated in many questionable practices in its history. There are ways to be critical of all of this, and I don’t think “that’s science” is much of an excuse for faulty methods or poor conduct.
Gauch says science doesn’t provide its own ethics – this is debatable, and maybe worth talking about. Robert Merton had a bit more to say on this, but he’s hardly seen as authoritative.
I believe it’s not a bug, but a feature of the concept of “science” put forward in Zajko’s post. Saying:
> This is not science.
is not used to express a descriptive judgement, but a normative one.
Of course, this normative judgement has to be put into an appropriate network of normative concepts, by which we can judge if some action is elegant or a misconduct.
Realizing that it is a normative judgement can help distinguish when a criticism applies to a craft as a whole, or to a crafter in particular.
Willard, you say —
> Saying “This is not science” is not used to express a descriptive judgement, but a normative one.
I’m not clear on your meaning. I think we’re talking about the distinction between “This isn’t science, it’s art” (we might say this of a skilled physician) and “This isn’t science, it’s a pale imitation” (we might say this of a meteorologist who was incapable of accurately reading a thermometer). Is that about right?
I still don’t see the merits of “science is as scientists do.” I recall an offer at the State Fair to have my fortune told by a practitioner of Astrological Science. Her methods were in accord with those of the consensus of Astrological Scientists…
Yes, it looks like what I have in mind.
I don’t think we need to have a very stringent criterion of “scientific method” to disqualify astrology as something seriously considered as a scientific field. For what it’s worth, there is no such thing as **one** astrology: there are many, many flavours and interpretations, most of which are rightly described as “art” by the practitioners themselves.
Even if we could solve the demarcation problem (i.e. what Popper was trying to solve with his falsificationism), we still need to use it to express normative judgements. Those judgements will have to be justified outside the theoretical discussion of what counts as the “scientific method.”
Somehow, the idea that all of scientific practice should be overseen by Physicists’ super-ego is counterproductive. It offers a Procrustean bed in which even Physics (if we can imagine it’s one unified field) has difficulty not getting its feet cut.
Finally, note that Zajko’s framework does not entail relativism, only pluralism. There still can exist some minimal best practices for science.
I think disqualifying Astrological Science as science may be harder than you and Zajko think. (I’m talking about the real Scientific Astrology, not those flavors that describe themselves as “art”.) :-)
Those minimal best practices that lie beyond Zajko’s framework: aye, there’s the rub.
The framework that Zajko’s describing answers the question: what is science? You want it to answer: what should be science? These are two different questions: the former is descriptive, the latter is normative. The two questions are distinct, but also related. The framework Zajko’s describing will lead to more fruitful discussion than armwaving pseudo-anthropological concepts like “cargo cult”, “circling the wagon” and “tribalism.”
I’d like to have a real reference to the “real” scientific astrology.
Re: is versus should be, a fair point. As long as we all know what we are discussing. :-)
On “cargo cult”, I think Feynman’s essay is a model of clarity. He says what he means–I don’t see it as arm-waving or an exercise in pseudo-anthropology.
As to “real” scientific astrology, I think we can agree to look for it in the Null Set!
The “cargo cult” essay by Feynman is frequently used – since Feynman is frequently brought up by skeptics, I wonder how he would have dealt with the majority of what passes for skepticism on this issue, or with the many poseurs – i.e., as rudely as he treated the uninformed journalists who interviewed him?
RB, That’s a fair bet.
I don’t think it detracts from the value of what his essay teaches.
FWIW, my own “lukewarmer” concern is not that “the theory of AGW is wrong.” Rather, it is that “Mainstream climatologists are prone to accepting weak arguments and faulty arguments, if and only if they support the Pro-AGW Consensus.”
Feynman compared what he believed were bad theories (mainly from psychology and pedagogy) to a “cargo cult”:
He used this analogy to speak of this:
> It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards.
Feynman’s principle of scientific integrity is not that clearly defined, but sure is moralistic. Reading this essay many times now, I still fail to see how this moral principle prevents scientists from indulging in a cargo cult. The story that personal integrity justifies scientific institutions is yet to be told.
Willard (Nov 11 at 4:05pm) —
Thanks, response downthread.
Whoops! Response supra mistakenly placed at the end of the “Why engage with skeptics?” commentary.
Indeed, disqualifying those disciplines at the margins, like the more scientific astrology (which has been argued would actually be classified as science under Popper’s criteria since it puts forth falsifiable hypotheses) or parapsychology (which has gone out of its way to import scientific experimental controls) is a little difficult under many demarcation criteria, especially the rather blurry one I use for myself.
Luckily I’m not too worried about what status should be accorded these disciplines, though it is entertaining watching other people fight over it. However, when someone tells me science has proven our emotions shape water molecules (Masaru Emoto) I have no trouble telling them that this is either not a scientific claim, or among the most incompetent science possible.
Normative demarcations of science are essentially a power play – either to endorse a favored approach as scientific (and therefore authoritative), or more commonly to demote something as non-scientific, and therefore not credible. I have no problem considering science as broadly credible, but also understanding that various scientific claims both today and historically have turned out to be bullsh**t.
There are two requirements for qualification of a model as a “scientific” model, not just the one that you reference. The one that you reference is falsifiability. The other is that the model is not falsified by the evidence. Astrology is excluded by its falsification by the evidence: by the evidence, one’s astrological sign does not predict one’s outcomes in one’s future.
So, to be clear, if a model is falsified by evidence it isn’t scientific? To be scientific a model needs to be falsifiable and not falsified – this is your claim?
I’ll modify your description of my claim slightly. My claim is that if a model is falsified by THE evidence, it isn’t scientific. The counter claim would seem to be that if a model is falsified by THE evidence, it may not be non-scientific. It seems to me that it would be impossible to prevail in taking the opposing position for this position is illogical. However, if you’d like to try to do so I’d be pleased to hear your argument.
Cordially, Terry Oldberg
But before the model is falsified, is it scientific? Does it cease to be scientific once it is falsified, or was it never scientific to begin with?
Does it cease to be scientific once it is falsified, or was it never scientific in the eyes of an all-knowing God to begin with?
There, fixed that for you. ;)
My claim is that, given that a model was falsifiable and subsequently falsified by the evidence, it was “scientific” before being falsified by the evidence but not afterward. I’m not aware of any breaches of the principles of logical reasoning from this claim but will be pleased to be corrected if you know of any.
Cordially, Terry Oldberg
That does seem logically consistent, even if I don’t find it of much use conceptually.
I’ve got to call it a night, and will be unable to check in on the post over the weekend, but it’s been fun. Thanks for participating all,
I’ve enjoyed exchanging views with you and hope to continue the dialogue.
For me as a practicing model builder, the usefulness of a concrete definition of “scientific” has been apparent: it has made it possible for me to build models without apparent logical contradictions.
My reading of the literature of the philosophy of science is that modern philosophers are unaware of the possibility of building a model without logical contradictions. They think that the so-called “problem of induction” has never been solved. This view was expressed by the professor who taught Logic I at M.I.T. in 2005. In his class notes, the professor stated that “In practice, we hardly understand anything about how to get from observations to general laws.” I disagree. I think we know very well how to get from observations to general laws, but that modern philosophers are insular in the sense of ignoring anything that is published outside the philosophical journals. By their insularity, these philosophers have ignored the colossally important event of the solution of the problem of induction.
Cordially, Terry Oldberg
IPCC science would be perfectly good science if it only represented the underlying science in a fair and unbiased way.
As long as there are gates and gatekeepers at every level, from the admission and training of graduate students, to the wording of the “Summary for Policy Makers”, there is no place for objective, truth-seeking science.
My experience in science taught me that working scientists ignore philosophers of science because they find so little use in what philosophers offer. I find the discussion of falsification offered here a classic example. It doesn’t get at the value of falsification, lost in theoretical flaws. Darwin’s proposal of natural selection was utterly without experimental justification, but Darwin himself – in his brilliance – not only understood that it could be falsified, but pointed out what major parts of his still-incomplete theory were yet unsupported and subject to falsification. He didn’t need Popper to recognize the logical problems with his own work.
The fact that much of planetary climate science as used in GCMs cannot be falsified – that is, cannot right now be tested – puts that work in question. The output may yet be proven to be correct, but the lack of falsifiable hypotheses makes it weak science, just as evolutionary biology is a weak science when compared to experimental physics. The field of evolutionary biology has revealed a great deal of well-supported information, but the nature of the evidence for natural selection is more brute force than elegant experiment.
No doubt very good work has been done in climate science, but before 1988 and Jim Hansen’s Congressional testimony, I doubt scientists from other fields would have bet the house on it. There are limits to knowledge regarding chaotic systems, and the principle of falsifiability helps us keep those limits in mind. Working scientists don’t really care that philosophers have disputed Popper – they know a good tool when they see one. It’s that simple.
Mike argues that there are many different criteria by which a “scientific” model may be distinguished from a non-scientific one. If these various criteria are capable of identifying a given model as both scientific and non-scientific, there is a violation of the law of non-contradiction, with the consequence that rational discourse about the nature of science is impossible. I’m going to argue to the contrary that there is only one criterion, and that it is “falsifiability.” However, I’m going to agree with critics who point out that, under a circumstance that is found in practice, the word “falsifiability” is a misnomer. “Testability” would be a more apt choice of words.
Pertinent ideas of the scientific method of inquiry can be illustrated by the fable of the ornithologist who observes 3 swans, finding all of them white. From this evidence, he makes the generalization that “all swans are white.”
Using statistical jargon, this generalization can be rephrased as “In the population of swan observations, the limiting relative frequency of white swans is 1.” The “limiting relative frequency of white swans” is the proportion of white swans in the limit of observations of swans that are of infinite number.
If the ornithologist’s generalization is tested, in the event of observation of at least one non-white swan this generalization is falsified. As the generalization can be falsified, it has the property of “falsifiability.”
Though the generalization can be proved false, it cannot be proved true for regardless of the sample size supporting the conclusion that “all swans are white,” the next observation might be of a non-white swan. This fable, I think, illustrates what Popper means when he states that a scientific theory cannot be proved true but can be proved false. If this is what he means, his logic is impeccable.
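The asymmetry the fable illustrates can be made concrete in a few lines of code (a toy illustration added here, not part of the original comment):

```python
def all_white(swans):
    """The ornithologist's generalization: every observed swan is white."""
    return all(color == "white" for color in swans)

observations = ["white"] * 1000
print(all_white(observations))   # True: consistent with, but not proof of, the claim

observations.append("black")
print(all_white(observations))   # False: a single counterexample falsifies it
```

No finite run of confirming observations can turn the first result into a proof, while one disconfirming observation settles the matter for good – which is exactly the asymmetry Popper’s “falsifiability” names.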
The circumstances of my fable are pertinent to the IPCC models because each such model states that under given circumstances the limiting relative frequency of an Earth with a particular value of the global average surface temperature is 1. The ornithologist’s generalization and the IPCC models share the property of being deterministic.
From the shared property, one might gather that, like the ornithologist’s generalization, the IPCC models make predictions and satisfy falsifiability. However, if an IPCC model is falsifiable, it is falsified by the temperature record. Yet the IPCC has not thrown up its hands and informed the world that its models are useless for their intended purpose.
The IPCC acts as though the models are not falsified. It supports its position by emphatically denying that these models make predictions. One piece of evidence is at http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html , where the noted IPCC climatologist Kevin Trenberth reports that: “In fact there are no predictions by IPCC at all. And there never have been.” He goes on to state that, instead of predictions, the IPCC “…proffers ‘what if’ projections of future climate that correspond to certain emissions scenarios.” Another piece of evidence is at http://icecap.us/images/uploads/SPINNING_THE_CLIMATE08.pdf where the IPCC climatologist Vincent Gray reports on the IPCC’s response to his question on how the IPCC’s climate model could be statistically validated. The IPCC responded by changing the word “validation” to the word “evaluation” and the word “prediction” to the word “projection.” I gather that, in contrast to a prediction, a projection is not susceptible to being falsified by the evidence.
In this paragraph, I take a detour from the thread of my remarks to comment on the ethics of IPCC’s use of language. As the word “evaluation” sounds like the word “validation” and the word “projection” sounds like the word “prediction,” a casual observer could be led to the erroneous conclusion that the models make predictions and are statistically validated thus being “scientific” models when they make no predictions and are not statistically validated thus not being scientific. It seems to me that it was ethically incumbent upon the IPCC to inform its readership in clear language that if its models were “scientific” then they were falsified by the evidence while if they were not falsified by the evidence they were not scientific.
Now, regarding criticism of Popper from within the philosophical community, I’ve not read it but believe it would be impossible to mount a successful attack on Popper’s ideas so long as the model being referenced was, like the IPCC climate models and like the generalization of the ornithologist, deterministic, for in this case Popper’s conclusions follow deductively from his premises. I think, therefore, that these philosophers must be quibbling over the use of the word “false” in the context in which the model being referenced is probabilistic. If the ornithologist had made the generalization that “in the population of swan observations, 90% of the swans are white,” the observation of a non-white swan would not falsify this generalization but only make it less likely to be true. Thus, one could argue that Popper’s choice of terminology was not completely apt. That there is this semantic problem does not detract from the worth of the idea that is referenced by the word “falsifiability.” This idea is just mislabeled when the model is probabilistic.
I understand from our moderator that while the “projections” of the IPCC models are not falsifiable, there may be an argument somewhere that the set of projections that are made by the set of IPCC models makes predictions thus satisfying falsifiability. In a Web search lasting more than a year, I’ve been unable to find this argument. If anyone knows where it is, I’d appreciate a citation to it.
“what Popper means when he states that a scientific theory cannot be proved true but can be proved false.” This has always been the “one little bit” of Popper that I grasped. I think this differs from what Mike characterises as the “one bit of Popper” that people “get”.
“if its models were “scientific” then they were falsified by the evidence while if they were not falsified by the evidence they were not scientific.”
I am reminded of the thesis examination: ‘This thesis has elements which are original and have value. Unfortunately, where they have value they are not original, and where they are original they do not have value.’
Yeah but I prefer the thesis examination where the examiner starts reading the thesis and after ten minutes says “This isn’t correct”. Half an hour later, the examiner makes the concluding comment, “This isn’t even wrong!”
The examiner missed the point entirely. The purpose of the thesis was an exercise in observation and description. There was never any intention of providing conclusive proof.
There are projects that are worth doing which cannot be carried forward so as to prove anything in a meaningful way. To suggest that the enterprise was worthless and without useful result because it was wholly inconclusive makes a mockery of open-minded, critically considered inquiry.
Moreover, to insist that proof is the only worthwhile product of research (or of a thesis that is to be defended) indicates a lack of appreciation of a few bare rudiments of ‘perception’.
A person can propose questions.
A person can assert answers.
They are independent activities.
There is no de facto linkage between floating a question and asserting that it ought to be answered, beyond that of having the recipient fall for the fallacy: the hapless recipient implicitly and uncritically accepts the premise without possessing a satisfactory resolution.
There is an early stage of thinking-about where throwing sh** against the wall is indeed appropriate. But the next stages get down to examining the sticky parts of the wall in some detail to see why they were superior sh**-grabbers.
Popper’s criterion of falsifiability was meant to be applied to theories, not to single scientific statements like predictions, nor to models:
This criterion does not spare scientists from having to make a decision regarding what needs modification, e.g. whether the modification requires a whole new theory.
This criterion also entails that we bring up a theory that has more “explanatory power.”
Everyone is welcome to propose a better theory than AGW: Science of Doom needs you!
I appreciated your erudite post. I particularly enjoyed this:
“It seems to me that it was ethically incumbent upon the IPCC to inform its readership in clear language that if its models were “scientific” then they were falsified by the evidence while if they were not falsified by the evidence they were not scientific.”
I have no qualms with the IPCC making projections. Doesn’t bother me much that they may or may not be falsifiable OR verifiable.
The predicament is in badly framing what’s being projected.
The “assumptions” – the minor, insignificant nitty-gritty details – are located in the framing parameters and characteristics.
The termination points of those nice shiny innocuous framing guides are intensely, proudly, solidly convergent. That’s what draws the awareness. People want to know the conclusion.
Getting to the conclusion often requires going through many many freight train derailments. Lousy guidance track is all over the place.
Thank you for your article. Its great value is demonstrated by the debate it has engendered.
In my opinion, the problem is best summarised by your statement saying:
“This would be fine if all sciences emulated the model of physics or were somehow reducible to it, but despite the efforts of many in the history of the sciences, this has not been achieved. Instead there exist multiple methodologies and strategies for evaluating evidence among the diverse sciences. Many of these share common elements, but often they do not. It is possible to select among them some form of the scientific method, and declare all other methodologies as un-scientific. This has also been done in different ways throughout history to exclude various undesirable practices aspiring to scientific status. But any clear formulation of the scientific method would also necessarily exclude a wide range of practices that have produced a wealth of useful knowledge in their own right, and cannot easily be relegated to the bin of pseudo-science.”
Indeed, would any single definition of the “scientific method” include all, some or none of the studies known as string theory, cosmology and paleontology?
So, instead of attempting a single definition of ‘science’, I suggest that perhaps it would be better to define the difference between ‘science’ and ‘pseudo-science’. And I suggest the following definitions.
Science is an attempt to understand the physical world by using the method of logical assessment of all available information and rejecting ideas that fail to agree with some information.
Pseudo-science is an attempt to prove an idea (or ideas) about the physical world is correct by finding information that concurs with the idea.
If my definitions are accepted then science and pseudo-science are mutually exclusive because:
(a) Science accepts any idea as being a possible explanation of part of the physical world unless there is information that contradicts the idea. No amount of information consistent with an idea can provide scientific proof of that idea, but one piece of information that is not consistent with the idea provides scientific disproof (or, if you prefer, falsification) of the idea.
(b) Pseudo-science accepts an idea (or ideas) as being the correct explanation of part of the physical world and seeks information that confirms the idea. Any information consistent with the idea is accepted as evidence supporting the idea, and information not consistent with the idea is ignored or rejected because it is not consistent with the idea.
According to these definitions string theory, cosmology and paleontology are all science although much of each of them is not open to testing by experiment. Ideas of cosmology and paleontology are only capable of assessment by comparison with empirical observations, and ideas of string theory are almost entirely based on logical conjecture so can only be tested against logical consistency. Empirical observations and logical consistency are information.
However, and importantly, if these definitions are accepted then much of AGW-research is pseudo-science and, therefore, is not science. For example, the AGW-hypothesis predicts the tropospheric ‘hot spot’ that is not observed to have occurred in reality (according to radiosonde measurements from balloons since 1958 and from MSU measurements from satellites since 1979) and AGW-researchers have attempted to reject these data sets and to compute tropospheric temperatures from wind shear in an attempt to show the ‘hot spot’ exists.
great post, Richard
for “or, if you prefer, falsification)”, would you endorse “disconfirmation”?
“I think it might be a good idea to devote some time on this blog to inductive and statistical methods specifically, considering their strong role in many climate-related arguments …”
I have to disagree. A solid layman-oriented article by Dr. C, Steve McKittrick, or Briggs (or you) on the various error figures — r^2 and whatnot — what they mean and how they should be interpreted might be useful, but a string of high-level, overgeneral articles on statistics would necessarily be an occasion for armwaving by the author and yawning by the reader, as well as giving rise to the suspicion that this provocative and fascinating blog is moving towards RC-style “trust us we’re the experts” strawman blather.
There is very little useful that can be said about induction, unless the field has changed radically since I got my MA with a minor in Philosophy of Science some 40 years ago (which I doubt; the Greeks were trying to figure it all out 2400 years ago. Progress is slow in philosophy). Bayesian stuff still depends too much on subjective judgments to be actually useful in scientific studies. It is still true that all it takes is one black swan.
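Since r^2 comes up repeatedly in these arguments, a minimal runnable sketch of what that particular error figure actually measures may be useful; the function and data below are invented for illustration, not taken from any of the articles under discussion:

```python
# Illustrative sketch (data invented): the coefficient of determination,
# r^2, is the fraction of variance in the data that a fit explains.

def r_squared(ys, preds):
    """1 - (residual sum of squares) / (total sum of squares)."""
    mean_y = sum(ys) / len(ys)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    return 1.0 - ss_res / ss_tot

ys = [1.0, 2.0, 3.0, 4.0]
print(r_squared(ys, ys))                    # 1.0: perfect prediction
print(r_squared(ys, [2.5, 2.5, 2.5, 2.5]))  # 0.0: no better than the mean
```

An r^2 near 1 says only that the fit explains most of the variance in this data set; it says nothing about whether the underlying model is physically right.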
“As opposed to falsification, Gauch lists a number of lines of evidence including explanatory and predictive power, replication, increasing accuracy, and interlocking evidence as part of an argument that the sciences have an “objective grip on reality””
But this is just another way of saying that the theory has a very strong and narrow set of predictions over a wider range of phenomena, and thus exposes a much larger surface for falsification. “Interlocking evidence” means only that more different types of efforts can be made to falsify it, and have failed. His point is?
“The use of climate models as evidence also leads to various questions regarding their epistemological standing (although I personally think such models can have their uses, as long as their limitations are recognized).”
But this is the most errant nonsense. Models of any sort cannot possibly constitute evidence, they are merely precise expressions of a hypothesis, which in turn can be either confirmed or refuted by actual scientific observation. They are “evidence” only for the consequences of a theory; they are not evidence for the truth or falsity of that theory.
“Since climate science generally shies of making predictions, the predictive power of AGW theory is hard to evaluate and many of its hypotheses or conclusions are difficult to test, but AGW theory does provide a degree of explanatory power that can be compared to other possible explanations for observed phenomena.”
Please. This is pure armwaving. And if climate pscience shies of making predictions, why do we find Dr. Mann, pscientist extraordinaire, worrying about polar bears and claiming that Arctic ice will disappear in “a few decades”?
A point that this article — much of which is very well done — seems to miss is that the field of Philosophy of Science, Popper et al., is not concerned with describing what scientists actually do, any more than the Ten Commandments are concerned with describing what the Faithful actually do. It is an attempt to make explicit how the system should work. Kuhn in this field is simply playing the part of a sociologist, pointing out (like innumerable prophets of old) that humans are not sociologically measuring up to their theoretical ideals. Big surprise, humans are human! But that does not affect either the ideals or the goals they are intended to serve.
The field has changed since you minored in the philosophy of science 4 decades ago. The change has resulted from extension of the deductive branch of logic that was elaborated by Aristotle into the inductive branch. This extension occurred 4.7 decades ago. The philosophical community failed to take notice of this advance. However, the cybernetics and systems science community picked up on it. A result is for virtually all modern communications systems to be designed in accordance with the principles of reasoning but hardly any of the models that are built by scientists to be designed in accordance with the same principles. One consequence is for it to be possible for a philosopher of science to sit down for an evening in front of his/her HDTV set and to be baffled by the spectacular improvement of the picture and of the sound as compared to old-fashioned analog television. This improvement is a consequence from the design of HDTV under the principles of reasoning and the lack of conformity of the design of analog television to the same principles.
Same design principles as used in the butchering of ‘Digital Audio’?
How about the invention of ‘Digital cable’ as an opportunity to deliver 500 channels of low definition ‘specialty’ drivel and rake in the money NOT!?
“… , the cybernetics and systems science community picked up on it [4.7 decades ago].”
How odd, then, that I’ve been a professional programmer since 1976 and have seen no sign of it, either on my realtime projects or my business-process oriented ones. What I’ve seen has been application of basic engineering equations based on scientific discoveries — presumably discovered by the usual scientific method, which consists of inexplicable flashes of insight followed by experimental/observational efforts to confirm or disconfirm the predictions of the theory.
The only efforts related to “induction” I recall are the brief AI hype (Prolog and all) of the late ’80s (how’s that “expert system” working out for you? We’d seen that before, in the early ’60s) and occasional use of “neural net” programming, which as far as I know has no place in either realtime or electronic-design programming.
What specifically are you referring to?
You are far from alone in having not gotten wind of this advance.
There is a tutorial and bibliography at http://www.knowledgetothemax.com.
At a quick glance, littered with and totally dependent on outrageous absolutisms. The lousy grammar and syntax are, I guess, a freebie extra.
In your complaints about the outrageous absolutisms, lousy grammar and syntax and freebie extra, I hear an example of an ad hominem argument. As everyone here knows, an ad hominem argument is fallacious, for it makes the person who espouses the allegedly bad argument the victim of attack rather than the argument itself. To make an ad hominem argument in this or any other blog is unethical and a waste of everyone’s time. If you believe that anything I have said is untrue then it is ethically incumbent on you to present your logically valid argument. If you cannot provide such an argument then you must remain silent.
This improvement is a consequence from the design of HDTV under the principles of reasoning and the lack of conformity of the design of analog television to the same principles.
You have no idea what you are talking about, do you? Engineers are using the same “principles of reasoning” – the words you are looking for are called “evolution of technology.”
In addition to being rude, you’ve violated the principle of discourse under which it is permissible to expose a false assertion but impermissible to attack the person whose assertion is allegedly false. To attack the person is to make this person the issue rather than the allegedly false assertion. To attack the person is called an “ad hominem argument.” This type of argument is among the logical fallacies. You’ve just been guilty of employing one in making an argument. For shame.
Just one problem. …
The activity of ‘perception’ serves as the foundation for mostly everything including the tools that are used to trace out nature.
‘Perception’ is pre-logical and pre-rational.
‘Perception’ plays a major role in description and describing.
‘Perception’ is voraciously assumed and used and used and used. Perception is not understood.
And then there is the matter of cognition by multiple, concurrent, independent observers, each of which perceives in their own individual and personal ‘subjective’ manner.
good commentary. Even your malaprop (“errant nonsense” instead of “arrant nonsense”) pretty much works! ;)
Richard, I was following along OK until I got to the tropospheric hot spot. That “hotspot” is a result of all, repeat all, warming from any forcing, it is not related to GHG warming in particular. If the sun had been blasting us towards higher temperatures and GHGs had been hiding in a corner these last years, that hotspot would be expected also.
afaik, there’s only one specific expectation that refers to the troposphere. It is that if GHGs are responsible for warming then we’d expect to see troposphere warming at the same time as cooling in the lower stratosphere. Which is exactly what we do see.
You say to me:
“Richard, I was following along OK until I got to the tropospheric hot spot. That “hotspot” is a result of all, repeat all, warming from any forcing, it is not related to GHG warming in particular. If the sun had been blasting us towards higher temperatures and GHGs had been hiding in a corner these last years, that hotspot would be expected also.
afaik, there’s only one specific expectation that refers to the troposphere. It is that if GHGs are responsible for warming then we’d expect to see troposphere warming at the same time as cooling in the lower stratosphere. Which is exactly what we do see.”
Sorry, but you provide two demonstrations of what I define as pseudo-science.
Firstly, it clearly was the case that the ‘hot spot’ was "related to GHG warming in particular" until the ‘hot spot’ failed to occur. This is stated in Chapter 9 of IPCC AR4, titled ‘Understanding and Attributing Climate Change’, which can be read at
The matter is summarised in Figure 9.1 on page 675, and its caption is:
Figure 9.1. Zonal mean atmospheric temperature change from 1890 to 1999 (°C per century) as simulated by the PCM model from
(a) solar forcing,
(b) volcanoes,
(c) well-mixed greenhouse gases,
(d) tropospheric and stratospheric ozone changes,
(e) direct sulphate aerosol forcing and
(f) the sum of all forcings. Plot is from 1,000 hPa to 10 hPa (shown on left scale) and from 0 km to 30 km (shown on right). See Appendix 9.C for additional information. Based on Santer et al. (2003a).
Only Figures 9.1(c) for "well-mixed greenhouse gases" and 9.1(f) for "the sum of all forcings" show the ‘hot spot’.
So, according to IPCC AR4, and as illustrated by the output of the PCM model that IPCC AR4 presents,
(1) the ‘hot spot’ is a unique effect of forcing from "well-mixed greenhouse gases"
(2) the unique effect of forcing from "well-mixed greenhouse gases" is so strong that it overwhelms the effects of all other forcings.
But now that the ‘hot spot’ is known not to exist, it is asserted that,
“That “hotspot” is a result of all, repeat all, warming from any forcing, it is not related to GHG warming in particular.”
This assertion is rejection of the importance of the information that was stated to be important when it was thought the information would support the idea of AGW.
Science places most importance on information that conflicts with an idea, but in pseudo-science information not consistent with an idea is ignored or rejected because it is not consistent with the idea. Your rejection of the importance of the ‘hot spot’ as explained in IPCC AR4 Chapter 9 is an unambiguous example of rejection of information because it is not consistent with the idea of AGW.
And your rejection of the information is an illogical excuse for the absence of the ‘hot spot’.
Either, as the IPCC says, the ‘hot spot’ is a unique effect of forcing from "well-mixed greenhouse gases",
or, as you assert, the "hotspot" is "a result of all, repeat all, warming from any forcing, it is not related to GHG warming in particular".
But the ‘hot spot’ is absent. So,
(i) if the IPCC is right about the ‘hot spot’ then in recent decades there has been no significant global warming induced by forcing from "well-mixed greenhouse gases"
(ii) if you are right about the ‘hot spot’ then in recent decades there has been no significant global warming induced by forcing from any cause including "well-mixed greenhouse gases".
Secondly, you make an irrelevant comment when you say, “there’s only one specific expectation that refers to the troposphere. It is that if GHGs are responsible for warming then we’d expect to see troposphere warming at the same time as cooling in the lower stratosphere. Which is exactly what we do see”.
If you were right that this is the “only one specific expectation that refers to the troposphere” concerning GHGs then it would have no importance. Only pseudo-science attempts to prove an idea (or ideas) about the physical world is correct by finding information that concurs with the idea.
The “tropospheric hot spot” indeed shows up if you turn any of the forcings in the models up to 11. It’s a result of the models’ assumption that relative humidity remains constant throughout the atmospheric column, and a symptom of the generally indefensible idiotically simpleminded treatment of the hydrological cycle in all of the models. Water vapor has to be modeled as a strong positive feedback if the models are to produce sufficiently terrifying temperature rises. Actual evidence, however, suggests that the overall effect of water vapor, condensation, clouds, and the rest is a negative feedback which reduces the radiative impact of CO2 by at least 50%.
The reason the hot spot only shows up in the IPCC’s CO2 graph is that the model runs chosen greatly exaggerate the effect of (only) CO2. These models are basically worthless, yet they constitute the sole argument for trashing Western industrial civilization. Is anyone surprised that a certain amount of skepticism and resistance has arisen?
The important feature of those diagrams is the cooling due to greenhouse gases, which you don’t get from other forcings.
Courtney also seems to be playing a switch and bait. The AR4 refers to temperature trends over a much longer period of time than he refers to.
No “switch and bait”. If the effect is real then it would be most pronounced in the recent decades for which empirical data are available, because that is when the anthropogenic emissions have been greatest. Indeed, over 80% of the emissions have occurred since 1950.
The IPCC model is for the period I stated (1890 to 1999) and, if the model is correct, it could not show any significant effect of the anthropogenic emissions prior to 1950 because (according to the AGW hypothesis) the emissions could not have had a significant effect prior to 1950.
And, as I also stated, the radiosonde data is from 1958 and the MSU data is from 1979. Since these data are for the recent decades when the emissions have been significant (according to the AGW hypothesis) then they should show the effect most clearly.
Your fallacious claim of a “switch and bait” is typical of the excuses used to pretend that the absence of the ‘hot spot’ is not clear.
Sorry, Ross McKitrick or Steve McIntyre (either or both) in the second paragraph of my comment at http://judithcurry.com/2010/11/09/the-scientific-method/#comment-10582 . I got the MMs mixed. Apologies.
[Moderator may make appropriate correction and delete this comment.]
Philosophy and logical reasoning is overwhelmingly and essentially an autistic passion, expertise and immensely successful long term project.
An alternative, viable, competitive viewpoint does not yet exist, notwithstanding thousands of years’ worth of effort struggling to construct such an alternative; see A. D’Abro, The Evolution of Scientific Thought, for elaboration.
IMO, that’s all there is as of yet. Like it or lump it.
Professor Manuel is just a troll, trying to get interest in his baseless ideas by going along with AGW denialism.
You are living up to your pseudonym. What is the deep-rooted source of your extreme fear?
Here is an update on the Italian flag, for those of you following this (this thread is delayed another week by external events). Michael Welland is the geologist who introduced me to the IF, he posted this on JA’s thread:
Michael Welland said…
Since I was the person who initially raised the methodology of evidential reasoning as a potentially valuable approach to capturing and communicating uncertainty in climate science, I would suggest that this is a topic for serious consideration rather than derision.
Yes, the “Italian flag” nickname is vulnerable to satire, but the underlying principle is a powerful one. Rather than operating in a space of 2 components – the chance of something happening and one minus that (by definition, itself unexamined) – it operates in a 3-component space comprising evidence for, evidence against, and a gap in the middle that represents uncertainty/the unknowns.
The mathematical foundation of this methodology has long been established and it is routinely applied in a wide variety of contexts where a thorough means of capturing uncertainty is required. See, for example, http://www.quintessa-online.com/TESLA/ESLGuide.pdf and a quick Google of “evidential reasoning” will demonstrate applications and value.
I am a geologist, not a climate scientist, and have no wish to participate in this blogospheric fray. However, in following some of these threads, and attempting to sort the wheat from the chaff, my preference is for serious treatments of serious topics.
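As a side note, the 3-component space described above can be sketched in a few lines of code. This is a hypothetical illustration of the idea only (the class name and numbers are invented, not taken from TESLA or any published evidential-reasoning tool):

```python
from dataclasses import dataclass

# Hypothetical sketch of a three-valued evidential assessment:
# green (for) + white (uncertainty) + red (against) = 1.

@dataclass
class ItalianFlag:
    support: float  # green: weight of evidence for the proposition
    against: float  # red: weight of evidence against it

    def __post_init__(self):
        if self.support < 0 or self.against < 0 or self.support + self.against > 1:
            raise ValueError("components must be non-negative and sum to at most 1")

    @property
    def uncertainty(self) -> float:
        # white: what the evidence does not speak to, carried explicitly
        # rather than being silently folded into "evidence against"
        return 1.0 - self.support - self.against

flag = ItalianFlag(support=0.5, against=0.2)
print(round(flag.uncertainty, 3))  # 0.3
```

The contrast with a 2-component probability is visible in the property: ignorance is tracked as its own quantity instead of being absorbed into the complement of the claim.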
And here’s James’ reply:
> Michael, if you follow my links to my previous posts, you’ll see that I am by no means hostile to the principle of it. My criticism is that Judith Curry’s attempts at using it appear incoherent and nonsensical, and she has repeatedly chickened out of explaining what she means by it.
This criticism has not been answered.
McIntyre has a very interesting and relevant post
So, according to IPCC AR4, and as illustrated by the output of the PCM model that IPCC AR4 presents,
(1) the ‘hot spot’ is a unique effect of forcing from "well-mixed greenhouse gases"
(2) the unique effect of forcing from "well-mixed greenhouse gases" is so strong that it overwhelms the effects of all other forcings.
The “hot spot” is a phrase not referred to at all in the literature, it is cherry picking from skeptics. That they correctly predicted cooling of the stratosphere, which is predicted for GHG warming, and counter-intuitive as well, is significant. Recording historical temperatures at the surface of the earth is problematic enough as it is; trying to obtain a historical record of temperatures vertically as well is much harder. Radiosondes have been used, but they are even more inaccurate than thermometers. There has been significant work done on trying to get the data into a reliable form, but at the moment people don’t really know whether they are testing the warming patterns expected from the models or the accuracy of the radiosondes, which have a large degree of error.
The unique pattern of a cooling stratosphere has been observed, and quite clearly, and that has been in accordance with predictions.
Your only points seem to be
1.your statement saying;
“The “hot spot” is a phrase not referred to at all in the literature, it is cherry picking from skeptics. ”
2. arm waving about the accuracy of radiosonde measurements.
3. your statement concerning GCMs saying;
“That they correctly predicted cooling of the stratosphere, which is predicted for GHG warming, and counter-intuitive as well, is significant.”
Firstly, OK, instead of the ‘shorthand’ phrase ‘hot spot’ let us say it is ‘greater rate of warming at altitude than at the surface in the tropics’ because that is what the IPCC AR4 Chapter 9 says.
That despatches your irrelevant point.
Secondly, you arm wave about inaccuracies of the radiosonde measurements. But the completely independent MSU measurements agree with the radiosonde measurements.
Science tests ideas against the available information. Pseudo-science ignores or rejects information on the basis that the information fails to concur with an idea.
Prove the radiosonde and MSU data are both wrong or accept that they refute the AGW hypothesis. Assertions that the radiosonde data may be wrong do not cut it. Anything may be wrong (including the AGW hypothesis).
Thirdly, that something is consistent with something else is not relevant because it is meaningless as a scientific observation of possible causality: e.g. crop failures are consistent with malign activities of witches.
An observation that something is NOT consistent with an idea is important scientific information because it disproves the idea. The ‘hot spot’ predicted by the AGW hypothesis is absent: live with it.
as a non-scientist, I have to say I find your posts invariably extremely well written and well argued and I hope you will continue to spend time at Climate Etc.
I have seen the ‘hot spot’ argument hashed over and again on many blogs but never seen such a clear and detailed exposition of the skeptical position as you have set out here. On the face of it, we have here a clear prediction made by IPCC which does not appear to be supported by the subsequent evidence. I wonder if AR5 will reflect this.
This is a very neat summary of the evidence and theory about that hot spot. Which doesn’t matter a whole heap because it’s related to a changing lapse rate for heating from any forcing – not specifically related to GHGs.
It does directly relate to feedback from water vapor so is an argument regarding sensitivity. I can’t think of a more pertinent topic.
Please read my reply to your fallacious point that I provided above.
Your repeating an error does not make it right.
Richard, instead of spouting your usual disinformation, try actually reading what adelady and others have written on the subject. The lack of a ‘hotspot’ does not in any way “falsify the AGW hypothesis” (as much as you’d like these broad and sweeping statements to be true) and if anything, could conceivably mean a higher climate sensitivity from the way the vertical temperature gradient establishes absorption lines in a spectrum.
And the quality of the data and the robustness of the result are always important in science, not the nonsensical sound bites you throw around. BTW, if you’re so religiously set on denial, try actually being one of the more convincing ones :-)
Your post is completely devoid of any content except ad hom.
I have provided no “disinformation” and you cite no example of my having done so.
My points are clear and I completely refuted the fallacious point made by Adelady in my reply to her (which included a reference and link to the pertinent IPCC chapter) above at November 11, 2010 at 4:24 am.
If you wish to assert that I provided “disinformation” then take it up with the IPCC, not me: it is their information.
And if you can find any flaw in my argument then please state it because I want to know of it.
But I would be grateful if you were to not waste space on this thread with another silly rant of the kind you have provided and that I am now answering.
“Ozone and temperature trends in the upper stratosphere at five stations of the Network for the Detection of Atmospheric Composition Change”
Authors: W. Steinbrecht et al.
International Journal of Remote Sensing, Volume 30, Issues 15–16, 2009.
“This non-decline of upper stratospheric temperatures is a significant change from the more or less linear cooling of the upper stratosphere up until the mid-1990s, reported in previous trend assessments. It is also at odds with the almost linear 1 K per decade cooling simulated over the entire 1979–2010 period by chemistry-climate models (CCMs).”
ANYONE, like J Curry, wishing to hone their elementary exposure to the philosophy of science can easily do much worse than Aussie Rafe Champion. An advocate of Popper’s philosophy of science, he is very easily read and concise.
as well as online and FREE!
Popper’s primary contribution was in re-establishing scientific realism – and defeating epistemological subjectivism – by adding sensible tests to all knowledge claims.
Wow! Long thread, interesting topic. The IPCC approach is actually borrowed from engineering, and is best described as “failure analysis.” The conceptual basis is that the earth’s thermostat has failed. Failure analysis proceeds through four steps, (1) validate the failure (Michael Mann’s job), (2) brainstorm all possible root causes (“attribution”), (3) use evidence objectively to support or refute the various possible causes (all the stuff about solar forcing, etc.), and (4) apply reductionist logic and parsimony to determine the most likely root cause. In this context, Mann’s hockey stick is (was) the sole piece of clear evidence that validates that a failure has occurred, which is why it is so important, despite widespread claims that it is not central to AGW. The current crisis stems from the evidence that the IPCC has not adequately validated the failure, and has failed to objectively assess the evidence. The fact that the IPCC approach is borrowed from engineering also explains why engineers figure so prominently in climate skepticism.
I like your description and in particular your recognition that, like failure analysis, climatology is reductionistic.
There is a discussion of the limitations of reductionism at http://redwood.berkeley.edu/w/images/5/5e/Scott2004-reductionism_revisited.pdf . The author points out that the basis for reductionism collapses in the case that the equations which describe the evolution of a system are non-linear. Generally speaking, the differential equations are non-linear. Thus, as a general rule reductionism states a false proposition. It follows that failure analysis is logically unjustified and so is IPCC climatology.
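The superposition point behind that claim can be shown numerically. A minimal sketch, with invented response functions (not taken from the linked paper): a linear system’s response to a sum of forcings is the sum of its responses to each forcing alone, so it can be analysed piece by piece; a nonlinear system’s response does not decompose this way.

```python
# Invented one-variable "response" functions for illustration only.

def linear_response(x):
    return 2.0 * x

def nonlinear_response(x):
    return x * x  # the simplest nonlinearity

a, b = 1.0, 2.0

# Superposition holds for the linear system...
print(linear_response(a + b) == linear_response(a) + linear_response(b))  # True

# ...but fails for the nonlinear one: (a + b)**2 is 9.0, a**2 + b**2 is 5.0
print(nonlinear_response(a + b) == nonlinear_response(a) + nonlinear_response(b))  # False
```

Whether this toy failure of decomposition licenses the much stronger conclusion drawn above about failure analysis is, of course, the contested step.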
You failed at Step 1. It was not Mann’s job, and the whole IPCC case does not succeed or fail on Mann’s work. It was realised over a century ago that CO2 could have a significant impact on the earth’s climate, and there has been ongoing work ever since researching how right or wrong that early insight was. Weart has the history of the story laid out quite clearly.
How does climate science stack up to the different aspects of the scientific method?
Oreskes looked into this question in this presentation:
(starting at slide 40)
or this book chapter:
Mike, you really should stop writing bollocks about Popper:)
As a sociologist you might be able to explain how the philosophes have managed to make such a mess of his thoughts.
Great links! Thanks for bringing a wider view. I think part of the problem is that Popper was around for so long, published a fair bit, and actively engaged with his critics throughout his life. As such, even those who know something about Popper beyond falsification still tend to get an abridged version, which is often somewhat of a caricature, since it’s used as a contrast with supposedly more developed or accurate views like Kuhn, Lakatos, the sociology of science, etc. I’m certainly guilty of this and not familiar with the whole Popper oeuvre, but I do know enough to say that reducing Popper to falsification and sweeping him under the carpet is largely a matter of convenience. Maybe I didn’t entirely do him justice, but those I see citing Popper as justification tend to know even less than I do.
I certainly welcome any corrective – I do encourage people to check out your site if they’re interested in some more depth on Popper and the relative importance of falsification in his philosophy.
As a sociologist of science, your interest is in what scientists think the word “scientific” means. As a scientist specializing in the construction of scientific models, my interest is in what is meant by “scientific” in the context of the construction of a scientific model.
In the construction of a model, it is insufficient to claim that there are many different definitions of the word “scientific.” The builder of a model has to have a single definition of this word.
It is possible to quibble over Popper’s use of the word “falsifiability.” However, the argument that a model is “scientific” if and only if subject to disconfirmation by reference to observational data is, in my opinion, true. Whether we tie this principle to the name of Karl Popper is, in my view, unrelated to the real issue. The real issue is whether or not this principle is violated by the IPCC climate models. It surely is.
Falsification is the easy bit to remember and the easiest bit of his philosophy to apply carelessly to any situation, usually forgetting that falsifications are only ‘falsification hypotheses’, which are themselves open to refutation and depend on such things as the accuracy of the instruments and so on. This is particularly pertinent to climate science given that we cannot go back in time and make measurements of arbitrary precision. We’re stuck with the occasionally shoddy observations that posterity has left us. We have to make do and learn what we can from them.
But a few things he stressed repeatedly seem to me to be at least as important as falsification. First, the need for objectivity. By this I think he meant that one should write everything down clearly so that everyone knows what the theory, hypothesis, or whatever is. This is why it’s good to write papers (rather than churn things out piecemeal in blog snippets). The paper is a fixed point to anchor a particular discussion. Second, a kind of bravery in criticism, whereby one attacks only the strongest points of a theory, or argument.
Part of this is understanding the position you are attacking at least as well as the person who holds it, so that in refuting it one can first state it in the most forceful terms one can muster. Conversely when one holds a position it is incumbent upon them to help the would be critic understand that position. Popper was aware that any misunderstanding could lead to the situation whereby one party was convinced they were right and another convinced that party was wrong, only to discover much later that they were actually arguing about different things.
One final thing was that he advocated a constant reexamination of the means by which science proceeds. If one adopts a single particular method then there will be some special set of circumstances in which that method gives the wrong results, but you still believe them to be right. By sticking rigidly to your method, you end up eventually looking like a dumpling (see for example, Ioannidis’ papers on why most published research findings are false). It also means that the methods needed in different subjects will be somewhat different. There’s no use pointing towards the low quality of some weather stations (compared to say the most accurate measurements in Physics) and saying it invalidates everything. It’s what we have and it is necessary to work out the inherent uncertainties in that haphazard collection of data so that we know what statements it can support or refute and those concerning which it is agnostic.
The problem with some of these elements is that they require deep understanding, patience and honesty from all parties involved. All of these require time and effort and are therefore only ever approached distantly.
In the first aforementioned page, there is this link:
These two chapters might provide some food for thought. Bayes has been mentioned earlier and Hacking undoubtedly deserves more ice time.
> As a sociologist you might be able to explain how the philosophes have managed to make such a mess of his thoughts.
A plausible explanation is that everyone makes a mess of everyone else’s thoughts. Most of the times, it seems easier to reinvent the wheel than to learn to read.
I would like to disagree with the claim that Popper’s demarcation doesn’t work in practice. In my experience, the failure to make predictions, and particularly the lack of any predictions that can be refuted, is the sign of an immature field or one where noise or unobservables prevent critical testing. We fortunately do not live in a world where such immature fields are in the practice of constructing buildings or airplanes, though much of medicine is much more immature than we would like (e.g., we still can’t cure many cancers). This raises a dilemma in climate science because those who think alarm is justified want to assert that their field can make predictions, but at the same time they will not admit of any data that would refute their claims. I think this is very telling.
As quoted earlier by Rafe Champion, Popper himself wrote in his **Logic of Scientific Discovery**:
> In point of fact, no conclusive disproof of a theory can ever be produced.
Handwaving at Popper’s falsificationism, confusing models and theories, and building up false dilemmas might also be very telling.
In many fields, particular problems are addressed and specific predictions made. We can predict response of corn to fertilizer with enough accuracy that farmers can make a living. Chemists can predict the yield of various compounds with enough accuracy that an entire chemical industry exists.
In the climate change area, we have a different phenomenon. We have big models that make literally thousands of predictions, depending on how you want to slice the outputs. We may not be able to ask yet about true/false, but we can do two things: we can compare the models to each other, and we can evaluate how good the specific pieces of the output are.
If we take any class of outputs, such as polar climate, the models disagree with each other as to how quickly the summer ice minimum will decrease under a warming scenario. They also disagree on normal Arctic weather. Some create a solid sheet of ice with no cracks (no open water); others have open water (which affects heat loss from the water). Sometimes the jet stream flows the wrong way. The behavior of the jet stream over Europe tends to be wrong in all the models.

In other areas, such as ocean currents or ENSO phenomena, the agreement with data is sometimes acceptable, sometimes not. On the big scale, the models differ by up to 4 deg C in global temperature, and similarly in global or regional precipitation. In NO CASE do we get flat-out agreement between model outputs and the real world, only a general similarity at best. Even the overall trend in temperature since 1850 is not a precise match with the historical record; it is just SIMILAR to it. How similar? Good enough to make decisions with? Maybe, maybe not.

But the failure to match on the details (and the tropical tropospheric hot spot discussed above is not a “mere detail”) does not inspire confidence in the big picture. The vague similarity of model outputs to the real world seems to inspire some to give full confidence to model predictions under elevated GHG, but detail-oriented people are not so impressed. In a scientific-method sense, big-picture agreement tells me that one is on the right track, but not there yet.
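The question “how similar is similar enough?” can at least be made quantitative. As a minimal sketch (the anomaly series below are invented for illustration, not real model output), one can score a modelled temperature series against observations with a root-mean-square error and a correlation: a model may track the trend (high correlation) while still missing the details (non-trivial RMSE).

```python
import math

def rmse(model, obs):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def correlation(model, obs):
    """Pearson correlation: do the two series rise and fall together?"""
    n = len(obs)
    mean_m, mean_o = sum(model) / n, sum(obs) / n
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in model))
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs))
    return cov / (sd_m * sd_o)

# Invented temperature-anomaly series (deg C), purely illustrative.
obs   = [0.0, 0.1, 0.05, 0.2, 0.3, 0.25, 0.4]
model = [0.05, 0.0, 0.15, 0.25, 0.2, 0.35, 0.5]

# High correlation but non-zero RMSE: "similar, not a match".
print(round(rmse(model, obs), 3))
print(round(correlation(model, obs), 3))
```

Of course, picking the metric and the tolerance is itself a methodological choice; the point is only that “general similarity” can be stated as a number and argued about.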
In the context of a discussion of the scientific method and IPCC climatology, I believe it to be crucial for the IPCC’s distinction between a “prediction” and a “projection” to be preserved. Of the two, only predictions make falsifiable claims. According to the IPCC, its models make only projections.
The question (to paraphrase Alice) is: can you do that? That is, how can the IPCC claim it is not making predictions in the scientific sense? The IPCC’s two other Working Groups are busy exploring the impact of these projections, as well as life-changing strategies for mitigation and adaptation, all for the benefit of the world’s policy makers. If these are not predictions, then how can this be rational? It cannot.
I think what the IPCC means is simply that these predictions need not come true because humans control the future parameters. But as a logical form these are clearly predictions. This projection dodge is part of the IPCC’s studied ambiguity. The IPCC does no science. It has no models. But many of its writers do science and run models.
“Projection” is a form of shorthand. Without it we have to say given-certain-events-happening-in-a-certain-order-we-can-make-this-prediction.
Projection is a lot simpler, and it also makes more logical or common sense to have multiple projections. Multiple “predictions” sound a bit silly unless you use a longer explanatory form of words.
Exactly. Projections are predictions with a complex form. The confusion is that people think that they are not predictions at all. Then it becomes a rhetorical trick. Given that all predictions are of the form given A we predict B it would be much better if we just called them predictions, not projections.
There are some barriers to calling the IPCC’s “projections” predictions. Here are some of them:
* In an individual projection, your proposition B is the numerical value of a temperature. Viewed as a prediction, each of the IPCC simulation models is falsified by the temperature record.
* If B is not a temperature, then B remains to be identified.
* Your proposition A remains to be identified.
* Before A can be identified, the problem of induction must be solved.
* “Given A we predict B” overstates the available information in the empirical data and laws of nature. When a model overstates the available information, it fails when tested. To avoid overstatement, the prediction must be along the lines of “Given A we predict B with a probability value of C and an uncertainty in the limiting relative frequency of B of +/- D.”
All right, I’m a physicist. Yes, that means that I am an advocate of the Scientific Method. This business of saying that things (e.g. AGW) are too complicated, etc., is (in my view) no more than a thinly disguised attack on science itself.
It’s indisputable that proposed solutions to our technical issues need to be shown to be legitimate — and “consensus” is a bogus methodology (just look at history).
What is needed to be done is that a hypothesis should have a comprehensive, objective, transparent empirical-based assessment. I call that the Scientific Method, but you can name it whatever you wish.
What a fantastic idea! An overt, heads forward, all out robust attack on science itself.
Yes, I might be able to do this. My target isn’t science but rather subjectivity. To begin, it is necessary to disassemble science and then reassemble the subjective and objective aspects in a joint universe of consideration. The one-handed approach to reality has stumbled along for centuries now, going nowhere. Enough.
I’m not going to say much about the science problems with AGW as I am mostly ignorant in that regard. The subjective issues are immense and dominate.
Excuse me please. Be back later …
The IPCC is a political advocacy group disguised in the public mind as the voice of science. Is it subject to scientific method? Not really. It is doing scientific reasoning, not science, and there is a huge difference.
How is an IPCC report like AR4 different from Inconvenient Truth? Both have scientist authors. Both are political documents. Both do scientific reasoning, in fact they present many of the same arguments.
It is surprising that there is no discussion of Kuhn here, given that the issues he addressed seem far more relevant than Popper’s. AGW is a candidate “paradigm” for climate science, one that has made great headway. What is unique in the present situation is that AGW is what I will call a Politically Motivated Paradigm.
That is, AGW is the scientific support for the environmental movement’s push for new levels of political power, so there is tremendous pressure on science for its acceptance. I am not sure this combination of factors has ever occurred before in the history of science. Many of the important features of the debate can be explained by this unique confluence of communities, the political with the scientific. The key is to first understand how paradigms work in science. Then consider what happens when one is captured by a strong political movement.
Is there any actual evidence for what you say here, or is it simply you saying what you wish to be true?
Given that the basics of the theory have been around since 1896, I believe, how would you distinguish between
– A group of scientists working in a field wishing to raise the alarm about a threat to the welfare of the human population of said planet, faced with counter propaganda from assorted vested interests?
– A cabal of ‘environmentalists’ have engaged the services, secretly, of a large proportion of the scientific community, to falsify a vast amount of data and theories in order to promote some tangentially-related goal?
The problem with your first (presumably preferred) explanation is that most of the evidence that has come out – the emails, the peculiar properties of the algorithm for making hockey sticks, the poor quality of the temperature data, the absurd mistakes such as the Himalayan glaciers – consists of established facts, not even in dispute. Regarding your second explanation, I wouldn’t call it a cabal – more a gradual drift away from the ethics of science.
To say the basics of the theory have been around since 1896 is deeply disingenuous. If significant heating from CO2 were a self-evident fact, why was it deemed necessary to collect data to ‘prove’ the effect? Why also were climatologists afraid of global cooling long after CO2 was recognised as a greenhouse gas?
I’m not paid by big oil (LOL), in fact I am quite green at my core, and I realise that there are almost certainly real problems in store for the planet – not least, over population – and I am angry at the way very well meaning people have been distracted by a non-problem (at least it lacks good evidence) from real issues.
Andrew, not only can I distinguish between the two cases you suggest, but I can also show evidence why neither is correct. Do you defend one?
The question is what does it mean (for science) to say that that climate science has become politicized? There is no cabal, nor falsification of data, that I know of. Nor is it simply a matter of some scientists going political in their spare time. What I see is the theory of AGW being elevated to paradigm status prematurely. Now that the political movement has been stopped, at least temporarily, science has to answer for this political excess.
… Yes exactly.
How do paradigms work in science?
(Please be explicit about this.)
Framing the issue as –
“AGW is the scientific support for the environmental movement’s push for new levels of political power,”
Seems to avoid any consideration of whether AGW research follows credible scientific methods. It also turns a very diverse collection of groups and ideas under the falsely unifying banner of ‘the environmental movement’.
The speculation that – ” I am not sure this combination of factors has ever occurred before in the history of science.” would seem to be confounded by the previous occasions when what might be called ‘the environmental movement’ used science to extend its control via government regulation and international agreements to limit or ban atmospheric emissions of SOX/NOX and CFCs.
How does the scientific method used in the science of acid rain and the ozone hole compare with AGW?
DDT, SO2, NOx and CFCs are obvious precursors and their history is well known. If you are not aware of the environmental political movement I can’t help you. The methods are the same with AGW: half science, half advocacy. But AGW is an entire science, a paradigm. In fact climate science funding, at $3 billion/year or so, is probably ten times bigger than it would be if this were just a scientific issue. Capturing an entire science is a first, so far as I can tell. What a trip!
With sociologists of science present – thanks – I’d like to convey my tentative impression that Geography’s integrating, quantitative program, born around mid 20th century, was simply overrun. It lost out somewhere in the formative years – 70s? – of serious GCM- and IPCC-building. That’s when it should have invited itself more prominently. You can tell e.g. (just one tiny example) by the way it was not part of the scientific community expected to rally around The Eleven.
Thanks for your generous response to my terse comment. You struck me at the end of a project to work through many dozens of philosophy books, starting at local libraries, on to uni and the net, to find what introductions and “state of the art” books say about Popper. Almost without exception the introductions repeat the same mistakes. Many state of the art books don’t mention him at all (well, why would you if the introductory books got it right?).
So I have to cut people some slack, what are folk supposed to think when all the books say much the same thing, we don’t have time to check every single thing at the source. The professionals are supposed to do that! Besides “The Logic of Scientific Discovery” is not an easy book to read. Of course you have to be critical of Popper, he made mistakes and some he corrected. But the debacle of scholarship vis a vis Popper has exposed a very disturbing side of the philosophy profession.
Rafe, can I ask if you have read David Stove’s Scientific Irrationalism?
To quote Roger Kimball’s review in New Criterion,
I agree that Popper has been serially ignored or, when not ignored, often misrepresented, but I wouldn’t say it’s because he’s difficult to read. The only thing challenging about the Logic of Scientific Discovery is the weight of the thing. He wrote clearly and clearly wanted to be understood. Some bits of it are even funny.
Often when reading about philosophy it’s easiest to go straight to the source. If you want to find out what Hume said, read Hume. For Kant read Kant. It was an interesting experience reading Popper after seeing his ‘philosophy’ paraphrased in introductory texts and textbooks. It made me wonder if the authors had even read him at all.
Alex Heyworth asks Rafe about David Stove’s Scientific Irrationalism.
In Rafe’s stead, I can only offer up the observation that Stove fell prey to a kind of stereotypical “straw man” where understanding Popper is concerned.
SEE for instance Rafe’s post here
Just as with animated cartoon super-heroes, “Straw men” are uniquely easy to dispatch; no hard thinking is required because the just outcome is at hand!
The distinction between Gettier’s traditional Western problem of “Justified True Belief” and Popper’s approach — ie, anti-foundational conjectural knowledge — warmly explored in Mark Notturno’s excellent “Science and The Open Society: The Future of Karl Popper’s Philosophy” (2000)
is essentially what Stove resolutely missed.
In the first, as with Aristotle and since, we begin with unprovable assertions (“axioms”); in the latter, we only employ them provisionally (if at all), because to do so leads either to epistemological subjectivism (eg, psychologizing motives, as Stove does) or else solipsism (eg, PoMo).
Thus, Stove came up with a book rooted in a philosophical version of “gotcha” journalism, lacking all substance if one knows enough of Popper to know his innovative arguments better.
I’m not sure what “”Justified True Belief” tradition” refers to. In any case, Gettier was certainly not a proponent of knowledge as justified true belief:
For what it’s worth, Luciano Floridi argued that Gettier’s problem is unsolvable: **On the Logical Unsolvability of the Gettier Problem**, Synthese, 2004, 142.1, pp. 61-79. Here is the paper:
There is no need to invoke all this formal stuff. Stove’s criticism of Popper was answered by Michael Rowan and Alan Smithson:
> D. C. Stove’s analysis of Popper’s theory of scientific statements is vitiated by at least three errors, all of which stem from a crucial omission: that whilst Popper’s theory of scientific statements is a theory of statements in science, Stove’s restrictive analysis ignores the context of the statements and proceeds as though they were related to each other by nothing more than the logic of propositions, i.e. they appear in Stove’s analysis as atomistic, as distinct from scientific statements.

Philosophy, Vol. 55, No. 212 (Apr., 1980), pp. 258-262. Published by Cambridge University Press on behalf of the Royal Institute of Philosophy.
willard, I notice that the comments by Rowan and Smithson are in relation to a paper by Stove, Popper on Scientific Statements, which appeared in Philosophy in 1978. Rowan and Smithson’s comments appeared in the same journal in 1980. So they can hardly be said to have refuted a book which was first published in 1982.
Stove did not reinvent himself in that book.
Even so, such a brief comment can hardly be said to be the last word on the subject. They didn’t even quote any statements Stove made.
I have more time now. So here is a less cryptic answer.
Here is a relevant section of Wikipedia’s page about Stove.
The main criticism is that irrationalism refuses the commonsensical impression that knowledge is always growing. Stove might be right in saying that it makes no real sense. Stove might be wrong in considering that the two methods he devised actually show this.
Take for instance the one about the “sabotaging of logical expressions”. Stove argues that if we need to take scientific statements in context, they lose all logical force and become empty. This is a facet of his criticism different from the logical analysis he offered in the 1978 article. For his analysis still depends on an outlook on science that he had been developing since the sixties, i.e. on the idea that scientific statements are logical constructs. That scientific theories are logical constructs has some importance for inductivism and verificationism.
While Aussie realism appeals to me, Stove’s equating of rationality with some kind of logic of science makes no sense except for polemical purposes. It would be tough to disagree with the idea that Stove was calling himself a rationalist, a realist and a conservative to frame himself as a contrarian. In that case, these epithets become themselves success words, in dire need of being neutralized.
We clearly can see in this very blog that science does not need logic, only analogies based on myths ;-)
Thanks willard, that makes more sense. I am puzzled still by your implication that success words “need to be neutralized”. Why should they not be used in their plain meaning? Is this because we have a concept or process for which we do not have words, so we have to appropriate others and then “neutralize” their normal meaning?
I only have time to refer to the first chapter of Stove’s **Scientific Irrationalism**, which is entitled Neutralising Success-Words. You can read the first section in Amazon:
The last paragraph of p. 23 shows the essence of Stove’s argument in the book.
Hello Alex and willard, yes I read Stove’s book in the original edition (very poor binding!) then in the better crafted edition produced by the Sydney publisher and author Keith Windschuttle. David Stove achieved something approaching cult status in some circles, having almost made a career out of attacking Popper; however, the book fails for three reasons.
1. He attacks a straw dummy of Popper.
2. He knows next to nothing about science; asked for an example of a scientific law in a debate at Sydney Uni, he offered “the atomic weight of lead”.
3. He is stuck in the “justified true belief” theory of knowledge, where “conjectural knowledge” is an oxymoron and philosophers make careers out of failing to progress the program to produce a criterion for justified true beliefs.
The late Bill Bartley was the leading exponent and elaborator of the critique of justified true belief. This explains at least in part why Popper’s ideas have made so little headway among the philosophers.
This may answer Willard’s question about “justified true belief”, which is a term mostly used in Popperian circles and hardly at all elsewhere, because elsewhere it is just assumed and is not on the table for discussion. BTW I am intrigued by Willard’s blog; what is a nice young rocker like you doing here with all these rough types from academia and philosophy?
This is a taste of Bartley.
This is probably more than you wanted to know!
On the role of myths as the beginning of science.
Rafe, thank you for your very comprehensive reply. As you may have noticed, in a reply to Orson I also noted the apparently oxymoronic nature of the phrase “conjectural knowledge”. However, I am very willing to be further educated on this question. I think I understand what the phrase is getting at, but it seems to me that phrases such as “partial knowledge” “conditional knowledge” or even “current knowledge” (with the implication that it is known it could be superseded) might better capture the flavour. Am I mistaken, and if so, how badly? Is “conjectural” being used in a way I misunderstand?
I hope this helps Alex!
The idea of conjectural knowledge is part and parcel of the theory of science as conjectures and refutations. It is the most significant innovation in Logik der Forschung (1935), far more important than the idea of falsification which everyone talks about. Bartley is the best source to find out the implications and ramifications of “non-justificationism”.
Conjectural knowledge is consistent with the idea of partial, conditional and current knowledge (that can be improved), except that it explicitly acknowledges that our knowledge will always be partial and conditional, even as we develop better and deeper theories.
I think of this as one of the “turns” that Popper took which are much more interesting and important than falsification (important as it is). Falsification became a big thing because it was the most visible difference from the positivists, but the really important differences were about induction and the possibility of putting p values on theories. The inductivists refused to give up on induction because this was the fallback position from positively justified beliefs – that is, probably justified beliefs! Bayesian probability has become the next fallback.
Popper took several “turns” and I think it is helpful to address them and see what value they add.
Very helpful, Rafe. Thank you again.
> what is a nice young rocker like you doing here with all these rough types from academia and philosophy?
I thought it was a nice compliment, but then I realized that my auto-completion has entered
whereas my real blog is
Happy to make some promotion for a rocker.
Wel l I am pleased that you took it as a compliment (as indeed it was). :)
Orson, I will be very interested to read Notturno’s book. However, at the moment I am doubtful that the phrase “conjectural knowledge” has any meaning. It seems oxymoronic.
“Conjectural knowledge” was Popper’s first term for what was later reformulated by friendly critics like Bartley as “critical rationalism” (CR), and then pancritical rationalism (PCR).
This is in direct contrast to the inductivists, who claim a founding of justified knowledge on some unjustified base (or “axioms”). As Rafe argues, Popperians claim that our knowledge claims are sounder if ALL foundations are also subject to criticism – IF (and only if) they successfully withstand scrutiny.
Transitionally, by the 1970s, Popper’s friends tended to prefer “trial and error” to describe the growth of objective knowledge. (For instance, Madsen Pirie in “Trial and Error and the Idea of Progress,” 1978.)
Thus, Popper’s own late (after 80+ years) formulation of his epistemology preferred the term “evolutionary knowledge”, to emphasize the growth and self-correcting nature of objective knowledge (e.g., scientific knowledge), accepted only after surviving critical and rational tests – something close to what people think of as “the scientific method”, which is very much a deceptive shorthand for not only scientific tests, but also the open-ended nature of social and institutional testing of fungible and repeatable knowledge claims.
This is where the IPCC’s authority – and authoritarianism – crested, and the oft-repeated claims of “scientific consensus” about CAGW veer into uncriticizable (save by the saintly “climate science” contributing scientists) propaganda claims. Hence the importance of Judith Curry’s “outing” and Climategate in unmasking its saintliness.
This means that the character and terms of critical discourse in climate science can finally be re-examined – and, as this thread testifies, it speaks to the continuing relevance of the late nonagenarian philosopher of science, Karl Popper.
> This is where the IPCC’s authority – and authoritarianism – crested, and the oft-repeated claims of “scientific consensus” about CAGW veer into uncriticizable (save by the saintly “climate science” contributing scientists) propaganda claims.
Lots of success-words there: authoritarianism, crested, saintly, propaganda.
Here is an example of claims the IPCC made:
No success-word mentioned above applies to these claims.
For the sake of a rational discussion, it might be better to distinguish the scientific claims from the way they were promoted.
It is a fair enough distinction.
But consider this story: in July 2007, I shared a table with a physicist at NIST (the electro-mag division), drank beer – and listened to NCAR climatologist Greg Holland hold forth about CAGW – before a Cafe Scientifique in Boulder, Colorado.
The worst worry he had was that Arctic warming might unleash methane to catastrophic effect. Neither I nor the physicist, knowing that the Holocene Optimum was significantly warmer in the Arctic than today’s temps, found the plaint compelling.
Of course, the Holocene Optimum was something I knew had been expunged from the IPCC’s TAR.
So, could Holland have forgotten his climatology basics? Could that account for his near-weeping anguish? Or does the IPCC’s sloppy “science” have something to do with institutionalized corruption?
Thanks, Orson. Between your reply and Rafe’s comments upthread , I have a much better idea of what Popper meant and how his ideas developed. I will clearly need to do some more reading.
The epistemology of the climate science debate is quite different from those cases usually dealt with in philosophy of science. It is not a case of two competing theories, nor of the confirmation of a single theory (although that is certainly present). Rather it is a proposed theory versus known ignorance. We now know that climate changes naturally but we don’t know why so we can’t rule it out. AGW’s competition is what we know we don’t know. An interesting case indeed.
CAGW also suffers from ignoring known causes – both natural and anthropogenic – of climate change, especially those that are just being discovered and quantified.
E.g., summaries by the NIPCC on evidence ignored, and subsequent reports. Roger Pielke, Sr. details anthropogenic causes ignored. Roy Spencer highlights uncertainties in clouds, including uncertainty over which is the cause and which the effect. Nicola Scafetta highlights solar influences ignored. Anthony Watts highlights errors in the surface stations.
Henrik Svensmark is quantifying, “CosmoClimatology”, the impact of the sun’s modulation of galactic cosmic rays on earth’s clouds. etc. etc.
Geraldo Lino writes, distinguishing the scientific method from “consensus”:
Climate Change: The Keywords (Part 2 of 3) Geraldo Luís Lino, special to Climate Change Dispatch | 14 November 2010
Well, my bozo opinion of the scientific method is that it works like this:
1. A scientist makes a claim, and provides the data, logic, math, code, observations and anything else that underpins his claim.
2. Other people try to find something wrong with the claim. The error could be in any aspect of the claim—the data, the math, any part.
3. If nobody can find anything wrong with the claim, it is provisionally accepted as scientific fact … until someone finally finds something wrong with the claim.
Now, there are several things that are required for the method to work. The most important of these is transparency. Unless all of the data and information and code and all the rest are available for everyone’s inspection, there is no way to determine if the scientist has made a mistake.
And this is why I strongly disagree with the idea that “science is what scientists do” … because in climate science, far too many “scientists” are doing nothing of the sort. They are going out of their way to hide their code and data.
A second important part of the scientific method is falsifiability. Note that falsifiability doesn’t necessarily have anything to do with experiments, or with observations. A flaw in the logic can be just as fatal to a scientific theory as a contrary experiment.
A third important part of the scientific method is a level playing field. Unless anyone can participate in the process and register their scientific ideas and objections, it is not valid. This is why the actions of Phil Jones and the other un-indicted co-conspirators in trying to prevent publication of opposing ideas are so despicable. It is also the reason why RealClimate, the home of the kings of censorship, is not a scientific site in my opinion. You want science, you have to listen to scientific objections and make room for opposing theories.
Unfortunately, far too much of climate science pays little attention to these kinds of scientific niceties. See the latest joke passed off as climate science (and peer reviewed) here … if someone thinks that is science just because climate scientists are doing it, I fear we are talking about very different things when we say “science”.
Anyhow, that’s my $0.02 …
I agree with you, though I have a more compact version:
There is theory; it has to be self-consistent and describe/predict data.
Data must be given with error bars.
Data trumps theory.
Out of the above three, the CET’s data is the least certain one.
For the rest, some time in the future; till then it is only hypothesis.
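The “data must be given with error bars” and “data trumps theory” points together suggest a minimal consistency test: a theoretical prediction stands only while it falls within the observational error bars. A toy sketch, with invented numbers:

```python
def consistent(prediction, measurement, error):
    """Data trumps theory: the prediction stands only if it lies
    within the measurement's stated error bar."""
    return abs(prediction - measurement) <= error

# Invented example: theory predicts 1.2; observation is 1.0 +/- 0.3.
print(consistent(1.2, measurement=1.0, error=0.3))   # survives
print(consistent(1.2, measurement=1.0, error=0.1))   # refuted
```

Note that the verdict depends entirely on how honestly the error bar was estimated, which is exactly why data without stated uncertainties cannot trump anything.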
I have lived in a world which is unable to recognize and appreciate the distinction between ‘subjective’ reality in general and ‘objective’ reality as a specific subclass of ‘subjectivity’ which emphasizes ‘visualization’.
I exist in a solitude.
I have no credibility, nor anything to contribute.