On the Uncertainty and the AR5 thread, Fred Moolten and Paul Dunmore provide starkly different arguments for reasoning about multiple lines of evidence. This issue gets to the heart of the source of much disagreement in the scientific debate about climate change.
Fred Moolten:
To more formally describe the importance of the convergence principle in support of anthropogenic causality, I would state it as follows:
A. Very few data sources on this conclusion are amenable to a probability estimate of the form: “The probability that anthropogenic causality is true is p. The probability that it is false is 1 – p.”
B. Instead, the vast majority take the form: “The probability that the data can correctly be interpreted as demonstrating anthropogenic causality is p. The probability that the data are insufficient to demonstrate this result is 1 – p.” Here, 1 – p is not the probability that the conclusion is false, but merely the uncertainty about its truth.
The distinction is critical. The confusion between A (disproof) and B (uncertainty) leads to a very substantial underestimate of a valid probability for anthropogenic causality based on the large number of data sources that fall into category B, and the few that might be assigned to category A.
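To make the stakes of the distinction concrete, here is a minimal sketch in Python; all numbers are invented for illustration and are not Moolten's:

```python
# Ten independent "category B" studies, each of which validly demonstrates
# the conclusion with probability p = 0.6 and is merely inconclusive otherwise.
p, n = 0.6, 10

# Misread as category A, each study's 0.4 would count as evidence that the
# conclusion is FALSE, as if the studies had voted roughly 6-to-4.
# Read correctly, 0.4 is only the chance that a single study shows nothing;
# the chance that all ten studies are inconclusive is:
print((1 - p) ** n)   # ~0.0001
```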
I have cited examples above of category B probabilities. How often will evidence for some alternative climate mechanism constitute an example of a Category A disproof type of probability value? Such would occur only when the alternative is inconsistent with the coexistence of anthropogenic causality. This is rarely possible, because quantitative estimates are rarely precise enough for one possible mechanism to exclude an important role for all others. As an example, early twentieth century solar forcing can be assigned a greater role than anthropogenic forcing in mediating observed trends, but the participation of both in proportion to calculated potencies, in conjunction with other factors, known and unidentified, is consistent with the data. Nevertheless, Category A examples should always be evaluated seriously based on the evidence in each case.
I hope it is clear from the above that what we are discussing are probabilities, and not certainties – indeed, the operative term that defines Category B is uncertainty. With that in mind, and with uncertainty as a focus, Judy, of what you are writing about, I hope you will consider adding the convergence principle to other perspectives on probability you are already addressing in formulating conclusions. Without it, I believe an important element of how we evaluate climate data will be missing.
Paul Dunmore:
Fred, I have been hoping that someone would tackle your “convergence” argument in this thread head-on, and various people have taken on bits of it. But it turns up in various guises, and it needs to be nailed properly.
The proposition you wish to establish with high confidence is that “anthropogenic greenhouse gas emissions are a major contributor to global warming.” No evidence can establish the truth of this proposition, because the proposition is incomplete. It cannot be tested until the word “major” is given an operational definition. (As a grammatical rather than logical difficulty, your statement contains two implicit propositions, the first of which (global warming is occurring) must be established before the second has any meaning. But that is minor.) This bears on the question of whether a particular study supports there being a “major” contribution. The IPCC has defined “major”, and if you accept their definition you would have to consider whether each of the studies you have read supports a “major” contribution or just a contribution. I don’t see that you claim to have made that distinction, and I am not sure that you could have.
Next, you assert that there are large numbers of independent studies (“thousands” at one point) which provide evidence in support of this proposition. But many of the studies cannot provide such evidence: paleoclimate studies, for example, cannot directly give evidence about the effects of anthropogenic emissions, because such emissions were not happening at the time. What they can do is to allow us to test our understanding of the climate system, and that understanding is itself evidence about the causes of contemporary global warming. But you massively double-count if you present all of the studies that support our understanding and also count the models themselves (as you do, twice over, when you say that the models fail to reproduce trends without including anthropogenic forcing and also that we must accept high-end values for other drivers if we exclude anthropogenic effects; this is a single argument expressed in two different sets of words). In fact, the evidence here is the models, not the studies that support the models – the supporting studies may mean that we have high confidence in the predictions of a single model, but not that we can treat the predictions of the model in a new situation as having been confirmed thousands of times by the thousands of studies in quite different situations that we used to build up our confidence in the models. What we might have is 90% confidence in the description offered by one model, not 50% confidence in each of thousands of independent predictions of AGW.
Next, you misuse Bayes’s Theorem in the way that you combine your streams of evidence. Bayes’s Theorem allows us to update the probability that a hypothesis is true given evidence about related observations; it can be applied sequentially to each of many pieces of evidence, and the probability that the hypothesis is true builds up very quickly towards 1 in exactly the way you calculate. But what is needed in the update is not what you claim: it is NOT the probability that the hypothesis is true given the next piece of evidence, but the probability of observing the next piece of evidence given that the hypothesis is true (and also that probability given that the hypothesis is false). If our next piece of evidence, X, has a 50% chance of being observed if theory A is true, and has a 50% chance if theory B is true, then the Bayesian update does not bring the probability of A upwards towards 1, but somewhat closer to 50% (from either above or below 50% based on our previous evidence). In fact, if there is also a 30% chance of observing X if theory C is true and a 10% chance if theory D is true, the Bayesian update will pull the probability that A is true down below 50%. And the contrary theories do not need to be the same in every iteration. It is possible that in an ice-bubbles study the alternative hypothesis is gas contamination, in a tree-ring study it is physical damage, and in a historical temperature set it is incomplete corrections for UHI. In each case, if the alternative is about as consistent with the evidence as the conventional explanation, the conventional explanation will not gain ground, even if it is a contender in every single study. That is why scientists need to design careful studies which demonstrably eliminate every alternative explanation in that particular study (that is, they must show that the probability of observing result X under contending theory B is some very small amount such as 0.01); and they have to repeat this feat again and again and again. There is no short-cut principle by which weight of numbers of unconvincing studies can eventually overwhelm all opposition: your “convergence” mantra is a fallacy, not a principle.
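A minimal Python sketch of the mechanics Dunmore describes, with priors and likelihoods invented for illustration: evidence that is about as probable under every contending theory leaves the posterior where it started, and only evidence that is improbable under the alternatives moves it.

```python
def bayes_update(prior, likelihood):
    """Posterior P(H|X) from prior P(H) and likelihood P(X|H)."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    p_x = sum(joint.values())            # P(X) by total probability
    return {h: joint[h] / p_x for h in joint}

prior = {"A": 0.7, "B": 0.1, "C": 0.1, "D": 0.1}

# Evidence equally likely under every contender does not move the
# posterior at all, no matter how many such studies are stacked up.
weak = {"A": 0.5, "B": 0.5, "C": 0.5, "D": 0.5}
print(bayes_update(prior, weak))         # posterior equals the prior

# What raises P(A) is evidence that is very improbable under the
# alternatives, i.e. the P(X|H) terms the update actually requires.
strong = {"A": 0.5, "B": 0.01, "C": 0.01, "D": 0.01}
print(bayes_update(prior, strong))       # P(A) rises to ~0.99
```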
Finally, your formulation of the question at issue is wrong. You present it as a true-false hypothesis, but it is actually a measurement problem. Most people whose views deserve to be taken seriously have no doubt that adding CO2 and its friends to the atmosphere will warm the planet. The $64 question is: how much? If doubling CO2 will warm the planet by 0.1C, we have nothing much to talk about – the atmospheric scientists can be left in peace to get on with their work. If doubling CO2 will raise the temperature by 20C, we have a great deal to be concerned about (and I would want to withdraw a policy comment I made above). What matters is the magnitude, and the difficult problem is to measure the magnitude, given the complexity of the feedback processes and the fact that some of them (and even some of the forcings) are incompletely understood and even more incompletely calculable. There may be debate about which magnitude is to be measured: suppose it is the equilibrium temperature sensitivity. Suppose, further, that several careful studies of different kinds have estimated this to be 1.5+/-0.7, 4.2+/-2.9, 0.6+/-0.3, where the +/- ranges are 95% confidence intervals. (For the avoidance of doubt, I just made these numbers up.) All of these studies clearly support a value greater than zero, but they are inconsistent with each other. Are they supportive of the AGW hypothesis? Yes, of course, but that was never at issue. The average estimate is 2.1, but that is completely inconsistent with two studies and only marginally consistent with the other. Assuming normally distributed errors in each study, the maximum-likelihood estimate of the sensitivity is 0.77, but this most-likely estimate is still extremely unlikely, with a probability density there of only 0.001 (reflecting the mutual inconsistency of the estimates). That is, this made-up evidence overwhelmingly supports both of the propositions that (1) the true sensitivity is greater than zero, AND (2) we do not at all understand what is going on. The problems might be measurement errors, inconsistent definitions of what is being measured, or processes that we do not suspect. The only way to get a credible estimate is to work out why the estimates vary and to design new measurements that correct the problems; we have not done that until all of the estimates agree within observational error.
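Dunmore's made-up numbers can be checked directly. Assuming his 95% intervals convert to standard errors via half-width = 1.96σ, the maximum-likelihood estimate under normal errors is the inverse-variance-weighted mean, which does come out near 0.77; the joint density at that point is tiny, of the same order as his 0.001 (the exact value depends on how the intervals are converted):

```python
import math

studies = [(1.5, 0.7), (4.2, 2.9), (0.6, 0.3)]   # (estimate, 95% half-width)
params = [(m, hw / 1.96) for m, hw in studies]   # (mean, sigma), assuming 1.96-sigma intervals

# MLE for independent normal estimates: inverse-variance-weighted mean.
weights = [1.0 / s ** 2 for _, s in params]
mle = sum(w * m for w, (m, _) in zip(weights, params)) / sum(weights)
print(round(mle, 2))   # 0.77

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Joint likelihood density at the MLE: tiny, because the estimates disagree.
density = math.prod(normal_pdf(mle, m, s) for m, s in params)
print(f"{density:.4f}")   # ~0.0036 here; same order as his quoted 0.001
```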
So when you say that you have read many studies and they overwhelmingly support the AGW hypothesis, can you tell us how many of these studies provide (independent) estimates of the sensitivity, and have you noticed whether these estimates are mutually consistent? Some such estimates exist, but there seem to be not very many of them. If the IPCC writers have cherry-picked the studies to which they give credence (or the literature is biased against publishing studies which give estimates away from the consensus), then the persuasiveness of the claims quickly becomes much less than is thought. So such allegations of (even unconscious) bias in the process are, if at all credible, very damaging to our confidence in the consensus views – these debates matter for scientific reasons, not just as political point-scoring. BTW, suppressing the highest of my made-up estimates would be as damaging as suppressing the lowest, because each action would convey the wrong impression that our understanding is better than it really is.
None of this is to say that your belief about AGW is wrong; I share at least some of your views. But the argument that you present for it is utterly bogus.
Fred Moolten:
For readers to understand the background without revisiting the other thread, I suggested a principle of “convergence” whereby a large number of approaches to a hypothesis (in this case anthropogenic causality) could add to the probability that the hypothesis is correct to the extent that they were independent or partially independent – the degree of independence affecting their contribution to the final probability estimate.
An essential element of the principle is that none of the individual approaches (e.g. studies reported in the literature) need necessarily be conclusive, but would nevertheless contribute if they provided partial support (e.g., an ability of prehistoric CO2 to elevate temperature or of current CO2 to correlate with temperature – obviously neither of these “proves” anthropogenic causality). This involves studies of the type described in your post, where the study is of the “B” type. I suggest that adequate counter-evidence would require multiple studies of a hypothesis that excluded anthropogenic causality, and therefore the number of conflicting studies is an important variable. In other words, the number of supporting studies is an important ingredient of the probability estimate in addition to the strength of each study.
I cited a hypothetical example of the principle in which a very large number of supportive, independent studies of intermediate conclusiveness would constitute a dataset, D, that would be very improbable if the hypothesis being tested, which I labeled A, were false. This would occur if an alternative I labeled S were true. The Bayes formulation for this is given by P(A|D) = P(D|A) x P(A)/P(D). In this hypothetical case, P(D) is almost the same as P(A), because the alternative S in my example almost never yields D.
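Writing out the total-probability expansion of P(D) shows when the step “P(D) is almost the same as P(A)” holds: P(D|A) must be near 1 and P(D|S) near 0. A sketch with invented numbers:

```python
# Moolten's formulation, with illustrative values that are not his.
p_A = 0.5            # prior for A (assumed for illustration)
p_D_given_A = 0.9    # the convergent dataset D is likely if A is true...
p_D_given_S = 0.001  # ...and very unlikely under the alternative S

p_D = p_D_given_A * p_A + p_D_given_S * (1 - p_A)   # total probability
p_A_given_D = p_D_given_A * p_A / p_D
print(round(p_A_given_D, 4))                         # 0.9989
```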
Paul Dunmore correctly states that I generalized about what constitutes “major” anthropogenic causality. I was primarily interested in bringing productive discussion away from the question as to whether anthropogenic causality was non-existent (or trivial) and instead directing it toward appropriate estimates of its strength relative to other climate drivers.
As this discussion proceeds, I expect that more of the elements that go into these estimates will emerge.
Thanks Fred, I will add this to your post.
“Consilience” is not part of my vocabulary, although I managed to fit together a few puzzling observations in my career – usually while not actively trying to do so.
The probability is extremely low that the initial consilience pattern will survive the next puzzling observation, but on rare occasions that has happened.
In climatology, not enough observations have been fit together to justify the promotional campaigns of world leaders, Al Gore and the IPCC.
Dunmore’s claim here is essential:
“So such allegations of (even unconscious) bias in the process are, if at all credible, very damaging to our confidence in the consensus views – these debates matter for scientific reasons, not just as political point-scoring.”
Even the possibility of selection bias in public statements by scientists (or scientific committees) or, worse yet, a selection bias in scientific publication (and/or funding) is devastating to the reputation of climate science.
I would think that anyone who understands evolutionary process should understand the power of iterated selection processes. Any system that is systematically more inclined to admit one type of evidence or argument rather than another tends to accumulate variations in the direction towards which the system is biased; in biological evolution this process results in dramatically new species which would not have come into being given a different selection environment. In epistemology it results in new perspectives on reality, epistemological perspectives which are the outcome of the selectionist process that produced them. Is alarmist AGW a real concern or merely an epistemological mutation that happens to be the result of a biased system? Each occasion on which we observe evidence of systemic bias in climate science increasingly leads us to the conclusion: WE REALLY DON’T KNOW. None of us knows, because the heart of the scientific process, scrupulous support for open disagreement in scientific debate (and, relatedly, funding of diverse lines of inquiry), has been betrayed. This is why Feynman regards scientific integrity, or “leaning over backwards” to explore alternative interpretations, as the essence of the scientific process:
“But there is one feature I notice that is generally missing in cargo cult science. That is the idea that we all hope you have learned in studying science in school–we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards.”
It is to Curry’s credit that she takes seriously just how deeply damaging even allegations of bias in climate science are to the scientific enterprise as a whole.
Michael, I went to your website; I am delighted that you have shown up here. Off the topic of this thread, I have become very interested in the ideas of social business and how this can be used to promote adaptation. I will spend some time reading what you and John Mackey have written.
To Michael Strong: W.r.t. “Social Business”, I believe Muhammad Yunus would like to see a class of businesses defined as operating toward some rigorously defined and testable social goal, not averse to making a profit but not legally required, on behalf of shareholders, to maximize profit. Does that sound like what you are about? And would you say it is important to legally establish this class of businesses?
Science, going back at least to Newton (who was sometimes a good case in point), has always been practiced by human beings. It has its standards and traditions (Daniel Boorstin’s The Discoverers, especially its treatment of the Enlightenment, gives us a primer on how institutions, culture and traditions of science contribute to its power — it is far more than that caricature of a “Scientific Method” we learned about in High School). But there have always been deviations: personal vendettas, sloppy experimental design, the occasional made-up results, etc. Yet it has gotten us to where we are today.
Today it seems to me we have a massive slander campaign aimed at paralyzing discussion of AGW. With all due respect to true skeptics, much of it is driven by the conservative think tanks that pose as alternatives to traditional academia. Ross McKitrick, who recently lectured us on how climatologists should be more like economists, works for one, the Fraser Institute, which has faced its own criticism over the use of skewed data in a study of the effects of a minimum wage.
All the accusations I have heard about the Climatologist community and the IPCC seem pale in comparison to these “Think tanks”, heavily subsidized by big business and the ultra-rich, where you can get fired in an instant, as David Frum was from the American Enterprise Institute for not toeing the party line. All of this piling on about “cargo cult science”, betrayal, whatever being “devastating to the reputation of climate science”, Marxists and liberal fascists is just that: piling on. It is cultivated in an atmosphere of commenting on comments on comments as a competitive sport. Those who come up with the biggest zingers become the biggest heroes. People who know very little about the substance of a discipline use overextended logic which does not work except with terms defined with mathematical rigour, which is why I once tossed into a discussion the “Proof that all natural numbers are interesting”.
I’m afraid the people who do most to pump this movement full of energy would not deserve the time of day from Richard Feynman. They are not interested in science being more rigorous and careful. They are interested in destroying the reputation of science so that the truth can be created by the people with the biggest megaphones, which at the higher reaches is a cozy clique of billionaires, the ones who pour hundreds of millions into the so-called think tanks and their own captive media.
Hal and Judy, I’m happy to discuss issues associated with Conscious Capitalism off-line so that we don’t distract from the conversation here.
Hal,
The reputation of science is ultimately based on the extraordinary successes of predictive science, physics and chemistry in particular. The norms and methods that resulted in the extraordinary predictive successes of the “hard sciences” have been ported to other fields with greater and lesser degrees of success. But in the absence of something that resembles “critical experiments,” the norms of appropriate scientific behavior become considerably MORE important. In a sense, without falsifiable predictions, the norms are all we have left to know that we are doing science.
Feynman rightly questions the validity of much work in the social sciences. Recently much media attention has been devoted to evidence that most medical research is not well grounded due to systemic biases; see this article on the work of John Ioannidis in The Atlantic,
http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/
Much of economics, macroeconomics in particular, is not really science; see Russ Roberts’ analysis of macroeconomics here,
http://cafehayek.com/2011/01/what-is-economics-good-for.html
In all of these areas, as in climatology, one should be skeptical of all results. This is not an ideological issue; we’d all love to know the best diet to avoid cancer, and it sure seems like eating lots of vegetables helps, but it strikes me as absurd to put too much faith in the various medical studies purporting to tell us what to eat once one has understood the ways in which bias permeates medical publishing.
This doesn’t mean we should give up, but it does imply that we should work to improve quality, including work to improve the standards of discourse in climatology, as Judy is doing. Being righteous about the certainty of climatological research is neither defensible nor productive.
The concern that the free market think tanks are behind the skepticism strikes me as over-blown. It would be more compelling if there were no credible scientific skeptics; but the fact is there are credible scientists who disagree with various claims made by the IPCC. The fact that the number of credible scientists who disagree with the IPCC is a minority is not ipso facto a reason to disregard them or their arguments. This is especially true when there is evidence of bias in funding and publication, especially deliberate conspiracies as revealed by Climategate. It is even more true once one discovers the discrepancies between the rhetoric around the IPCC processes and its reality (well before the Himalayan glacier embarrassment, I followed Steve McIntyre’s experiences as an IPCC reviewer, and IPCC behavior did not remotely resemble an “objective, open and transparent” review process).
I have advocated for Gore’s proposal for a revenue neutral exchange of payroll taxes for carbon taxes, and I respect those who argue:
1. There are potentially very serious consequences of increased carbon emissions.
2. There are many benefits unrelated to AGW to reducing carbon emissions.
3. Therefore we should reduce carbon emissions.
But to pretend that the IPCC process and consensus represents exemplary science, at this point in time, is an insult to science. I predict that the more hard scientists, who have hitherto trusted the IPCC, discover how deeply corrupt the process is, the more they will distance themselves from the IPCC consensus.
I would like to see a broader concern with the epistemology of non-predictive sciences more generally, including most social sciences, much medical research, and various other fields, including climatology, but that is not likely to happen in the near future. For institutional reasons unrelated to the epistemological value of these endeavors, thousands of professors will keep cranking out thousands of research articles indefinitely, regardless of whether a particular field of inquiry is well grounded epistemologically.
For an interesting paper on the importance of prediction markets to improve science across many disciplines, see Robin Hanson’s “Could Gambling Save Science?”
http://hanson.gmu.edu/gamble.html
From an essay by Donald T. Campbell which addresses the epistemological points I was trying to get at in far more depth than I can here; read the entire essay for a more substantial presentation of his argument (section 8 is all that is strictly relevant, but section 7 shows just how deep are our propensities to support existing hierarchies, and thus why scientific norms that violate those hard-wired propensities are precious):
“All self-perpetuating belief communities are tradition-ridden, viewing current events through the spectacles of their pasts … But whereas most belief communities locate truth in a long-past revelation or … locate the ideals of life in some past heroic period, … science’s norms go explicitly counter to this, idealizing truth as lying in the future and decrying tradition as a burden and source of error … Do these antitraditional norms … not offer some advantage, however slight, to innovators … and make the sciences less tradition-ridden than other tribal groups?
The organizational requirements of group continuity and career attractiveness give administrators and leaders power … beyond what their declining competence and increasing rigidity merit. Such geriarchical and authoritarian biases scientific communities share with all other tribes. Thus off-the-record advice which young recruits to a thriving scientific laboratory receive indeed will usually be much like that received by an army recruit: “You’ll find that if you want to get ahead in this lab, you’d better go along with the old man’s ideas. He just doesn’t know how to take suggestions or criticism.” Yet, military communities and churches have explicit ideological support for this practice, while science’s ideology explicitly decries it …
In all social communities, narcissistic people with competitive egocentric pride are a problem. Cooperative people who defer to the majority, who get along and go along with others, and who hold the team together, get preferential treatment even if they are less competent. This is true of scientific communities too, contrary to scientific norms that encourage vigorous internal criticism even if feelings are hurt … Yet, scientific communities no doubt differ somewhat from other belief tribes in the rewards given competent arrogance.
No cult, sect, or other belief community can isolate itself from the larger society. Science is influenced by the external social system in many ways counter to optimizing scientific truth. Thus, the status systems of the larger society, based on political and economic power and social class, contaminate the internal status system of science. Given equal ability, it helps a young scientist … to be well-connected in the extrascientific real world … to have good manners, conventional social views, and to come from a high-prestige university. All such contamination violates important norms of science which hold that the contribution to scientific truth should be the only determinant of status within science. Should this norm be given up as hypocritical? Or has it in fact some effect, making science less subject to this contamination than it would otherwise be? …
These values of science I want to keep alive and available for use in the arguments that are made in the course of institutional decision- making. These values will, I believe, occasionally make a difference— a difference in favor of truth. Exposes demonstrating that science violates these values can go two ways: In shocked disapproval, we can try to advocate them more effectively. As a sociologist of science, I approve of this naive moralistic reaction and am thus sympathetic to the institution-preserving motives lying behind the outraged reactions Kuhn and Feyerabend have evoked … The exposes can also have the opposite effect, in a call to give up the hypocrisy by ceasing to affirm these values. This I do vigorously oppose.
But I do not want to exaggerate the effect of preached norms. Institutional arrangements that provide selfish incentives for norm- supporting behavior are more powerful. Honesty, for example, is an important norm for science as for all other self-perpetuating social groups. But the exceptional honesty of experimental physical scientists where science is concerned is probably not due to their superior indoctrination for honesty (though the sciences may recruit persons who have an exceptional desire for an occupation in which they can be honest). Rather, it is due to science’s exceptional punishment of dishonesty and to the possibility of … exposure … which competitive replication of crucial experiments provides … [R]epeated failure of others to be able to replicate a given experiment is cause for fear, shame, and anxiety … Fields lacking the possibility or practice of competitive replication thus lack an important social system feature supporting honesty …”
http://www.kli.ac.at/theorylab/Fulltext/campbell/campbell.html
Thanks very much for this post. Stay tuned for my next post, which is on the epistemology of disagreement, from Michael Kelly’s work. After that post, I will elevate your two comments into a post on the sociology of scientific agreement/disagreement. I am currently digging into exactly this topic; your comments and references are extremely helpful.
Ack! It’s exciting to see you getting into these topics. I feel some regret at being too busy with papers to contribute or to read all of the good contributions so far.
Michael Strong,
Your points are valid and I agree fully with the basic observation that the existence of mechanisms which create and perpetuate bias in a specific direction makes it very difficult, if not almost impossible, to judge the value of the evidence. As this is in no way proof that the results are wrong, a major effort should be directed to finding out whether some limits can be placed on the influence of this bias, and whether in this way better certainty can be obtained on the limits of reliable knowledge.
One of the best ways of reducing the uncertainty is for the science community to show that it has understood the risk of bias and that it is fully open concerning its findings. This is the area where climate science has not performed well. Mixing advocacy with the presentation of science adds to the doubts, and such behavior is continuously visible, either fully openly or in minor details of otherwise good scientific papers. When scientists cannot avoid showing signs of bias in their publications, how can we trust that the bias is not also influencing choices not brought up in the final paper?
The starting point of this thread was the discussion of combining evidence in the Bayesian spirit. Any bias as discussed here is likely to influence the subjective choices of the analysis, and as I described in another message, most of the steps of Bayesian analysis involve subjective choices where the bias is likely to enter. This does not require conscious acts; it is probably mostly done unconsciously, and the problem is precisely in the repeated unconscious choices with a cumulative, self-reinforcing effect.
But again, with an issue of the potential significance of climate change, the conclusion cannot be to forget all the evidence, but to form an improved estimate of its significance. For that we would need active, open and transparent participation by the climate change community. It should not be done as an attack on science or individual scientists, but as an absolutely required activity neglected so far by too many.
Let’s not distract people from objective considerations of information.
“the number of credible scientists who disagree with the IPCC is a minority”
It is a very small minority.
“especially deliberate conspiracies as revealed by Climategate”
Setting aside the issues around cherry-picking and spinning that would be of interest to anyone who studies PR, if you have read the emails, you know your statement has no basis in reality. But I would agree that people need to appreciate that emails are not private and we need to consider the communication climate, at all times. I have to wonder why you don’t question the motivation of the criminals who hacked and manufactured this event. You apparently perceive their motives as focused on the public good — whistleblowing, rather than hype. It’s not clear what underpins your view of accountability and credibility. Or what part of the exoneration of Phil Jones and other scientists you missed.
“before the Himalayan glacier embarrassment”
Publication mistakes happen. It’s close to a 1,000-page report. It’s a tiny error and not reflective of the evidentiary standards of the IPCC. Why turn your attention to an objectively tiny error, from a non-peer reviewed report that should not have been included (the IPCC standard is for peer-reviewed studies)? And setting aside this small error and how it happened – the evidence of glacial melt stands. I wonder why this fact isn’t the most relevant to anyone seriously concerned about the science, and the reputation and accountability of the IPCC.
“I followed Steve McIntyre’s experiences as an IPCC reviewer”
I am not sure why you would prefer to get your science from a mining consultant with documented ties to the market interests of the oil industry that he has attempted to cover up, in the past. Steve McIntyre produces nice charts and likes math, however, he is largely discredited by scientists every time he makes a post, for reasons based in his weak knowledge of the science. There are a few credible skeptic sources, but he is clearly not one of them if you wish to base your knowledge in science.
It is pointless to dispute that those who have to make changes to respond to climate change have made well-documented efforts to manipulate public perception, especially in the United States. Even NAS participated in extensive industry bias of its reports for government, and this is well documented and acknowledged and now being corrected within NAS. Climate scientists are not participants in that.
To make a good case, it is necessary to consider a lot of information, rather than narrow discussion to suit yourself.
“The excellent paper by Robin Hanson”
On the topic of ‘prediction markets’, I appreciate the point that market manipulation is already occurring, so why not make a market out of it. I am guessing that market manipulation by those who stand to lose from abatement policies will become too expensive, as temperatures rise. ;-)
‘I am not sure why you would prefer to get your science from a mining consultant with documented ties to the market interests of the oil industry that he has attempted to cover up, in the past’
For the ‘science’ it seems that Steve McIntyre is a better statistician than many of the climatologists. And his work in the mining industry has given him a far wider and longer experience of using stats than any self-taught paleoclimatologist. I’m not surprised that such people try to reject his work. As the famous phrase goes ‘they would, wouldn’t they?’ Especially since some have been publicly embarrassed by his revelations of their poor grasp of the subject.
As to his covered up links to ‘the market interests of the oil industry’, I had not heard of these before your citation just now. Please expand on your allusive statement that
a. they exist and
b. if they do that he has tried to cover them up in the past.
You state that this knowledge is documented. A link to that documentation, together with its provenance would be a good start.
And I fear that I cannot agree that a prediction that all Himalayan glaciers – responsible for a large proportion of all the world’s drinking water supplies – will disappear within half a human lifetime is a ‘tiny error’. It seems like a b…y great huge one to me, potentially having huge implications for much of Asia.
If you see this as tiny, please make a suggestion of the sort of error that you would consider ‘serious’ and then one that you would say was ‘major’.
Martha, stay tuned for my next post, on the epistemology of disagreement (forthcoming late this aft).
@Martha –
“the number of credible scientists who disagree with the IPCC is a minority”
It is a very small minority.
It only takes “one” of that small minority to be right. Which is more than the “majority” have accomplished in 30 years.
Or what part of the exoneration of Phil Jones and other scientists you missed.
What exoneration? Three investigations of Jones and none of them asked a single “hard” question. And then two “investigations” of Mike Mann – and again, not a single “hard” question. So, to translate – your exoneration = whitewash.
Why turn your attention to an objectively tiny error, from a non-peer reviewed report that should not have been included (the IPCC standard is for peer-reviewed studies)? And setting aside this small error and how it happened – the evidence of glacial melt stands. I wonder why this fact isn’t the most relevant to anyone seriously concerned about the science, and the reputation and accountability of the IPCC.
Because it was NOT the only error. Nor was it “small”.
Nor does the evidence of glacial melt stand. In general, 50% of Himalayan glaciers are melting (slowly) and the other 50% are growing. As are the Alaskan glaciers. And much of the melt is not due to Global Warming but to deposits of soot and dust on the ice. Jim Hansen published a paper on that about 6 years ago. As have others.
Steve McIntyre produces nice charts and likes math, however, he is largely discredited by scientists every time he makes a post, for reasons based in his weak knowledge of the science. There are a few credible skeptic sources, but he is clearly not one of them if you wish to base your knowledge in science.
McIntyre rarely gets into the “science” but he’s hell-on-wheels wrt statistics. As Mann, Steig and others have learned to their chagrin. Which is why they try (unsuccessfully) to discredit him. As for his lack of knowledge wrt science, I believe he may be a recognized expert wrt paleo proxies.
“Ross McKitrick, who recently lectured us on how climatologists should be more like economists, works for one, the Fraser Institute, which has faced its own criticism over the use of skewed data in a study of the effects of a minimum wage.”
Try addressing the argument.
1. data transparency is good for science. climate science sucks at it
2. reproducible results is a good thing. climate science sucks at it.
Ross makes a series of arguments that others of us make as well. And we have nothing to do with any institutes. Try addressing the argument.
What odd notions.
“1. data transparency is good for science. climate science sucks at it”
What data don’t you have access to?
“2. reproducible results is a good thing. climate science sucks at it.”
There have been well-designed experiments where CO2 doesn’t trap LW radiation?
Where have you been, Jeffrey?
Code, tree ring data, temp data – just to name a few – have been consistently “hidden” over the last 10-15 years. And then there’s Phil Jones who threatened to destroy data rather than allow sceptics to have it. Climate science truly sucks at data transparency.
Mann’s Hockey Stick was not reproducible even when he gave up his code and data – even on his own computer. Climate science truly sucks at reproducibility.
Jeffrey Davis,
“There have been well-designed experiments where CO2 doesn’t trap LW radiation?”
HAHAHAHAHAHAHAHAHAHA
All the experiments I ever heard of showed the same thing. CO2 absorbs, emits, and scatters IR. Try a dictionary for the definition of TRAP and then show an experiment that supports that definition.
Hahahahahahaha!
The radiation which is re-emitted back to Earth doesn’t go to outer-space without warming some air!
Hahahahahahahahaha!
I love arguments with fruitless pedantic distinctions. Make some more!
I think I can offer Paul Dunmore some comfort.
1.0 Preliminaries
1.01 In the tradition and form of backward induction, let us begin with the end:
“None of this is to say that your belief about AGW is wrong; I share at least some of your views. But the argument that you present for it is utterly bogus.”
1.10 Since Paul Dunmore does not dispute Fred Moolten’s belief about AGW, and shares at least some of his views, our significant issue is whether Paul Dunmore has demonstrated that Fred Moolten’s argument in itself is bogus or utterly bogus.
1.11 And to be useful, whether using Paul Dunmore’s insight, we can correct this fault simply.
1.21 Due to its multiheaded nature, this issue appears most amenable to Bayesian Network Analysis using weighted binary tree logic.
1.22 One trusts all parties will find this method agreeably robust.
1.3 As ‘utterly’ will have the same ambiguity issues as a word like ‘major’, we ought use a method at least as stringent as Paul Dunmore’s approach to the word ‘major’ in Fred Moolten’s usage.
1.31 Our method (1.21) forces us to stop for a logical decomposition of Paul Dunmore’s treatment of ‘major’ in Fred Moolten’s usage:
1.32 “It cannot be tested until the word “major” is given an operational definition.”
1.33 Clearly, we can test bogus before utterly bogus, and if not bogus then not utterly bogus;
1.34 Likewise, we can test whether for our purposes the distinction of ‘bogus’ and ‘utterly bogus’ will be operational or trivial.
1.35 We can plainly find many cases where a bogus argument is of operational use: informational, dialectic, formative, conformative, abstractive, extractive, etc.
1.36 Similarly, Paul Dunmore’s treatment of major suffers under the microscope of logical decomposition in exactly like manner. We must dismiss the part of Paul Dunmore’s argument reliant on ‘major’ as a gloss, and so need not discuss either ‘major’ or ‘utterly’.
1.37 We cannot define bogus conventionally, as it means counterfeit, phony, fake or not genuine.
1.38 In context, it appears Paul Dunmore means ‘weak’ or ‘not sufficient to establish its argument’.
1.39 Only Paul Dunmore can say what he really meant by ‘bogus’, and we cannot rely on him finding sufficient substance to the preliminaries to clarify, so we will continue on the basis of 1.38 for now.
1.40 There is disagreement between Fred Moolten and Paul Dunmore as to what the argument is.
1.41 Fred Moolten says his argument is “I hope you will consider adding the convergence principle to other perspectives on probability you are already addressing in formulating conclusions. Without it, I believe an important element of how we evaluate climate data will be missing.”
1.42 Fred Moolten’s argument has the properties of a well-formed, clearly articulated, specific, measurable, achievable, relevant, time-bounded proposition.
1.43 Paul Dunmore’s version of Fred Moolten’s argument is “anthropogenic greenhouse gas emissions are a major contributor to global warming.”
1.44 Paul Dunmore’s version is not fully formed, ambiguously articulated, non-specific, presumes an answer to the meta-argument, and while it may be relevant it is neither well bounded by time nor known to be achievable without presuming answers to meta-arguments.
1.45 On balance of 1.42 and 1.44, we ought by weight of Preliminaries be able to stop our analyses here, and say nothing in Paul Dunmore’s argument is capable of producing the outcome he claims given the faulty initial conditions he himself sets out.
2.0 There is nothing fatal to the outcome of Paul Dunmore’s argument in the conclusion of the Preliminaries 1.0. It is still possible his outcome is correct.
2.1 The chore of dismantling and reassembling Paul Dunmore’s argument as Paul Dunmore intends given the information on hand is untenable, and falls only to Paul Dunmore to amend and submit again with these considerations addressed.
Paul should take very great comfort in this, as it leaves a wide margin for him to be very wrong in his assessment in ways more amenable to his sentiments.
Fred Moolten, on the other hand, deserves the deepest and best possible critique of his own argument, however high the bar Paul Dunmore has set, as it is our experience of him that when given sufficient challenge he sharpens his methods and improves his case in later submissions, and seems to find this a satisfying arrangement.
If anyone cares to offer such competent critique, I’ll be glad to review it in like manner to Paul Dunmore’s.
Bart – Thanks (I think). Seriously, your last point is reassuring, because in the other thread, I stumbled my way toward a more precise delineation of the argument, including the need to consider different levels of independence, from complete to partial. Paul cites an example of an early comment where I failed to make that distinction.
Fred Moolten, let me offer you a word of caution when considering Bart R’s comments. He says:
1.21 Due to its multiheaded nature, this issue appears most amenable to Bayesian Network Analysis using weighted binary tree logic.
1.22 One trusts all parties will find this method agreeably robust.
You would never use binary trees weighted or otherwise, for the sort of analysis you have in mind. It may have been a simple misunderstanding on Bart’s part, a casual slip-up when composing his text, or it may have been something more serious.
Whatever the reason, his propose method would certainly not be robust.
Wow. A missing comma in my first sentence and a missing “d” in my last sentence. I think I need a preview function or something.
Brandon Shollenberger
“You would never use binary trees weighted or otherwise, for the sort of analysis you have in mind.”
Erm.. Please to expand.
It wasn’t my intention that Fred Moolten use Bayesian Networks as his approach to consilience .. the mind boggles.
It was my intention to explain that I was attempting to employ Bayesian Networks to decide whether commentators of a certain type had by the weight of their arguments unseated Fred Moolten’s approach.
One believes that an ordered comparison point by point until the weight of failure breaks the credulity of the reader is worthwhile and robust.
How Fred wants to actually perform consilience, that’s a cipher still, I think.
Have I understood your objection?
Actually, I need to apologize to you. I had been hitting the Page Down key, and I happened to catch a glance of your comment. I didn’t read your entire comment so I misunderstood why you brought that subject up. I thought you were suggesting it for Fred Moolten’s proposed method of analyzing evidence, but now I see you were proposing it for something else entirely (examining people’s comments on a subject).
Now then, binary trees are somewhat related to a type of methodology one would use for Fred’s idea. To me, Fred’s idea seems best represented with an acyclic digraph (with causal relations). Due to the similarities between that and what you said, I assumed you had picked what seemed to be a simpler method of analysis by mistake. Instead, it was just a coincidence.
I’m sorry I made a rushed response without reading your entire comment. There was no reason for it, and my comment obviously makes no sense due to it.
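For what it is worth, a toy sketch of the kind of acyclic digraph described here, with all node names invented and not anything actually built in the thread, also makes Dunmore’s double-counting objection visible in the graph structure:

```python
# Nodes are propositions; edges point from evidence to the claim it bears on.
evidence_graph = {
    "anthropogenic_causality": [],
    "model_skill":        ["anthropogenic_causality"],
    "paleo_studies":      ["model_skill"],   # bear on the models, not on AGW directly
    "instrumental_trend": ["anthropogenic_causality"],
}

# Paleoclimate evidence reaches the hypothesis only through the models, so
# its weight cannot also be counted as an independent line of support.
for node, supports in evidence_graph.items():
    print(node, "->", supports or "(hypothesis)")
```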
Brandon Shollenberger
Are you kidding?
If my mistakes were half so erudite, kind and illuminating as yours, I’d be glad to make mistakes all the time. (Which some may suppose is my usual method ;) )
I count myself lucky to have been able to use the level-headed thoughts you introduced to suggest areas of improvement for Fred Moolten’s proposal.
Or, we could do science.
Fred Moolten
The foundational fallacy of your argument is that you fail to evaluate the null hypothesis that natural climate variations will continue to occur.
1) You first have to evaluate those natural climate drivers.
This includes the complex nonlinear chaotic factors.
2) THEN you have to show that anthropogenic causes are statistically different from natural causes, and that the effects are causative, not just correlative.
You have rigorously shown neither 1) nor 2).
Rather you have a handwaving argument claiming all evidence supports 2.
3) You have not addressed the full uncertainties involved, including Type B (Bias), not just Type A (statistical). The increasing evidence for bias in temperature adjustments alone can negate your arguments.
4) You have not accounted for the solar/planetary/galactic cosmic rays on clouds and thus on warming/cooling.
5) You have not addressed the foundational issue of identifying cause versus effect. i.e., does CO2 cause warming, or is ocean warming causing increases in CO2. Do cosmic/cloud variations cause ocean warming/cooling, or does CO2/ocean warming cause cloud variations.
Consequently, your arguments sound impressive, but have no scientific weight.
Start with the basics of science, and prove first that you can differentiate anthropogenic drivers from natural causes.
(PS you cannot until you thoroughly know the natural drivers – the cosmic are just now being evaluated.)
Then you have to show both causation and magnitude on anthropogenic causes.
(e.g. CO2 “sensitivity” is currently not known within an order of magnitude amongst the different theories.)
I look forward to your working towards a sound scientific argument.
David,
You cut to the chase very clearly.
Not one. I repeat: Not one Climate science catastrophist has offered any credible evidence that passes the test of historic records or levels of significance that anything unusual is happening with any aspect of climate.
Uh..
Doesn’t quite rise to the bar set by Paul Dunmore..
But I’ll take a crack at this one, too, as it merits.
1.0 Preliminaries
1.01 In the tradition and form of backward induction, let us begin with the end:
“I look forward to your working towards a sound scientific argument.”
1.10 David L. Hagen thus disputes Fred Moolten’s case within the context of scientific argument;
1.11 That mounting a scientific argument is what Fred Moolten set out to do;
1.12 We must establish that David L. Hagen’s criteria for scientific argument are valid;
1.13 That Fred Moolten has failed to meet the criteria to mount such an argument;
1.2 We must thus show whether Fred Moolten must meet all of David L. Hagen’s conditions or otherwise fail in the standard of a reasonable case.
1.21 And to be useful, whether using David L. Hagen’s insight, we can correct this fault simply.
1.22 Due to its multiheaded nature, this issue appears most amenable to Bayesian Network Analysis using weighted binary tree logic.
1.23 One trusts all parties will find this method agreeably robust.
1.31 The subject is “reasoning about multiple lines of evidence,” which is a metadiscussion of science, not a scientific case in and of itself;
1.32 Thus on 1.11, David L. Hagen fails to achieve a meeting of the minds with Fred Moolten and Judith Curry.
1.33 Is this fault fatal to David L. Hagen’s argument?
1.34 On 1.12 David L. Hagen’s points 1-5 sound impressive, but have no dialectic weight that can be discerned.
1.35 David L. Hagen’s points 1-5 appear to be randomly collected prerequisites;
1.36 David L. Hagen’s points 1-5 observe no logical order other than that imposed by arbitrary measure;
1.37 David L. Hagen’s points 1-5 seem most like an argument from authority;
1.38 David L. Hagen’s points 1-5 must be disposed of until such time as they can be better ordered, subjected to logical decomposition, and shown to be relevant.
On the whole, as an appendix to Fred Moolten’s discourse, something like what David L. Hagen’s points 1-5 attempt could be useful, and it is worthwhile building on this.
However, this is fairly fatal to David L. Hagen’s argument, by strict logic.
Bart R.
Until we seriously address the null hypothesis, identifying and quantifying the magnitudes of both natural and anthropogenic causes, the rest is worthless. Thus my rapidly throwing out some critical issues that Moolten’s argument fails to address, and thus falls.
When those are dealt with, I will think more about the logic.
David L. Hagen
We have another failure of meeting of the minds.
Fred Moolten is approaching the meta-, the ‘envelope’ around the scientific process.
One presumes that for consilience of science, every consilient vector, or salient, would be scientific in and of itself.
Each salient would have to meet, thereby, the rigor of the scientific method, including objections you noted.
This is why I believe your objections, once better ordered and more completely framed and developed, might make a useful appendix within Fred Moolten’s consilience.
Whatever he’s done right, Fred Moolten has, we must agree, left undefined (no doubt for reasons of space) much of what would be a salient qualifying for participation in consilience.
(I think? If he has it well-detailed, I’ve missed it.)
“Fred Moolten
The foundational fallacy of your argument is that you fail to evaluate the null hypothesis that natural climate variations will continue to occur.”
Bingo! Flashing lights, bells, etc.
I’m amazed that people can dress up their sophistry to the point where they think it’s a ‘sophisticated’ approach to evidence.
You cannot establish that AGW is happening unless you ‘know’ what the temperature (etc.) would have been anyway. We don’t. We do know that the recent (well, fairly recent) rise in temperature was not unprecedented in gradient in recent history (early 20th C) and we have no reason at all to believe that the absolute mean value should remain unchanged over centuries. It never has before.
All this stuff about combining evidence probabilistically is, um, garbage (sorry, I’m not trying to be rude, it just is). If you’ve done it, show us your workings. I strongly suspect you haven’t, and it’s just a hand-waving argument which allows you to reach ‘sciencey’ sounding conclusions without having to justify them.
In the end, this isn’t about what you (Fred) choose to believe – that’s completely up to you. It’s about what you (and others who believe the same thing) are trying to make the rest of us pay for (especially those in developing nations). It is not enough to declare that through some (completely hidden) process you now believe this and we should too: you have to present the evidence (i.e. that hidden ‘probabilistic’ mechanism) in a way that convinces others.
Go on – go to Google Docs, start a spreadsheet, show us how the evidence combines… (no, you’re not allowed to say ‘The IPCC has done this, look at AR5’. They haven’t).
You cannot establish that AGW is happening unless you ‘know’ what the temperature (etc.) would have been anyway.
This is simply untrue. There is no inherent need to know the baseline trend of something to know whether or not another trend has been added to it. If the only method used to establish the existence of AGW was a correlation between various factors and observed temperatures, your position would have some merit (but still be wrong).
However, your position is completely invalidated by the fact that calculations can be made from physical laws without relying upon observational data. That this approach could be taken means one could conceivably ignore all observational temperature data and still be able to establish the existence of AGW.
All this stuff about combining evidence probabilistically is, um, garbage (sorry, I’m not trying to be rude, it just is). If you’ve done it, show us your workings. I strongly suspect you haven’t, and it’s just a hand-waving argument which allows you to reach ‘sciencey’ sounding conclusions without having to justify them.
Here you dismiss Fred Moolten’s proposed approach to quantifying certainty by talking about the results (or lack thereof) which have been generated. This is illogical. Fred Moolten proposed a methodology and said what he expects to happen if the methodology is used. He hasn’t drawn any conclusions from his proposed methodology. You are exaggerating his position and shooting down something he hasn’t said.
Fred Moolten has an idea. He is talking about the idea. That’s all. Perhaps in time he, or others, will use his idea to generate results. However, he has made it abundantly clear such hasn’t happened yet, so your complaints are basically just strawmen.
Nope, for the real Earth, not yet. And not anytime soon. Not even in future decades. The range of important spatial and temporal scales is simply far too large; it’s a multi-physics, multi-scale, inherently complex problem. And in fact some critical phenomena and processes cannot yet be described from first principles.
It is not a computational physics problem. It’s a problem for which a process-model approach is necessary. There will always be parameterizations which replace some of the modifications and simplifications applied to all the fundamental first-principles equations.
Very, very unlikely. Observations are the essence of science.
Thinking about it some more, I understated. Without observations, there is no science.
Nobody has suggested anything would be done without using any observational data.
Dan Hughes, what complexity are you talking about? There are only two things which need to be established. First, increased CO2 levels lead to warming if all else is kept equal. Second, there is no negative feedback which completely dampens (or even overwhelms) the warming from CO2. That’s all.
If those two points are established, AGW is happening. It’s simple.
Brandon,
If the warming is only .0001 per century it doesn’t matter. If we could tell whether the feedbacks are negative or positive it might matter. We apparently can’t, and have not determined their magnitude. You are trying to compute something with insufficient data.
kuhnkat, you are talking about something not relevant to the point being discussed. Ceri Reid said, “You cannot establish that AGW is happening unless…”
Being able to establish AGW is happening is different than quantifying its extent or effects. There is no reason to conflate these things.
Brandon,
Actually, no one has proven that the major increases in CO2 are due to human intervention, so, sorry, you again are too simplistic. Chemists and atmospheric physicists tell me that the ocean releases CO2 partially based on the partial pressure of the gas already in the atmosphere. If humans are emitting CO2, which we are, we are increasing the partial pressure of CO2 in the atmosphere and SUPPRESSING some of the CO2 that would have otherwise been released naturally.
Until I see someone give a reasonable quantification of this and other issues around CO2, I am not wasting my time with those telling me the science is settled here either.
There are only two things which need to be established. First, increased CO2 levels lead to warming if all else is kept equal. Second, there is no negative feedback which completely dampens (or even overwhelms) the warming from CO2. That’s all.
That much might be sufficient. However, why do you claim it is necessary (as in “need to be established”)? Science has tools for analyzing situations where multiple factors are in play simultaneously.
Were science obliged to play by your rules it would be seriously handicapped in every situation where “all else is kept equal” is either impractical or impossible.
‘That this approach could be taken means one could conceivably ignore all observational temperature data and still be able to establish the existence of AGW’
I thought that was part of the climatologits creed already! Ignoring 400 years of science and discovery, they are obliged to rely solely on computations and will be excommunicated if seen using real observations. Unless, of course, such observations have been ‘correctly adjusted’ by trusted high priests.
Or have I missed the point there? Is there a true wealth of experiment and observation that conclusively prove that AGW is real and a serious problem?
Because when I first got interested about three years ago, I set out to spend a quiet afternoon finding the stuff that made the scientists so certain that it was settled. 1000 days later, I’m still looking.
And I still have a great deal more respect for Richard Feynman as a scientist than for anybody I have encountered in my trip around the climatological testament. A while back he said:
‘The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth”.’ And he was right, however much you may choose to ignore him.
I’ll take one Feynman over a zillion calculating climatologits any day.
I’m hoping ‘climatologits’ was a typo :)
hmmm, not so sure. I’m sure I’ve seen that version before ;)
Keep hoping! A slip of the finger perhaps.
First off, I find your tone very unappealing. You are implicitly accusing hundreds or thousands of people of intellectual dishonesty. It’s poisoning the well at its finest. Second, you say:
Is there a true wealth of experiment and observation that conclusively prove that AGW is real and a serious problem?
This is a non-sequitur. Nothing I said indicates support for the idea of AGW being “a serious problem.” The issue being discussed was simply the requirements for establishing the existence of AGW. Your question has absolutely no relevance to this.
The dismissive and insulting tone of your post is rather silly given that nothing you said had any relevance to the comment you were responding to.
Brandon
“one could conceivably ignore all observational temperature data and still be able to establish the existence of AGW.”
To form the models, you have to both identify and quantify ALL the physics involved with ALL anthropogenic contributions.
That requires validation – which includes all observational data – not just temperature.
Get back to science.
Not only is this irrelevant to what I said, it is a massive exaggeration. Models don’t require perfect knowledge to be useful. They don’t need to know “ALL the physics” about anything.
Validation does not require all observational data.
Neither does a valid model require all the physics involved.
Understand that “validation” is a technical term. Understand also that models are validated for a variety of purposes with different degrees of accuracy. Models are just theories in code. Like all theories they have their limits. None are true because no theory is true. They are useful, more or less.
“The increasing evidence for bias in temperature adjustments alone can negate your arguments.”
Huh? I’ve seen no evidence for bias. None. Been looking at it since 2007. Papers, code, data. Every last bit I can put my hands on. Increasing evidence? Increasing? Since when, and how much?
Just so you know Mosh, I for one am happy to hear that and trust you on it. The big questions lie elsewhere, for me. But appearance of bias can easily be picked up from casual browsing of sceptical blogs. We all cherry pick, all too easily, as the Chief has said. But thanks for this testimony.
stay tuned for my next post on the epistemology of disagreement (later tonite), bias abounds!
Convergence and divergence, ambiguity paradox? Bias is interesting, but it’s not even required to explain the divergence in some situations.
I’m speaking only of adjustments. What I do see is a failure to account for the added uncertainty due to adjustment.
steve mosher
Re: “I’ve seen no evidence for bias. None.”
For definitions of the bias “Type B” errors I am referring to, see:
NIST Technical Note 1297, 1994 Edition, Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, by Barry N. Taylor and Chris E. Kuyatt. (Most articles barely address Type A (statistical) errors, and rarely get to Type B. International guidelines require BOTH to be stated separately. I strongly encourage you to discover what Type B errors are and why they are not being published in the papers you have read.)
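For readers unfamiliar with the distinction, a minimal sketch of the TN 1297 bookkeeping may help. The numbers below are purely illustrative assumptions, not values from any station record; the point is only that Type A and Type B standard uncertainties are stated separately and combined in quadrature.

```python
import math

# Illustrative GUM / NIST TN 1297 bookkeeping. Type A (statistical) and
# Type B (systematic, judgment-based) uncertainties are reported
# separately, then combined in quadrature. All numbers are made up.
u_type_a = 0.05  # K: standard deviation of the mean from repeated readings
u_type_b = 0.20  # K: estimated standard uncertainty of uncorrected
                 #    systematic effects (siting, drift, adjustments, ...)

u_combined = math.sqrt(u_type_a**2 + u_type_b**2)
print(f"combined standard uncertainty: {u_combined:.2f} K")
# Quoting only the Type A term would understate the uncertainty
# roughly four-fold in this example.
```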
Willis Eschenbach found a “minor” 6 deg C / century increase after 1941 imposed on falling raw data at Darwin Zero!
Australian Senator Cory Bernardi et al. have asked the Auditor General to audit the BOM and CSIRO records.
See the full request showing the analysis of numerous sites.
The New Zealand Climate Science Coalition entered a legal challenge showing a substantial warming change after raw data was adjusted. Consequently, NIWA abandoned the official national temperature record and created a new one.
http://jonova.s3.amazonaws.com/audit/anao-request-audit-bom.pdf
Can you provide any evidence that all these objections to adjustments in temperature records have no merit, and that the adjusted warming trends are correct?
To have any credibility, Fred Moolten must address both:
1) the probabilities of Type B errors in the raw data AND
2) the probabilities that Type B errors were introduced by the temperature adjustment process.
On top of all this is evidence that the increasing Urban Heat Island is biasing urban temperature records upwards.
See also the poor siting etc. in the US temperature data system.
http://www.surfacestations.org
These all appear to be strong evidence of “anthropogenic warming” –
but NOT caused by anthropogenic CO2.
None of these issues have been properly addressed. Do you have any evidence that they have?
Without addressing all these, Moolten’s posited probabilities of anthropogenic warming that rely on these national and global temperature records have little meaning, regardless of their apparent sophistication.
Sorry, you’ve merely cited the existence of adjustments and provided no evidence that they are biased. I had a look at the New Zealand stuff at readers’ requests and was unimpressed.
Let me tell you how I go about this. I get the raw data. I get the adjustment code or reconstruct the method described in the paper. In every case I’ve looked at, the adjustments are well founded in science. The mere fact of an adjustment is not evidence of bias. What is missing from most analysis is an adjustment of the final uncertainties. Also, we can check for bias in adjusted data by merely using only raw data. Been there, done that. No difference. On the Darwin case you need to follow all the subsequent investigations on that. Not very impressive. Finally, if adjustments caused all the warming I would not expect to find the coherence that one does between satellite and ground.
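As a purely hypothetical sketch of that raw-versus-adjusted check (synthetic data, not any real station; the trend and offset values are assumptions for illustration):

```python
import numpy as np

# Synthetic single-station check: compute the least-squares trend twice,
# once from "raw" and once from "adjusted" data, and compare.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
raw = 0.006 * (years - 1900) + rng.normal(0, 0.3, years.size)  # deg C
adjusted = raw.copy()
adjusted[years >= 1941] += 0.1  # e.g. an offset correcting a station move

trend_raw = np.polyfit(years, raw, 1)[0] * 100       # deg C per century
trend_adj = np.polyfit(years, adjusted, 1)[0] * 100
print(f"raw trend:      {trend_raw:+.2f} C/century")
print(f"adjusted trend: {trend_adj:+.2f} C/century")
# The difference between the two slopes is exactly the contribution of
# the adjustment itself to the apparent trend.
```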
To be clear, I think there may be small biases in the record ( UHI for example, microsite for example) but nothing that erases all of the warming we have seen.
To wit: I believe in an LIA and I believe that we are warmer now than in the LIA. Do you believe in an LIA? How much warmer are we now? How certain are we of that? Good questions. But the adjustment question does not erase the warming since the LIA. Now, you ask me why has it warmed? Well, any theory that puts all of the warming down to “natural variation” and attributes none to increased GHGs is a theory that has a problem with foundational physics and observation. So, basically, it’s warmer now than in the LIA and GHGs play a role in that. If that’s a common ground you want to join me on, then we can talk. If not, peace be with you.
Steven Mosher,
You have not cited any evidence that those adjustments are appropriate for all locations. Additionally there has been no research to validate the single papers backing the primary adjustments.
Try again.
steven mosher
1) “but nothing that erases all of the warming we have seen.”
The issue addressed is not “the warming we have seen”, but the warming trend due to the adjustments.
2) “In every case I’ve looked at, the adjustments are well founded in science.”
If the adjustments had a net random impact, they might be believable. That the average of most adjustments shows a warming trend without justification gives at least the appearance of bias.
3) “What is missing from most analysis is an adjustment of the final uncertainties.”
I agree. That includes little if any analysis evaluating the Type B (bias) uncertainty.
Nor have I seen evaluation of the magnitude of the bias, if any, of the adjustments.
4) Regarding “Consilience of evidence”
If 1,000 agree on using warming adjustments and one holds to using the raw data, does “consilience” determine that the warming adjustments are “scientific”?
If these issues are not addressed, can we have “consilience of evidence”?
5) “Do you believe in an LIA?”
How are my beliefs relevant to science?
5.1) The evidence I have seen suggests that there has been a warming trend since about AD 1700, as well as an overall warming trend since the end of the last ice age.
5.2) Seeing we are about in the middle of the interglacial period, is it then reasonable to consider that the natural temperature drivers will be peaking and declining (cooling) towards the next glacial period?
6)
I can agree with that.
Further on the “Consilience of evidence”
7) The challenge is HOW much warmer is it?
7.1) Popular wisdom cites evidence that “The Little Ice Age (LIA) was a period of cooling that occurred after the Medieval Warm Period.” (http://en.wikipedia.org/wiki/Little_Ice_Age)
7.2) Do you agree or dispute that there is evidence for a Medieval Warm Period prior to the Little Ice Age?
7.3) How do we evaluate the probabilities of the relative magnitude of the global temperature in the Medieval Warm Period vs the Little Ice Age, vs that of the last 50 years?
7.4) On what basis do we compare the paleoclimatic temperature evidence versus the raw temperature data versus the adjusted temperature data?
7.5) What basis is there for using the full paleoclimatic proxy temperature evidence vs truncating it? (cf “hiding the decline”)
7.6) For splicing that together with the modern temperature record?
8) Beyond the radiative impact calculable by Line By Line (LBL) models, based on the best lab evidence, what role do the GHGs play?
8.1) Do GHGs warm the ocean, which increases water vapor and reduces the clouds?
OR
8.2) Do natural drivers cause the changes in clouds, which warm/cool the ocean, which raises/reduces CO2?
(e.g. solar heliosphere/planetary & Length of Day/earth’s magnetosphere/ modulating galactic cosmic rays/modulating clouds?)
OR
8.3) We have both drivers happening.
8.4) How do we detect/discern which is the cause and which the effect?
8.5) How do we determine the magnitudes of the warming/cooling, cause/effect?
8.6) If we have not confidently determined the sign, let alone the magnitude, can we confidently state the magnitude of the anthropogenic warming effect?
8.7) How do we decide? On what basis?
If we have clear lab evidence for the radiative impacts, but little for the cloud and galactic cosmic ray effects, does consilience of evidence determine that we use the radiative evidence but not the cosmic rays (as the IPCC did in AR4)?
8.8) What if the natural causes drive cloud changes which control the long term temperature and consequent CO2 changes?
8.9) Do we dismiss that possibility because of current lack of evidence?
8.10) Or do we say that the uncertainties are so high we cannot determine the relative magnitudes?
9) As measurement abilities improve, physicists reevaluate evidence and retest the force of gravity and general relativity.
9.1) On what basis should we ever consider the young science of climate to be beyond question?
9.2) With so many questions and uncertainties, what scientific basis is there for stating a 90% confidence that anthropogenic causes contributed to > 50% of the global warming in the latter half of the 20th century?
10) The Scots have three choices in legal cases:
Guilty
Not Guilty
Not Proven
See: The “not proven” verdict in Scotland
http://www.parliament.uk/briefingpapers/commons/lib/research/briefings/snha-02710.pdf
Based on the current evidence I have seen, I hold that both sides in climate change are “Not Proven”!
Consequently all the “consilience” of evidence discussion and related probabilities have not addressed all the issues, and therefore cannot be properly claimed at this stage in climate science.
David,
The foundational fallacy of your argument is that you fail to evaluate the null hypothesis that natural climate variations will continue to occur.
But the case for AGW does not preclude natural variations continuing to occur. It says that over longer (multi-decadal) timescales the effect of increased GHG emissions will be dominant over that of natural variations.
andrew adams
“AGW . . . increased GHG emissions will be dominant over that of natural variations.”
That is the hypothesis. It sounds reasonable, until you begin to “kick the tires”.
The issues are first in attribution, and secondly in quantification.
Unfortunately, much of the IPCC’s case is an argument from ignorance.
However, until we know the natural causes and their magnitudes, and the issues I address above, we cannot accurately model the anthropogenic causes, nor their relative magnitudes. Without all that, the IPCC case fails. The IPCC policy further falls by failing to recognize that adaptation is much more cost effective than mitigation!
See Climate Change Reconsidered by the NIPCC for some of the evidence ignored by the IPCC.
David,
The IPCC case is based on imperfect knowledge but certainly not on ignorance. The radiative properties of CO2 are well established, as is the “no feedbacks” effect of a doubling of CO2 levels. Much research has been done on the subject of feedbacks and the overall level of climate sensitivity. This knowledge stands regardless of how much we know about natural influences (and our knowledge there is not negligible).
As for your assumption about the costs of adaptation versus mitigation, this is dubious to say the least.
andrew
I recognize those issues.
However, to know the portion due to anthropogenic warming, you also have to know the corresponding degree of cloud warming/cooling which IPCC explicitly ignored/punted on.
The combination could be warming OR cooling – we don’t know yet. See the massive Forbush event in progress as an example of the impact of the sun on clouds – ignored by the IPCC.
On costs of mitigation, see the Copenhagen Consensus.
See http://www.FixTheClimate.com
On adaptation, the numerous benefits of higher CO2 have not seriously been considered. See a summary of:
55 benefits of CO2.
Richard Tol shows conventional efforts for early CO2 control would cost $17.2 trillion, while doing R&D and spreading it out would cost $2 trillion for greater benefits.
For sequestration vs efficiency, Tad Patzek shows how power plant mitigation by CO2 sequestration will cause 50% MORE CO2 generation. Conversely, upgrading to the latest high efficiency coal power generation would achieve 50% LOWER CO2 emissions. See:
Patzek, Tad W., Subsurface sequestration of CO2 in the U.S: Is it Money Best Spent? Natural Resources Research, Vol. 19, No. 1, March 2010 DOI: 10.1007/s11053-010-9111-3
http://www.springerlink.com/content/p532631858407g24/
Note that rapid siltation of the Ganges delta from Himalayan rivers is raising the delta faster than the ocean is rising.
See:
Modern sediment supply to the lower delta plain of the Ganges-Brahmaputra River in Bangladesh, M. Allison and E. Kepple
Geo-Marine Letters Vol 21, Nr. 2, 66-74, DOI: 10.1007/s003670100069
Compare 3 mm/year sea level rise rate.
Why should we “mitigate” when the Ganges delta is already rising faster than the ocean?
Solar thermochemical fuel has the potential to be less expensive than petroleum – giving a benefit rather than an enormous cost.
etc.
You have to compare hypotheses with each other by what they predict. For example, AGW predicts that the stratosphere will cool, the land areas will warm faster than the ocean, and the Arctic surface will warm faster than the tropical surface. These seem to be confirmed to some degree. Do other theories make such predictions? For example, if the sun is getting warmer, would it warm the polar areas faster, or if the ocean is in a warm phase, would the land be warming faster? These kinds of comparisons are very revealing and can actually be used to eliminate alternatives. Do we have alternative explanations that match all of what we are already seeing, specifically why is the stratosphere cooling, why is the Arctic warming so fast, and why is the land warming faster than the ocean?
To get back to the point, this is part of the consilience or convergence being referred to. Growing evidence does converge to a hypothesis that explains it all, while excluding more and more alternatives that don’t. What we are left with is a unified AGW explanation, or a mish-mash of alternatives that explain each piece of the observations separately, but have to work together somehow to explain it all.
I believe the stratosphere isn’t cooling much over the last few years, possibly due to more stratospheric water vapor being present than the AGW models predict.
Any cause of planetwide warming will have more effect on the poles than at the equator, simply because the entire atmospheric circulatory system is dedicated to pumping heat away from the equator.
Likewise any cause of planetwide warming will have more effect on land surface temperatures than on ocean STs, because of differences in thermal inertia. There are, for example, no “urban heat islands” in midocean. (Excluding, of course, sci-fi scenarios like a three-orders-of-magnitude increase in suboceanic volcanism.)
And again, we still really don’t understand the global climate system well enough to be able to say with certainty the result of changing any one of the huge number of contributing factors. The AGW models, for example, by and large assume that surface warming will increase the water vapor content of the upper troposphere. Turns out it doesn’t; the UT has actually dried out slightly over the last decades. Nobody has any actual evidence to suggest why this is happening.
This is the underlying problem of AGW theorists: their only argument remains, after more than 20 years, “Well, if it’s not CO2, I can’t think what else it could be.” (Jones, 2010, BBC). In the interim, of course, plenty of other researchers have discovered all kinds of things it could be; the real job of climate science is to slowly and painfully sort it out. But this can’t happen as long as the AGW lobby keeps suppressing discussion and research.
This is an example of the mish-mash of ideas I was talking about. It is either AGW or a combination of AMO/PDO/UHI that explains the warming over the Arctic Ocean and land areas. The more you research these latter ideas, the more they go away, as Spencer already found for UHI, so I am all for research to see what holds up. On the other hand, the evidence for AGW strengthens with each warming decade.
JimD,
Considering that Dr. Spencer found that UHI was much stronger for small growing towns than large cities, I wouldn’t exactly say that UHI went away. I would say that the current method of measuring temps would be even more subject to UHI than if just large cities were affected.
The main thing discovered is that each site is unique and needs individual assessment to find what is happening. As they smear standardized adjustments over everything, we have no idea what is really happening.
@CG The AGW models, for example, by and large assume that surface warming will increase the water vapor content of the upper troposphere. Turns out it doesn’t; the UT has actually dried out slightly over the last decades. Nobody has any actual evidence to suggest why this is happening.
Source? According to the article “Water-Vapor Lidar Extends to the Tropopause” in Crosslink 1:2 11-17 (2000), ” Using a ground-based lidar (light detection and ranging) system, which operates much like radar, the researchers discovered significantly more water content in the upper reaches of the troposphere than was previously thought to exist.”
“In processing water-vapor data derived from SSM/T-2 measurements, serious discrepancies were observed between microwave and radiosonde water-vapor data. The problem was traced to radiosonde humidity transducers. Errors show up in the water-vapor data derived from satellites because satellites are calibrated against radiosondes.”
Seems there is recent doubt as to which direction upper troposphere water vapor is heading.
I think there may be some doubt of global warming being a necessary outcome of a spatio-temporal chaotic system – http://www.ncdc.noaa.gov/paleo/abrupt/index.html
Even in the short term there is considerable doubt about the potential for global warming.
‘A negative tendency of the predicted PDO phase in the coming decade will enhance the rising trend in surface air-temperature (SAT) over east Asia and over the KOE region, and suppress it along the west coasts of North and South America and over the equatorial Pacific. This suppression will contribute to a slowing down of the global-mean SAT… A near-term climate prediction covering the period up to 2030 is a major issue to be addressed in the next assessment report of the Intergovernmental Panel on Climate Change ” http://www.pnas.org/content/107/5/1833.full
The reference to the PDO is proof enough that these guys remain clueless as to cause.
The insistence on one’s own claim to indubitable truth is, well, laughable. The nature of human psychology, the Achilles heel of certitude, is that of qualitative selection. We each gravitate to those ‘facts’ that reinforce an emotionally held position. It works very much the same on both sides. That’s why I insult everyone.
There is no divine truth. We don’t know what we don’t know, and in climate there is a lot that we don’t, and those bits we do know are selectively chosen to support a deeply held and emotionally precious position. There are reputations and self-esteem on the line. There is the ordinary superior, self-satisfied smugness of both sides.
Consilience is the wrong word – it is more a matter of building ramparts to defend a position. In my case it is more like a bower bird arranging pieces of coloured plastic in a bower – but I don’t delude myself that it is truth, unless in a metaphysical ‘truth is beauty’ kind of way.
Where did real scientific skepticism go? We need to doubt everything especially our own knowledge – otherwise we are simply cognitively dissonant and not even nearly consilient.
Top drawer, CH. The bower birds have it, for me.
Of course, to do science … we would need to have a strongly stated, falsifiable hypothesis for the CAGW idea.
I’m not really sure that all of this helps a lot.
Going back to Fred’s original point about statement A:
The probability that “AGW is true” is either 1 or 0, not p and 1-p. (Putting aside what the exact meaning of the term in quotes is, but assuming it is of the true or false variety).
It is only our ability to measure the state of the world that introduces probabilistic notions. So the issue then is simple statistical inference. Fred’s type B statement degenerates to “Based on the data, with what confidence can we say that ‘AGW is true’” (or false if that’s your preference). At that point we have the normal old stuff about Type I and Type II errors, and Fred’s complex argument disassembles.
There really isn’t any confusion about proof and uncertainty.
I should add that Fred is also dining out on not being particularly accurate about “AGW is true”. At one point he is arguing that this is an absolute statement, aka “man-made stuff does something to the temp”; at others he is arguing that it is a matter of degree and therefore the law of the excluded middle doesn’t apply. Whichever you choose, however, the above points apply.
Finally, there is this idea of convergence. Evidence builds. The nature of experiments that try to prove or disprove that “AGW is true” is that they can be collected together into a bigger model with more explanatory power, and that model can then be used to test the “AGW is true” proposition. Statistical techniques (incl. Bayes) then take care of the issue of whether more variables increase the power of the model, and in many cases the marginal power is negative.
That’s why I am still waiting for you back at the Uncertainty and the AR5 thread to show me that the hypotheses you have developed over CO2 and warming from GCMs actually provide additional explanation in the empirical record over and above a model without them.
Without that you could just be relying on correlations, not causality, or overfitting a model.
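For concreteness, here is a toy version of the nested-model test being asked for. Every series below is synthetic and the “CO2 term” is just a stand-in regressor, not the actual forcing history; only the comparison machinery is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
t = np.arange(n, dtype=float)
co2_term = np.log(280 + 1.2 * t)          # stand-in "CO2 forcing" regressor
temp = (0.5 * co2_term + 0.1 * np.sin(2 * np.pi * t / 60)
        + rng.normal(0, 0.1, n))          # synthetic "temperature record"

def aic(y, X):
    """AIC of an ordinary least-squares fit with Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * X.shape[1]

X0 = np.column_stack([np.ones(n), t])             # model without CO2 term
X1 = np.column_stack([np.ones(n), t, co2_term])   # model with CO2 term
print("AIC without CO2 term:", round(aic(temp, X0), 1))
print("AIC with CO2 term:   ", round(aic(temp, X1), 1))
# A lower AIC for the augmented model indicates positive marginal
# explanatory power; a higher one is the "negative marginal power" case.
```

The same comparison works with any candidate regressor: “additional explanation in the empirical record” is a testable model-comparison question, not a rhetorical one.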
There are indications that model scenarios were generated (ala MAGICC and SCENGEN) under the UNFCCC assumption that anthropogenic global warming is driven by CO2 and other greenhouse gasses. The objective of the UNFCCC is cited on this blog here:
http://judithcurry.com/2011/02/13/uncertainty-and-the-ar5-part-ii/#comment-43864
Hulme, Mike, Tom Wigley, Elaine Barrow, Sarah Raper, Abel Centella, Steve Smith, and Aston Chipanshi. 2000. Using a Climate Scenario for Vulnerability & Adaptation Assessments. Climatic Research Unit, May. http://www.aiaccproject.org/resources/ele_lib_docs/magicc_scengen_workbook.pdf
Section 2 (page 7) describes “key functions” of the software.
Carter, T.R. 2006. General Guidelines On The Use Of Scenario Data For Climate Impact And Adaptation Assessment. IPCC, June. http://www.ipcc-data.org/guidelines/TGICA_guidance_sdciaa_v2_final.pdf
See: 3.2.1 (page 36). Detail of Criteria for selecting climate scenarios
Four criteria that should be met by climate scenarios if they are to be useful for impact researchers and policy makers are suggested in Smith and Hulme (1998):
This is a very important discussion I think- Fred, I understand what you’re trying to say, though I’m inclined to side with Paul at the moment- perhaps I’ve misread or haven’t quite grasped your argument though.
As I see it, you’re arguing that although there is no direct (holy grail-esque) evidence that man-made co2 is dangerously warming the planet; there is still significant supportive evidence that lends more credence to that view.
Can you expand on this further?- I posted something about this on a previous thread (and I’m perhaps showing my ignorance here) but if you’re looking for the relationship between A+B, it matters not how much you can measure/infer either (in this narrow context- though for the position as a whole it matters a lot) if you still do not understand /cannot define the relationship.
Equally, you cannot use the measurement of related factors, C, D and E to prove the relationship between A+B, and taking this weak analogy further- modelling the relationship between A+B will not work if factors C-E are not included or understood.
This then brings me to my next point- how do you come to the level of significance for this supportive evidence and then assign a probability to the theory it’s supposed to support? Do you take the error limits/probability of the ‘weakest’ evidence as the probability of the whole, or do you assign the theory the probability of the ‘strongest’ evidence? Or is it in fact, something far more complicated?
Thanks in advance for any response!
There is an infinity of ways of setting the question to be answered on climate change, and everybody has his or her favorite questions or hypotheses to test. Most have only a vague intuitive favorite, few have strict definitions, and even fewer have good rational arguments to support the choice or choices.
Probabilities and evidence building are a very complex field of problems. The only basic philosophy that I understand for the use of probabilities as an expression of strength of evidence is Bayesian. Determination of Bayesian probabilities starts with choosing a pdf to describe prior knowledge. The difficulty of this is obvious to everybody who starts to use the Bayesian approach understanding what he is doing. (How strong its effect can be is not always appreciated, and I am still ashamed of my own hasty and belittling comments on the Annan & Hargreaves paper.) I would, however, argue that even larger consequences often relate to the difficulties of determining the likelihoods of observations assuming that a particular hypothesis is valid or not valid.
It is common to use statistical analysis of empirical observations to determine the accuracy of the methodology and base the likelihood estimates on that. This means that all other uncertainties in determining the likelihoods of different outcomes under the hypothesis are forgotten, which is seldom a correct assumption. Sometimes known instrumental problems are added to the uncertainty, or one single model is used to describe the hypothesis, but even the last, most complete approach is correct only if the hypothesis is considered equivalent to the validity of the model.
All the above concerns estimating the likelihood assuming that the hypothesis is valid, but this is often the easier part. In the Bayesian approach it is no less important to estimate the likelihood of the outcome in the space of all possible hypotheses. This likelihood enters the calculation with equal weight to the likelihood assuming the validity of the hypothesis.
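To make that concrete, here is a minimal numerical sketch with invented probabilities: the same pair of likelihoods combined with two different priors yields quite different posteriors.

```python
def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    """Bayes' rule for a binary hypothesis H versus not-H."""
    evidence = prior_h * p_data_given_h + (1 - prior_h) * p_data_given_not_h
    return prior_h * p_data_given_h / evidence

# Invented likelihoods of the observed data under H and under not-H:
p_h, p_not_h = 0.6, 0.2
for prior in (0.5, 0.1):
    print(f"prior {prior:.1f} -> posterior "
          f"{posterior(prior, p_h, p_not_h):.2f}")
# prior 0.5 -> posterior 0.75; prior 0.1 -> posterior 0.25
```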
Reading papers on estimating the climate sensitivity and determining a pdf for its values, I have not been convinced that the likelihoods have been considered with sufficient care to avoid very large errors in their values. This means that the numerical support given for a hypothesis may well be in error by a factor of two or five in either direction, rather than by 10% or 30%.
All in all I do not think that the analysis is on a level that could be properly described by numerical values of probabilities or by concepts “likely” or “very likely” as defined by IPCC. Instead I think that the conclusions should rather be described only in terms of subjective confidence of scientists, which is the other alternative recognized by IPCC.
Nothing in what I say is meant to imply that the science has not produced valuable results or that no or very little weight should be given to the results and to the arguments in favor of their reliability. I attest only that expressing the combined understanding as numerically specified likelihoods is misleading. The results of each empirical analysis or of each study based on climate models are quantitative and should be given as such. The problems are related to the connection of the observation with the hypothesis and to the estimates of the reliabilities and accuracies of the results. Most of them are affected by systematic errors whose size can only be guessed within wide limits (and often using models whose validity cannot be independently assured). In some cases it is possible to present strong support for smallness of such uncertainties, but in most cases we lack valid arguments for that.
This is one side of the problem. For me the strongest single argument on the climate sensitivity is our ability to calculate the radiative forcing of CO2 rather well. Further, we can transform forcing to a change of effective radiation temperature of Earth as seen from outer space without any loss of accuracy. These steps are reliable and accurate at a level of 10-20%. This is a strong reason for taking the AGW seriously. It is enough for telling that serious consequences are possible at a level of likelihood that makes disregarding them irresponsible. It is not enough for telling what we should do. For that we must consider many other important factors, including both additional knowledge on the strength of the AGW and the other significant consequences of any major action proposed.
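For readers who want the arithmetic behind that claim, here is a rough worked version using standard textbook approximations (the formula and constants are common simplifications, not results established in this thread):

```python
import math

sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0     # K: Earth's effective radiating temperature

dF = 5.35 * math.log(2)            # simplified CO2 doubling forcing, W/m^2
dT = dF / (4 * sigma * T_eff**3)   # linearized no-feedback response, K

print(f"forcing: {dF:.2f} W/m^2, no-feedback warming: {dT:.2f} K")
# Roughly 3.7 W/m^2 and ~1 K per doubling before feedbacks -- the step
# described above as reliable to 10-20%. Everything contested in this
# thread enters afterwards, through the feedbacks.
```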
“Perhaps we should stop accepting the term, ‘skeptic.’ Skepticism implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the case over 20 years makes the case even less plausible as does the evidence from climategate and other instances of overt cheating.
“In the meantime, while I avoid making forecasts for tenths of a degree change in globally averaged temperature anomaly, I am quite willing to state that unprecedented climate catastrophes are not on the horizon though in several thousand years we may return to an ice age.”
— Dr. Richard Lindzen, MIT, November 2010
Subjective confidence of which scientists? Is science now a democracy, where whatever the majority believes is accepted as scientific truth? Many scientists believe in the existence of God, many do not; shall we take a vote to finally determine the “scientific truth” on this question?
Like everybody else, scientists may believe all kinds of things to various and unquantifiable degrees of subjective confidence. The whole point of the scientific method, as developed since the Enlightenment, is to avoid subjective opinion. Look up René Blondlot and n-rays for an object lesson.
“This is simply untrue. There is no inherent need to know the baseline trend of something to know whether or not another trend has been added to it.”
But you (and Fred) are begging the question. You’re assuming a ‘baseline trend’; we don’t know that at all. I assume mean global temperature resembles a random walk. There is no ‘baseline trend’ that you can show (anthropogenic) ‘deviation’ from. You have to ‘know’ what the random walk sequence would have been if you want to show an anthropogenic influence. And if that sounds like a difficult thing to do, why, yes it is. It’s the difficulty that climate science has been unable to cope with. In my opinion.
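The random-walk point is easy to illustrate with a toy simulation; this is a sketch of the statistical phenomenon only, not a claim that global temperature actually is such a walk.

```python
import numpy as np

# Pure random walks routinely show century-scale "trends" even though
# nothing is forcing them. The step size here is arbitrary.
rng = np.random.default_rng(42)
t = np.arange(100)
for i in range(5):
    walk = np.cumsum(rng.normal(0, 0.1, t.size))  # 100 "years" of drift
    slope = np.polyfit(t, walk, 1)[0] * 100
    print(f"walk {i}: apparent trend {slope:+.2f} per century")
# Every one of these slopes is spurious. Telling such drift apart from a
# forced trend is exactly the attribution problem being argued about.
```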
“This is illogical. Fred Moolten proposed a methodology and said what he expects to happen if the methodology is used”
No, I see Fred justifying his views based on his supposed methodology. He’s not just ‘proposing a methodology’ – he’s saying that his views result from this supposed methodology. Otherwise, where do his views come from?
(Actually, I think you’re half right – Fred’s views come from a variety of places known/unknown to Fred, but almost certainly not from the internal probabilistic dialogue he would like to imagine has occurred).
And in any case, the challenge still stands: if you/Fred believe that the methodology will give the answer you’re disposed to expect, go ahead & demonstrate it. Otherwise you’re just lazily assuming that the methodology will give the answers you prefer.
I guess a simple analogy for what is being argued on this thread is the meta-analyses typically performed on population studies of medical interventions. These enable a whole bunch of seemingly weak individual studies to be combined so that a much more reliable and robust outcome can be declared. Of course, the problem with this approach is that it doesn’t reduce any systematic effect which could apply to all the studies under consideration. The most common error is of course publication bias. However, one shouldn’t assume, without evidence, that this sort of bias occurs in climate studies.
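A minimal fixed-effect meta-analysis sketch, with invented study results, shows both halves of that point: inverse-variance pooling shrinks the random error, but a bias shared by all the studies passes straight through.

```python
import numpy as np

effects = np.array([0.8, 1.2, 0.9, 1.5, 1.1])  # invented per-study estimates
ses     = np.array([0.6, 0.7, 0.5, 0.9, 0.8])  # their standard errors

w = 1.0 / ses**2                               # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")
# Individually, every study is under 2 standard errors from zero; pooled,
# the estimate is about 3.5 standard errors from zero. But if all five
# studies carry the same systematic bias, the pooled bias is undiminished.
```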
“The most common error is of course publication bias. However, one shouldn’t assume, without evidence, that this sort of bias occurs in climate studies.”
In the world of decisions and uncertainties, bias should be assumed to be present unless there is very good reason to believe otherwise.
Oddly, it appears fairly obvious to many of us that if publication bias is the viper under the bed in medical research, it’s Godzilla trampling Elm Street in climate studies.
Even if we accept the hypothesis that anthropogenic greenhouse gases are a “major” contributor to global warming, we do not know what future course the global climate would take in the absence of anthropogenic greenhouse gas emissions. If, for example, we accept the hypothesis that the current interglacial is “nearing” an end, then we may want to emit more greenhouse gasses into our atmosphere in order to delay or prevent glacial advance over of most of North America and Eurasia. If, on the other hand, we accept the alternate hypothesis that natural climate variability is leading toward a “dangerously” warmer planet that is inconsistent with global human prosperity, then we may want to immediately curtail all burning of fossil fuels and begin relocating all societies to higher ground.
Before we institute draconian policies that may or may not curtail an impending doom, we may want to ascertain which impending doom we wish to curtail.
Not only are we in disagreement about the hypothesis that AGGE is a “major” contributor to GW, we are also in disagreement about the direction of natural climate variability. I believe the current interglacial is already the longest of the last 800,000 years. We may want to invest in land on the continental shelf.
Of course, there is also the question of whether there is anything we can do about AGW, even if we all agreed about the meaning of “major” and knew our alternative future with certainty.
If AGW were true, you would not have the cyclic plus linear global mean temperature (GMT) data shown below:
Cyclic: http://bit.ly/ePQnJj
Linear (a warming of 0.6 deg C per Century): http://bit.ly/fsdF7t
The AGW hypothesis (0.2 deg C warming per decade) is based on ignoring the cyclic component of the GMT. The warming due to the cyclic component is temporary, so what is permanent is a global warming of only 0.6 deg C per century.
AGW will be tested in the next couple of years. If the GMT continues to drop and the winters become severe in the next decade, it will disprove AGW.
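For what it is worth, the decomposition being claimed is a straightforward curve-fitting exercise, sketched below on synthetic data with a ~60-year period assumed rather than discovered. Whether such a statistical fit means anything physically is precisely what is in dispute.

```python
import numpy as np

rng = np.random.default_rng(7)
tau = np.arange(0.0, 131.0)                   # "years" since 1880
gmt = (0.006 * tau + 0.1 * np.sin(2 * np.pi * tau / 60)
       + rng.normal(0, 0.08, tau.size))       # synthetic anomaly series

# Linear trend plus an assumed 60-year sinusoid, fitted by least squares.
X = np.column_stack([np.ones_like(tau), tau,
                     np.sin(2 * np.pi * tau / 60),
                     np.cos(2 * np.pi * tau / 60)])
coef, *_ = np.linalg.lstsq(X, gmt, rcond=None)
print(f"fitted 'permanent' trend: {coef[1] * 100:.2f} C/century")
# Change the assumed period and the "permanent" trend changes too:
# the decomposition is not unique.
```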
http://bayes.cs.ucla.edu/BOOK-2K/index.html
Looks like a really good book.
I think we all need to keep in mind that the ultimate purpose of the argument is the policy recommendations, so consider the arguments from that perspective. Dunmore comes close to addressing the key policy purpose when he points out that the word “major” must be given an operational definition in the proposition “anthropogenic greenhouse gas emissions are a major contributor to global warming”.
From my perspective of policy implementation concerns I don’t think we need to argue whether temperatures are warming, that greenhouse gases have increased or that the increase in greenhouse gases will cause warming. However, it is absolutely critical that we estimate how much of the observed warming is due to human-induced increases in greenhouse gases if the policy recommendation is to reduce greenhouse gas emissions 80% by 2050, for example. Without that information how much confidence do we have that there will be a reduction in global temperatures? It seems to me that Dunmore’s arguments better address the information that I think is necessary.
Pekka, while I agree with much of your post, and particularly with the argument that the pdfs are barely defined and therefore the errors in this kind of argument (Moolten’s) are factors of 2 or 5, I disagree with the conclusions, or more precisely with what the conclusions may imply for a not sufficiently careful reader.
“For me the strongest single argument on the climate sensitivity is our ability to calculate the radiative forcing of CO2 rather well.”
Then your belief obviously stands on very shaky ground if this is the strongest argument.
There are billions of things that one can calculate well and that prove nothing and mean nothing. It’s like saying “The strongest argument for the turbulence theory of X in the frame of fluid dynamics is that we can calculate it well in a laminar flow.”
Sure we can calculate it, and even well. But it has little to no relevance to a turbulent flow, which is our concern.
In the same vein is the statement :
“Further we can transform forcing to a change of effective radiation temperature of Earth as seen from outer space without any loss of accuracy.”
It is also irrelevant for the fully dynamical system and in this case it is even worse. I believe that the statement itself is wrong. I do not think that we can do it “well” even in the most idealized unphysical case.
Can you link a paper that you think best does what you proposed?
I will have a go at it.
It remains a mystery to me how the ability to calculate radiative energy transport in a homogeneous mixture of simple gases has come to be equated with being able to calculate radiative energy transport in the Earth’s atmosphere. It is not the same problem, even ignoring all the other critically important phenomena and processes that make up the more realistic, inherently complex, real-world problem.
Tomas,
I believe that the formulation that I presented is on a strong basis. Combining well known physics of molecular interactions with electromagnetic radiation (well known both theoretically and confirmed by a huge number of empirical observations) with general understanding of the atmosphere, which is also well understood theoretically at the required level and determined empirically, we know that a hypothetical calculation of radiative forcing can be performed and is certainly accurate at a level better than 20%. This calculation is hypothetical, because it makes unrealistic assumptions on the atmospheric feedbacks, but it is still a relevant calculation. This is the calculation of radiative forcing that I need for my argument. It can be done in many slightly different ways using different definitions of “no feedback” as discussed here in more than one thread, but all these choices give similar enough results.
I do not need knowledge that the feedbacks are positive to reach my conclusions. Reliable knowledge that the feedbacks are strongly negative would make the issue moot, but the mere possibility of that does not.
I state that the chain of arguments is sufficient for requiring serious consideration of the issue. This means also that we have broken the null hypothesis that the indications are too weak to justify serious consideration. This means also that the formal arguments that many skeptics have presented to exclude the need for any further consideration are moot. It is a fallacy to claim that the potential risks of AGW, or the need to consider them, can be excluded as a logical fact. (I can use logic to exclude the sufficiency of logic, but logic is not sufficient for drawing conclusions about the physical world.) How extensive research is required, or how to act, are decisions that must be made on relative arguments; logic does not suffice. Neither do the difficulties of constructing reliable models, or the impossibility of estimating objectively the reliability of model results, tell anything about the potential risks. All these difficulties affect decision making, but they do not give unique answers.
Evidence?
This is a point subject to logic, not evidence (except that the statement that the issue would be moot with confirmed strong negative feedback might for an extreme alarmist require evidence, but I do not think you are on that side).
Hmmm… Logic would suggest that, all other things being equal, increased CO2 will increase evaporation, which will increase cloud cover and increase the Earth’s albedo.
There is no “simple” logic to be used when considering our climate. Whichever way you look at it, it’s complex.
Judith,
A few of the mistakes climate science has made are cherry-picking for a pattern and using a time frame of 0.0001% of the planet’s lifetime.
Mechanical understanding is non-existent, as theories have skirted this area.
I could easily name 10 areas of study not included in climate science even though they do have some effect. This is why climate science has deemed the planet to have a chaotic climate, when not all the parameters are understood.
It really is an evolution of this planet’s life.
Well said Tomas – and thanks for the smaller scale example. I hope you receive a link on the best way to calculate from outer space ‘without any loss of accuracy’. I’d love to see such optimism fisked.
Very few of those billions of things are to the least comparable to the strongest influences that human societies have on our environment or the Earth system. The rapid increase of one important GHG is without any serious doubt one of the potentially strongest influences. There is solid basis for stating that it is a potential major risk. How many others do we have? Certainly some, but only few.
Evidence?
Trivial.
… and therefore easy to state. Of course, this depends on both the terms “solid” and “potential”, the exact meaning of which may or may not render certain assertions “trivial”.
Judith,
WHERE DID SCIENCE GO WRONG?
Choosing a balance and an equal opposite theory base of science in a Universe that is in constant change.
If the Universe did not change, then basic science would be correct.
Generating massive theories to a natural process of planetary evolution.
I think the funniest one is that the moon slows the planet. So, science thinks the planet was made to exist forever, never changing.
Judith,
This planets eventual outcome MUST be COLDER due to the movement away from the sun.
Roger Caiazza is right. How does the science, and the uncertainty surrounding that science, impact on policy decisions?
Factoring that and many other uncertainties into policy decisions to be made here and now – what size of power plant, for what purpose in the market, with what fuel, how to finance it? – is not done in a nice scientific way, but by assessing risks based on the combination of knowledge and experience. If necessary, I can put a number to the assessment, but that will be judgmental.
I think I tend to use Fred’s approach in coming to a recommendation. After all, if the probability is that windpower will provide inadequate security of supply, will cost customers or taxpayers too much and will deliver only trivial CO2 benefits, why recommend it? And if CO2 mitigation generally is costly and ineffective, then choose the best alternative, such as gas rather than coal, for now, even if coal is cheaper.
Power plant projections are similar to climate projections/predictions. They are normally wrong, but some are more wrong than others – typically. Hydro, wind, solar and nuclear are always over-optimistic – odd, that. Climate catastrophe predictions are over-pessimistic – as the Maldives have not sunk beneath the waves and I have experienced plenty of snow in the last couple of years (much to the chagrin of many friends) – so that has to be factored in too.
Horribly unscientific, I know – but then developing countries need power for the prosperity of their people. And they cannot just keep asking taxpayers and customers for more money. And just how much can developed countries afford? The more spent on ineffective CO2 mitigations, the less there is to spend on scientific research.
jheath | February 18, 2011 at 8:44 am:
“Roger Caiazza is right. How does the science, and the uncertainty surrounding that science, impact on policy decisions?”
I suggest reversing the question: How does political policy impact the science and the uncertainty surrounding that science?
Review the trajectory of the IPCC, the UNFCCC, the SBSTA, et al. for their attribution of the root cause of warming and their solutions to the “problem”.
http://judithcurry.com/2011/02/13/uncertainty-and-the-ar5-part-ii/#comment-42345
These are political organizations, true to form. “For instance, Mr. James Mill takes the principle that all men desire Power; his son, John Stuart Mill, assumes that all men desire Wealth mainly or solely.” domain1041943.sites.fasthosts.com/holyoake/c_co-operation%20(11).htm
Publication bias isn’t the only source of bias. Choosing particular lines of research by researchers leads to a bias in what is published. Fred’s mistake is assuming that the second paper, and each subsequent paper, adds the same amount of information. This is easy to see with a coin flip, if you are just trying to tell if heads are more likely than tails (AGW more likely than not). You actually need a large number of flips to get a decent handle on whether heads is more likely than tails. It doesn’t help if you just get 10 different coins and flip them at the same time. Here you have both the properties of the coin itself as well as the flipping and reading process as factors, so an added source of variation.
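The coin arithmetic can be made explicit as a back-of-envelope power calculation (2-sigma detection assumed):

```python
# The standard error of an estimated proportion is about 0.5/sqrt(n), so
# resolving a bias d = P(heads) - 0.5 at roughly the 2-sigma level takes
# about n = 1/d^2 flips of *that* coin. Flipping ten different coins once
# each adds coin-to-coin variation instead of reducing it.
for d in (0.2, 0.1, 0.05, 0.01):
    n = (1.0 / d) ** 2
    print(f"bias {d:+.2f}: roughly {n:,.0f} flips needed")
```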
Fred
I find your high level argument to be similar to those raised by the MetOffice’s Peter Stott in his meta-reviews of scientific papers. These essentially appear to be: (Model predicted effect of scale X, but on review the effect is of scale Y, Y>X) * Z; therefore AGW is real and worse than we thought.
I find this type of argument unpersuasive because:
1. Current warming is slight (.8’C) and compatible with the NULL hypothesis (climate sensitivity is low 2’C).
2. Therefore finding that a model underestimates the effects of that warming does not strengthen the AGW hypothesis.
3. Because the arguments are weak, repetition does not strengthen the AGW hypothesis.
However I found the detail of your logical proposition interesting, and I’ll digest further.
Editor mangled the definitions I used in my response:
NULL hypothesis: climate sensitivity is low, less than 2.1’C
AGW hypothesis: climate sensitivity is high, greater than 2’C
If we actually had a well stated, falsifiable hypothesis for the CAGW idea, then it could be rigorously tested against observations.
STEVEN MOSHER says “I’ve seen no evidence for bias. None. Been looking at it since 2007. Papers, code, data. Every last bit I can put my hands on. Increasing evidence? Increasing? Since when, and how much?”
I see you repeat this opinion often on different blogs/threads, and I agree the belief of tampering and adjusting is overblown. I appreciate the hard work you’ve put in to show that most temperature records are reliable. Nevertheless, I think your statement is also overblown. How about the New Zealand temperature records? Wasn’t that a clear case that contradicts your statement? Perhaps Australian records were similarly adjusted, but we don’t know yet. How about Hansen’s 1934 temperature record, which keeps going down while his recent temperatures have been adjusted upwards? If I am wrong about the above, tell me why.
“No evidence for bias…none.” Let’s get real.
“How about the New Zealand temperature records? Wasn’t that a clear case that contradicts your statement?”
Apparently not:
http://hot-topic.co.nz/nz-sceptics-lie-about-temp-records-try-to-smear-top-scientist/
From that November 26, 2009 article: “The cranks in the NZ Climate “Science” Coalition have sunk to new lows in a desperate attempt to cash in on the far-right driven furore about the Hadley CRU data theft.”
From this (http://www.scoop.co.nz/stories/SC1012/S00054/climate-science-coalition-vindicated.htm) December 20, 2010 article: “NIWA has abandoned the official national temperature record and created a new one following sustained pressure from the NZ Climate Science Coalition and the Climate Conversation Group.”
Nope, no bias there folks.
A couple points here relative to the “adjustments”
First, I would criticize the scientists in NZ for not creating a totally REPRODUCIBLE product – that is, raw data, adjustment method, and detailed documentation. This leads to problems.
Second, the “adjustments” made for the station moves don’t really have to be made if you apply more modern statistical methods. It looks like they are splicing records by offset methods. In any case, I don’t find any evidence of BIAS. I find (as usual) evidence of poorly documented work, which opens it to charges like those made. Not excusing either.
“How about the New Zealand temperature records? Wasn’t that a clear case that contradicts your statement? Perhaps Australian records were similarly adjusted, but we don’t know yet. How about Hansen’s 1934 temperature record, which keeps going down while his recent temperatures have been adjusted upwards? If I am wrong about the above, tell me why.
“No evidence for bias…none.” Let’s get real.”
I looked at the “document” purporting bias in NZ. Not very impressed. For me, EVIDENCE of bias would entail looking at the raw NZ data, looking at the adjustment algorithms, and then drawing a conclusion. Papers are ADVERTISEMENTS for evidence; they are not evidence, at least to my way of viewing things. Still, I found no compelling case in the paper. Further, NZ doesn’t matter one whit to the global average.
As for Hansen 1934, the 1999 estimate of 1934 differs from the 2010 estimate of 1934 for these reasons.
1. the stations in the 1999 paper are different.
2. the algorithm is different.
You actually have to read the papers to understand the changes made.
As more data becomes available, as better QC is done, as the algorithm improves you can expect the estimate of the past to change.
Now in 2007–8, when we first found this problem with Hansen, we put it down to him being evil. Acquaintance with the code and data and the facts disabused me of this easy, dismissive skepticism.
To me, Fred Moolten seems to be trying to make the argument that “more garbage in = less garbage out”. As an engineer, I know that I ought to make sure my measurement is correct before trying to combine it with any other measurement. To use another old adage: “measure twice, cut once”. But Fred seems to be saying that many incorrect measurements that are heading in the right direction will ultimately expose the true measurement. And he thinks that this will work for a massively complex dynamical system. And with imperfect and flawed human beings doing the measuring.
Let’s hope IBM gets Watson out and about soon. Hopefully he gets a crash course in logic and the Scientific Method.
Fred, I’ve thought about this further, and think my argument really equates to:-
I believe AGW proponents see multiple lines of independent evidence that strongly support the proposition only because they “forget” the hypothesis is bootstrapped by the assumption of low internal variability (the spatio-temporal chaos of the system is dampened over climatic time scales – 30+ years).
If your starting position is that the degree of internal variability is unknown and that the system could be chaotic at all time scales; then I believe the evidence supporting the proposition is substantially weakened.
Let’s take as our input the news articles, reports and papers that global scientists have published that made specific predictions about the future. Use the last 30 to 40 years of that “data”. Now, use Fred’s theory and apply it to those “scientific” predictions. I would be willing to hazard my own prediction that Bayesian analysis would result in a low confidence that the climate scientists can predict anything about the future. OK, perhaps they’re getting better at it. So now let’s do another analysis to see if they’ve gotten any better at their predictions. I would be willing to hazard another guess that Bayesian analysis would also show a low confidence that they’re getting any better at it. Now that might be OK because, in many fields of scientific endeavor, the depth of understanding remains low for long time periods before a dramatic (often exponential) increase occurs. The increase in understanding is generally acknowledged to have occurred only when the science moves out of pure into applied and finally becomes predictive in the basic sense. Since climate research is so young and has previously held opposing views (think 1970s cooling) as the “current wisdom”, I would conclude that Bayesian analysis of current climate research is much like doing the same analysis against clairvoyants or tea-leaf readers. Why? Because climate science understands climate about as much as the Egyptians understood atomic physics.
A few points.
“Now, use Fred’s theory and apply it to those “scientific” predictions.” I’m viewing prediction as a separate question. Some things can be understood to a degree in hindsight (like a car crash), but predicting the future is a different matter altogether. Like watching a race: the scientists have freeze-framed the action, noted that there was contact between two cars, and are predicting that the car will crash into a wall. Even with such a good understanding of the physics involved, the actual outcome is very uncertain.
For the current discussion, the topic is more like disagreement about how much damage was done in the initial contact and how sure we are of it.
I mean “ANCIENT” Egyptians. Goodness, I don’t mean to upset our modern day Egyptians please.
As entertaining as philosophical and epistemological models for the scientific method might be, in the matter of Anthropogenic Global Warming they are but abstractions, angels dancing on pinheads. The issue is concrete, based in physics. It is not a matter of weighing probabilistic evidence among models with parity, like Watson’s trivia dilemmas on Jeopardy.
>>Water vapor is the atmosphere’s single most important greenhouse gas, and it is correct to say that an accurate prediction of climate change hinges on the ability to accurately predict how the water vapor content of the atmosphere will change. It would be a gross error, however, to conclude that the effect of CO2 is minor in comparison to that of water vapor. Pierrehumbert, R.T., H. Brogniez, and R. Roca, On the Relative Humidity of the Atmosphere, published on line, 4/5/07, p. 150.
The authors split more than an infinitive here. Tacitly assumed in that one paragraph is that these greenhouse gases are the cause rather than the effect of climate.
An accurate prediction of the climate requires an accurate prediction of solar radiation, coupled into a fitting model for Earth’s response to solar radiation, a model respecting the ocean’s heat capacity and heat distributing circulations. Manifested as cloud albedo, water vapor is the most powerful feedback in Earth’s warm state, rapidly positive to solar variation and slowly negative to surface warming. In Earth’s cold state, surface albedo is controlling because water vapor has condensed and frozen on the surface, nullifying the greenhouse effect, and creating a powerful negative clamp against solar radiation.
The gross error is to neglect the laws of solubility, which make CO2 a lagging indicator of global surface temperature. CO2 is not minor, but neither is it a cause. What is indeed minor is anthropogenic CO2, an unmeasurably small cause of warming, and far less than IPCC’s climate sensitivity, calculated open-loop with respect to the negative water vapor feedback.
Having severed cause from effect while ignoring leads and lags, AGW advocates were left with mere correlation, for which they set about to demonstrate an opposite cause and effect.
To establish its identical, non-scientific presumption of AGW, IPCC adopts novel physics: pre-industrial climate equilibrium, attribution of natural processes to humans, surface layer equilibrium, Revelle’s buffer factor, water vapor amplification of warming, human fingerprints from fossil fuel emissions, long-lived/well-mixed atmospheric CO2, correlation feedback. For these, the probability in the decision matrix or Bayesian analysis is zero. Meanwhile, IPCC omits most of the probable, fact-based essentials of climate: dynamic cloudiness, solubility, and natural leads, lags, and thermal capacities.
“The issue is concrete, based in physics.”
Yes indeed.
“Tacitly assumed in that one paragraph is that these greenhouse gases are the cause rather than the effect of climate. ”
The assumption is a solid result of physics. First you require using physics, then you deny it.
It is one thing to measure and calculate the behavior of CO2 in the laboratory and quite another to determine its effect as a volumetrically minuscule part of an unimaginably enormous, chaotic system, the entire effect of which is to transport huge quantities of heat and moisture from point A to point B, for a near-infinite number of values of A and B.
Perhaps this explains why Arrhenius’ views on climate were discarded for 30 years, then briefly revived, then discarded again until they offered a golden opportunity to ambitious politicians.
You are wrong.
Physics and knowledge of the atmosphere are on a totally different level than during Arrhenius’ time. Presently these calculations are well understood and reliable beyond doubt at the level I specified.
Assuming the unassailable characteristics of current knowledge of atmospheric physics, then, why have the climate models, which presumably incorporate all such certain knowledge, proven wrong in every particular, from ocean temperatures to precipitation?
One question that has puzzled me from the beginning of the AGW debate, and for which I’ve tried to seek an answer, is how the knowledge based on well-known radiative physics is translated into more or less local _absolute_ temperatures.
As far as I’ve understood, all objects radiate their heat away in proportion to the fourth power of their temperature (in K), multiplied by their emissivity. Wouldn’t this suggest that if we are to quantify the heat fluxes (energy out, energy in) at TOA, we need to have the absolute temperature of the surface, whether sea or land, and the air-column temperature very nearly correct?
What is commonplace in articles comparing model performance and global temperature are _anomalies_. What I am saying is that if we have a good match with anomalies, the obvious next question is how the modelled anomaly was calculated. I.e., is the surface temperature the same as measured, and how big is the difference; and is the variation of temperature across latitudes roughly correct?
Let me stress that I’m not suggesting anything like “being able to predict the annual Tavg in New York for 2045”. Anyway, if we have an error of several kelvins in the surface temperature, that error feeds into the radiative fluxes through the fourth power of the absolute temperature; and as we are considering 2 K as some kind of critical value, I’d suggest that we ought to be able to model the surface temperatures more precisely than that 2 K. Otherwise we might end up having good correlation in anomalies (modelled vs. observed), but our results will have been calculated with wrong values.
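A quick back-of-envelope sketch of that sensitivity, assuming a blackbody surface (emissivity 1) and a nominal 288 K mean temperature – both simplifying assumptions of this illustration, not figures from the comment above:

```python
# How a bias in absolute surface temperature propagates into the
# Stefan-Boltzmann flux, for a blackbody surface (emissivity = 1).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(temp_k):
    """Radiated flux in W/m^2 at absolute temperature temp_k (kelvin)."""
    return SIGMA * temp_k**4

T_TRUE, T_MODEL = 288.0, 290.0  # hypothetical truth vs. a 2 K model bias
error = flux(T_MODEL) - flux(T_TRUE)
linearized = 4 * SIGMA * T_TRUE**3 * (T_MODEL - T_TRUE)  # 4*sigma*T^3 * dT

print(f"flux(288 K) = {flux(T_TRUE):.1f} W/m^2")    # ~390.1
print(f"flux(290 K) = {flux(T_MODEL):.1f} W/m^2")   # ~401.1
print(f"2 K bias -> {error:.1f} W/m^2 (linearized {linearized:.1f})")  # ~10.9
```

On these assumptions the flux error grows roughly linearly at 4σT³ ≈ 5.4 W/m² per kelvin near 288 K, so a 2 K bias corresponds to about 11 W/m² – around three times the canonical ~3.7 W/m² forcing for doubled CO2, which is the scale of the worry raised above.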
If this is the case – that we have modest (the strongest adjective I could choose based on what I’ve read) anomaly correlation, but the absolute global average temperature, along with its latitudinal distribution, is several degrees K off – it would suggest heavy curve-fitting-style parametrisations in the GCMs. Or is it just assumed that errors average out on the global scale, so this doesn’t matter?
A climate modeler can give you a more precise answer. Lacking that, I would say that in order to compute temperature change as a function of changes in CO2, solar irradiance, or other forcings, we do need to know absolute temperatures, but at the level of each grid space in a model analysis rather than as a global mean. The starting surface temperatures are those that are measured, and the final temperatures are those that are computed, based on solar irradiance, albedo, radiative transfer codes, CO2 and humidity concentrations, and other variables. The difference between initial and final temperatures – the anomalies – can be averaged to arrive at a global mean temperature anomaly. It is impractical in global climate models to try to measure temperatures at a variety of altitudes, but these can be computed from the other inputs and the relevant equations.
Thanks very much for your response Fred.
To summarize: although we know the radiative part of the modelling fairly well, and for its purposes even account for variables like sea/land differences in absorption and emission, different atmospheric path lengths, and reflection angles, we are likely to make substantial errors in the radiative-transfer part of the modelling as well if we are unable to use accurate absolute temperatures. Which I believe is the case, but will as usual stand corrected if this is not a valid claim. Just wondering, from the point of view of error estimation, has this been addressed in publications? From what I’ve read, this part of the analysis (estimation of total error per output variable) is usually a bit lacking. Yeah, I realize it might be a very hard and tedious thing to do, let alone that it might not look very convincing in a journal either.
I do know the global anomalies are averaged over grid cells, and having some background in another domain doing grid-based modelling, I know that small errors over single cells might average out very well. After all, the GTA is the central index we are looking at, and the most usual metric depicted in model articles. Given the points I presented above, I’d be more convinced of the correctness of the calculations (for anomalies) if absolute temperatures were given instead. That way the reader might be better able to judge whether the results have been obtained from first-principles physics with correct coefficients, radiative spectra, etc., or are valid-looking results that have been obtained via some kind of sophisticated parametrization effort – which for hindcasting purposes is basically curve fitting, or “parameter optimization” if you find “fitting” too provocative.
As a more real-life example of this, probably anybody who has studied ‘hard sciences’ recalls having calculated correct results but obtained them by using e.g. incorrect constants, rounding errors or whatever – zero points. Wonder how many points we might give a GCM.
While the above is naturally a silly comparison and an oversimplification of the issue, it is IMHO quite illustrative — more information is needed so that the reader can see whether the conclusions (e.g. GTA) have been obtained via valid calculations. For example, I looked at the NASA site for supplementary data on the article Climate simulations for 1880-2003 with GISS model E (http://data.giss.nasa.gov/modelE/transient/climsim.html) and found only anomalies.
Anander,
The problems that you are referring to are closely related to the difficulties of understanding clouds. The unsatisfactory description and knowledge of cloud formation and of the cloud feedback are acknowledged problems. Without the influence of clouds, the temperature profile of the layers where the radiation that leaves the atmosphere originates would be much better known, and the accuracy of the radiative calculations would certainly be quite good.
Pekka,
My point was that the total outgoing flux (assuming the incoming flux is more or less constant) is not only a function of CO2 concentration but a function of the _absolute_ surface temperature (in addition to specific humidity, CH4, etc.) and of course clouds – their height, form, etc.
So the statement that the radiative-transfer part is correct in terms of absolute values, not just theoretical foundation, is in my opinion not valid – it has been claimed, for example, that a 1–2 percent change in cloud cover is larger than the entire post-industrial increase in downwelling radiation flux. Too many unknowns still remain; we basically have only the CO2 concentration roughly correct.
We may characterize much of the evidence for AGW as weak when considered as individual items. The consilience argument is then used to argue that the whole picture hangs together and that we can make a strong statement. But as Dunmore argues very nicely above, all this gets us is a GENERAL picture that seems OK, whereas the true question is HOW MUCH?, and that is not answered by a general consilience of evidence. This is even more true when much of the strongest supporting evidence, such as the warming since 1980, is also consistent with a natural pattern of temperature oscillations (like the PDO), which has specifically NOT been proven not to exist, and which cannot be replicated well by the models. Here we encounter an argument from ignorance – since we can’t fit the temperature history with our models without CO2, it must be CO2. Not very inspiring.
The problem is that trivial + trivial = catastrophe is false. Our AGW promotion friends do not offer a case that withstands critical review.
Another consideration about consilience is that it is OK as a guide to general reasoning and getting closer to agreement, but it is critical not to overlook blatant contradictions (which in the Bayesian framework above will shoot your probabilities way down). As an example, if Antarctica is mostly not warming, then this is a pretty big contradiction of the theory saying the poles should warm most and soonest. On the other hand, Svensmark’s cosmic ray theory says the two poles should behave roughly oppositely, due to the different effects of clouds over ocean vs. ice.
Please someone help me on a key issue I do not understand!
What is the evidence to support the position that a warmer world is worse for humanity? Why is there ANY discussion of dismantling coal-based power plants, or of CO2 cap-and-trade policies or carbon taxes or anything else, before there is a consensus that a warmer planet is actually bad for humanity (and, probably more importantly, for the individual nations that would actually have to implement the proposed policies)?
Don’t we all agree that there is no climate model available today that can even semi-reliably predict what the rainfall will be in any specific location or region as a result of higher CO2 levels in the future? Isn’t rainfall the most important single issue relative to potential climate change and any negative impact on humanity?
From everything I have been able to read, the models used to predict a drier future have universally been shown to be unreliable for even short-term forecasts, much less for the long term. In spite of this we continue to discuss minor side issues, but not the 800-pound gorilla.
I know this is not the topic of the thread, but it is the most important issue on the topic.
If you look at the IPCC reports, this area is the weakest by far. If you take at face value the IPCC numbers for temperature rise and sea level rise (and not Hansen’s numbers), you get very modest impacts, many of them positive. The IUFRO report in 2009, and even the IPCC sections on agriculture and forestry, stated that no impacts in these two sectors could be observed so far, and that over the next 100 years impacts were likely to be positive (more crops and trees). Claims about malaria are false, since the malaria vector mosquito depends on stagnant water, not heat (it was endemic in Siberia just decades ago). Finding any specific impact worth worrying about in the IPCC report is difficult. It comes down to handwaving.
I appreciate the many comments above on this issue. A number of them address the quality of evidence for what I refer to as “anthropogenic causality”. Others comment on the need to quantify what is meant by that term – it’s a point Paul Dunmore emphasized and with which I thoroughly agree. However, my argument for the “convergence principle” (“consilience” in the lexicon used by Dr. Curry) differs from the question of what results emanate from its application. I would like to return to the principle itself, which, simply put, is about counting the pieces of evidence for or against a proposition. It states that, all other things being equal, the ratio of the number of independent pieces of evidence supporting the existence of a hypothesized effect, E, to those supporting the absence of E is an important metric for estimating the probability that E exists. If that is correct, one can then turn to the evidence to ask questions such as “how many?” and “how independent?”
To further the discussion, I would prefer to start with a hypothetical example, and then ask how well real world data might apply. In this example, 20 independent studies are undertaken, where “independent” signifies that no study depends on the assumptions, results, or input data of another. In this example, all studies are equally strong in supporting or negating the effect, E. Each of course could be wrong due to random variations in the data. Each can be thought of as randomly selecting evidence from a data field, where in the absence of E, a selected item might tend to support or negate E with equal probability.
In this defined scenario, what is the probability that E exists if 18 studies are supportive of E and 2 support the absence of E? If we assign equal prior probabilities of 0.5 to E and not-E, then under not-E the number of supportive studies follows a binomial distribution with mean 10 and SD of (20 x 0.5 x 0.5)^0.5 = 2.24. The relative deviate = 8/2.24 = 3.58, which rejects not-E at p<0.001.
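For readers who want to check the numbers, here is a minimal sketch of the same test computed exactly rather than by normal approximation; the 50/50 support probability under not-E is the assumption stated in the hypothetical above:

```python
from math import comb, sqrt

# The hypothetical: 20 equally strong, fully independent studies;
# under not-E, each study supports E with probability 0.5.
n, k = 20, 18

# Exact one-sided binomial tail: P(18 or more supportive | not-E).
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"exact tail probability = {p_tail:.2e}")  # ~2.0e-04

# The normal approximation used in the comment above.
z = (k - n * 0.5) / sqrt(n * 0.5 * 0.5)
print(f"z = {z:.2f}")  # ~3.58, still p < 0.001
```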
A real world example of E might be a climate sensitivity value exceeding 2C for doubled CO2. Of course, more than 20 studies have been devoted to estimating climate sensitivity, and as discussed elsewhere, they differ in their final estimates of most probable value and extent of the range of probable values. For convergence, the most critical question will be their independence, which will rarely be complete but often sufficient for each study to add to the weight of the total. Evaluating independence in this particular case will be daunting, and I would not attempt to try it.
More generally, however, it is fair to conclude that of the many tens of thousands of studies relevant to the ability of CO2 to mediate sufficient warming to make a difference in our lives (with the term "difference" deliberately left vague), there may well be hundreds independent enough to be subject to the convergence principle, asking how many support an important impact and how many do not. I don't know the exact numbers or their ratio, but I'm familiar enough with both the published literature and the data emphasized in blogs to assert, after subtracting duplicate claims, that the number supporting an impact outweighs the number rejecting it by a large margin.
That is the extent of what I am trying to accomplish with convergence – to emphasize its importance after all other variables have been evaluated. It tells us that studies that are informative but indecisive can combine to strengthen the conclusions drawn from each individually. More particularly, it emphasizes the point that demonstrating that individual study after study falls short of proving a hypothesis does not refute a very high level of confidence derived from the results of all studies combined. I don't believe this last point is sufficiently appreciated in discussions that tend to focus on individual studies, which therefore tend to overestimate uncertainty.
Fred,
I find your description of the calculation insufficiently specified.
Switching to a Bayesian description, we could describe each experiment as telling us that a positive observed result is twice as likely when the hypothesis is true, while the null hypothesis is favored by the same factor when the result of an observation is negative. Then the posterior subjective odds of the hypothesis would be strengthened by a relative factor of 2^18 / 2^2 = 65536 based on the combined evidence. With prior probabilities of 0.5, the posterior probability is thus 65536/65537 ≈ 0.999985. The details of the experiments and the hypothesis tell us, of course, whether the ratios 2:1 and 1:2 describe the relative likelihoods correctly. In reality each experiment would lead to a different factor. Combining the results through the average and standard deviation of a normal distribution does not appear to be a good approach to the hypothesis testing that you describe.
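The odds form of Bayes’ rule makes this calculation a one-liner; the 2:1 per-study likelihood ratios below are Pekka’s illustrative assumption, not estimated values:

```python
# Pekka's update made explicit: each supportive study multiplies the odds
# for the hypothesis by 2; each negative study divides them by 2.
prior_odds = 1.0            # prior probabilities 0.5 / 0.5
lr_pos, lr_neg = 2.0, 0.5   # assumed per-study likelihood ratios

posterior_odds = prior_odds * lr_pos**18 * lr_neg**2  # = 2**16 = 65536
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"{posterior_prob:.6f}")  # ~0.999985
```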
Pekka – I like your Bayesian analysis. In composing my previous comment, I struggled with this, because a Bayesian interpretation is something I prefer, but I settled for not trying to estimate likelihoods because I found it hard to visualize experiments in which there would be anywhere near the same likelihoods for the experimental result under the null hypothesis vs. the alternative. However, your 2:1 vs 1:2 assumption is very conservative – appropriately so – and therefore your high posterior probability emphasizes how greatly the number of experiments affects the result.
Fred,
The principle is clear, but unfortunately the real world application is difficult.
Let’s take one empirical study that gives as its result some warming trends with estimated experimental errors (systematic and statistical). To apply the Bayesian method, all possible mechanisms that may affect warming should be included. Particular emphasis should be given to the description of such mechanisms as may explain the observed results. Their properties should be determined in order to be able to calculate the likelihood that they would lead to the result, if they are indeed true effects. The prior probabilities of such mechanisms should also be determined. Only when we have a comprehensive list of all prior probabilities can we determine the likelihood of the new result without the requirement that any particular hypothesis be true. This likelihood is then compared to the likelihood of the result under the assumption that the main hypothesis is true.
Because the set of all possible hypotheses contains, with certain weights, also those alternative mechanisms that are in good agreement with the result, it is difficult to estimate without major effort what the value of this factor is. Furthermore, any new plausible proposal which would explain the observation well may modify the conclusions. All alternative explanations are weighted by their prior probabilities, but it is quite possible that successive empirical results that support our hypothesis also support the same set of alternative explanations. Thus their weight in the set of all possibilities grows, and they may remain at the same level in comparison with the main hypothesis even after many independent applications of Bayesian analysis.
In my original description (see Dr. Curry’s post), I invoked a Bayesian analysis to estimate a posterior probability for a hypothesis, A, which would yield a dataset, D, with very high probability if true, and almost never if false. However, this analysis was not based on individual studies within the dataset, because I considered it difficult to assign such probabilities within those individual studies. That is why I continued in my later comment to simply dichotomize experiments into those that support or fail to support the hypothesis.
My reasoning is based on the prevalence of studies such as the following: Climate Sensitivity Constrained By CO2 Concentrations Over The Past 420 Million Years
The study shows a strong CO2/temperature correlation over long intervals of paleoclimatologic time, after adjusting for confounding variables (e.g., CO2 degassing).
The study addresses the actual range of climate sensitivity values. However, if the hypothesis I wish to consider is simply that CO2 exhibits significant warming potency as opposed to a warming capacity that is non-existent or trivial, I find it almost impossible to assign a numerical likelihood of yielding the observed result if the hypothesis is true, or a likelihood if it is false, except to conclude that the likelihood would be higher in the former case than in the latter. I therefore found it useful simply to assign the study to the “supportive” rather than the non-supportive category. The study is not a proof of the hypothesis, but it is more a proof than a disproof. This approach is conservative, but I think the cumulative evidence from multiple studies reported in the literature (and blogosphere) is nevertheless adequate to provide strong support for the hypothesis.
Fred,
As long as you cannot determine
– the prior probabilities
– the likelihood of the observation given hypothesis true
– the likelihood of the observation given prior probabilities
you cannot judge quantitatively the significance of the new observation.
Dropping the Bayesian approach and using some other way of calculating the probability does not solve the problem, it is just a way of being sloppy by disregarding the problem.
The other point from my previous message is so important that I repeat it in more words. Every step of repeated Bayesian reasoning changes both the probability of the hypothesis and that of the null hypothesis, or the probabilities of multiple hypotheses. Unless all hypotheses are very narrowly defined, their definitions should also be changed after each step to reflect the better knowledge available prior to the next step. In particular, a wide null hypothesis may be modified essentially when the first observation is taken into account. The best approach may be to split the null hypothesis into a large set of alternative hypotheses, so that each of these detailed hypotheses corresponds to a unique value for the likelihood of the first observation.
If some alternative explanations lead in many respects to results similar to those of the main hypothesis, even a very large number of observations may be incapable of telling which is true. Using one fixed null hypothesis at every step gives in this case totally wrong results. Thus my example of fixed ratios 2:1 and 1:2 would be much stronger than an alternative which starts with 5:1 and 1:2 but changes to 1.1:1 and 1:1.5 after a few successive applications, when the null hypothesis starts to be dominated by alternatives with a likelihood structure similar to that of the main hypothesis.
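A minimal numerical sketch of that last point, with made-up priors and per-study likelihoods chosen only to reproduce Pekka’s 2:1 ratio; the “mimic” stands for an alternative with the same likelihood structure as the main hypothesis:

```python
# Three hypotheses: the main one, a "mimic" alternative with an identical
# likelihood structure, and a null. Priors and likelihoods are illustrative.
priors = {"main": 0.45, "mimic": 0.10, "null": 0.45}
p_support = {"main": 2/3, "mimic": 2/3, "null": 1/3}  # P(supportive | H)

post = dict(priors)
for _ in range(50):  # fifty supportive observations in a row
    total = sum(post[h] * p_support[h] for h in post)
    post = {h: post[h] * p_support[h] / total for h in post}

print({h: round(p, 3) for h, p in post.items()})
# -> {'main': 0.818, 'mimic': 0.182, 'null': 0.0}
# The null is crushed, but the main/mimic ratio never moves: data of this
# type cannot push the main hypothesis past its 0.45/0.55 ceiling.
```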
Concerning the distant past, the nature of the correlation becomes a serious problem. In those cases the empirical data cannot present evidence for causality, only for correlation. All conclusions beyond correlation are purely model-based, and knowledge of all the factors that might influence the model is lacking.
The fact that we have data of many different origins does strengthen trust in the conclusions, but the processes used in the analysis contain so many subjective judgments by the scientists that I cannot really see how quantitative statements on probabilities could be accepted as anything more than clarifications of what the particular scientists themselves think subjectively.
The problem of bias brought up by Michael Kelly is essentially the same one that I have brought up multiple times in these discussions. I interpret Kelly to agree with my own attitude that the existence of bias is certain, but its significance is very difficult to estimate. There is, however, so strong an indication of systematic bias that it makes all quantitative subjective judgments of the scientists suspect as long as they cannot or do not fully open their reasoning to public scrutiny. In an ideal world that would happen dispassionately within an extended scientific community, but such an open, dispassionate discussion doesn’t appear to be a real possibility for the climate issues in the near future.
Pekka – I find myself in partial agreement with your comment, but with some reservations. I completely agree on the desirability of updating probabilities as more information accrues – that is the essence of a Bayesian approach – and I agree that dividing up the hypotheses into more than two would be desirable. I further agree that we need good information regarding likelihoods to arrive at dependable conclusions, and such information is often lacking.
Where I differ somewhat involves the following description of my perspective:
The paleoclimatologic correlational data cited by Royer do constitute important evidence for causality. They are not proof of causality, but that is a different concept. Correlational data are an essential ingredient of all forms of “epidemiologic” analysis, whether applied to medical science or climatology. They are enhanced to the extent that confounding variables can be corrected for, as Royer did to some extent. The entire convergence concept resides in the principle that studies that are not decisive can add up to conclusions that are more reliable than each individual study.
A step by step update of probabilities as new data emerge is critical when new data are emerging. The point I was trying to make related to data that already exist within a large number of known study results. There is no iteration involved when all those results are already available.
Although I failed to make it clear, I did not “drop” a Bayesian approach for an analysis of proportions. Rather, the proportion analysis is something I found useful when dividing studies into supportive vs. non-supportive ones, and then applying a Bayesian approach to those proportions, as I did in my original description. Here is a hypothetical example that assumes that the prior probabilities of effect E and not-E are both 0.5; that if not-E is true, 50 percent of individual studies will support E and 50 percent not-E, based on random variation of the data; and that if E is true, each individual study gains an additional 50 percent probability of supporting it beyond what would happen randomly. Under these conditions, if E is true, 75 percent of studies would be supportive of E and 25 percent supportive of not-E – i.e., the observed proportion will tend to average 75 percent rather than 50 percent – a not unreasonable expectation.
With these priors, one can compute likelihoods, based on sample size, that an observed proportion will differ from the not-E proportion of 50 percent by more than two standard deviations. This will always be ~0.05 if not-E is true, but if E is true, the likelihood of exceeding 2 SDs will asymptote toward 1.0 as sample size increases. For a sample with a given number of studies, these values can be used to compute a posterior probability for E, given the observed proportion. Of course, assigning a value to priors is always a problematic endeavor in the absence of exact data, but a fairly neutral approach like the one illustrated here is a useful start.
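Here is one way to make that posterior concrete – a sketch using exact binomial likelihoods rather than the 2-SD threshold described above, with the 0.75/0.5 support probabilities and 0.5 priors taken from the stated assumptions:

```python
from math import comb

def posterior_E(k, n, p_e=0.75, p_not=0.5, prior_e=0.5):
    """P(E | k of n independent studies supportive), where a study supports
    E with probability 0.75 if E is true and 0.5 if it is not."""
    like_e = comb(n, k) * p_e**k * (1 - p_e)**(n - k)
    like_not = comb(n, k) * p_not**k * (1 - p_not)**(n - k)
    return prior_e * like_e / (prior_e * like_e + (1 - prior_e) * like_not)

print(f"{posterior_E(15, 20):.3f}")  # 15/20 supportive -> ~0.932
print(f"{posterior_E(18, 20):.3f}")  # 18/20 supportive -> ~0.997
```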
I certainly agree that bias always remains a risk to be guarded against.
How do you handle the tens of thousands of studies that were not reported because their findings concluded nothing out of the ordinary occurred? Do you just ignore them as if they weren’t done?
Fred,
One problem in this discussion is that the subject is a hypothetical set of observations, which is not fully specified. This leads both of us to fill in the missing pieces, and to do it differently. As the conclusions are dependent on those pieces, we cannot agree on conclusions. To proceed we might need real-world examples, but then we may be hampered by the fact that we may lack detailed knowledge of them. Some paleoclimatic observations might be appropriate in principle, but real discussion is likely to require more background information than even a careful reading of the published papers can provide to an outsider to the field.
The other problem for me is that you formulate your examples in a way that I cannot interpret precisely in Bayesian framework.
Adding more observations affects conclusions if they provide information that significantly complements earlier information; but if they just add precision to a result well confirmed by earlier data, while remaining incapable of differentiating conclusively between the hypothesis and one of the alternatives, then the additional data are no longer significant.
The question is whether the type of data that is available can really differentiate between the hypothesis and every alternative explanation to any level of statistical accuracy. If two explanations are consistent with the same set of actual values, then adding data on these observables never confirms the hypothesis beyond a certain fixed probability. In the case of data from the distant past, uncertainties about the general state of the past Earth may allow for alternatives that cannot be excluded by more data of the earlier type.
When data from different observations are combined, it is important that the correct Bayesian approach is used, taking into account everything that each observation has shown. I do not think that the empirical scientists themselves start from what earlier observations have told us about alternative hypotheses. My impression is that they use a rather simple prior, which disregards much of the earlier evidence. Combining the results properly should then be done as a separate research project. Knutti and Hegerl (Nature Geoscience, 2008) comment on using Bayesian methods to combine observations, but they note that the process has not really been completed.
Gut feelings are a poor guide in deciding how well multiple observations confirm the conclusions.
One critical question will be the independence of their underlying assumptions; another critical question will be the empirical validity and independence of their methodology; and a third will be the nature and extent of actual relevant physical measurements in confirming their conclusions.
Few of the consensus AGW sensitivity studies meet these criteria to a sufficient extent to inspire confidence.
“I would like to return to the principle itself, which simply put, is about counting the pieces of evidence for or against a proposition.”
Please define your terms.
What is “evidence for a proposition”?
What is “evidence against a proposition”?
What is a “proposition”? How does a “proposition” differ from a scientific hypothesis?
What is the “proposition” wrt CAGW?
There are certain topics where the type of combining of evidence you discuss makes sense. But in other cases, we must also consider contradictory evidence as weighing heavily against a case. For instance, in a crime, we may have a witness account, etc., but if the suspect was on television at a ball game at the time of the crime, this wipes out all the other evidence. In addition, hundreds of studies that support the general concept of AGW do not provide the precision we need, and would not be accepted when granting a permit for a nuclear power plant.
To repeat: the “convergence principle” isn’t valid. It is based on the idea that there is some kind of uncertainty that can be reduced by the number of studies that are done, even if they fail to add direct additional information to the hypothesis being tested.
Multiple studies can help prove or disprove a hypothesis that CO2 sensitivity in the 20th century was greater than 2°C by pooling data using well-understood techniques, or by Bayesian inference (but note there are very tight constraints), or by developing better models of causality and testing these.
But outside this kind of analytic framework, multiple studies add nothing.
If this framework is what Fred has in mind to establish that the studies are “independent”, then well and good; but stated like that, no debate would follow.
The danger of being sloppy here is that people end up thinking that hypotheses get validated by weight of numbers rather than by empirical science.
And if we did rigorously test a strongly stated, falsifiable hypothesis of the CAGW idea, we would learn what we know about the CAGW idea, and what we do not know about it. We would be doing science.
Instead we rabbit-trail around in “post-normal science” and “Christiensen’s solution to the problem of induction” and “the conciliation of evidence argument”. That last one would be better referred to as “Moolten’s solution to the problem of induction”, given that it is nothing more than another personalized, errant formulation of the non-scientific ‘many lines of evidence’ meme – all of these passing themselves off as science. Why do we do that?
Why don’t we actually do some science? And accept what science is actually able to tell us, which is most often going to be “We do not know”? And admit that we do not know what we might wish to know when we seek to make decisions – instead of dressing up our ignorance in “conciliation of evidence” costumes to pretend that we know that which we do not?
It might be worthwhile to explore “major”. The IPCC / UNFCCC would have you understand it as “catastrophic”. The proposed remedy is political/bureaucratic control of 70% of the sources of energy in the United States. Given the bureaucracy’s track record, that would truly be catastrophic. Include the UN, cataclysmic.
For example, a 1°C or even 1.5°C increase for a doubling of CO2 is a non-problem without net positive feedbacks. The difference in average temperatures between Boston and NYC is (see the arithmetic check below):
170% of the effect of the low estimate (1°C) of CO2 doubling
110% of the effect of the high estimate (1.5°C) of CO2 doubling
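Those percentages are consistent with an assumed Boston–NYC annual-mean difference of roughly 1.7°C (a figure implied by the comment’s numbers, not independently verified here):

$$\frac{1.7\,^{\circ}\mathrm{C}}{1.0\,^{\circ}\mathrm{C}} = 170\%, \qquad \frac{1.7\,^{\circ}\mathrm{C}}{1.5\,^{\circ}\mathrm{C}} \approx 113\% \approx 110\%.$$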
The “catastrophic problem” depends upon the feedbacks as estimated for and used in the models.
Once again, a merry chase through the dictionary and Google, this time for Consilience (see below).
The problem, in my view, is that Consilience assumes “different areas of research”. This seems to require that the (selected) research is independent and not centrally directed to a particular conclusion. If this supposition is correct, does IPCC et al meet the requirement?
Consilience (not defined by stanford.edu, but used in discussions of feminism, evolution, religion, etc.)
http://en.wikipedia.org/wiki/Consilience
Steve Milesworthy – your article is from a warmist site. Does that make it wrong? No. However, if you look at the NZ temperature record scandal on dozens of other sites, I’m pretty sure you will agree that not only were there adjustments made to inflate the warming, but that NIWA was caught red-handed, so to speak, and has not even tried to defend the “adjustments.” After much criticism, NIWA replaced their adjusted temperature record, which showed substantial warming, with their original unadjusted records, which show little or no warming. I would like Steve Mosher to comment on this, or perhaps someone from down under.
For the AGW issue, we have data supporting and not supporting the claim (ignoring for now that we really want to know how much, not just “that” it is warming). One problem is that the “not” supporting data is dismissed or not cited. Proper weighting requires that all evidence be considered, not just the evidence one likes. This is most egregiously so in the area of future impacts.
“Very few of those billions of things are in the least comparable to the strongest influences that human societies have on our environment or the Earth system.”
Sorry Pekka, but this is just handwaving of the worst kind.
This doesn’t even begin to approach anything scientific.
I didn’t use the word feedbacks and don’t care about feedbacks at this level yet.
Only wanted to tell you that the ability to compute radiative transfer numerically in an idealized, unrealistic world is not going to convince anybody of anything concerning the dynamics (e.g. predictions) of the real fields in the real world.
But I give you the benefit of the doubt.
Just post a link showing that
“Further we can transform forcing to a change of effective radiation temperature of Earth as seen from outer space without any loss of accuracy.”
Then I will look at how relevant and “good” it is.
But take the best you have, because I don’t want to waste my time with mere handwaving.
Tomas,
My point here is not that the full analysis is well understood. The argument is based on the fact – and I consider this definitely a strongly proven fact – that the well-defined calculation of radiative forcing gives a value that makes it plausible that a serious risk is forming.
To make this observation significant, no certainty is needed, and no ability to calculate the dynamics is needed. It is enough to conclude that the probability of significant warming is non-negligible and that the worst plausible scenarios project serious damage. This is sufficient for taking the matter seriously and analyzing it further.
That is all that I claim here.
Looking at additional evidence I have personally reached some further subjective conclusions, but that is not the point here.
Fred Moolten’s logic is not convincing.
Substitute the word “fairy” for “anthropogenic” in his comments above and see that we get the following:
“The probability that fairy causality is true is p. The probability that it is false is 1 – p.”
and
“The probability that the data can correctly be interpreted as demonstrating fairy causality is p. The probability that the data are insufficient to demonstrate this result is 1 – p.” Here, 1 – p is not the probability that the conclusion is false, but merely the uncertainty about whether fairies are truly the cause.
So, using that line of argument suggests that fairy causality is a reasonable alternative.
Not very convincing.
To get anywhere with that argument you have to find some argument for the “probability of fairy causality” being non-zero. If you believe the probability of AGW causality has precisely the same problem, then you are arguing by starting with what you intend to conclude. That, of course, won’t take very much argument at all, because it’s just pushing words around.
Your complaint makes no sense. It is apparently an attempt at reductio ad absurdum, but it relies on a non-sequitur. That happens here:
So, using that line of argument suggests that fairy causality is a reasonable alternative.
Nothing in any of the preceding text suggests this. If the value of p is 0, the conclusion is “fairy causality” is not “true.” This is perfectly sensible. There is nothing about his methodology which would suggest “fairy causality” is a reasonable alternative (unless someone finds lots of evidence for it).
With Fred Moolten’s approach, the stronger the evidence for a position, the higher the probability it is true. It can be used to assess the probability for something outlandish, but it will give a low probability for it.
“With Fred Moolten’s approach, the stronger the evidence for a position, the higher the probability it is true. ”
In reality a hypothesis is either true or false (unless it is undecidable). Our assessment of the probability of this being the case depends on observations that are uncertain. So the stronger evidence required is observations that reduce that uncertainty. It’s not the volume of studies, it’s the information they add to the testing of the hypothesis that counts.
So Fred’s approach can’t be used for anything apart from to confuse what is already well understood.
Why do people keep claiming Fred Moolten’s proposed method doesn’t make sense? None of the complaints seem to even be based upon his method. For example:
So the stronger evidence required is observations that reduce that uncertainty.
Multiple observations with the same inherent uncertainty can and do decrease uncertainty. This is fundamental to any sort of analysis of uncertainty. It’s the very reason large sample sizes are preferred over small sample sizes. Incidentally,
So Fred’s approach can’t be used for anything apart from to confuse what is already well understood.
Fred’s approach is actually the same approach generally used to evaluate things. For example, in criminal cases, a piece of circumstantial evidence means little in and of itself. However, each piece of circumstantial evidence can increase the certainty of a person’s guilt.
A piece of circumstantial evidence could give a 5% probability of a person being guilty. Some important piece of evidence, say the location of the crime, could give a 50% chance of the person being guilty. Fred’s proposed method just gives a way to combine these two values. He says the chance of the person being guilty when considering both pieces of evidence is 52.5%.
How is that unreasonable?
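One combination rule consistent with the 52.5% figure treats the two pieces of evidence as independent and asks for the chance that at least one is decisive (a “noisy-OR”; this reading is inferred from the numbers, not stated in the comment):

$$P(\text{guilty}) = 1 - (1 - 0.05)(1 - 0.50) = 1 - 0.475 = 0.525.$$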
Perhaps read my comment at http://judithcurry.com/2011/02/17/on-the-consilience-of-evidence-argument/#comment-44259
You will see there that I say either Fred is simply saying that, to the extent that studies add information to make the hypothesis more certain (your “large sample sizes are .. over small sample sizes” is a case in point), then it is obvious, and had he said it in that way there would have been no debate.
But if he is claiming something more than that, then we’re into hypothesis testing by vote.
What he said sounds more like the latter because of the language he uses. So you shouldn’t be surprised people object.
Finally on a technical point re your example that you should perhaps reflect on: the person was either guilty or not, the probability of that doesn’t change because of the evidence.
Perhaps I am missing some source of confusion, because the method he proposed is one I’ve suggested multiple times. Indeed, I have (informally) used it for as long as I can remember, on any number of topics. My familiarity with it may be making his comments seem clearer than they would otherwise. However, I honestly still cannot see where the confusion comes from.
Finally on a technical point re your example that you should perhaps reflect on: the person was either guilty or not, the probability of that doesn’t change because of the evidence.
I am well aware of the distinction. It is just a pain to make sure everything you say indicates it. I probably should have been a little more precise in my wording, but without a preview or edit feature, I find it difficult to proof-read my posts well.
Speaking of which, that may well be part of the problem you’ve had with Fred Moolten’s idea. When he wrote about it, it was just in comments on a blog. Those rarely have as much precision as you would want for explaining a concept like the convergence of evidence.
Well perhaps help me.
Is Fred talking about my first example that all of us know and use (and potentially leads to a legitimate convergence of evidence), or is it something else that goes beyond this?
On your second point it wasn’t just your mistake, it is a fundamental mistake made by Fred all the way back in his original post and builds from there. I note in passing that below on this point Fred is moving his ground so he is now talking about applications in predictions, not hypothesis testing (I think).
HAS – The probability of the truth of a conclusion is a function of our knowledge about that conclusion. It changes as our knowledge changes. For example, what is the probability that a particular coin toss will come out heads? We know it either will or won’t, and if we had perfect knowledge, we would say that the probability is 1 (if our knowledge tells us so) or zero (if our knowledge informs us in that direction). Instead, in the absence of knowledge of anything other than the average behavior of coins, we must conclude that the probability is 0.5.
Interestingly, that is true even after the coin is tossed, as long as we don’t know the outcome. For a person observing how the coin landed, the probability is either 1 or 0, but for us, it remains 0.5.
Fred
You have just moved your ground from talking about testing hypotheses to predictions.
In predictions we are talking about what we will observe in the future eg “will the jury find him guilty?” The outcome is uncertain, and the use of multiple assessments is commonplace, although with varying results.
Your suggestion hardly seems novel.
Back on hypothesis testing we are talking about what actually happens eg “did he do it?”
A coin on the table, having been flicked by a thumb, is (usually) heads or tails. To test the hypothesis that the odds are 50:50, it is trivial to say that if someone does 50 flicks and gets one answer, and someone else does 40 and gets another, you can combine these two experiments into an ensemble that gives you a better estimate of the odds.
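A minimal version of that pooling, with made-up head counts for the two experiments (the comment gives only the numbers of flicks, not the outcomes):

```python
from math import sqrt

# Two hypothetical experiments: 50 flips with 29 heads, 40 flips with 18.
h1, n1 = 29, 50
h2, n2 = 18, 40

p_pooled = (h1 + h2) / (n1 + n2)                   # pooled estimate of P(heads)
se = sqrt(p_pooled * (1 - p_pooled) / (n1 + n2))   # its standard error
print(f"pooled p = {p_pooled:.3f} +/- {se:.3f}")   # 0.522 +/- 0.053
```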
If that is all you are saying why bother?
I’m not sure I understand exactly what point you’re making. All my previous comment meant is that probability depends on how much we know about a conclusion, and so it changes as our level of knowledge changes. It is not a fixed quantity defined exclusively by the nature of what is being evaluated, independent of our knowledge.
Regarding guilt or innocence, we are evaluating the probability that a particular verdict will be the correct one. The more knowledge we have regarding what actually happened, the more our probability value is likely to move in one direction or the other.
And all I’m saying (if this is all you are saying) is that this is effectively a tautology, and that science and statistical inference provide a set of legitimate tools to assess that increase in certainty.
Also, since Bayesian analysis is an ingredient of this thread’s discussions, I can’t resist posing an interesting question about coin tosses:
If 99 consecutive coin tosses all come out “heads”, what is the probability of “heads” on toss number 100?
And as I mentioned way back at http://judithcurry.com/2011/02/17/on-the-consilience-of-evidence-argument/#comment-44259 Bayesian techniques are one of the legitimate (although not uncontroversial) ways of taking account of prior information.
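For what it’s worth, one classical Bayesian answer to Fred’s coin question – Laplace’s rule of succession, which assumes a uniform prior on the coin’s unknown bias rather than a known fair coin – gives:

$$P(\text{heads on toss }100 \mid 99\text{ heads}) = \frac{k+1}{n+2} = \frac{100}{101} \approx 0.99.$$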
Brandon,
I may have misunderstood Fred’s point, but, my POV on it is that he is talking about things like melting ice, sea level rising, increasing temps, higher humidity, and various other indicators.
These would be fine IF they could be tied to the basic issue of the ANTHROPOGENIC increase of CO2 in the atmosphere.
The idea of a lot of circumstantial evidence swaying a decision is fine unless it points to 10 different people as the murderer instead of just one. (providing the ten weren’t all conspirators of course)
Finally, a point to discuss consilience from. The crime scene is a perfect example:
Eyewitness evidence- notoriously unreliable, but in this specific crime 4 out of 6 eyewitnesses saw the defendant with a gun in his hand, pointed at the victim, and saw him shoot.
Forensic Evidence- a bullet recovered from the body matched the gun found at the crime scene. The gun had the defendant’s finger prints on it, in a position consistent with shooting the gun.
The defendant had gunshot residue on his hands.
Alibi- all the people who said the defendant was someplace else at the time are friends, family, or customers of the defendant (little confidence in this evidence).
Circumstantial- video evidence shows the defendant entering the area of the shooting just prior to the time of the shooting. A news crew close by for another reason recorded gunshot noises at the same time.
One might say that in this case, the consilience of evidence is pretty strong. My pet peeve with the court system is that it allows a defendant to be judged guilty on the basis of just one kind of evidence. Many folks are in jail for murder on the basis of one piece of forensic evidence, one eyewitness, or purely circumstantial evidence.
The evidence for AGW:
1. A physics model for absorption of radiation that is fairly good, and predicts an increase in temperature of ~1 deg. C. for a doubling of CO2 concentration.
2. The measured atmospheric temperature has increased since 1800 by ~1.5 deg. C, in approximate agreement with the radiation model.
3. The correlation of the warming with the Milankovitch cycle is poor.
4. Correlations with various ocean temperature oscillations can explain a large fraction (66% or so) of temperature variations since 1900. (Negative evidence.)
5. No large body of evidence for such phenomena as cosmic rays controlling clouds, magnetic effects from the sun, the position of the solar center of mass relative to the solar system center of mass, etc. All are plausible causes of, or correlations with, current climate conditions, but not complete. (Not actual negative evidence, but circumstantial evidence that the total climate picture may be incomplete.)
6. Climate models support the hypothesis that the radiative effects of CO2 have some physical basis in the real world.
7. The changes in GAT over the last 100 years are not outside the range of temperatures seen in the geologic/ice records.
Consilience of evidence- you make the call. Slightly on the side of AGW, but hardly a strong case in my view.
I agree with Craig Loehle that circumstantial evidence in a murder trial can add up point by point. But the recorded TV picture of the defendant at a ball game at the time s/he should have been committing the murder does not make the score 20 for and 1 against conviction. It is conclusive, and shows that all the other stuff is just circumstantial.
All the major climate models predict that increasing non-water-vapor “greenhouse gases” will warm the atmosphere mostly at around 10km above the tropics. Weather balloons have been measuring temperature and humidity since the 1950s, and are individually calibrated to 0.1 degrees. There are hundreds of thousands of measurements from around most of the world.
Compare the model predictions to the weather balloon measurements. The graphs are not remotely the same. The models are wrong. The “Hot Spot” is missing. The net effect of the warming due to man-made CO2 has been exaggerated. Jo Nova has explained this many times and there has been no credible scientific rebuttal.
Yet Fred would have me believe this is just minor because he has found 100 other studies which support his position.
Alan
Alan – This thread is not the best venue for discussing “tropospheric amplification” – the expected accelerated temperature increase at high tropospheric altitudes (about the coldest parts of the atmosphere) compared with surface warming. However, a few points are worth making briefly.
First, the evidence is conflicting. Amplification has been observed over short intervals, but radiosonde studies have failed to demonstrate it over longer intervals. MSU satellite observations, which are subject to fewer technical distortions than radiosondes, have shown more warming than the radiosonde data, but still less than expected. These observations are not direct, but derived from complicated algorithmic processing of oxygen microwave signals, and involve technical problems of their own. Wind shear data have supported the amplification expectation, but these data are also indirect.
Probably the most important point to be made in a brief summary is that amplification is not related to the cause of surface warming – it is an expected result derived from the Clausius-Clapeyron equation, and should be apparent whether the warming is from solar changes, greenhouse gases, or other forcings. In other words, it is not a test of greenhouse gas principles. It is also not dependent on climate models. Rather, the models quantify what is already inherent in basic physics. It appears likely that this issue will be resolved as the technical problems applicable to the measurement methods are resolved. Refinement of our understanding of upper tropospheric dynamics may also be necessary, but radical revision of this understanding won’t be necessary, given the modest disparity between predictions and current measurements.
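The Clausius-Clapeyron scaling Fred invokes can be made concrete: saturation vapor pressure rises roughly 6-7% per kelvin near surface temperatures, so a warmer surface sends more latent heat aloft. The sketch below uses Bolton’s (1980) empirical fit, which is this illustration’s choice and not cited in the comment:

```python
from math import exp

def sat_vapor_pressure_hpa(t_celsius):
    """Saturation vapor pressure (hPa) via Bolton's (1980) empirical fit."""
    return 6.112 * exp(17.67 * t_celsius / (t_celsius + 243.5))

e1 = sat_vapor_pressure_hpa(15.0)
e2 = sat_vapor_pressure_hpa(16.0)
print(f"{e1:.2f} hPa -> {e2:.2f} hPa: +{100 * (e2 / e1 - 1):.1f}% per K")
# ~ +6.6% per kelvin: more latent heat is carried aloft per degree of
# surface warming, the physical root of the expected amplification.
```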
The so-called hot spot problem may be a sign that the tropical oceans haven’t warmed as much as the models predicted. On the other hand, the Arctic has warmed more than expected, losing sea ice more quickly. I think these are related to each other: the earth is adjusting to warming differently due to the enhanced Arctic sensitivity. [My speculation only].
Jim –
I’ll repeat what I told Andy Lacis recently. Ice, whether Arctic, Antarctic or glacial, doesn’t melt due to air temp. It melts due to wind, rain, black soot and water temp if it’s immersed (as in the Arctic). Not that there’s “zero” melting wrt air temp, but it’s very little comparatively. Most high altitude hikers and climbers know this. But somehow it seems to have escaped notice by the science establishment.
CO2 also doesn’t directly affect air temperature (except to cool it actually). It warms the surface including oceans, which in turn affect the air.
Jim D – There does appear to be more extratropical than tropical warming – in both the surface and satellite data. Export of heat from the tropics may account for some disparities between modeled and measured data from the tropics.
Fred,
I think you mischaracterize the wind shear data. The results were stated as not excluding the possibility of a warming in the upper trop. This is very weak.
Since the model work would seem to indicate that the upper troposphere should warm at multiples of the surface rate, I simply do not understand how y’all can continue to hang on to these WEAK indicators. They aren’t weak, they are non-existent in relation to the prediction.
If I computed the amount of heating an acetylene torch would make on a piece of metal, applied the torch, and found that there was only 1/2 the amount of heating, I would need to go back and find out where I had screwed up. When are the climate scientists going to go back and find out where they screwed up instead of making excuses about data?
Tropospheric amplification is a subject that deserves at least a thread of its own. Radiosonde, MSU, and wind shear data all disagree with each other, and all suffer from artifacts. The most important point that can be made briefly is that amplification is a result expected from the basic physics of surface warming regardless of what causes the warming – CO2, the sun, or other variables.
Amplification is a manifestation of one of the negative climate feedbacks – the lapse rate feedback due to high altitude release of latent heat. This partially offsets the positive water vapor feedback, and if it is less than predicted, it implies less counterbalance to positive feedbacks. Alternatively, both the positive and negative feedbacks may be somewhat lower than expected, although tropospheric water vapor measurements indicate that the water vapor feedback is probably estimated reasonably well. I expect that most of the discrepancy between measured and predicted data will be resolved by improvements in methodology. The fact that the discrepancy has lessened as methods have improved is consistent with this expectation. It does not exclude the possibility that high troposphere climate dynamics still harbor phenomena that remain to be fully defined.
Fred,
you have explained all this before, and I have read all this before several times, not just from you. You still need observations that support your explanation of these possible characteristics. You do NOT have supporting stratospheric cooling for the last 16 years, and you do NOT have supporting upper troposphere warming of the required magnitude. The water vapor makes no difference with two items in conflict with the theory.
Please stop repeating things not supported in reality.
Climate Science needs to move past this failed theory and work on something new, instead of attempting to force observations to fit their pet theory the way Cosmology does.
Kuhnkat:
That is a concept that doesn’t exist. What exist are theories of physics, empirical knowledge of the atmosphere and other Earth systems, and a large variety of theoretical results and models of the atmosphere. The knowledge and understanding of Earth systems is incomplete; it is built largely on valid physical theories, but the system is too complicated in comparison to our capabilities for obtaining precise results or building accurate models.
There is no failed theory that should be abandoned, but some details of the general approach are erroneous or missing. Improvements must be sought, as must a better understanding of the location and types of the remaining errors.
No climate scientist claims that the understanding of the atmosphere is perfect, but there are clearly widely differing views on the reliability of some specific results including the climate sensitivity.
Similarly I cannot agree that there is any question on the existence of AGW. There is no real doubt that adding CO2 to the atmosphere raises the temperature, but this is irrelevant, if the warming is very weak. The existence should be accepted and the discussion moved to its strength and riskiness. That would make reasonable argumentation possible, if not easy. Very many skeptics agree, but total deniers persist.
There is absolutely nothing in what we have learned about climate that would raise justified doubts about the validity of present theories of physics. All the problems are in the application of these theories to the extremely complex Earth system and in the additional phenomenological models that are used to complement basic physics, when we cannot solve the problem from basic theory alone.
Pirilä 2/20/11 at 3:27 am,
On the 18th at 10:53 am, I enumerated eight examples of IPCC’s invalid physics supporting AGW, and one of them was a dual example. At 10:01 pm, your dismissive response was, “First you require using physics, then you deny it.”
Now you make explicit what you implied above: “Similarly I cannot agree that there is any question on the existence of AGW.” Are you denying my existence, my arguments, or both? Or are you trying to say that no challenge to the existence of AGW can be found in your peer-reviewed climate journals? That I would grant you, but it is another matter altogether.
You say, “No climate scientist claims that the understanding of the atmosphere is perfect, but ….” Who argued against perfection, and when and where? In similar straw man fashion, you argue, “There is no real doubt that adding CO2 to the atmosphere raises the temperature, but this is irrelevant, if the warming is very weak.” Here, though, your hypothesis is false. You might have said truthfully that there is no real doubt that adding CO2 to the atmosphere SHOULD raise the surface temperature somewhat under the open-loop greenhouse theory.
Part of the problem is that no evidence exists that it has happened. No measurements support the failed conjecture that anthropogenic CO2, whether from fossil fuels or deforestation, has caused warming. In fact, the predicted warming of the climate over the last 40 years has failed to materialize notwithstanding accelerating CO2 emissions. See Schwartz, SE, et al., “Why Hasn’t Earth Warmed as Much as Expected?”, and see my papers on Solar Global Warming (“SGW”) and “IPCC’s Fatal Errors” on rocketscientistsjournal.com. IPCC boils AGW down to the single parameter of climate sensitivity, and it, along with AGW, has been invalidated. Kuhnkat was right.
The bulge in MLO CO2 is coincident with a rise in temperature, but IPCC never made the lead/lag computation necessary to sort out cause from effect. The alleged human fingerprints IPCC found on the bulge are outright frauds. IPCC bases the notion that the MLO data are global on yet more miscalculations championed into fraud: the long-lived hence well-mixed submodel, and the assumption that MLO is remote from any CO2 sources. To make the bulges anthropogenic, IPCC zeros the ongoing natural rises in temperature and CO2 upon initialization of its model circa 1750. These are merely some bits of IPCC’s phony physics behind AGW, all of which you endorse.
You say, “All the problems are in the application of these theories to the extremely complex Earth system and in the additional phenomenological models that are used to complement basic physics, when we cannot solve the problem from basic theory alone.” Indeed! The problem is trying to make microparameter and mesoparameter models, i.e., GCMs and IPCC’s radiative forcing paradigm, produce what IPCC formulated at the outset as a macroparameter problem, better known as a thermodynamics problem.
To be more explicit, take a look at the Kiehl & Trenberth so-called radiation budget on which rests IPCC’s radiation forcing paradigm. (Their budget is not purely radiation because it includes thermals and evapotranspiration. Nor is it either a heat model or a thermodynamics model, because it has radiation (a form of heat) flowing from a colder body to a warmer body.) The radiation budget is an acceptable energy model (and at that, the balancing featured at the top of the atmosphere and the surface boundary isn’t satisfied in the clouds and atmosphere). It is a lumped parameter model, and it is global, comprising entirely macroparameters. IPCC defines radiation forcing as changes to that radiation budget. Regardless, it expends most of its effort below the global level, trying to perfect its overly parameterized model of the atmosphere, and rationalizing regional phenomena like El Niño/La Niña.
IPCC repeatedly surrenders, offering the excuse that climate is nonlinear, even calling it “highly nonlinear”, a meaningless expression, and characterizing the climate as chaotic, in essence meaning unpredictable. Its authors disparage box models and heat flow models as “toy models”, but those models have a chance of success. IPCC tries to perfect its modeling with ever finer granularity horizontally and vertically, sinking into a quagmire of model-dependent nonlinearity. The problem is that, given enough resources, a climate calculation might be Earth-like, but it fails as a scientific model when it cannot be validated. Its model is open loop in both the hydrological and the carbon cycle, and so is doomed. At best, its model might hit the climate right at a point, but an open loop model fit to closed loop data has no chance of demonstrating predictive power.
The distribution of heat, or humidity, or aerosols, or clouds, in any of the three dimensions is not unique to climate. It’s a many-to-one problem, where many is multidimensional and continuous. Precision in characterizing the atmosphere locally, or in calculating radiative transfer through it, does not lead to accuracy in modeling climate. Getting a model to look like climate is not too difficult, but a matter of luck and persistence, not physics.
The problem is indeed epistemological. Linearity and predictability, and hence their negations, are properties of models, not of the real world. The problem lies with scientists, and not just climatologists, who don’t realize that they are dealing not with the real world, but always with models of it, built on samples of it. The challenge is to find a model that has predictive power. Sometimes that involves finding the useful thing to measure; it always involves working in a tractable coordinate system.
Jeff,
You are building strawmen and attacking them.
Pirilä 2/20/11 at 09:36 am,
You are building strawmen and attacking them.
That’s it? No evidence, no example, no discussion, all in response to a categorical list and discussion of AGW errors?
This offhanded, dismissive response is arrogant, and casts doubt on everything you say about physics. You are an AGW advocate, that much you make unequivocal. But you put yourself among others within IPCC who color their science with beliefs, and refuse to come forward to answer responsible questions. Judith Curry’s blog exists to counter that tactic.
Pekka,
“What exists is theories of physics, empirical knowledge on atmosphere and other Earth systems, and a large variety of theoretical results and models of atmosphere.”
Although Dr. Glassman states it so much more capably than I ever could, years ago there was a theory about how fire worked called Phlogiston. This was an excellent theory, accepted by most people. It explained most observations and was predictive. As new knowledge was gained it was extended, so that it continued to be a good theory until the day it was superseded by the current theory.
I do not even allow that the current hypothesis put forward by the IPCC is nearly as good as Phlogiston was in its day. It has no predictive power, it fits observations poorly, and it is continually having OBSERVATIONS fiddled instead of the supporting models.
Do yourself a favor and research the phlogiston theory and compare it to the IPCC stuff. You may figure out why so many reasonable people refuse to agree with you even though you claim so much goodness of the physics it is based upon. Models that cannot be validated and verified are nice games.
Here is an interesting comment I ran across today at Dr. Spencer’s that is very apropos:
Dr Norman Page says:
February 19, 2011 at 4:43 PM
“…The IPCC science section, AR4 WG1 Section 8.6.4, deals with the reliability of the climate models. This IPCC science section on models itself concludes:
“Moreover it is not yet clear which tests are critical for constraining the future projections, consequently a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed”
…
The root cause of the problem is that the models are not framed correctly in the first place – their inputs are set up on the assumption that anthropogenic CO2 is the main driver, so naturally that is what comes out at the other end.
They attribute everything but a simple TSI measure to anthropogenic forcings and feedbacks. Check the IPCC figure 2-20 http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter2.pdf
(page 203)
This figure begs all questions and is a scientific monstrosity. Scafetta’s analysis of the temperature power spectrum gives a much better idea of the empirical controls on temperature. http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf
IPCC Fig 2-20 should show a solar activity driver with subsets under TSI, UV variability, GCR fluctuations and solar-lunar tidal effects, with solar-derived feedbacks of atmospheric water vapour, albedo (GCR related), etc. The anthropogenic effect should be limited to anthropogenic CO2, halocarbons, land use, etc.”
GIGO
Kuhnkat,
Do yourself a favor and try to learn instead of fighting against well-established physics. The matters that I have written in recent answers to you and Jeff Glassman are true beyond reasonable doubt. They are solid knowledge of physics.
In physics, the best experimental verification is usually done through accurate laboratory measurements. These results can then be used in other connections. Our technologically advanced world is based on this.
All the knowledge and theories needed for these comments are described in physics textbooks and other openly available sources. They can be verified by millions of people, and a great many have done so one way or another in their daily work.
It is not possible to teach the readers of a blog through comments, but the books are available for those interested enough.
Why do you keep on fighting against facts that are so well established?
Pekka,
What facts that are well established am I fighting against??
Kuhnkat,
You know that I returned the words from your message to me.
You started by referring to Jeff Glassman, who had just repeated the same old nonsense as many times before.
Sometimes I keep on trying to explain even when the other side clearly does not want to learn anything, but chooses a full denial mode.
I just cannot make myself believe that all these comments are sincere.
Fred
Thank you for your courteous reply. However, I was trying to illustrate a couple of things. The first is confirmation bias. We know that the world has been warming because the thermometers say so. We know that there is no tropical hot spot because the radiosondes say so. If the latter are accurate it is quite possible the warming of the past has been overstated.
I tend to think the radiosondes were scientific instruments operated under scientific supervision, whilst the temperature readings over 150 years are more subject to doubt. My confirmation bias. You tend to think the radiosondes are wrong and will look for wind shear to prove that. And the wind shear study I saw even had color coding that showed nothing happening as orange. So the truth of the matter is that you are a believer and I am not.
How do we treat radiosonde data under your consilience proposal? I say it is pretty damning of the severity of any warming, let alone AGW or CAGW. You would like to treat it under your proposals as what? Or we could perhaps have a vote on it and try to reach a consensus.
The difficulty you are trying to grapple with is that I, you and all the other scientists do not know enough about the climate to make any decisions whatsoever.
Alan
Alan – None of the data refute the observations that warming has occurred at the surface, nor that it has also occurred at all levels of the troposphere. The issue relates to whether or not it has warmed faster in the upper troposphere than at the surface. That issue remains unresolved. Please see all my other comments on this for the significance of that uncertainty. It is real, but its implications shouldn’t be exaggerated. In particular, it is not a test of whether warming has occurred, which is well documented with good quantitation at the surface, nor a test of the cause of warming. It does raise questions about some specific climate mechanisms operating at high altitudes, as well as the accuracy of measurements at those altitudes.
With respect, your conclusion that because climate science doesn’t understand everything, it does not have enough information for decisions (including inaction as a decision) strikes me as illogical.
Fred
What you wrote in reply to me was “Probably the most important point to be made in a brief summary is that amplification is not related to the cause of surface warming – it is an expected result derived from the Clausius-Clapeyron equation, and should be apparent whether the warming is from solar changes, greenhouse gases, or other forcings. In other words, it is not a test of greenhouse gas principles. It is also not dependent on climate models. Rather, the models quantify what is already inherent in basic physics. It appears likely that this issue will be resolved as the technical problems applicable to the measurement methods are worked out. Refinement of our understanding of upper tropospheric dynamics may also be necessary, but radical revision of this understanding won’t be necessary, given the modest disparity between predictions and current measurements.”
This was the point I was referring to. If the radiosondes do not show the tropical hotspot, and the Clausius-Clapeyron equation says it should be there if there has been warming, a possible (even probable) explanation for it being missing is that there has not been as much warming as we thought. The difficulty is that you ride right over this and state confidently that you will find it. Maybe so, but maybe not. Your bias versus my bias.
In saying we still do not understand the climate well enough to make decisions about it, perhaps I was not clear enough in referring to Policy Decisions. You may say this is illogical, but Congress agrees.
Alan
Surface warming is not in doubt. The “hotspot” is irrelevant to the role of CO2.
Fred,
the problem is that the IPCC tells us that ANY surface warming will cause a hot spot. We are left with the question of what the IPCC is getting wrong, since you say there was warming and yet there is no sign of a hot spot!!
Fred
Again you disclose bias. “Surface warming is not in doubt.” Then pray tell me what the BEST program is for? Is the “warming” a function of the incorrect treatment of the UHI?
My perspective is that if the science was so certain, IPCC climatologists would not have needed to deceive and manipulate.
Alan
The “wind shear shows heating” theory has been quite skilfully eviscerated:
“Over the interval 1979 to 2009, model-projected temperature trends are two to four times larger than observed trends in both the lower and mid-troposphere and the differences are statistically significant at the 99% level.”
http://rossmckitrick.weebly.com/uploads/4/8/0/8/4808045/mmh_asl2010.pdf
See my above response to Kuhnkat for the main points. Additionally:
The article did not address wind shear data.
Tropospheric amplification is derived from basic physics – it is not a model-generated phenomenon.
The last sentence in the article is wrong, perhaps reflecting the fact that the first two authors (and probably the third) are not climate scientists and are not familiar with the physical principles involved.
Fred,
your response above is the same nonsense you keep repeating. The theory is only valid for the ASSUMED conditions. If the conditions differ you will NOT get the results you expect. This is so simple I am surprised it has to be stated.
Regarding the last sentence in the linked article, I shouldn’t overgeneralize. The tropical troposphere is informative, but what it does not do is relate tropospheric amplification to the cause of surface warming. In other words, it is not a test of the validity of explanations involving the warming potency of CO2.
Regarding all other points, I have tried to cover them already in my several comments above, and I hope interested readers will visit those comments for my perspective on this topic.
” In other words, it is not a test of the validity of explanations involving the warming potency of CO2.”
Agreed. It is actually a test of the accuracy of the IPCC ensemble of climate models.
Since there is no hot spot to be found, either their models are wrong or the surface temperature has not increased as much as they claim.
Figure 9.1.f is the relevant one.
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter9.pdf
The tropospheric amplification isn’t dependent on the IPCC models. It arises (as far as I know, which ain’t that far) from simple physical reasoning about latent heat release.
If there is no observed hotspot then several things might be wrong:
1) surface temperature trends (over land, the oceans or both) are too high
2) tropospheric temperature trends are too low
3) Both 1 and 2
4) The simple physical reasoning is too simple.
5) The simple physical reasoning is just wrong.
6) The hotspot is obscured by the year to year variability
7) All of the above in various combinations.
8) probably other things too…
It’s an interesting situation from the point of view of the opening post because there’s a whole bunch of conflicting and confusing (rather than consilient) observations. When measured directly, the hotspot seems to be there at short time scales, but it fades away at longer time scales when biases in the data often become the dominant uncertainty. There are known or suspected problems with the different temperature data sources (surface and tropospheric). The hotspot does however show up in ‘parallel’ analyses such as the thermal wind analysis and a more recent one looking at convective precipitation.
These parallel analyses are nice in so much as they provide a completely different angle on the problem, but do they get a deciding vote? It’s hard to judge, but their origin is interesting.
From physical reasoning and climate models, a hot spot is predicted to exist. Now people have looked for it directly and hit up against the problem that the data are perhaps not good enough to resolve it. However, one can look indirectly too by applying the following logic straight out of science 101: if the hot spot exists then so and so will happen. e.g. the winds will blow in a certain direction, or precipitation will behave in a certain way. Testable hypotheses. Then you go out and see if that actually happened in the real world.
Whatever the eventual explanation of all these phenomena is, it will have to explain all these different observations and many others besides.
That strikes me as a reasonable perspective. To the list, I would add that model skill varies according to what is modeled. The models do a poor job with ENSO, and a better job with long-term global temperature trends, but do not accurately distribute the warming regionally. Failure to capture the magnitude of heat transfer from one latitude to another probably contributes to the mismatch between tropical troposphere measurements and model predictions.
To reduce your list to a reasonable conclusion: the total effect of CO2, in the amounts likely ever to be reached, is too small to be a problem in the earth’s atmosphere.
The hot spot is an expected symptom of warming regardless of its cause. Warming from solar effects, warming from ocean cycles, and warming from CO2 would all lead to a hot spot. It’s something quite basic, so understanding its apparent absence is interesting.
Absolutely.
I think I became aware of this through Sceptic blogs about 3 years ago. The Warmers were trying to claim that the absence of the hot spot was not a problem because it could be caused by anything. I am still amazed at that logic now!!
Why couldn’t they see that if there is no hot spot there is no appreciable warming from ANYTHING!! Since we know there was slightly increased solar insolation during the same period and continuously increasing CO2…
First they claim that warming WILL cause a hotspot because of basic physics.
Next they claim we ARE significantly warming based on temperature series that make extensive use of statistical modelling and proxies along with direct observations. (yes I include satellite)
Next they claim OBSERVATIONS are bad and we cannot be sure there isn’t a hot spot.
Finally, when all this fails, they claim it is HIDING in the deep oceans in some way that is hidden from the Argo float observations also.
I don’t know, why do I have so much problem with this??
Having read all the responses to my silly little post above, I now realize my error.
Instead of the term “fairy” which I used, I should have used the term “natural variation”.
So, let us revisit Fred Moten’s argument again and see if it makes more sense this time.
“The probability that natural variation causality is true is p. The probability that it is false is 1 – p.”
and
“The probability that the data can correctly be interpreted as demonstrating natural variation causality is p. The probability that the data are insufficient to demonstrate this result is 1 – p.” Here, 1 – p is not the probability that the conclusion is false, but merely the uncertainty about whether natural variation is truly the cause.
Did I get it right this time?
Hal Morris, Brandon Shollenberger, et al, feel free to comment.
I don’t see any relevance in your change. The problem with your post was you said the method proposed by Fred Moolten “suggests” a particular argument was “a reasonable alternative.” It does no such thing. Changing what particular argument you use as your example doesn’t change anything. Moolten’s proposed methodology doesn’t suggest anything is “a reasonable alternative.” It simply says, “If you have these numbers, they combine to give this value.”
This means whether or not something is reasonable depends entirely upon the numbers being fed into his methodology. If the evidence suggests something is reasonable, his methodology gives a result which suggests that thing is reasonable. If the evidence doesn’t… In other words, his method doesn’t determine what is and is not reasonable. The evidence does that. His method just makes it easier to understand what the evidence says.
Also, not that it is particularly important, but his last name is spelled “Moolten,” not “Mooten” or “Moten.”
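To make that combination arithmetic concrete, here is a minimal sketch of one charitable reading of the convergence calculation: each line of evidence i supports the hypothesis with probability p_i, a miss is merely uninformative rather than a disproof, and the lines are treated as independent. The function and the sample numbers are illustrative only, not anything Moolten himself has published.

```python
def combined_support(ps):
    # P(at least one of several independent lines of evidence validly
    # supports the claim), where each miss is uninformative, not a disproof
    fail_all = 1.0
    for p in ps:
        fail_all *= 1.0 - p  # probability that every line is inconclusive
    return 1.0 - fail_all

# Ten individually weak lines (p = 0.3 each) combine to apparent strength:
print(round(combined_support([0.3] * 10), 3))  # 0.972
```

Which is the point above: the formula just combines whatever numbers it is fed, so the result is only as good as the p_i values and the independence assumption behind them.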
Fred,
I think you have to ask yourself what exactly is the purpose of AGW research. If the existence of AGW can be taken for granted, given the properties of CO2, then very little research is needed into the existence of AGW-induced warming, only research into its possible consequences.
However, if you concede that research is needed to establish that extra CO2 in the atmosphere will lead to significant warming, then it seems to me that letting a large amount of inconclusive data ‘converge’ to a result is extremely likely to produce a biased result.
I’d just like to add that, for me, Paul Dunmore’s analysis hits the nail on the head!
David – You raise good questions. I don’t think there is anything that could be called “AGW” research, but climate research aims at refining our knowledge of climate dynamics and arriving at more precise quantitation of how climate evolves in response to perturbations. This research has already provided more than ample evidence in my view to justify a general conclusion that our anthropogenic emissions are warming the climate to an extent that will affect our lives (see all my above comments). Further research will presumably add more data bearing on this, but it is not at this point done in order to “prove AGW”.
Finally, I don’t follow the logic of your assertion that accruing pieces of evidence that are informative but inconclusive individually will necessarily lead to a biased result. Biased results are possible from “cherry-picking” which evidence to report or how to interpret it, but not merely from the fact that no single piece of evidence can be 100 percent conclusive.
“This research has already provided more than ample evidence in my view to justify a general conclusion that our anthropogenic emissions are warming the climate to an extent that will affect our lives (see all my above comments). ”
Above, I have asked that you provide an explanation as to what constitutes ‘evidence for’ and what would be considered by you to be ‘evidence against’, as well as what it is that these pieces of ‘evidence’ are for and against.
Another problem that jj raises above is that IPCC is quick to compile evidence “for” but very reluctant to give any statement as to what would show things to be not as bad as they think – that is, what could constitute evidence “against”. They won’t put it in writing. Ever.
Here is a question about CO2 that seems to be ignored.
Given that there is a finite amount of IR radiation, limited by the maximum amount of direct SW radiation reaching the earth (usually less than the maximum, due to albedo from clouds), how much CO2 does it take to absorb/intercept/deplete the available IR in the narrow bands of the IR spectrum?
Most people are misled by the impression (and not corrected by the pro-warming crowd) that CO2 acts in a linear fashion, with each additional amount (ppm) causing a commensurate amount of warming. One may dispute the exact amount, but due to the efficiency of CO2 in absorbing IR radiation, and using Beer’s law, Prof. James Barrante, Dr. Heinz Hug and others say that half the available IR is absorbed by as little as 80 ppm. After that, the law of diminishing returns takes over, i.e., less and less IR is available as it is depleted logarithmically by additional amounts of CO2, until most all of it is gone by the time CO2 reaches 250-300 ppm.
How can it be said, without deceiving people, that ever increasing CO2 will cause more and more warming?
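A toy calculation shows how both sides of this dispute can be partly right. At any single wavelength, Beer-Lambert absorption saturates exponentially, as Barrante and Hug describe; averaged over a band whose wings absorb far more weakly than its center, however, each doubling of concentration adds a roughly constant increment. The coefficients below are invented purely for illustration – a sketch of the mechanism, not MODTRAN or any line-by-line code.

```python
import math

def band_absorption(c, ks):
    # Mean absorbed fraction over wavelengths with Beer-Lambert
    # absorption coefficients ks, at (relative) concentration c
    return sum(1.0 - math.exp(-k * c) for k in ks) / len(ks)

# A strong band center flanked by progressively weaker wings
ks = [10.0 ** e for e in range(2, -7, -1)]  # 1e2 down to 1e-6

for c in (1, 2, 4, 8, 16):
    print(c, round(band_absorption(c, ks), 3))
# Each doubling adds a roughly constant increment to the total:
# quasi-logarithmic behavior, even though every individual
# wavelength obeys saturating Beer's law.
```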
I happened to just write this comment
http://judithcurry.com/2011/02/16/mid-20th-century-global-warming-part-ii/#comment-44878
Your error is not the main point of my message, but it is also mentioned.
Macdonald 2/20/11 at 3:37 pm,
The IR radiation reaching the Earth is 342 W/m^2, a finite power density, with total energy limited only by the life of the Sun. The amount reflected by clouds is 77 W/m^2, so the incoming SW radiation is not, as you say, less than the maximum due to cloud albedo. Of the amount that is not reflected, 67 W/m^2 is absorbed in the atmosphere, but IPCC attributes none of any significance to CO2. The amount reaching the surface is 168 W/m^2, which is the limit of the total of all radiative forcings. See AR4, FAQ 1.1, p. 96.
Also contrary to your claim, IPCC points out that CO2 does not act in a linear fashion, but instead that the radiative forcing is logarithmic in CO2 concentration. Beer’s Law is one of the laws of physics IPCC ignores (Henry’s Law being another notable omission), and Beer’s Law saturates whereas the logarithmic dependence does not. IPCC admits that CO2 absorption is at present saturated in the middle of the 15 μm band, but claims that the band wings continue to absorb with additional CO2 concentration, causing the logarithmic dependence. TAR, ¶1.2.3 Extreme Events, p. 93.
Band wings are implemented in the HITRAN database, although their true shape and extent is a matter of debate even among AGW advocates. See Pierrehumbert, RT, “Principles of Planetary Climate”, on line, 11/19/08, p. 178. The in vogue probability density for the molecular energy in the extreme wings is Lorentzian. If you compute the radiative forcing of CO2 using David Archer’s online MODTRAN calculator, you’ll find that the radiative forcing for CO2 reaches a maximum of about 75 W/m^2 at 100% CO2, the limit of the calculator, and still far short of the ultimate of 168 W/m^2. If you wanted the atmosphere to be opaque with zero transmittance to a few significant figures, you would have to pressurize the atmosphere beyond 1 bar at the surface.
The MODTRAN result is convex up, meaning that it is more severe than logarithmic. It is to a good approximation piecewise logarithmic, with a minimum slope for a doubling of CO2 of 3.0 W/m^2 in the 1976 Standard Atmosphere and 3.5 W/m^2 in the tropical atmosphere, both minima including the modern atmosphere. Fifty six doublings and nothing escapes.
When IPCC relies indirectly on the Lorentz probability density, or directly or indirectly on the dependence of the logarithm of concentrations, it is relying on impossible idealizations. Each approaches infinite energy, but worse, neither is validated by experiment even as approximations. IPCC doesn’t concern itself much with the scientific necessity of validating its models. What is important to it is the defense of the logarithmic assumption – not just a cornerstone of AGW, but the source of the model’s ultimate frightening conclusion.
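For readers keeping score, the simplified expression under dispute is the one adopted in the TAR from Myhre et al. (1998), ΔF = 5.35 ln(C/C0) W/m². A two-line check reproduces both the roughly 3.7 W/m² per doubling quoted in this thread and the unbounded growth Glassman objects to when the fit is pushed far outside its intended range:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Simplified CO2 radiative forcing (W/m^2): the TAR curve fit,
    # intended only for concentrations near the modern range
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(560.0), 2))         # one doubling: 3.71 W/m^2
print(round(co2_forcing(280.0 * 2 ** 56)))  # 56 doublings: 208 W/m^2, far outside the fit
```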
“When IPCC relies indirectly on the Lorentz probability density, or directly or indirectly on the dependence of the logarithm of concentrations, it is relying on impossible idealizations. Each approaches infinite energy, but worse, neither is validated by experiment even as approximations. ”
There is absolutely no problem of impossibility or infinity in the range where these formulas are used. They are both very well validated by empirical data for the cases, where they are used.
(In the case of the Lorentz form, the infinities are cut off by the Planck function. Therefore there are no infinities at any level. For the logarithmic dependence it is stated in all relevant places that it is not valid at all concentrations. It just happens to be a very good approximation over the relevant range.)
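For reference in this exchange, the contested line shape has a standard textbook form; as a sketch of the convention, not either party’s own formulation:

$$\phi_L(\nu) \;=\; \frac{1}{\pi}\,\frac{\gamma}{(\nu-\nu_0)^2+\gamma^2},$$

where $\nu_0$ is the line center and the half-width $\gamma$ scales with the collision frequency. The $1/(\nu-\nu_0)^2$ tails are what keep the far wings absorbing long after the band center saturates, and they are also the feature whose physical realism at large detuning is questioned in the Pierrehumbert passage quoted below.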
Pirilä 2/20/11 at 5:20 pm,
You claim repeatedly that what you believe to be true is validated by experiment. I have yet to see a reference from you for such claims.
As I understand the model, molecular collisions modulate the molecule’s IR radiation, and investigators model the resulting line shape as Lorentzian. That is what is impossible, and not, as you say, the Planck radiation limitations. The Lorentz distribution in momentum or displacement cannot occur in nature.
There are both theoretical and observational reasons to believe that the very far tails of collision broadened lines die off faster than predicted by the Lorentz shape. A full discussion of this somewhat unsettled topic is beyond the level of sophistication which we aspire to here, but the shape of far-tails has some important consequences for the continuum absorption, which will be taken up briefly in Section 4.4.8.
In the simplest theories leading to the Lorentz line shape, the width of a collision-broadened line is proportional to the mean collision frequency, i.e. the reciprocal of the time between collisions. The Lorentz shape is valid in the limit of infinitesimal duration of collisions; it is the finite time colliding molecules spend in proximity to each other that leads to deviations from the Lorentz shape in the far tails, but there is at present no general theory for the far-tail shape. Bold added, Pierrehumbert, RT, “Principles of Planetary Climate”, on-line, 11/19/08, p. 178.
So much for your “well validated” assumption.
It has been suggested that the absorption by CO2 is already saturated so that an increase would have no effect. This, however, is not the case. Carbon dioxide absorbs infrared radiation in the middle of its 15 μm band to the extent that radiation in the middle of this band cannot escape unimpeded: this absorption is saturated. This, however, is not the case for the band’s wings. It is because of these effects of partial saturation that the radiative forcing is not proportional to the increase in the carbon dioxide concentration but shows a logarithmic dependence. Every further doubling adds an additional 4 Wm-2 to the radiative forcing. Bold added, TAR ¶1.2.3 Extreme Events, p. 93.
Humidity is important to water vapour feedback only to the extent that it alters OLR. Because the radiative effects of water vapour are logarithmic in water vapour concentration, rather large errors in humidity can lead to small errors in OLR, and systematic underestimations in the contrast between moist and dry air can have little effect on climate sensitivity (Held and Soden, 2000). Bold added, TAR, ¶7.2.1.2 Representation of water vapour in models, p. 426.
The simple formulae for RF of the LLGHG quoted in Ramaswamy et al. (2001) are still valid. These formulae are based on global RF calculations where clouds, stratospheric adjustment and solar absorption are included, and give an RF of +3.7 W m–2 for a doubling in the CO2 mixing ratio. (The formula used for the CO2 RF calculation in this chapter is the IPCC (1990) expression as revised in the TAR. Note that for CO2, RF increases logarithmically with mixing ratio.) Bold added, 4AR ¶2.3.1 Atmospheric Carbon Dioxide, p. 140.
So your claim that
For the logarithmic dependence it is stated in all relevant places that it is not valid at all concentrations.
is merely wishful thinking for what IPCC should have done. I’ll leave it to you and the readers to verify that I’ve omitted nothing of significance.
I infer from your response that you agree that without domain restrictions, the models are quite impossible.
Jeff,
I do not claim that the tails would agree precisely with the Lorentz distribution. I claim two things: There is no infinity from any line of Lorentz shape, because the spectrum is cut off by the available energy. Planck’s law is one valid expression of this limitation. It tells precisely how this happens.
That there are unsettled topics in fine details is true throughout science in a way that doesn’t invalidate at all the basic results. Why do you take this one sentence from Pierrehumbert, while you do not believe at all, what he is telling, which is of course the same that I and every well educated physicist tells.
Your second claim is an equally typical misreading of the texts that every well-educated physicist agrees upon. When the interesting range of concentrations is known, the reservations concerning the limits of validity of a good parametrization or approximate law are not repeated at every mention of the law.
You are continuously presenting straw men.
Do you really still believe these arguments, or do you think that sooner or later nobody will any more answer and you are the “winner of the argumentation”?
Or is the answer, as somebody else proposed it could be: is there somewhere a web site that offers scientific-sounding sentences and claims to people with no relevant education, for spreading in the net to confuse others?
Pekka Pirilä, 2/21/11 at 1:46 am,
You wrote,
Physics and knowledge on atmosphere are on a totally different level [than] during Arrhenius time. Presently these calculations are well understood and reliable beyond doubt at the level I specified. 2/18/11 at 1:24 pm
Preliminarily, IPCC didn’t invent AGW, but adopted it as its own. The climate crisis for which Judith Curry seeks evidence is not due to surface temperature, precipitation, extreme events, or any other measure of climate. It is due to a single cause, the IPCC. It is political and economic, not scientific. To IPCC’s credit, though, it proclaims its own level of doubt, which a scientist may quote or criticize. What you specify is of no importance to anyone but yourself.
In the matter of this thread, the redundant Consilience of evidence, the decision at hand, which some recognize, seems to be between the null hypothesis, H0: AGW does not exist, and the affirmative, H1: AGW exists. The evidence for the proposition is the IPCC Reports, addressed to and prepared for policymakers. Those Reports neither expressly include the totality of the relevant science nor pretend to address the existence question.
IPCC assumes at the outset that AGW exists, and dedicates its Reports, and designs its models, to showing how severe the consequences will be without being invalidated by events before the money can be flushed out of the treasuries. The null hypothesis is supported by debunking IPCC’s Reports. They are the agency’s condensation of opinion from
Over 3,500 experts coming from more than 130 countries contributed to the Fourth Assessment Report in 2007 (+450 Lead Authors, +800 Contributing Authors, and +2,500 expert reviewers provided over 90,000 review comments). http://www.ipcc.ch/activities/activities.shtml
AR4 alone contains 6,239 references, of which 916 are duplicates, yielding a net of 5,323. In addition, AR4 has 16 instances of “and references therein” which remain unopened. IPCC has reduced these to statistics to add weight or consensus to its conclusions, all to impress its target audience, the World policymakers, i.e., the United States government. A large number of these references lie behind a paywall (so much for the Freedom of Information regulations), practically inaccessible to the public, and certainly not readable by their naive policymakers. The opinions of one more expert, missing specific references, would be a trivial addition to the evidence only a layman would be tricked into accepting in the first place.
You responded to me, writing
“Tacitly assumed in that one paragraph is that these greenhouse gases are the cause rather than the effect of climate. ”
The assumption is a solid result of physics. First you require using physics, then you deny it. 2/18/11 12:01 pm
This is another instance where you wrongly presume to be an authority on the matter. You claim here that I denied something, but you provide no example in support of your conclusion. Your view can’t be answered categorically. I provide full evidentiary authority for what I write, but having a dialog with someone who provides none is pointless.
You wrote,
I do not claim that the tails would agree precisely with the Lorentz distribution. I claim two things: There is no infinity from any line of Lorentz shape, because the spectrum is cut off by the available energy. Planck’s law is one valid expression of this limitation. It tells precisely how this happens. 2/21/11 at 8:32 pm
You are defending against an imaginary accusation again to imply that your claims are important. What is important to the consilience of evidence is what assumptions prop up IPCC’s application of radiative transfer. If you want to claim what IPCC said, or what position you take on the matter, do give your position some modicum of value by providing references.
You wrote,
That there are unsettled topics in fine details is true throughout science in a way that doesn’t invalidate at all the basic results. Why do you take this one sentence from Pierrehumbert, while you do not believe at all, what he is telling, which is of course the same that I and every well educated physicist tells. 2/21/11 at 8:32 pm.
Your adjectives and extreme position make the first sentence unparsable. How fine must be fine details for your statement? How basic do you mean? Do you really mean throughout? Even if the sentence were decipherable, are you using validate in the scientific sense, to mean confirmation of a model prediction by measurement within model tolerances? You write about calculations being “beyond doubt” and physics being “solid”, two extremes outside the bounds of science.
Contrary to your claim, I did not take one sentence from Pierrehumbert (RTP). I quoted from his online book for his cause-and-effect error. It was an example of how physics trumps consilience of evidence. 2/18/11 at 11:57 am. A single counterexample is sufficient to overcome unlimited confirming evidence. I quoted Pierrehumbert again – on this thread – in response to Macdonald’s question, and for the proposition that absorption line band wings are “a matter of debate”, the question being what we take to be evidence. 2/20/11 at 3:37 pm. RTP is especially valuable as an H0 authority because he also happened to be a TAR Lead Author, and because he is a staunch defender of IPCC on its RealClimate.org blog outlet. I quoted him again to contradict your opinion, unsupported by evidence, with regard to the Lorentz density and the logarithmic dependence, that “They are both very well validated by empirical data for the cases, where they are used.” 2/20/11 at 5:20 pm.
You opine about my opinion of Pierrehumbert, saying “[whom] you do not believe at all”. That is proved false by my reliance on his book. This is but another example of your personal claims, which you won’t support because you can’t.
You write about “that [which] I … tell[].” That is important because you are a professor emeritus, and because you seem to put yourself forth in these threads as a physics guru. In these positions, one honorable and earned, and the other self-crowned, you have an ethical and professional duty to support your claims with authorities, to answer your audience’s questions forthrightly and completely, to distinguish between science and opinion, and to distinguish between conjectures, hypotheses, theories, and laws. The assertion that your positions “are described in physics textbooks and other openly available sources” is not eligible evidence for consilience.
As to being well-educated, that is not an absolute term. Notwithstanding any letters behind a scientist’s name, something is lacking in his education when he fails to distinguish between his belief systems and science, or employs consensus in place of validation. These are explicit failings of IPCC, the organization, inherited by its participants and supporters. They cast a dark shadow over what constitutes evidence.
Jeff – I probably shouldn’t intervene here, because Pekka Pirila can speak for himself. However, I understand his frustration. He, I, and many others have consistently refuted your main conclusions on other threads by providing extensive documentation, only to have you claim that we have not presented any evidence. At some point, people become frustrated. We conclude that you have no intention of responding to evidence, but will continue to write interminably long defenses of your views. When it gets to the point where few if any outside observers will be reading these exchanges, it hardly seems worthwhile for any of us to continue the dialog. It’s easier simply to let you have the last word due to your unwillingness to relinquish it.
Thanks Fred. You saved my effort. I have nothing to add.
Fred Moolten 2/27/11 12:42 pm,
Your intervention (again) is amusing. You answered with another example of the same problem: a self-appointed expert with no valid references for his claims. You leave readers to guess what transpired in the past that led you to believe that you have been either consistent or have provided extensive documentation.
Here’s a sample of your past work on the “Confidence in radiative transfer models” thread, exhibiting the same disease:
DeWitt – I stopped trying to convince Jeff Glassman a while back about the fallacies in his analysis, and so I wouldn’t ordinarily comment further. However, I thought it would be interesting to analyze why one can’t expect two different wavelengths with significantly different absorption coefficients to yield, in combination, an exponential extinction curve when traveling through the same medium. 12/18/10 at 3:09 pm.
I responded on 12/18/10 at 5:13 pm. I noted first that you didn’t even address your remarks to me. Secondly, I paraphrased the part in bold, only changing “to” to “will”, and denied that I ever said anything so silly. The rest of your post at 3:09 pm was, of course, pointless because the premise was false. Your problem started and ended with a failure to make citations or usable references. Then as now, and in stark contrast to your posts, I use exact quotations.
Now you lump yourself in with a bevy of AGW/IPCC supporters: He, I, and many others. That’s still over 3,500 experts who can’t or won’t back up their claims.
I assume you included DeWitt Payne, because the last time you addressed my science you addressed your comments to him, not me, and later joined the fray. I was able to drag some references out of Payne, and he responded with Grant Petty, A First Course in Atmospheric Radiation, as it appears in the “inside the book” feature of amazon.com. DeWitt Payne, 12/17/10 4:02 pm. I responded at 7:23 pm, reporting back to him with statistics showing that his claims about Beer’s Law were not supported by Petty’s text. In the ensuing dialog, you, Pirilä, Pratt, and others chimed in. Apparently, judging from what you write now about “interminably long defenses”, a full response is beyond your attention span.
Sometimes simplistic notions about climate, and blind support of IPCC, can’t be debunked in 25 words or less.
And the number of people who hold with the simplistic notion adds no weight to the argument. A consensus has no power to validate a scientific model. It only makes mistaken people feel good, and con men rich and famous.
These are underlying principles for the consilience of climate.
Do they even teach the Law of the Excluded Middle anymore?