Expert judgement and uncertainty quantification for climate change

by Judith Curry

When it comes to climate change, the procedure by which experts assess the accuracy of models projecting potentially ruinous outcomes for the planet and society is surprisingly informal. – Michael Oppenheimer

My concerns about the consensus-seeking process used by the IPCC have been articulated in many previous posts [link]. I have argued that the biases introduced into the science and policy process by the politicized UNFCCC and IPCC consensus-seeking approach are promoting mutually assured delusion.

Nature Climate Change has published what I regard as a very important paper:  Expert judgement and uncertainty quantification for climate change, by Michael Oppenheimer, Christopher Little, and Roger M. Cooke [link to abstract].

Abstract. Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

With regard to the main technical aspects — structured expert judgment and probabilistic inversion:

  • I wrote a previous post on structured expert judgement [link].
  • The paper’s Supplementary Information, which describes the method of probabilistic inversion, is available online [link].
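
In outline, probabilistic inversion reweights an ensemble of model outputs so that the weighted distribution reproduces the quantiles elicited from experts. Here is a minimal sketch of that idea — not the paper’s implementation, and the sample values and quantile targets below are invented for illustration:

```python
def reweight(samples, quantile_targets):
    """Weight ensemble members so the weighted distribution matches expert
    quantiles. `quantile_targets` is a list of (value, cumulative probability)
    pairs, e.g. [(1.0, 0.5), (3.0, 0.95)]. Assumes every inter-quantile bin
    contains at least one sample."""
    edges = [v for v, _ in quantile_targets]
    cums = [p for _, p in quantile_targets]
    # Target probability mass for each inter-quantile bin.
    masses = [cums[0]] + [b - a for a, b in zip(cums, cums[1:])] + [1 - cums[-1]]

    def bin_of(x):
        for i, e in enumerate(edges):
            if x <= e:
                return i
        return len(edges)

    counts = [0] * (len(edges) + 1)
    for x in samples:
        counts[bin_of(x)] += 1
    # Spread each bin's target mass evenly over the samples it contains.
    return [masses[bin_of(x)] / counts[bin_of(x)] for x in samples]

# Hypothetical ensemble of sea-level contributions (metres) and expert
# quantiles: median 1.0 m, 95th percentile 3.0 m.
samples = [0.1, 0.5, 1.5, 2.0, 4.0]
weights = reweight(samples, [(1.0, 0.5), (3.0, 0.95)])
```

With a single set of quantile constraints, one reweighting pass suffices; with constraints on several variables at once, iterative proportional fitting cycles through them until the weights converge.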

From the Princeton press release:

Science can flourish when experts disagree, but in the governmental realm uncertainty can lead to inadequate policy and preparedness. When it comes to climate change, it can be OK for computational models to differ on what future sea levels will be. The same flexibility does not exist for determining the height of a seawall needed to protect people from devastating floods.

For the first time in the climate field, a Princeton University researcher and collaborators have combined two techniques long used in fields where uncertainty is coupled with a crucial need for accurate risk-assessment — such as nuclear energy — in order to bridge the gap between projections of Earth’s future climate and the need to prepare for it. Reported in the journal Nature Climate Change, the resulting method consolidates climate models and the range of opinions that leading scientists have about them into a single, consistent set of probabilities for future sea-level rise.

Giving statistically accurate and informative assessments of a model’s uncertainty is a daunting task, and an expert’s scientific training for such an estimation may not always be adequate.

Oppenheimer and his co-authors use a technique known as “structured expert judgment” to put an actual value on the uncertainty that scientists studying climate change have about a particular model’s prediction of future events such as sea-level rise. Experts are each “weighted” for their ability to quantify uncertainty regarding the situation at hand by gauging their knowledge of their respective fields. More consideration is given to experts with higher statistical accuracy and informativeness. Another technique, called probabilistic inversion, would adjust a climate model’s projections to reflect those experts’ judgment of its probability.
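
The performance weighting described above can be sketched in code. The following is a toy version in the spirit of Cooke’s classical model, with made-up seed-question data; the final weight uses exp(−statistic/2) as a simple stand-in for the chi-square tail probability (times an information score) that Cooke’s method actually uses:

```python
import math

# Each expert states 5%/50%/95% quantiles for "seed" questions whose answers
# are known; realizations should land in the four inter-quantile bins with
# probabilities 5%, 45%, 45%, 5%.
P_EXPECTED = [0.05, 0.45, 0.45, 0.05]

def bin_counts(quantiles, realizations):
    """Count realizations per inter-quantile bin for one expert."""
    counts = [0, 0, 0, 0]
    for (q05, q50, q95), x in zip(quantiles, realizations):
        if x < q05:
            counts[0] += 1
        elif x < q50:
            counts[1] += 1
        elif x < q95:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_statistic(counts):
    """Likelihood-ratio statistic 2*N*I(s; p): zero when the realized bin
    frequencies exactly match the stated ones, larger when they diverge."""
    n = sum(counts)
    info = sum((c / n) * math.log((c / n) / p)
               for c, p in zip(counts, P_EXPECTED) if c > 0)
    return 2 * n * info

def weight(counts):
    # Stand-in weight: smaller statistic (better calibration), more weight.
    return math.exp(-calibration_statistic(counts) / 2)
```

An expert whose 90% intervals capture about 90% of the seed realizations scores a small statistic and a large weight; an overconfident or underconfident expert is down-weighted, regardless of seniority.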

Structured expert judgment has been used for decades in fields where scenarios have high degrees of uncertainty, most notably nuclear-energy generation, Oppenheimer explained. Similar to climate change, nuclear energy presents serious risks, the likelihood and consequences of which — short of just waiting for them to occur — need to be accurately assessed.

When it comes to climate change, however, the procedure by which experts assess the accuracy of models projecting potentially ruinous outcomes for the planet and society is surprisingly informal, Oppenheimer said.

When the Intergovernmental Panel on Climate Change (IPCC) — an organization under the auspices of the United Nations that periodically evaluates the effects of climate change — tried to determine the ice loss from Antarctica for its Fourth Assessment Report released in 2007, discussion by the authors largely occurred behind closed doors, said Oppenheimer, who has long been involved with the IPCC and served as an author of its Assessment Reports.

In the end, the panel decided there was too much uncertainty in the Antarctic models to say how much ice the continent would lose over this century. But there was no traceable and consistent procedure that led to that conclusion, Oppenheimer said. As models improved, the Fifth Assessment Report, released in 2013, was able to provide numerical estimates of future ice loss, but these were still based on the informal judgment of a limited number of participants.

Claudia Tebaldi, a project scientist at the National Center for Atmospheric Research, said that the researchers propose a much more robust method for evaluating the increasing volume of climate-change data coming out than experts coming up with “a ballpark estimate based on their own judgments.”

“Almost every problem out there would benefit from some approach like this, especially when you get to the point of producing something like the IPCC report where you’re looking at a number of studies and you have to reconcile them,” said Tebaldi, who is familiar with the research but had no role in it. “It would be more satisfying to do it in a more formal way like this article proposes.”

The implementation of the researchers’ technique, however, might be complicated, she said. Large bodies such as the IPCC and even individual groups authoring papers would need a collaborator with the skills to carry it out. But, she said, if individual research groups adopt the method and demonstrate its value, it could eventually rise up to the IPCC Assessment Reports.

For policymakers and the public, a more transparent and consistent measurement of how scientists perceive the accuracy of climate models could help instill more confidence in climate projections as a whole, said Sander van der Linden. With no insight into how climate projections are judged, the public could take away from situations such as the IPCC’s uncertain conclusion about Antarctica in 2007 that the problems of climate change are inconsequential or that scientists do not know enough to justify the effort (and possible expense) of a public-policy response, he said.

“Systematic uncertainties are actually forms of knowledge in themselves, yet most people outside of science don’t think about uncertainty this way,” said van der Linden. “We as scientists need to do a better job at promoting public understanding of uncertainty. Thus, in my opinion, greater transparency about uncertainty in climate models needs to be paired with a concerted effort to improve the way we communicate with the public about uncertainty and risk.”

Some excerpts from the closing section of the paper, A Path Forward, which I think absolutely nails it:

Stepping back from probabilistic inversion to the general problem of uncertainty quantification, we end by suggesting a few signposts pointing towards an informative approach.

  • First, uncertainty quantification should have a component that is model independent. All models are idealizations and so all models are wrong. An uncertainty quantification that is conditional on the truth of a model or model form is insufficient.
  • Second, the method should be widely applicable in a transparent and consistent manner. As already discussed, several approaches to uncertainty quantification have been proposed in the climate context but fall short in their generalizability or clarity.
  • Third, the outcomes should be falsifiable. Scientific theories can never be strictly verified, but to be scientific they must be falsifiable. Whether theories succumb to crucial experiments or expire under a ‘degenerating problem shift’, the principle of falsifiability remains a point of departure.

With regard to uncertainty quantification, falsification must be understood probabilistically. The point of predicting the future is that we should not be too surprised when it arrives. Comparing new observations with the probability assigned to them by our uncertainty quantification gauges that degree of surprise. With this in mind, outcomes should also be subject to arduous tests. Being falsifiable is necessary but not sufficient. As a scientific claim, uncertainty quantification must withstand serious attempts at falsification. Surviving arduous tests is sometimes called confirmation or validation, not to be confused with verification. Updating a prior distribution does not constitute validation. Bayesian updating is the correct way to learn on a likelihood and prior distribution, but it does not mean that the result of the learning is valid. Validation ensues when posterior ‘prediction intervals’ are shown to capture out-of-sample (for example, future) observations with requisite relative frequencies. 
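
The validation criterion in that closing sentence — posterior prediction intervals capturing out-of-sample observations with the requisite relative frequencies — reduces to an empirical coverage check. A toy illustration with simulated data (the distribution and intervals are invented for the example):

```python
import random

def empirical_coverage(intervals, observations):
    """Fraction of out-of-sample observations that fall inside their
    stated prediction intervals."""
    hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, observations))
    return hits / len(observations)

# A nominal 90% interval for a standard normal variable is (-1.645, 1.645).
# If the uncertainty quantification is valid, roughly 90% of new
# observations should land inside it.
random.seed(1)
observations = [random.gauss(0.0, 1.0) for _ in range(2000)]
intervals = [(-1.645, 1.645)] * len(observations)
coverage = empirical_coverage(intervals, observations)
```

A coverage far below nominal — say, 60% of observations inside the stated “90%” intervals — would count as a probabilistic falsification of the claimed uncertainty, in exactly the sense the paper describes.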

JC reflections

I have been following Roger Cooke’s research closely for the last year or so, and we have begun an email dialogue. I am very pleased to see that his ideas are being seriously applied to issues of relevance to the IPCC and climate change.

Michael Oppenheimer, although regarded as an activist and sometimes an alarmist about climate change, has made important contributions in assessing and criticizing the IPCC process [link], particularly with regard to the consensus process and the treatment and communication of uncertainty.

I find this statement made by Oppenheimer in the press release to be particularly stunning:

When it comes to climate change, however, the procedure by which experts assess the accuracy of models projecting potentially ruinous outcomes for the planet and society is surprisingly informal.

It is well nigh time for the IPCC and other assessments to up their game and add some rational structure to their assessment and uncertainty analysis. The proposal by Oppenheimer et al. provides an excellent framework for such a structure.

However, there is one missing element here, which was addressed in my paper Reasoning About Climate Uncertainty. It relates to the point in the logical hierarchy at which expert judgment is brought in. Excerpt:

Identifying the most important uncertainties and introducing a more objective assessment of confidence levels requires introducing a more disciplined logic into the climate change assessment process. A useful approach would be the development of hierarchical logical hypothesis models that provide a structure for assembling the evidence and arguments in support of the main hypotheses or propositions. A logical hypothesis hierarchy (or tree) links the root hypothesis to lower level evidence and hypotheses. While developing a logical hypothesis tree is somewhat subjective and involves expert judgments, the evidential judgments are made at a lower level in the logical hierarchy. Essential judgments and opinions relating to the evidence and the arguments linking the evidence are thus made explicit, lending structure and transparency to the assessment. To the extent that the logical hypothesis hierarchy decomposes arguments and evidence to the most elementary propositions, the sources of disputes are easily illuminated and potentially minimized.

Bayesian Network Analysis using weighted binary tree logic is one possible choice for such an analysis. However, a weakness of Bayesian Networks is their two-valued logic and inability to deal with ignorance, whereby evidence is either for or against the hypothesis. An influence diagram is a generalization of a Bayesian Network that represents the relationships and interactions between a series of propositions or pieces of evidence. Three-valued logic has an explicit role for uncertainties, recognizing that evidence may be incomplete or inconsistent, of uncertain quality or meaning. Combination of evidence proceeds generally as for a Bayesian combination, but is modified by the factors of sufficiency, dependence and necessity.

Hall et al. conclude that influence diagrams can help to synthesize complex and contentious arguments of relevance to climate change. Breaking down and formalizing expert reasoning can facilitate dialogue between experts, policy makers, and other decision stakeholders. The procedure used by Hall et al. supports transparency and clarifies uncertainties in disputes, in a way that expert judgment about high level root hypotheses fails to do.
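
The three-valued bookkeeping described above can be made concrete with a toy hypothesis tree. In the sketch below, the evidence categories, masses and weights are invented, and a simple weighted average stands in for the sufficiency/dependence/necessity-modified combination the excerpt refers to; each evidence item carries explicit mass for “supports”, “contradicts” and “uncertain”:

```python
# Each evidence item assigns mass to (supports, contradicts, uncertain),
# summing to 1, so ignorance is carried explicitly up the tree rather than
# being forced into a two-valued for/against split.

def combine(children, weights):
    """Propagate (supports, contradicts, uncertain) masses up one level
    of the hypothesis tree as a weighted average."""
    total = sum(weights)
    return tuple(
        sum(w * c[i] for w, c in zip(weights, children)) / total
        for i in range(3)
    )

# Hypothetical leaf-level judgments feeding a root hypothesis; the
# instrumental record is (arbitrarily) weighted double here.
paleoclimate = (0.5, 0.2, 0.3)   # sizeable explicit ignorance
instrumental = (0.7, 0.2, 0.1)
model_based  = (0.4, 0.3, 0.3)

root = combine([paleoclimate, instrumental, model_based],
               weights=[1.0, 2.0, 1.0])
```

The point of the exercise is less the arithmetic than the bookkeeping: disagreements surface at the leaves, where the masses and weights are assigned, rather than in an opaque judgment about the root hypothesis.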

I regard it as a top priority for the IPCC to implement formal, objective procedures for assessing consensus and uncertainty. Continued failure to do so will be regarded as laziness and/or political protection for an inadequate status quo.

407 responses to “Expert judgement and uncertainty quantification for climate change”

  1. Pingback: Expert Judgment And Uncertainty Qualification | Transterrestrial Musings

  2. “In theory there is no difference between theory and practice. In practice there is.” – Yogi Berra

  3. The IPCC held similar discussions after AR2, AR3 and AR4 — with zero impact on AR3, AR4 and AR5, respectively. I suspect this is because these methods are complicated, labour-intensive, and controversial.

    • Methods such as expert elicitation surely aren’t controversial?

      Test groups using scientists to assess a problem (same scientists, same problem, using different methods) would show how sensitive their conclusions and confidence assessment are to the method used.

      • Willard, if you haven’t added Omega yet, you might want to give it a try.

      • Expert elicitation is quite controversial. Sitting around talking about mechanisms, rather than doing equations? Estimating probabilities by introspection, rather than running Monte Carlos? I know a good few academics who regard expert elicitation as wiffly-waffly stuff for hand-waving wimps.

    • RT, the IPCC is supposed to be sciency, and sometimes science is hard. Being hard is no excuse for not being sciency.
      I suspect the reason nothing has been done is along the lines of ‘the science is settled’. Because a truer assessment of uncertainty would unsettle it.
      I speculate that this paper is appearing as an initial part of a larger climbdown as Mother Nature continues to prove that many previously ‘settled’ IPCC “projections” (no forecasts, of course) are just wrong.
      Antarctica losing ice is an example relevant to this paper. GRACE loss determination relied heavily on GIA guesstimates from models. When differential GPS was finally used to determine an observed GIA in 2013, oops: GRACE shows that Antarctica is NOT losing ice. An observation replicated by Zwally’s analysis of ICESat in 2015. Climate Audit had a detailed post on this not too long ago.
      So not using robust methods to quantify uncertainty may help hasten the end of CAGW nonsense.

  4. David Wojick

    The issue tree is the objective (that is, not subjective) logical hypothesis hierarchy, because it is constructed from exactly what is said by those debating the hypothesis. See my crude little textbook (from 1975, after I discovered the issue tree in 1973): http://www.stemed.info/reports/Wojick_Issue_Analysis_txt.pdf
    The issue tree is the fundamental logical form of all issues.

    I do not believe in quantification, because major issues are far too complex for that, but that is just my opinion.

    • The point I have been making is that if you break things down sufficiently, then some of the items lower in the hierarchy can be quantified. The exercise of what can be quantified versus what can’t be quantified is important info in itself.

      • David Wojick

        Quite possibly but the accuracy of a complex climate model is an extremely complex issue. It must be.

      • DW, tend to disagree. It is fairly easy to show the models cannot be right by comparing their output to observation. It is not much harder to show why. To adequately model convective cells, a grid cell about 2.5-4 km on a side is the minimum resolution for things like precipitation that influence water vapor feedback. NCAR says doubling resolution by halving grid sides takes 10x computing power owing to the CFL constraint. Finest CMIP5 resolution is 110 km, typical 250 km. A 6-7 orders of magnitude computational intractability problem. So important processes like convection cells have to be parameterized. There are two ways to tune these parameters so that hindcasts look reasonable. But both involve attribution of the hindcast result between anthropogenic and natural causes. The IPCC attribution to anthropogenic CO2 (by charter) is faulty, but by how much is impossible to say. The result is that model/obs comparisons show them to be too sensitive by about a factor of two, and not to include whatever natural variations caused the pause-era model/temperature divergence since 2000. There is no uncertainty in any of those statements.
        Btw read your book on issue trees. Very interesting tool. Thanks for the link. Brought memories of Raiffa’s Decision Analysis, which seems related.

      • David Wojick

        Ristvan, I am sure the modelers have counter arguments to your arguments, else we would not be here. And you have counter arguments to their counter arguments, hence the issue tree structure.

      • David Wojick |
        Ristvan, I am sure the modelers have counter arguments to your arguments, else we would not be here.

        Not all arguments are made in good faith.

      • David Wojick

        Pmhinc, you are just name calling.

      • DW, I am pretty sure they do not. That is where we perhaps differ. You think this is a complex policy problem with lots of different legitimate perspectives. I think it is less complicated, was rigged from the beginning, and has been colored very strongly politically by warmunist watermelons. I had no ax to grind in climate until I discovered genuine, deliberate misrepresentation to congress by NSRF concerning food impacts while researching Gaia’s Limits back in 2011. My very first guest post here, in case you don’t have the much longer and more complicated book. So I suspect that ‘genuine’ issue disagreement causing the complication is not what we are seeing. Your own experiences obviously differ. Me, I was taught to mediate miner/owner disputes differently.
        In the model comment specifically, I reference their published model computational limits. Their published parameterizations. Their published parameter tuning procedures. Their published CMIP5 experimental protocol. The self-evident model/observational discrepancies that result, evident from the KNMI database. The UNFCCC charter to the IPCC is a public document; they were chartered to research anthropogenic causes of global warming. No uncertainty about any of that. That they might disagree is fully accepted. That they could disagree factually is not. Perhaps therein lies our difference in perspective here.

      • From my wargame design perspective, Dr. Curry has hit the nail on the head. For A to be true, the components of A must add up. If they exceed A, then there is something wrong — sometimes with A, more often, with the components.

        For a historical game to have any “accuracy” at all, it needs to be designed top-down. The climate models remind me of those guys who think you can “simulate” the Eastern Front using Advanced Squad Leader rules. But in doing so, A doesn’t rule, the subcomponents do. And all you have to do is tweak the “echo fire” rules and the Germans (or Russians) “win the war” every time.

        But we know that WWII did not revolve around echo fire rules. Go that way and you wind up where CMIP is: playing crack-the-whip with your initial inputs.

      • David Wojick |
        Pmhinc, you are just name calling.

        Certainly not my intent. Rightly or wrongly, “ristvan” attempts to make legitimate scientific arguments. Absent political and financial motivation for arguments supporting the current models (e.g. that the physics and natural variability are well understood), we would not be hearing the “the science is settled” arguments we are hearing. So no, many of the “counter arguments” are not being made in good faith. We are where we are more from political and financial motivation than from legitimate scientific argument.

      • ristvan | April 28, 2016 at 6:01 pm |
        DW, I am pretty sure they do not. That is where we perhaps differ. You think this is a complex policy problem wirh lots of different legitimate perspectives.

        It is a complex subject with one legitimate perspective and a lot of perspectives who don’t know/don’t care/won’t admit who their parents are.

        Any complex problem is a bunch of simple problems in a large bag screaming to get out. If you drive up the tree far enough you get to the leaf (fact) level. Until all the leaves to a branch are known it doesn’t make sense to move down the branch, the branch should be marked “under advisement”.

      • David Wojick

        For example, some skeptics argue that the lack of fit between the models and recent observations (the hiatus) shows that the models are wrong. The warmers have at least two basic responses.

        First, the models are not designed to include short term natural variability, which (they say) is what is causing the hiatus. In fact NSF is spending at least a hundred million a year investigating decadal natural variability in order to resolve this problem.

        Second, the lack of fit is not large, especially given the properly adjusted (they say) observations. Note that what the facts are is itself extremely controversial, perhaps the greatest issue of all.

        Of course the skeptics have responses to these responses, to which the warmers then have responses, and so it goes, level by level, an issue tree.

        Anyone who thinks the warmers do not have coherent arguments to great depth does not understand the situation. This is a deep scientific debate.

      • David W., I’m happy to see you agree the science is not settled. Of course, the warmists themselves pegged 30 years as the minimum amount of time to judge “climate change.” Now that’s not panning out and they now want to make that definition what exactly? They are moving the goalposts time after time, making it obvious they are clueless.

      • David Wojick said:

        This is a deep scientific debate.

        It’s “deep” alright.

        It involves the three E’s: egoism, egotism, and egocentrism.

      • Here’s a more complete quote:

        “The shift to a cleaner energy economy won’t happen overnight and it will require some tough choices along the way. But the debate is settled. Climate change is a fact. And when our children’s children look us in the eye and ask if we did all we could to leave them a safer, more stable world, with new sources of energy, I want us to be able to say yes, we did.”

        Notice how the political agenda — “the shift to a cleaner economy” — comes first, and then afterwards the sales pitch used to sell it — the science.

        Then Obama shifts his discourse from the present to the future, the hallmark of all totalitarian propaganda. The sacrifices in the here and now, he urges, are necessary so that we can leave “our children’s children” a “safer, more stable world, with new sources of energy.”

        Of course Obama never makes his case about how, and if, this rapturous “cleaner energy economy” can, or will, be achieved. It is merely assumed to be self-evident and beyond dispute.

      • > For A to be true, the components of A must add up.

        Most logics don’t even have components. All they have is As, Bs, etc. So the word “for A to be true” doesn’t cut it.

        There are also many systems where additivity is relaxed.

      • Steven Mosher

        This is too funny

        From amateurs in modelling

        “But we know that WWII did not revolve around echo fire rules. Go that way and you wind up where CMIP is: playing crack-the-whip with your initial inputs.”

        The inputs are set by other groups, typically not modellers.

        So nobody plays crack the whip with inputs.

        you play crack the whip with the bottoms up physics

        same in war gaming

        http://calhoun.nps.edu/bitstream/handle/10945/44169/Lucas_Fitting.pdf?sequence=4

        http://www.brookings.edu/~/media/research/files/articles/2003/1/winter-iraq-ohanlon02/20030122.pdf

      • Most logics don’t even have components.

        But climate models (good or otherwise) do. And it is wise to treat them accordingly. Climate modeling is closer to game design than most of those who do it realize or care to admit.

        As for me, I’m fine with models. But they need to be set up correctly, with the ability to drill down, improve, expand, update. Then, at least, when differences of opinion occur, they can be clearly identified.

      • Evan Jones said:

        But they [the models] need to be set up correctly, with the ability to drill down, improve, expand, update.

        The climatariat doesn’t do that with the models, only with the “empirical” data, which can be easily modified to fit the models, justified by any number of convenient pretenses.

        The models are similar to Saint Vincent of Lerins description of the Catholic Church in the 5th century:

        The Church has become a faithful and ever watchful guardian of the dogmas which have been committed to her charge. In this secret deposit she changes nothing, she takes nothing from it, she adds nothing to it.

      • If someone actually wanted to get good quality science, they’d design a superior quality process. Instead, we have the IPCC clown show and circus. Obviously, some really powerful people have no interest in the best quality science.

    • Maybe the very act of attempting a logical hypothesis hierarchy would reduce overall bias, which can amplify via less visible and more informal processes because social effects are less constrained.

      • David Wojick

        Something like that certainly can happen, because opponents are forced to recognize and respond logically to each other’s arguments. I once did a fairly extreme case of this, with coal miners and mine operators, who are often enemies and were in this case. Banning emotional responses had a powerful calming effect and good progress was made toward mutual understanding and conflict resolution.

      • David, i think your comment here says it all. As it stands today, the agw debate seems to be more about spin versus spin than a rational discussion about the science…

      • David Wojick and afonzarelli +100. Emotional responses and spin versus spin are simply unscientific rationalising, as opposed to a proper rational process, and this is the prime reason that the debate has been protracted for so long.

      • afonzarelli |
        …the agw debate seems to be more about spin versus spin than a rational discussion about the science…

        I personally have learned much following Climate, Etc. Your comment implies all commenters (I assume including our hostess) are just spinning fairytales and making stuff up as they go along. Not sure why you are following or commenting on a blog that you believe is just spin versus spin.
        My thanks to those who make legitimate scientific arguments and are making a serious attempt to understand the science underlying our climate.

      • I am also appreciative of the commenters (including Judith) who are making serious attempts to understand and explain the science underlying climate change. Their posts are immediately recognisable when they appear, which IMO is too infrequently for my liking.

      • pmhinsc, perhaps your emotional knee jerk response here is a good example of what i’m talking about. (of course, i’ll cut you some slack as my comment was not quite as well articulated as it ought to have been) i merely stated that it is MORE about spin than rational discourse and that being on the whole… i agree with you that many an argument is being made in bad faith from the agw believer side of things. and on the other hand, much of the argumentation coming from the skeptic side is less than desirable as well. polarization exists on both sides and that makes for rational discussion (to quote peter here) “which IMO is too infrequent for my liking”

      • pmhinsc,

        I share your objections to a purely relativist or constructivist point of view.

        If we accept that the debate is merely about “spin versus spin,” then there is no right and wrong. The argument becomes one of wrong vs. wrong, error vs. error.

        That type of debate certainly does exist. But is this one of them?

        The alternative is articulated by Paul Boghossian in Fear of Knowledge: Against Relativism and Constructivism:

        The intuitive view is that there is a way things are that is independent of human opinion, and that we are capable of arriving at belief about how things are that is objectively reasonable, binding on anyone capable of approaching the relevant evidence regardless of their social or cultural perspective.

    • Mosh, if you think most good strategic (as opposed to tactical) wargames are designed from the bottom up, you need to think again. I have designed a few. I have used both approaches. And a climate model is a strategic wargame.

  5. A new paper at Nordic Science confirms Earth and humanity are totally at the Sun’s mercy,

    http://sciencenordic.com/sun-can-emit-superflares-every-1000-years

    as suggested fourteen years ago in “Super-fluidity in the solar interior: Implications for solar eruptions and climate,” Journal of Fusion Energy 21, 193-198 (2002):

    http://www.springerlink.com/content/r2352635vv166363/     

    The lead author of the new paper, Christoffer Karoff, Department of Geoscience, Aarhus University, Denmark, said, “We definitely hadn’t expected to find superflare stars with equally weak magnetic fields as our own. This means that the Sun could create a superflare and it’s a very scary thought.” 

  6. JC, the link to the Nature Climate Change abstract actually points to the supplementary information.

  7. I am beginning to think Western society is developing political anorexia.
    For example, in my state, the new official goal is zero traffic deaths.
    “No Child Left Behind” is now officially called “Every Child Succeeds”
    Certainty?
    The future quantifiable?
    Is this an achievable goal?
    Somehow I don’t think we’ll ever be thin enough.

    • rr, the first is easy to achieve. No cars, no bicycles, no horses, no traffic. Sort of like the warmunist decarbonization solution.
      The second is either easy or very hard. Easy if you define success as graduating; just give everybody an A to boost their ego. Sort of like not telling fat kids they are fat lest their self-esteem get hurt. Very hard if success is defined as minimal competency in reading, writing, and arithmetic. Many examples, including classroom size, standardized testing, juvenile type 2 diabetes, and women’s dress sizes, are in my ebook The Arts of Truth. You might like the climate chapter. This whole thread is about ‘Arts of Truth’ concerning climate science uncertainty.

    • “No Child Left Behind”

      Head start failed. You can spend a lot of money to teach an 80 IQ kid 100 IQ material. Intelligence is genetic. Otherwise in the old world people at the equator wouldn’t have IQs near 60 and people near the poles have an IQ over 100 (the exceptions are people that migrated recently by anthropological standards).

      I have concerns that these programs work by compressing the field. Things look a lot better if you not only speed up the kids at the back of the pack but trip up the kids at the front of the pack.

      • It is best to let the states and local governments run the school system.

        If it were up to me, I would let students work at their own pace, using material similar to that in Khan Academy. The teacher would be there to help individuals as needed, not to present the primary material. Students move to the next “grade” no matter what their progress. And there would be more subject flexibility, including shop-type classes for those who prefer working with their hands. At the end of their senior year, they would graduate with a record of highest level achieved in each subject.

        No mass testing needed.

  8. The press release said:

    For policymakers and the public, a more transparent and consistent measurement of how scientists perceive the accuracy of climate models could help instill more confidence in climate projections as a whole, said Sander van der Linden.

    Judith Curry said:

    It is well nigh time for the IPCC and other assessments to up the level of their game and add some rational structure to their assessment and uncertainty analysis. The proposal by Oppenheimer et al. provides an excellent framework for such a structure.

    Late is better than never, I suppose, but it looks to be an uphill battle.

    Trust is difficult to gain, easily lost, and almost impossible to win back.

    http://www.rasmussenreports.com/public_content/politics/current_events/environment_energy/69_say_it_s_likely_scientists_have_falsified_global_warming_research

    • They are already transparent, many already can see past their alarmist smoke and mirrors, they fool fewer people every day, but I do not believe they falsified research on purpose, I think they do not understand climate, they really do not even suspect. Data does not match model output. They keep trying to adjust data to match model output. They honestly believe they are doing the right thing. This is way beyond understanding.

      • “…but I do not believe they falsified research on purpose, I think they do not understand climate, they really do not even suspect.”

        +100
        It’s not a conspiracy, it’s ignorance and hubris.
        Ignorance that peer-reviewed papers not only can be wrong, but often are.
        Hubris that a paper never challenged is therefore correct.
        Not so much the facts (data) but the speculation (conclusions).

      • Number of retractions from peer reviewed journals for the top 30 retracted authors – 995 at least.

        Compromised peer review (pal review in many cases) is increasingly becoming a reason for retraction.

        Scientific fraud is practised by real scientists from time to time, but apparently climate scientists never resort to fraud, and would never attempt to subvert the peer review system. They are totally superior to all other scientists in all respects. And if you believe that, you will get on famously with self-styled climatologists!

        Cheers.

  9. With no insight into how climate projections are judged, the public could take away from situations such as the IPCC’s uncertain conclusion about Antarctica in 2007 that the problems of climate change are inconsequential or that scientists do not know enough to justify the effort (and possible expense) of a public-policy response, he said.

    With total and absolute insight into how climate projections are judged, the public should take away from situations such as the IPCC’s uncertain conclusion about Antarctica in 2007 that the problems of climate change are inconsequential and that scientists do not know enough to justify the effort (and possible expense) of a public-policy response.

    • popesclimatetheory said:

      …scientists do not know enough to justify the effort (and possible expense) of a public-policy response….

      Nita Farahany makes the same argument in this lecture, regarding a totally different scientific field, that “the science really isn’t ready” and that the scientific community needs to speak out “before the science gets out of control in its application.”

      Now compare Farahany’s scientific modesty to the violently certain claims of Jeff Nesbit, director of public affairs for two prominent federal science agencies:

      With [2013] IPCC Report, Climate Change is Settled Science
      http://www.livescience.com/39954-with-ipcc-report-climate-change-is-settled-science.html

      [T]he central portion of the artificial science debate — the one that has vexed policy makers for decades — is now over.

      Climate change is real, human beings are responsible for a good portion of it, and we need to take the issue seriously sooner rather than later and start to do something about it.

      [S]cience [is] settle[ed] on the ways in which climate change drives extreme weather events like Superstorm Sandy, massive wildfires in the west, extended droughts that are causing water shortages or once-in-a-thousand-year flood events….

      When the plenary session of the Intergovernmental Panel on Climate Change (IPCC) finishes its work late Thursday night and issues its report on the science basis of climate change to nearly 200 governments, it will essentially end the climate-science portion of the debate for policy makers and government officials.

  10. That’s their bag, I guess.

    To me what environmental experts need for credibility are (a) a few decades worth of demonstrably accurate forecasts; and (b) a winnowing out of people who are (nearly) always wrong. About the latter, why are Paul Ehrlich and John Holdren considered “experts” by anybody?

    What the rest of us need is continued empirical research and continuous comparison of model results, forecasts, extrapolations, etc., to subsequent events.

  11. The proposal by Oppenheimer et al. provides an excellent framework for such a structure.

    It’s perhaps also worth noting that back in 2013, Oppenheimer (of EDF and IPCC fame and glory) had teamed up with Oreskes to submit a proposal to the IPCC along similar lines. See:

    Will IPCC accede to redefining “neutral”?

    It is, perhaps, to Oppenheimer et al‘s credit that Oreskes was apparently not a co-author on this paper.

    • “The ExCom [IPCC Executive Committee -hro] believes that “Assessing Assessments” would be a worthwhile exercise which would strengthen the credibility of the scientific community as being transparent and objective.”

      Makes me think:
      Who will assess the assessors?

  12. I read:
    With no insight into how climate projections are judged, the public could take away from situations such as the IPCC’s uncertain conclusion about Antarctica in 2007 that the problems of climate change are inconsequential or that scientists do not know enough to justify the effort (and possible expense) of a public-policy response, he said.

    I wrote:
    With total and absolute insight into how climate projections are judged, the public should take away from situations such as the IPCC’s uncertain conclusion about Antarctica in 2007 that the problems of climate change are inconsequential and that scientists do not know enough to justify the effort (and possible expense) of a public-policy response.

    That was about as good of a line to promote a response as I have seen in a long time.

    This relates to the actual point in the logical hierarchy where expert judgment is brought in.

    There is the problem: which expert judgments? Consensus experts or skeptic experts? And if the latter, which skeptic experts?

    There are two major divides.
    1 – CO2 is responsible for everything important about temperature and sea level rise and not much else is important.
    2 – CO2 is responsible for nothing to do with temperature and sea level rise that is important but it is very important to everything that is green that grows. On this side there is much disagreement about what is most important and least important.

    Again, which of the experts in all of this will you bring in? You likely know the answer you prefer and will pick the proper experts.

    I do not know of any effort to bring these different factions together to work together to find the right answer. I do want that to happen. Some of us are meeting on Sunday May 1 to work on that. If you are interested in supporting that effort, email me at alexpope13@gmail.com
    If you are in the Houston area, or can be there, we are meeting in Webster Texas, let me know.
    If not too many respond, lunch is complimentary.

    • David Wojick

      If the hierarchy does not include disagreements then it is not a logical hypothesis hierarchy. Every hypothesis has arguments for and against it. Which experts are used is not particularly important so long as all the major issues are raised.

      On the other hand, methods that involve weighting experts and quantifying the aggregate results will be extremely sensitive to which experts are used. This makes such methods useless in my view.

      • DW, second point is a real big problem. Was noodling it in a comment below. How does one get Mann to be honest about paleoproxy uncertainty? Or Karl about SST uncertainty? Or Dessler about the sign of cloud feedback uncertainty when his comments indelibly archived at NASA claim he proved it is “significantly positive” when his regression slope is barely positive, the r^2 is a stunningly awful 0.02, and the guts of his paper say the analysis cannot rule out it being negative?

      • David Wojick

        Ristvan, are you claiming that those who disagree with you are dishonest? I disagree.

      • No. Well, OK in Mann’s case there is plenty of circumstantial evidence that he is, in fact, dishonest about his science. Nature tricks, hide the decline, and all that. Dessler’s paper, no. His archived comments on the NASA website about cloud feedback, yes. We shall see what the House oversight committee subpoena uncovers about Karl.

        My only point was that ‘consensus’ scientists have strong professional motivations toward cognitive biases on things like subjective climate uncertainty. A good recent example is Gavin Schmidt’s PR announcement of 2014 as hottest ever, when digging showed there was actually only a 32% chance, given GISS’s own uncertainty estimate, that his statement was correct. Both Berkeley Earth and HadCRUT4 were more honest about the uncertainty and said it was too close to call.
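        A claim of that kind can be sanity-checked with a short Monte Carlo: draw each candidate year’s anomaly from its error distribution and count how often the headline year actually comes out on top. A minimal sketch; the anomaly values and standard errors below are hypothetical placeholders, not the actual GISS numbers:

```python
import random

def prob_hottest(estimates, target, n=100_000, seed=0):
    """Monte Carlo probability that `target` is truly the hottest year,
    given (mean, standard error) anomaly estimates per candidate year."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        # draw one realization of every year's true anomaly
        draws = {y: rng.gauss(mu, sd) for y, (mu, sd) in estimates.items()}
        if max(draws, key=draws.get) == target:
            wins += 1
    return wins / n

# hypothetical anomalies (deg C) and standard errors for three close years
est = {"2014": (0.68, 0.05), "2010": (0.66, 0.05), "2005": (0.65, 0.05)}
p = prob_hottest(est, "2014")  # far from certain when error bars overlap
```

When the years overlap within their error bars like this, the nominal record year wins well under 100% of the draws, which is the point of quoting a probability rather than declaring a record.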

      • “This makes such methods useless in my view.”

        I don’t think that’s true – I think if you document the method well enough, then you can perform a sensitivity analysis on expert selection to get an idea of how much expert selection can “spin” the result. That’s the whole point, isn’t it?

    • dogdaddyblog

      Mr. Pope, have you considered streaming the meeting over the internet? If cost is an issue, maybe interested people would contribute.

      Dave Fair

  13. The paper’s SI is worth a read to understand probabilistic inversion, with which I was until today unfamiliar. The radioactive plume example was helpful. One takeaway is that software already exists to do PI, so for many of the climate science subquestions (JC and DW comments above) it could be applied by the IPCC without difficulty. WG1 is already a multi-topic meta-analysis of the literature, so the subtopic ‘experts’ are already identified. Polling can be by email, with response ‘guaranteed’ by the COP21 enforcement mechanism: name and shame those who don’t respond. A big problem is that many so polled might fear the end of their grant gravy trains and/or warmunist club expulsion if honest, so robust anonymity mechanisms would need to be in place. Maybe some ‘audit’ entity independent of the IPCC does the uncertainty PI work and just provides results?
    But this would so unsettle the supposedly settled science (by loosing many uncertainty monsters) that it would defeat the political UNFCCC agenda the IPCC was chartered to serve. So, as Tol commented: there has been much talk but no action. Unlikely to change.
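    In its simplest one-variable form, the idea behind probabilistic inversion is easy to sketch: reweight an equally weighted model ensemble so that the weighted distribution reproduces the quantiles elicited from experts. Real applications, including the paper’s Antarctic example, are multivariate and need iterative proportional fitting; the sketch below covers only the single-variable case, with invented numbers:

```python
import bisect

def reweight_to_quantiles(samples, q_levels, q_values):
    """One-variable probabilistic inversion: weight each model sample so
    the weighted ensemble matches expert quantiles q_values at levels
    q_levels (e.g. 5/50/95%). Multivariate cases need iterative fitting."""
    qv = sorted(q_values)
    # bin index per sample: bin i lies between expert quantiles i-1 and i
    bins = [bisect.bisect_left(qv, x) for x in samples]
    # target probability mass per bin: 0..q1, q1..q2, ..., qk..1
    edges = [0.0] + sorted(q_levels) + [1.0]
    masses = [edges[i + 1] - edges[i] for i in range(len(edges) - 1)]
    counts = [bins.count(b) for b in range(len(masses))]
    # an empty bin would signal infeasibility (expert range outside
    # the model ensemble's range); here every bin is assumed occupied
    return [masses[b] / counts[b] for b in bins]

# hypothetical: 100 equally likely model samples of an ice-loss quantity,
# with expert-assessed 5th/50th/95th percentiles of 20, 60, 90
samples = list(range(100))
weights = reweight_to_quantiles(samples, (0.05, 0.5, 0.95), (20, 60, 90))
```

After reweighting, half the probability mass sits at or below the expert median of 60, regardless of where the raw ensemble put it.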

    • A big problem is that many so polled might fear the end of their grant gravy trains and/or warmunist club expulsion if honest, so robust anonymity mechanisms would need to be in place.

      What you are basically saying is that these scientists aren’t concerned about telling the truth if it affects their livelihood in a negative way. And I am not sure how anonymity would affect their response, given that the result could still affect their ability to get grants, as you conclude.

      • J, yup, I am asserting folks tend to their personal bacon. You think not? Provide saintly clisci examples in real time.
        And I am echoing the Bengtsson GWPF episode. And the Lomborg Australia episode. Yes, I am saying cliscis are afraid to tell the truth for fear of their careers, with a few notable exceptions that simply prove the general rule. You have counter examples other than faux Muller of BEST?

      • Yes, I am saying cliscis are afraid to tell the truth for fear of their careers

        You mean lying, right? And to lie about work that has such important policy implications is not just ‘tending to their personal bacon’; it’s what I would call sociopathic behavior.

        I would call your accusation that they are not telling the truth, speculation. And I don’t think it’s very productive to call people you happen to disagree with liars based on speculation.

      • J, I gave specific examples, named and numbered. Dunno if your general objection is true or not. Why not deal with my specific examples? My hypothesis is because you cannot, so arm wave your general indignation instead.
        Get used to more incoming targeted sniper fire. You don’t like it, then get off the battlefield.

      • You gave specific examples of scientists lying because they were afraid to tell the truth? What exactly did you provide?

      • Joseph,

        You wrote –

        “. . . it’s what I would call sociopathic behavior.”

        Well, I suppose misrepresentation, attempting to prevent journals from publishing dissenting work, and claiming credit for an unawarded Nobel Prize might appear sociopathic to some.

        On the other hand, according to a leading self proclaimed climate scientist –

        “I’ve never requested data/codes to do a review and I don’t think others should either.” Who needs data or codes to review a scientific paper? A climatologist exhibiting the finest scientific standards. If it’s a pal’s paper, it’s above reproach. If it’s anyone else’s, reject it.

        Climatological reviewers don’t need no stinkin’ data or code! Steven Mosher would be appalled! He’s a scientist after all!

        Oh, wait . . .

        Cheers.

      • I don’t know, Mike. There are thousands of scientists doing climate science related research. Are they all sociopaths? Are they all lying?

      • Joseph, there are thousands of scientists doing climate science related research. Are any of them sociopaths? Are any of them lying? Are any of them using unvetted, unverified mathematical and experimental procedures? Are any of them predicting apocalypse based on flimsy evidence? Any?? Can you name some???

      • Are any of them using unvetted, unverified mathematical and experimental procedures?

        Are you suggesting that there is some conspiracy to have fraudulent research published?

        Are any of them predicting apocalypse based on flimsy evidence? Any?? Can you name some???

        I haven’t seen any scientist say that catastrophic warming is certain, only that it is possible if we continue BAU. If you have an example, I would like to see it. And what about the thousands of other scientists who publish research related to climate science? Are they all lying?

      • Notice how Joseph introduced the conspiracy word when I said nothing of conspiracy. Good example of dishonesty, that.

      • We all know how the propaganda game is played Joseph (Goebbels).

      • Goebbels made up accusations about Jews and others. Sounds a lot like the smears I hear coming from some “skeptics” about climate scientists.

      • Goebbels produced propaganda in order to achieve H*tler’s ends by manipulating the German public. Kind of reminds me of the CAGW doom and gloomers.

      • jim2,

        The CAGW propaganda has the same melodramatic flair to it, with the same resonances of Armageddon.

        “During the war, the lie most effective with the whole of the German people was the slogan of ‘the battle of destiny of the German people’,” Hannah Arendt explains in Eichmann in Jerusalem.

        “It was a matter of life and death for the Germans, who must annihilate their enemies or be annihilated.”

      • Steven Mosher

        There is really no discussion to be had with people who accuse you of lying. They can never back down, even if you can prove what you say is true. The accusation itself really means talking should cease.

      • Joseph,

        You wrote –

        “I don’t know, Mike. There are thousands of scientists doing climate science related research. Are they all sociopaths? Are they all lying?”

        I presume you are trying to make some point here. Maybe you should have stopped after your first three words.

        But in answer to your silly Warmist question, I have to reply I don’t know. Give me some names, and I might be able to tell you why I think they are either fools or frauds, based on what they said or what they wrote.

        Now, confusing the picture by including all scientists doing climate-related research is going a long way, even for a dedicated Warmist. Do you also include all physicists, chemists, mathematicians, mammalogists, Steven Mosher, or anyone else who claims to be a climate scientist?

        You seem to be employing the Warmist tactics of deny, divert, confuse, with the added ploy of adopting false humility.

        First you say you don’t know, and then go on to imply you do, and you’re only pretending not to know.

        You deny what I wrote, and ask your own silly question, hoping I won’t notice the difference. You divert the question by widening the statement to include presumably every scientist in the world, as climatologists claim superior knowledge in all scientific fields. You confuse by being silly enough to assume I have knowledge of the actions of every scientist in the world.

        Put simply, yes, all climatologists who support spending Government or taxpayers’ money based on the non-existent warming abilities of CO2 are obviously sociopaths, knowingly or otherwise. As to lying, if climatologists are representative human beings, then yes, it would be difficult to find a climatologist who never told a lie. It would be well outside mainstream behaviour.

        If you could find even one climatologist prepared to state in writing that he or she has never told a lie in their life, I would be surprised. Many would conclude such an individual is likely mentally deranged. Anything else you’d like to know?

        Cheers.

    • This from a previous thread is just special:

      I also asked each expert a set of eleven ‘seed questions’, for which answers are known, so that their proficiency could be calibrated. As is often the case, several experts were very sure of their judgement and provided very narrow uncertainty ranges. But the more cautious experts with longer time estimates and wider uncertainty ranges did better on the seed questions, so their answers were weighted more heavily. Their views would probably have been poorly represented if the decision had rested on a group discussion in which charismatic, confident personalities might carry the day. Self-confidence is not a good predictor of expert performance, and, interestingly, neither is scientific prestige and reputation.

      Perhaps the IPCC’s problem, aside from using bad studies and having Greenpeace and WWF on site (no bias there), is its process.
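      The weighting scheme described in the quoted passage can be sketched directly: score each expert by how often known seed-question answers fall inside their stated 90% intervals, then weight their judgements by that calibration. This is only a toy stand-in for the full scoring rule in Cooke’s classical model (which also scores informativeness); the expert names and numbers below are invented:

```python
def calibration_weights(expert_intervals, truths, claimed=0.9):
    """Toy performance weighting: experts whose stated 90% intervals
    actually cover ~90% of known seed answers get more weight. A crude
    stand-in for the calibration score in Cooke's classical model."""
    scores = {}
    for name, intervals in expert_intervals.items():
        # empirical coverage of the expert's intervals on the seeds
        coverage = sum(lo <= t <= hi
                       for (lo, hi), t in zip(intervals, truths)) / len(truths)
        # score deviation from claimed coverage; the full model would
        # also reward informative (narrow) intervals
        scores[name] = max(1e-6, 1.0 - abs(coverage - claimed))
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

truths = [12, 35, 7, 60, 22]  # known answers to five seed questions
experts = {
    # overconfident: narrow intervals that miss every seed
    "confident": [(20, 22), (10, 12), (30, 32), (5, 7), (40, 42)],
    # cautious: wide intervals that cover most seeds
    "cautious": [(5, 20), (25, 45), (0, 15), (50, 70), (30, 50)],
}
weights = calibration_weights(experts, truths)
```

The overconfident expert’s narrow intervals miss the seeds and earn a small weight, while the cautious expert dominates, matching the outcome described in the quote.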

    • Joseph said:

      I haven’t seen any scientist say that catastrophic warming is certain.

      Lordy, Lordy!

      Talk about having departed from factual reality.

      • Glenn, do you have any examples?

      • Joseph:

        “We are in a kind of climate emergency now,” said Prof Stefan Rahmstorf, from the Potsdam Institute of Climate Impact Research in Germany. He told Fairfax Media: “This is really quite stunning … it’s completely unprecedented.”
        https://www.theguardian.com/science/2016/mar/14/february-breaks-global-temperature-records-by-shocking-amount

        and

        “This is a very worrying result,” said Bob Ward, policy director at the Grantham Research Institute on Climate Change at the London School of Economics…, it looks like global mean surface temperature is likely to exceed the level beyond which the impacts of climate change are likely to be very dangerous.”

        and

        The report, “Climate Change: a Risk Assessment,” was commissioned by the UK’s foreign office and cowritten by leading environmental scientists from all over the world.

        The analysis reaffirms many well-documented assertions about climate change: Global temperatures are rising drastically, leading to rising sea levels as well as widespread drought and famine, which threaten human lives on all continents, especially in developing nations.

        http://qz.com/452169/warning-from-a-global-team-of-scientists-climate-change-is-as-threatening-as-nuclear-war/

        Google “warnings of global warming” and you will find many, many articles that make similar claims.

      • “We are in a kind of climate emergency now,”

        I am not sure what he means by “emergency”. But I don’t think he is saying that catastrophe is imminent or is occurring now. I would ask him what exactly he means by that. After all, no elaboration was allowed in the article.

        it looks like global mean surface temperature is likely to exceed the level beyond which the impacts of climate change are likely to be very dangerous.”

        “Likely” appears twice in that statement, and nothing about “certainty”. Also, something can be dangerous and not be considered “catastrophic”.

        Global temperatures are rising drastically, leading to rising sea levels as well as widespread drought and famine, which threaten human lives on all continents, especially in developing nations.

        Again something being catastrophic is different from being dangerous or causing harm.

        The analysis reaffirms many well-documented assertions about climate change: Global temperatures are rising drastically, leading to rising sea levels as well as widespread drought and famine, which threaten human lives on all continents, especially in developing nations.

        For future reference, Glenn, you should always go to the source and don’t rely on some author’s interpretation. This is from the actual study:

        http://www.csap.cam.ac.uk/projects/climate-change-risk-assessment/

        This report argues that the risks of climate change should be assessed in the same way as risks to national security, financial stability, or public health. That means we should concentrate especially on understanding what is the worst that could happen, and how likely that might be.

        The report presents a climate change risk assessment that aims to be holistic, and to be useful to anyone who is interested in understanding the overall scale of the problem. It considers:

        What we are doing to the climate: the future trajectory of global greenhouse gas emissions;
        How the climate may change, and what that could do to us – the ‘direct risks’ arising from the climate’s response to emissions;
        What, in the context of a changing climate, we might do to each other – the ‘systemic risks’ arising from the interaction of climate change with systems of trade, governance and security;
        How to value the risks; and
        How to reduce the risks – the elements of a proportionate response.

        And again I see mention of risks and no certainty. Maybe you can find one that we can actually agree on.

      • Sorry about the formatting, ugh.

  14. When you apply this to attribution (see Verheggen’s poll), the more expert the judgment, the more it leans towards higher attribution of warming to CO2. Given this piece of information, I predict that the “skeptics” will quickly dismiss this way of doing things again and go back to their conspiracy ideation to explain it. It’s fine to call for this way of doing things, but then stick to it.

    • This is why you need structured expert judgement. Overconfident people, in independent testing, get a lower weight.

      • Who can judge the more expert people to be overconfident? Sometimes more knowledge just leads to more confidence, and that is the case here. They not only see but understand more lines of evidence.

      • Jim D, Have you ever talked with Jeff Harvey? He explains everything at Deltoid. He is in agreement with both Michael and you. No surprise.

      • Jim D | April 28, 2016 at 7:06 pm |
        Who can judge the more expert people to be overconfident? Sometimes more knowledge just leads to more confidence, and that is the case here. They not only see but understand more lines of evidence.

        Just about anyone can judge them. We just need to pick sensible criteria, measure them, and provide the judges with the information.

        Percent of successful predictions plus accuracy on a seed-question survey would be one way.

        There are enough “never wrong type A”s out there that the “confident expert is the best expert” claim isn’t even close to being on solid ground.

  15. Steve McIntyre

    The Antarctic ice loss estimates in AR5, Oppenheimer’s example, do not show the IPCC in a good light, as the IPCC lead authors (who later were featured in SKS Denial) adopted estimates that were higher than those from an expert group commissioned to examine the topic. During the past decade, glacial isostatic adjustment numbers have come down and, with them, values for current ice mass loss. I did a long post at CA on the topic about six months ago.

    • Nor did the Mann hockey stick show the IPCC in a good light. That example should be brought up early and often when discussing the credibility of the IPCC…

      • But, later data supported Mann.

      • When did British Petroleum start making money? Michael must have been on the payroll a very long time by the looks of things. Is he still getting a check from them?

      • You do know that graph is bullsh*t don’t you?

        Instrumental data has been spliced with proxy data.

        Either plot the instrumental data going back 10,000+ years or plot 100% proxy data.

      • Jim D, would you please throw up your 150K year chart for a comparison?

      • Either plot the instrumental data going back 10,000+ years or plot 100% proxy data

        Can you cite the rule?

      • JCH, “Can you cite the rule?”

        Q: Is the rate of global temperature rise over the last 100 years faster than at any time during the past 11,300 years?

        A: “Our study did not directly address this question because the paleotemperature records used in our study have a temporal resolution of ~120 years on average, which precludes us from examining variations in rates of change occurring within a century. Other factors also contribute to smoothing the proxy temperature signals contained in many of the records we used, such as organisms burrowing through deep-sea mud, and chronological uncertainties in the proxy records that tend to smooth the signals when compositing them into a globally averaged reconstruction. We showed that no temperature variability is preserved in our reconstruction at cycles shorter than 300 years, 50% is preserved at 1000-year time scales, and nearly all is preserved at 2000-year periods and longer. Our Monte-Carlo analysis accounts for these sources of uncertainty to yield a robust (albeit smoothed) global record. Any small “upticks” or “downticks” in temperature that last less than several hundred years in our compilation of paleoclimate data are probably not robust, as stated in the paper.”

        http://www.realclimate.org/index.php/archives/2013/03/response-by-marcott-et-al/

        The spurious uptick at the end is due to the rapidly decreasing number of proxies being “averaged”. This is why I love giving Jimmy D the comedian of the blog award.

      • So is there or is there not such a rule? Is it in a textbook?

      • Rule 1: Don’t compare apples with oranges. Rule 2: When using statistics don’t mix two different data sets when constructing a time series because the statistical basis of differing data sets cannot be simply aggregated. They both need to be normalised first.

      • Jim D, any time you reference by posting Marcott’s academic misconduct (yup, see previous guest post here comparing his thesis version to the Science version) you exhibit either gross ignorance or knowing dissembling. Which?

      • The only complaints about Marcott are about the uptick that wasn’t even in the paper, but only in the press release. The uptick is supported by thermometers, but the critics don’t care about details like that, perhaps because they don’t believe thermometers either. The point of this plot is to show that the Mann estimate was not that bad, and the hockey stick was actually a very good first attempt that still stands up. It captured the millennium’s downward trend, also seen in the ocean2k reconstruction and Marcott’s.

      • And there Jim goes again, STILL defending the hockey stick…
        He just can’t help himself it looks like.

      • The skeptics really need to do their own reconstruction some day. It’s just another area where their team has come up with nothing, probably due to lack of expertise. They should be upset with their own side for such a poor showing in this whole field of science. No wonder they keep getting clobbered by the evidence, but they still seem fairly plucky.

      • Jim

        We have had this discussion many times. You can’t splice a thermometer record onto a reconstruction. The reconstruction is a vague proxy taken from data that is often inaccurate, or centred on data smoothed over a 50- to 500-year time scale. It misses the huge daily, monthly, annual and decadal variation that shows up in thermometers.

        This was one example from my article showing wildly variable CET graphed against the highly smoothed data of various reconstructions.

        tonyb

      • I am not on any team, Jim.
        But if I wanted to be on one, I would probably join the team that makes fun of Medieval Warm Period deniers.

      • The uptick is supported by thermometers, but the critics don’t care about details like that, perhaps because they don’t believe thermometers either.

        C’est magnifique, mais ce n’est pas la science: c’est de la folie. (“It is magnificent, but it is not science: it is madness.”)

      • JCH, “So there is there or is there not such rule? Is it in a textbook?”

        There are a variety of rules and warnings about using averages in correlations, but as of yet there isn’t a specific Marcott or Mann rule; both mix a variety of “novel” methods in an attempt to avoid the basic “rules”.

        With Marcott et al. it is pretty simple to show their error range isn’t accurate by just including out of sample data, like higher resolution reconstructions by the authors of the reconstructions they used and some other updates.

      • But Marcott does have sufficient resolution to show the Medieval Warm blip around 1000 AD, and the “skeptics” should be very excited about this, but for some reason are not.

      • Jim D, “But Marcott does have sufficient resolution to show the Medieval Warm blip around 1000 AD.”

        Yep, about a 200 year smoothed blip. Now all you need to do is smooth the instrumental the same way and compare.
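
As a rough illustration of the smoothing point (a minimal sketch with purely synthetic numbers, not data from any actual reconstruction or instrumental record):

```python
# Toy demonstration: a centered running mean of the length used in
# multi-century proxy stacks flattens a short warm excursion.
# All values here are synthetic and chosen only to show the effect.

def running_mean(series, window):
    """Centered moving average; the edges are trimmed."""
    half = window // 2
    return [
        sum(series[i - half:i + half]) / window
        for i in range(half, len(series) - half)
    ]

# 2,000 "years" of flat climate with a 50-year, 1.0 C warm blip in the middle.
annual = [0.0] * 2000
for year in range(1000, 1050):
    annual[year] = 1.0

smoothed = running_mean(annual, 200)  # 200-year smoothing, as discussed above

print(max(annual))              # 1.0  -> the blip at annual resolution
print(round(max(smoothed), 2))  # 0.25 -> the same blip after 200-year smoothing
```

Smoothing the instrumental record the same way before comparing it to a smoothed proxy stack is the apples-to-apples comparison being asked for here.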

      • I have read through an article by statisticians about statistics and temperature reconstructions, and I see nothing there that indicates such a rule exists, or is in the offing. If anything, it tends to support Jim D’s comments. You wanted statisticians to be involved.

      • JCH, “I have read through an article by statisticians about statistics and temperature reconstructions, and I see nothing there that indicates such a rule exists, or is in the offing.”

        Using out of sample data to verify results is pretty standard; requiring a “professional” to use out of sample data to verify his/her work isn’t supposed to be needed, since that is part of being the “expert”. As for comparing data smoothed to 400 years versus annual data, that is “common sense”, a de facto part of the job of a statistician. There aren’t any specific rules on how to wipe your butt either.

      • JimD

        From a quote above;

        ‘Our study did not directly address this question because the paleo temperature records used in our study have a temporal resolution of ~120 years on average, which precludes us from examining variations in rates of change occurring within a century. Other factors also contribute to smoothing the proxy temperature signals contained in many of the records we used, such as organisms burrowing through deep-sea mud, and chronological uncertainties in the proxy records that tend to smooth the signals when compositing them into a globally averaged reconstruction.’

        The already highly novel proxies used by the likes of Marcott are smoothed, and the end results are like a very coarse sieve through which more accurate and finer-grained data – such as highly variable instrumental temperature records – easily fall.

        tonyb.

      • Again, there is nothing there to suggest they even remotely agree with you. Unlike skeptics, the statisticians appear to be a reasonable lot. They made shapes they think look like hockey sticks.

      • Here is a comparison of one of Marcott’s reconstructions with a higher resolution (50 year average) reconstruction at the same location. The Mohtadi 2010 record in yellow has a resolution of 530 years and was calibrated to temperature using Anand et al. 2003 methods. The Oppo et al. 2009 reconstruction was readily available to Marcott, I am sure, had he wanted to test his method.

      • JCH, “Again, there is nothing there to suggest they even remotely agree with you.”

        Pretty sad commentary. I guess that is why some states feel compelled to legally define what a men’s room is.

      • A great deal of righteous indignation is intentionally manipulative… an act by thespians. The statisticians do appear to be righteously indignant… because the work, while wrong in some aspects, was not that wrong overall, had usefulness, and its usefulness was subsequently affirmed multiple times, including by them.

        So you wanted statisticians involved, but they did not do the hatchet job you thought they would do, and now I guess we only want steve and ross.

      • whoops – do not appear to be righteously indignant…

      • JCH, “So you wanted statisticians involved, but they did not do the hatchet job you thought they would do, and now I guess we only want steve and ross.”

        Is this a straw squirrel or a squirrely man? The issues with Marcott are obvious, and Marcott himself notes the limitations. It isn’t the “real” statisticians having a problem with this, it is the “believers” that misuse the work for their own weird purpose. The “experts” know that there are ways to test their work, but not how their work will be tested by “believers”.

      • captd, if you want century averages, make a guess of the 21st century average and compare it. A clue is we are at nearly 1 C on that scale already and rising. Conservatively the average will be 2 C, so you can check out what that looks like on Marcott’s plot. The medieval warm blip won’t compare, and neither will the Holocene Optimum.

      • You’re talking about one study; I’m talking about an alleged rule that a thermometer-based time series cannot be spliced onto a proxy-based time series. The statisticians said the obvious. Because they’re reasonable. 8 out of 7 times, that’s actually surprising.

      • JimD, “21st century average…”

        Okay, that would be 15 years with an uncertainty of about +/- 0.05 C because of the method used, but a standard deviation of about +/- 0.3 C because of what is being measured. A similar point for the first 15 years of the 20th century would have an uncertainty of about +/- 0.30 C and a standard deviation of about +/- 0.30 C. If you could get a 15 year period at the start of the 16th century, it would have an uncertainty of about +/- 0.50 C, but you don’t have a valid standard deviation for that period that compares with the 20th or 21st century periods. You can pretend that there isn’t any standard deviation then, but you have evidence that it is likely 0.30 C or greater because of your more precise modern measurements. Since standard deviation is one measure of uncertainty, you are tossing away information for what particular reason?
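
The distinction drawn here between the uncertainty of an average and the standard deviation of the thing being averaged can be sketched as follows (the ~0.3 C year-to-year figure is taken from the comment; everything else is synthetic and illustrative only):

```python
# Toy illustration: averaging 15 annual values shrinks the uncertainty of
# the mean, but the year-to-year variability it summarizes does not shrink.
import random
import statistics

random.seed(42)

# 15 synthetic "annual" anomalies with true mean 0.0 C and sd ~0.3 C.
annual = [random.gauss(0.0, 0.3) for _ in range(15)]

sd = statistics.stdev(annual)    # variability of the years themselves
sem = sd / len(annual) ** 0.5    # uncertainty of the 15-year mean

print(f"year-to-year sd:     {sd:.2f} C")   # stays near 0.3 C
print(f"uncertainty of mean: {sem:.2f} C")  # ~0.3 / sqrt(15), i.e. much smaller
```

Reporting only the second number, as a heavily averaged reconstruction effectively does, is the “tossing away information” being complained about.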

      • captd, you wouldn’t even be able to distinguish the LIA from the MWP with your level of uncertainty. Is that really your view?

      • I wasn’t talking about 15 year averages. It was the 21st century as a whole, which easily may be 2 C by the time it is through. Not sure you understood, based on your response.

      • JimD, “captd, you wouldn’t even be able to distinguish the LIA from the MWP with your level of uncertainty. Is that really your view?”

        With that level of uncertainty there is still evidence of MWP and LIA so they are likely larger, but you cannot determine how much larger, which is the point, apples v oranges.

      • JimD, “I wasn’t talking about 15 year averages. It was the 21st century as a whole, which easily may be 2 C by the time it is through.”

        I may win the lottery tonight if I buy a ticket. While you may not realize it, you are talking about 15-, 50-, 120-, 300-, and 520-year averages all averaged to produce something that has a method uncertainty but an unknown measured uncertainty. It is like lab calibration versus field calibration: you know the field calibration is not as good, so why pretend you are using lab standards?

      • JCH, “You’re talking about one study; I’m talking about an alleged rule that a thermometer-based time series cannot be spliced onto a proxy-based time series. The statisticians said the obvious. Because they’re reasonable. 8 out of 7 times, that’s actually surprising.”

        There is no rule that prevents splicing, there are rules about properly treating uncertainty. When you have very low frequency paleo you have lots of issues explaining your uncertainty. This is why there has been so much effort made to produce high frequency paleo reconstructions. Like there are no rules for butt wiping, cleanliness is a pretty good test :)

      • captd, I am saying that the most likely 21st century average is 2 C. You can compare that with the Marcott paleo, being a similar averaging period and it is way off the scale. If you have your own most likely value, you can try that out too.

      • JimD, your opinion and 5 bucks will get you a small coffee at Starbucks. I have actually taken the Marcott et al. supplemental information and added “cap” reconstructions to extend their work into the 20th century, and have a different opinion of about the same value. Rosenthal, Oppo and Linsley have valued opinions in the paleo field and tend to agree with me more than with you.

      • captd, so now you go from complete uncertainty to Lamb’s hand-drawn line with no error bars. Interesting. Anyway if you like Lamb, maybe you can read what he really thought. Not so different from Marcott in the long term.
        https://sites.google.com/site/medievalwarmperiod/Home

      • catweazle666

        Jim D: “The uptick is supported by thermometers”

        No, it most certainly isn’t.

        Why do you keep up your mendacity, when it is so transparent to every reader of the blog?

      • JimD, “Not so different from Marcott in the long term.”

        Worlds of difference. Marcott has error margins that don’t include reality, if reality is 120 years or so. Lamb doesn’t include error margins because he couldn’t determine what they might be or what a relevant time frame might be. Oppo provides some scale for Lamb on a 50 year time scale. That would be progress.

        If you are concerned with “real” climate change that includes OHC, Oppo along with Rosenthal provide reconstructions that should carry more weight, since they deal almost exclusively with the oceans that have been sucking up all that energy. Oppo, Rosenthal and Linsley have a few issues, but they seem to at least know what year proxies should start at.

    • Does anyone have a link to this CA post?

    • > who later were featured in SKS Denial

      AR5: April 2008.

      “SkS Denial”?

      A timeline is a useful coatracking tool.

  16. Here’s an example of uncertainty in physics –

    “1.001 159 652 180 85 (76),
    a precision of better than one part in a trillion. (The digits in parentheses indicate the uncertainty in the last listed digits of the measurement.)”

    There is argument about the uncertainty, but at least there’s a number to argue about, and good theoretical basis for such arguments.

    An example of experimental results supporting a theory. This seems like science to me.

    Compare this with climatology. As Feynman said “This is science?”

    Cheers.

    • Uh, oh.

      1.00115965218085(76)

      Cheers.
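
For anyone unfamiliar with the concise notation above: the digits in parentheses are the one-sigma uncertainty in the last listed digits of the value. A small sketch of unpacking it (the helper name is made up for illustration):

```python
# Convert concise uncertainty notation, e.g. "1.00115965218085(76)",
# into an explicit (value, one_sigma) pair.

def parse_concise(value_str):
    mantissa, unc = value_str.rstrip(")").split("(")
    value = float(mantissa)
    decimals = len(mantissa.split(".")[1])  # digits after the decimal point
    sigma = int(unc) * 10 ** -decimals      # uncertainty scaled into place
    return value, sigma

value, sigma = parse_concise("1.00115965218085(76)")
print(value)  # 1.00115965218085
print(sigma)  # about 7.6e-13, i.e. better than one part in a trillion
```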

    • Steven Mosher

      “Dark energy should be 10^120 times stronger than the value we observe from astronomy,” Cliff said. “This is a number so mind-bogglingly huge that it’s impossible to get your head around … this number is bigger than any number in astronomy — it’s a thousand-trillion-trillion-trillion times bigger than the number of atoms in the universe. That’s a pretty bad prediction.”

      http://www.lifecoachcode.com/2016/04/07/2-dangerous-numbers-universe-threatening-end-physics/

      • The 2 Most Dangerous Numbers in The Universe are Threatening The End of Physics!

        I suppose “physicists” like that were predicting the same thing about the Michelson-Morley results.

      • The Science Network recently posted a similar talk by Lawrence Krauss.

        These physicists like Cliff and Krauss provide excellent examples of how science has come full circle back to being theology.

        Their notions rely more on theory than evidence — abstruse theory that can’t be proved or disproved by material evidence.

        So the arguments end up resembling theological arguments, like the age-old question over the existence or non-existence of God, where no physical evidence can be produced to settle the debate one way or the other, for once and for all.

        With Krauss, his talks invariably end with an attack on traditional religion. This, not surprisingly, is what one would expect from someone more interested in theology than science. The lady doth protest too much.

      • The problem isn’t that we reached the end of physics.

        The problem is that we reached the wrong end.

      • PA,

        Could we say that Cliff and Krauss are the “climate scientists of physics,” since they seem to encourage methods and ideas of science that are similar to those of most climate scientists?

        As Stephen Toulmin pointed out in Cosmopolis,

        Using our understanding of Nature to increase comfort, or to reduce pain, was secondary to the central goal of Science.

        Rejecting in both method and spirit Bacon’s vision of a humanly fruitful science, Descartes and Newton set out to build mathematical structures, and looked to Science for theological, not technological, dividends.

    • Steven Mosher,

      Indeed. Your scary scenario man (pushing for more funding to keep the LHC operating, by the look of it) discovers that the theory is not supported by the observation.

      He writes –

      “Equally frightening is the reason for this approaching limit, which Cliff says is because “the laws of physics forbid it.””

      Frightening? A good word, if you’re looking to keep your funding. As to the laws of physics forbidding anything – complete garbage. Facts are facts. If they don’t support your theory, no matter how elegant or persuasive it seems, your theory is wrong.

      Predictions based on the existence of the luminiferous ether, the caloric theory of heat, the indivisibility of the atom, and the miraculous heat generating properties of CO2, have proved to be “pretty bad”.

      Einstein apparently said “Only two things are infinite, the universe and human stupidity, and I’m not sure about the former.”

      Your scary man said –

      “It’s the idea that we are reaching the absolute limit of what we can understand about the world around us through science.”

      Really? Unless the LHC receives more funding, of course! We’re expected to believe it, because, after all human stupidity is infinite.

      No doubt Climatological predictions and pronouncements are believed by some who tap into the infinitely deep well of human stupidity. Or maybe they are just lazy or gullible. Who knows?

      Cheers.

      • Steven Mosher

        You missed the point Mike.

        Try again, don’t be illiterate.

        Clue: what does uncertainty look like in science?

      • Steven Mosher,

        Scientific uncertainty –

        1.00115965218085(76) – the last two digits specify the uncertainty.

        Cheers.

  17. Sorry, you just don’t know, and us punters can’t afford the white elephants, so it’s back to conspiracy idi-aminism, denialiciousness, Dunning-dissonance, Kruger-Freddie (take your pick of patronising pejoratives)…

    Good try, warmies.

    – And Then There’s Cloud

  18. Tricky-warmist-methodology,
    one-tree-to-beat-all,
    upside-down-tiljandering,
    pea ‘n thimble splicing,
    sleight of hand replacing
    one IPCC AR 4 figure,
    2nd draft, with another,
    details lost in spaghetti
    by adjustment to a past
    projection envelope. Voila –
    CC not science but craft.

    https://climateaudit.org/2013/09/30/ipcc-disappears-the-discrepancy/

  19. The biggest weakness of IPCC studies is benefit/harm.

    There are global warmers who:
    1. Don’t think more plant growth is beneficial
    2. Don’t think more food is beneficial.
    3. Don’t think more people is beneficial.

    This makes it impossible for them to produce sensible cost/benefit analysis.

    This failed viewpoint colors their studies.

    If a study by global warmers proved conclusively that Jesus could walk on water the title of the study would be “Jesus Christ can’t swim!”

  20. Geoff Sherrington

    This whole topic is unsound because it allows human belief to intrude on scientific measurement, even to displace it.

    If two scientific measurements disagree, one or both is wrong or inappropriate. No number of expert opinions (beliefs) can alter that fundamental conclusion.

    Assessment of the weighting to attribute to experts takes the topic too far into fantasy land. If there is disagreement among experts, the matter is not mature enough to be presented to decision-makers for action. (It is a form of the illegality of crying “Fire” in a dark movie theatre.)
    The better way is to withhold beliefs until the science can be replicated. Don’t even go near the belief methods.
    Follow the scientific method.
    Geoff.

    • This is the only comment I’ve looked at so far. Well said!

    • Geoff wrote:

      “the matter is not mature enough to be presented to decision-makers for action”

      It gets even worse. Not only is our knowledge not mature enough to present to decision makers, the technology anointed as being “green” is impotent to alter the arc of CO2 levels in any significant way without vastly reducing the wealth and quality/quantity of human life.

      A few thousand years ago, priests of a Greek or Mayan cult might demand that followers invest their wealth into a series of temples in order to pave the way to a better tomorrow. Today the priests of the climate cult tell us to listen to their oracle and sacrifice our inherited and hard-earned wealth to build an infrastructure which will likely be no more effective.

      • Well, they haven’t asked us to sacrifice our virgin daughters yet…

        But that is usually the next step after construction is done.

    • Steven Mosher

      Ill-posed.

      “If two scientific measurements disagree, one or both is wrong or inappropriate. No number of expert opinions (beliefs) can alter that fundamental conclusion.”

      Measurements always disagree and are always wrong.
      They always and forever will fall short of perfection.
      In the realm of theoretical science we can just continue to refine. In the realm of applied science there are times when we have to make decisions when faced with uncertainty, with data lacunae, and with outright conflicting data. Every hurricane season you see people doing this. Nobody looks at 10 conflicting hurricane tracks and argues that one should just follow the scientific method.
      Science isn’t a panacea. Nor does it always have all the answers when you need them. In these types of situations you could do worse than consulting experts.

      • Steven Mosher said:

        Measurements always disagree and are always wrong.
        They always and forever will fall short of perfection.

        Relativism, anyone?

      • Steven Mosher,

        Someone said science is belief in the ignorance of experts, or something similar.

        It appears you could do much, much, worse by believing them, particularly if they work for the Army Corps of Engineers, NASA, or any number of other groups or agencies.

        Your touching faith in experts is often not borne out by fact. You have to exercise your own brain, and use your own common sense as well.

        Unfortunately, it seems that common sense is not common practice.

        Cheers.

      • Steven Mosher

        Not relativism glenn.
        Every instrument has error.
        If they didn’t we would not have to measure twice and cut once.
        Every one knows this.

      • Steven Mosher

        I have little faith in experts.
        I have less faith in non experts.
        The only thing we can be certain of is that Flynn is probably illiterate.

      • Steven Mosher,

        Not relativism?

        Well how about nihilism? Or anarchism?

        But I do get your point. When it comes to defending the climatariat, any expedient philosophy will do, just so long as it exculpates the trespasses of the Ministry of Truth.

    • Geoff Sherrington, 4/28/16 @11:55 pm got a good response to this:

      If two scientific measurements disagree, one or both is wrong or inappropriate. No number of expert opinions (beliefs) can alter that fundamental conclusion.

      Two scientific measurements will almost always (with probability one) disagree according to a principle or axiom, your choice, of science. Good scientific practices require expression of facts with a specific probability distribution, so measurement agreement comes with a probability, a confidence number.

      The certainty of disagreement is true even for two simultaneous measurements made with one common experiment. Then the next layer of the onion is to decide how closely two different experiments produce comparable results.
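
The point that agreement between two measurements comes with a confidence number, not a yes/no answer, can be made concrete with a toy check (values are made up; z = 1.96 corresponds to roughly 95% confidence for Gaussian errors):

```python
# Do two measurements agree, given their one-sigma uncertainties?
# Standard approach: compare the difference to the combined uncertainty.

def agree_within(x1, s1, x2, s2, z=1.96):
    """True if the two values agree at ~95% confidence."""
    combined = (s1 ** 2 + s2 ** 2) ** 0.5
    return abs(x1 - x2) <= z * combined

# Same 0.10 C gap, judged against loose and then tight error bars:
print(agree_within(0.85, 0.10, 0.95, 0.10))  # True
print(agree_within(0.85, 0.02, 0.95, 0.02))  # False
```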

      What IPCC does is glue together two entire records made at different times with entirely different methods. Wherever possible, IPCC reports current instrument records, but as often as not includes proxy data or computer-generated records, records that agree only by virtue of some unseen calibrations or one of the many equally vague Model Intercomparison Projects that IPCC uses to force random GCMs into agreement.

      See, for example,

      https://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-spm-1.html

      and the discussion as Figure 35 in

      http://www.rocketscientistsjournal.com/2010/03/sgw.html

      For a relevant discussion on this blog, see

      https://wattsupwiththat.com/2010/12/26/co2-ice-cores-vs-plant-stomata/

      This IPCC practice is the worst kind of behavior disguised as science.

  21. The situation we are in now is that anyone can predict anything they want and no one will ever know any more than they know now because nothing predicted by anyone will ever happen or not happen until long after everyone is dead.

    • They want us all to learn the ancient religion of self-sacrifice. To save the world from mankind, for mankind. Win, win. That we might all know the answers when we die is the only downside I am aware of, and that is not scientific by any means.

      • If the hiatus continues perhaps the Left can keep the AGW religion alive by offering 72 virgins in return for sacrificing the economy and putting skeptics to death?

    • Steven Mosher

      No skeptics could predict cooling over the next 5 years as the sun goes to minimum.
      They don’t dare to.

      • Geoff Sherrington

        Steven,
        They are too educated to do that.

      • You make Mosher’s point for him well.

      • Steven Mosher

        Funny. The sun controls everything.
        It’s headed to a minimum.
        Crickets.

      • No skeptics could predict cooling over the next 5 years as the sun goes to minimum. They don’t dare to.

        Well, if they did, they might be a little early, still a decade or so from what some speculated might be a Maunder type minimum (SC24 was a couple of years late).

        I don’t know about predictions, but it’s an interesting test.

        First, about whether SC25 turns out to be as weak as some predicted.
        Second, if there’s any climatic response.

      • “No skeptics could predict cooling”

        http://www.bitsofscience.org/real-global-temperature-trend-2016-2020-global-forecast-average-temperatures-la-nina-7079/

        Hmm. They are predicting the next five years will be warmer than 2015 based on CMIP5 model runs.

        It is an El Nino year so the next three or so years will be cooler.

        Umbral magnetic field is flat around 2000 and doesn’t show signs of going up.

        So it really depends on adjustments. The GISS adjustments are going up at 0.3°C/decade. That is about 3 times as fast as the real warming trend.

        Which means it really depends on if Trump wins the election and/or they come out with a new version of the temperature set (each new version has a steeper trend).

      • Steven Mosher

        “Well, if they did, they might be a little early, still a decade or so from what some speculated might be a Maunder type minimum (SC24 was a couple of years late).”

        Note the paucity of skeptics predicting a Maunder type minimum.

        Skeptics won’t

        A) predict the probability of a Maunder Minimum
        B) predict a growth in arctic ice
        C) predict the next pause
        D) predict any cooling
        E) predict Less cooling than the IPCC
        F) predict any decrease in extremes

        But we can predict this. 10 years from now, when temperatures are warmer, skeptics will still doubt that CO2 can warm the planet.
        When CO2 doubles and temperatures are 1.5C to 4.5C warmer, they will still doubt the existence of a global temp. They will still point to the hockey stick. They will still say ECS is too uncertain. They will still clamor for an explanation of the warming of the 30s. They will still doubt adjustments.
        These objections are timeless. They are timeless because skeptics are never skeptical of their skepticism.

      • predict Less cooling than the IPCC
        I’m guessing you mean predict less warming than the IPCC –

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.

        Put me down for the under on that one.

      • predict a growth in arctic ice

        Are there any hysterics (other than the GCMs) predicting a decrease in Antarctic sea ice?

      • Note the paucity of skeptics predicting a Maunder type minimum.

        Skeptics won’t

        A) predict the probability of a Maunder Minimum
        B) predict a growth in arctic ice
        C) predict the next pause
        D) predict any cooling
        E) predict Less cooling than the IPCC
        F) predict any decrease in extremes

        But we can predict this. 10 years from now…

        A) The Maunder Minimum had 28 years with virtually no sunspots. Current thinking is that it takes an umbral magnetic field of 1500 or less. A Maunder minimum doesn’t look likely, but continued low sunspot activity does.

        B) Sure it looks to be a safe bet that after this year the sea ice extent will continue up based on the volume trend.

        C) This is a raw data vs adjustment question. By raw data the trend should look like the early 21st century pre-2014.

        D) 2016 about the same as 2015 followed by two cool years. The 5 year warming trend based on CMIP5 predicts no such thing. The thermal inertia of the ocean delayed 20th century warming plus a small amount of CO2 forcing means it will be slowly warming ’til mid century then start tapering off.

        E. Don’t understand what you are getting at.
        F. Don’t see any reason for a statistically significant difference.

      • So, any bets on:

        1. the Hot Spot appearing?

        2. Antarctic Sea Ice declining?

        3. The Southern Ocean warming?

        4. The Eastern Pacific warming?

        5. The Satellite Vegetation Stress Index showing more drought?

        6. The Accumulated Cyclone Energy index indicating a significant trend?

        7. The US Palmer Drought Index indicating a significant increase?

      • stevenreincarnated

        I predict it will cool over the next 10 years and I predict it will be blamed on global warming.

      • Turbulent Eddie | April 29, 2016 at 9:12 pm |
        predict Less cooling than the IPCC
        I’m guessing you mean predict less warming than the IPCC –

        For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.

        Put me down for the under on that one.

        I’m with you on this.

        One caveat for both sides:

        A discussion of the problem
        https://climateaudit.org/2008/08/07/north-versus-south/

        This can’t continue. The large separation of North and South temperature is strange and unhistoric. The SH land is cooling relative to the SH ocean. The NH land is warming relative to the NH ocean.

        I assume this is partly due to the theorized “CO2 makes Antarctica a better radiator” effect. If Antarctic sea ice continues to increase we might find out if the SH can back us into an ice age.

        7. The US Palmer Drought Index indicating a significant increase?

        A US study compared corn and soy yields to the drought index and concluded that drought resistance was increasing.

      • Why would a skeptic assert a positive claim? Is it the skeptic’s responsibility to disprove the positive claims made by the CAGW advocates? Do skeptics take on the responsibility of replacing the CAGW claim with an alternative? How could they do so and remain skeptics?

      • Yes, yes — Catastrophic Climate Change Masked by Global Cooling Trend…

        Posted on 2014/11/13

  22. “As a matter of fact I can also define science another way: Science is the belief in the ignorance of experts.” – Richard Feynman.

    Maybe he was being a little tongue in cheek, but that approach served him well in relation to his involvement with the commission into the causes of the Challenger disaster. Some flavour may be gathered from the following –

    “Secondly, Feynman was bothered not just by this sloppy science but by the fact that NASA claimed that the risk of catastrophic failure was “necessarily” 1 in 10^5. As the figure itself was beyond belief, Feynman questioned exactly what “necessarily” meant in this context—did it mean that the figure followed logically from other calculations, or did it reflect NASA management’s desire to make the numbers fit?”

    Climatologists obviously consider themselves far superior to the NASA experts. Climatological expertise is not to be challenged by anyone with the temerity to ask “Please, sir, may I see an experiment?”

    Weighting? I’d weight the whole lot of them down with a few old computers, and chuck ’em in the sea! Climate science experts? “Surely, sir, you jest!”

    Cheers.

    • Steven Mosher

      Appealing to feynman should have its own name as a logical fallacy. Ad dickium.

      • That’s Rich.

      • appeal to exceedingly low-pulse authority

      • Be careful – it might backfire.
        “The Mosher fallacy of commenting while in traffic”
        “The Mosher fallacy of incomplete sentences”

        Being well aware that this might backfire on me. :)

      • Steven Mosher

        “As a matter of fact I can also define science another way: Science is the belief in the ignorance of experts.” – Richard Feynman.

        Muller has better insight which I paraphrase.

        “Science is the belief in your own ignorance ”

        The way he put it to me. When you question your own beliefs you are being a scientist.

        I would add that questioning someone else’s science is too easy to be called science. It’s often just sophistry.

      • John Carpenter

        Or when used in on-line commenting, violating Feynwins Law.

      • You wrote –

        “Science is the belief in your own ignorance.”

        Why did you not quote Muller directly? Is this the logical fallacy “Ad Muller”?

        In any case, Muller said –

        “Now a theory that’s untestable is not something I consider to be a theory.”

        You obviously don’t agree with Muller – although Warmists can’t actually state a theory in any case. What is the theory relating to CO2 and rising temperatures? Haven’t got one? Can’t find it? Must be with the missing heat!

        Professor Judith Curry comment on some of Muller’s opinions –

        “Maybe he should listen a bit more closely to me :)”

        Who should I prefer as an expert – you, Muller, Feynman, Curry?

        How about I listen to all, and draw my own conclusions. Where’s the harm?

        Cheers.

      • Steven Mosher said:

        I would add that questioning someone else’s science is too easy to be called science. It’s often just sophistry.

        Right.

        Can’t we all just get along? At least when it comes to criticizing the climatariat?

        Some people, including myself, believe scientists should put on their big boy pants:

        What characterize the strive for knowledge is the manner of trying to prove wrong, in every conceivable way, the system to be tested. The aim is not to save the lives of untenable systems but to expose them all to the fiercest struggle for survival. A system is corroborated by the possibility for proving it wrong and the severity of the tests it has been exposed to and survived – and not at all by inductive reasoning in favor of it.

        https://judithcurry.com/2016/04/28/expert-judgement-and-uncertainty-quantification-for-climate-changeon-a-likelihood-and-prior-distribution-but-it-does-not-mean-that-the-result-of-the-learning-is-valid-validation-ensues-when-posterio/#comment-781729

      • Steven Mosher

        ““Science is the belief in your own ignorance ” Why did you not quote Muller directly? Is this the logical fallacy “Ad Muller”?”

        Simple. I sought the illiterati (aka Flynn) level of comprehension.

        Glad to see you got it.

        There is hope.

      • Appeal to Room Temperature Authority

  23. What continues to amaze about Nature Climate Change is the willingness to base masses of material and calculation on junk terminology like the hopelessly diffuse and bendy expression “climate change”. Maybe that expression is more precise than “stuff, y’know, sorta” – but only just.

    Because I’m in a polite mood (moi!) I’ll suggest that Michael Oppenheimer is being, in the words of Michael Oppenheimer, “surprisingly informal”.

    Also, I’m on to that sly phrasing, “ice loss”. What were the 1970s and late 1800s? “Ice triumphs”?

    First they came for the language, and we did nothing…

    • Steven Mosher

      The fight against linguistic change is long, misguided, and futile.

      • The fight against cheap verbal and intellectual stunts is also long. But necessary.

        So handy to have an all but functionless term at the centre of one’s case. “Climate change” is about as useful a notion as “water hydration”. Climate stasis being about as likely as drought hydration.

        But climate bothering requires that one build on shifting mush, just so the dinky structure can flex a little longer before crashing. A few precious months or even years to publish without having to find a new phobic craze.

      • > “Climate change” is about as useful a notion as “water hydration”.

        Exactly why this was the term Frank Luntz preferred:

        “Climate change” is less frightening than “global warming.” As one focus group participant noted, climate change “sounds like you’re going from Pittsburgh to Fort Lauderdale.” While global warming has catastrophic connotations attached to it, climate change suggests a more controllable and less emotional challenge.

        https://www2.bc.edu/~plater/Newpublicsite06/suppmats/02.6.pdf

      • Luntz was wrong. “Everyone” knows that policy is less controllable and more hazardous than climate.

      • Steven Mosher

        “So handy to have an all but functionless term at the centre of one’s case. “Climate change” is about as useful a notion as “water hydration”.

        It’s therefore perfectly useful.

        Nothing is more useful than a term that is functionless.

      • Ambiguity is indeed the hobgoblin of ad hockery, Steven. OTOH, condensing the behaviour of a massive, complex system down to Twitterable length can be a challenge. I’m not entirely sure whether you’re feeding the parsnips here, or trying to choke them. [raises eyebrow]

      • Sorry. I don’t do deep-‘n-paradoxical. Don’t believe the mystical sounding labels on energy drinks either.

        The use of the slob expression “climate change” is a deliberate verbal stunt to allow more wriggle-room and back-doors for the climate mullahs. God knows they need it.

      • There are these things called … libraries.

  24. I still don’t see how anyone will falsify a proposition about a probability distribution when only one event can actually be observed.

    One will be stuck with subjective probability. Much ado about nothing.

    Perhaps Phil Tetlock could help https://en.wikipedia.org/wiki/The_Good_Judgment_Project
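    Tetlock’s Good Judgment Project addresses exactly this: a single probabilistic forecast is never falsified by one event, but a track record of many forecasts can be scored. A minimal sketch using the Brier score (forecaster names and numbers are invented for illustration):

```python
# Brier score: mean squared difference between forecast probabilities
# and binary outcomes (1 if the event occurred, 0 if not). Lower is
# better; always answering 0.5 scores exactly 0.25.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0 or 1."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented track records over ten yes/no events:
sharp  = [0.9, 0.8, 0.1, 0.95, 0.2, 0.85, 0.1, 0.9, 0.15, 0.8]
hedger = [0.5] * 10                       # always says "50/50"
events = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # what actually happened

print(brier_score(sharp, events))   # low score: well calibrated
print(brier_score(hedger, events))  # 0.25: the score of pure hedging
```

    No single event settles whether the 0.9 forecast was “right,” but across many events the sharp forecaster’s lower score is objective evidence of skill; this is how probabilistic claims become testable in aggregate.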

  25. I have argued that the biases introduced into the science and policy process by the politicized UNFCCC and IPCC consensus seeking approach are promoting mutually assured delusion.

    There is nothing biased or politicized in the above statement.

    • Certainly not if one accepts the relativism and constructivism that the left has been peddling for the past few decades.

      • To state my point more explicitly, leaving out the satire of brandongates’ comment, and making it easier to understand:

        There is nothing biased or politicized in JC’s statement, unless one buys into the relativism and constructivism that the left has been peddling so furiously for the past few decades.

      • The statement is self-refuting, Glenn. No appeals to external relatives, begging further questions or peddling other deflections are required of me to make that point.

      • brandongates,

        When I click on your moniker this is what I get:

        So am I mistaken to conclude that someone who has a blog named “climateconsensarian” is quite the fan of what JC calls “the politicized UNFCCC and IPCC consensus seeking approach”?

        Do you believe the UNFCCC and IPCC consensus seeking approach is not political?

        What’s your argument? What are you trying to say?

      • Glenn Stehle,

        What’s your argument? What are you trying to say?

        It’s right there in my original reply to you, Glenn. Gish-galloping away from the self-refuting nature of Judith’s quote will not change what it is.

      • brandongates,

        And then you guys wonder why the public has tuned the climatariat out.

      • Thus you circle back to where this started: There is nothing biased or politicized in the above statement.

  26. This glowing testimony to (a crucial element of) climate models provided in the paper’s SI seems rather noteworthy:

    “Adding such fixes is sometimes called a “parameterization”. (2) is not derived from physical laws. In the best case these coefficients are based on regression of experimental results. The meaning of these coefficients is dependent on all the other fixes that have been applied, and no one actually believes these models. They are used because they are judged fit to purpose given data and computing constraints.”
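    As a toy illustration of the quoted point about coefficients “based on regression of experimental results”: the free coefficients of a hypothetical sub-grid fix are fitted to data by least squares, so they carry no independent physical meaning. Everything below is invented, not an actual GCM scheme:

```python
# Fit the coefficients (a, b) of a hypothetical linear parameterization
#     flux = a * wind_speed + b
# by ordinary least squares against made-up "experimental" data.
# Change the data (or the other fixes feeding it) and a, b change too.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

wind = [2.0, 4.0, 6.0, 8.0]   # invented observations
flux = [1.1, 1.9, 3.1, 3.9]
a, b = fit_linear(wind, flux)
print(a, b)                    # regression coefficients, "fit to purpose"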

    • David Springer

      Insiders have less flattering terms for “parameterization”

      http://www.dictionary.com/browse/kludge

      KLUDGE
      noun, Computer Slang.
      1. a software or hardware configuration that, while inelegant, inefficient, clumsy, or patched together, succeeds in solving a specific problem or performing a particular task.

      http://www.dictionary.com/browse/fudge-factor

      FUDGE FACTOR
      noun
      1. any variable component added to an experiment, plan, or the like that can be manipulated to allow leeway for error.

      • Parameterization is acceptable if it helps to provide an unbiased description of historical data to help understand possible future trends. It is less so if the objective is to fit a preconceived idea of the future or to support an ideology.

        Oppenheimer’s Nature Climate Change article expands on his earlier article in Science in 2007 on the need to do a better job in treating uncertainties: Oppenheimer, Michael, Brian C. O’Neill, Mort Webster, and Shardul Agrawala. 2007. The Limits of Consensus. Science 317: 1505–1506.

      • David Springer

        “better job treating uncertainties”

        ECS is given as 3.0 C ± 1.5 C. That’s an uncertainty range of wonderful to awful. That range has not been improved upon in 50 years.

        What they need is not treating uncertainties better but rather it needs to be reduced so we can make reasonable policy decisions.

      • David: You have hit on the nut of the situation. Parameterizations of GCMs are necessary kludges and fudge factors because these models already consume all of the computing capacity we currently have. You are also absolutely spot on about the uncertainty range from the Garden of Eden to Dante’s Inferno.

        Unless we build better computers or invent better software to numerically simulate these smaller-scale processes from physics rather than using fudge factors, the uncertainty needle won’t move.

        I don’t think the software/hardware will improve sufficiently to reduce uncertainty for at least 30 years.

        Therefore, policy will have to be developed using the analytical tools we have, not wishing or waiting for something we would like to have to come along and save the day.

        This also applies to mitigation. We need to implement economically and technologically feasible mitigations that we have available to us now, not wishing or waiting or forcing for some earthy-groovy renewable pipe dream to save us from ourselves.

        On the one hand, we have the alarmists who want radical hair-shirt energy starvation and/or subsidy of boutique methods that cannot scale.

        On the other hand, we have deniers who don’t want to do anything to improve the environment.

      • > What they need is not treating uncertainties better but rather it needs to be reduced so we can make reasonable policy decisions.

        This assumes that the uncertainties we have don’t allow for reasonable policy decisions.

        Justifying this assumption seems to be left to the lukewarm reader.

      • Horst Graben said:

        Therefore, policy will have to be developed using the analytical tools we have, not wishing or waiting for something we would like to have to come along and save the day.

        “Analytical tools” like this?

      • Steven Mosher

        “What they need is not treating uncertainties better but rather it needs to be reduced so we can make reasonable policy decisions.”

        Reducing uncertainty may lead to more optimal decisions. However, you can still make reasonable decisions based on huge uncertainty.

        ECS has a range of 1.5C to 4.5C, give or take.

        1. A potential high value would support a policy decision to move aggressively toward nuclear.
        2. There are other reasons to move to nuclear.

        A reasonable policy decision (i.e., a policy that can be rationally defended) can be informed by an uncertain range of ECS.

        1. A value of 3 or greater would support a policy decision to end the development of new coal plants.
        2. Other damages from coal support a move to end new coal regardless of the ECS value.

        The assumption that most skeptics and alarmists fail to address is that policy makers need the best and most accurate data to make reasonable policy. Better data is always good. But the practical situation is that there are decisions to make with the data at hand.
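        Mosher’s argument, that a policy can be rationally defended across the whole uncertain ECS range, can be sketched as a toy expected-cost comparison. Every probability and cost below is invented for illustration; only the 1.5–4.5 C range comes from the thread:

```python
# Toy decision under ECS uncertainty: compare two policies by expected
# cost over a crude three-point distribution of ECS. The point is not
# the numbers (all invented) but that a ranking can emerge without the
# uncertainty first being reduced.

ecs_dist = {1.5: 0.25, 3.0: 0.50, 4.5: 0.25}   # hypothetical probabilities

def damages(ecs):
    """Hypothetical climate damages, growing nonlinearly with ECS."""
    return 10 * (ecs / 3.0) ** 2

policies = {
    # no abatement cost, full damages:
    "business_as_usual": lambda ecs: damages(ecs),
    # pay a fixed cost of 4, cut damages by 70% (all invented):
    "shift_to_nuclear":  lambda ecs: 4 + 0.3 * damages(ecs),
}

for name, cost in policies.items():
    expected = sum(p * cost(ecs) for ecs, p in ecs_dist.items())
    print(name, round(expected, 2))
```

        Under these made-up numbers the nuclear option has the lower expected cost even though it is the worse choice if ECS turns out to be 1.5; that is the sense in which a decision can be reasonable under huge uncertainty.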

      • dogdaddyblog

        Mr. Mosher, you should stick to wandering in your own weed patch. Policy based on wild speculation is bad (and expensive) policy.

      • …support a move to end new coal…

        Looks like we’re done then:

      • Horst Graben said:

        ….policy will have to be developed using the analytical tools we have, not wishing or waiting for something we would like to have to come along and save the day.

        Stephen Mosher said:

        reducing uncertainty may lead to more optimum decisions. However, you can still make reasonable decisions based on huge uncertainty.

        You two guys sound like Naomi Oreskes clones. To wit:

        A major target for Oreskes and Conway is the standard 95 percent confidence interval, a statistical norm they describe as “hard to fathom” and “a high hurdle against one specific kind of error.” ….

        Oreskes and Conway write:

        We have come to understand the 95 percent confidence limit as a social convention rooted in scientists’ desire to demonstrate their disciplinary severity.

        ….

        Wouldn’t it make sense to have a low threshold, if the consequences of being wrong—of not acting on climate because of uncertainty—were so terrible?

        Dick Cheney supposedly operated as vice president by what he called the “one percent doctrine,” meaning that if there were even a 1% chance of an attack on America happening, he would operate as if it were 100% certain to occur.

        http://www.huffingtonpost.com/steven-newton/oreskes-conway-collapse-of-western-civilization_b_7080612.html

        and

        So by setting the standard so extremely high, scientists sort of protect themselves against a certain kind of error, the error of thinking something’s true that isn’t, but they put all of us at risk to a different kind of error, which is the error of doing nothing—the error of thinking we’re not sure about something that’s actually taking place.

        http://loe.org/shows/segments.html?programID=14-P13-00030&segmentID=6

        and

        PATRICK FITZGERALD: Certainly one of the more memorable and pungent themes in your essay is the over-reliance of scientists on the 95 percent confidence interval, before they will make a call on causation or recommend any kind of public policy or action… Are you worried about the “slippery slope” argument that abandoning long-cherished standards of statistical significance could lead to crappy science and misguided, even dangerous, policy?

        NAOMI ORESKES: This is a really big issue….

        The challenge is always to determine what is needed in any given situation. It’s the same for science.

        Scientists have changed their standards in the past, and they will do so again. It’s high time we had a serious discussion of where the 95 percent confidence limit came from, and whether it makes sense in the nearly indiscriminate way that it is currently applied.

        http://gailepranckunaite.com/Naomi%20Oreskes-The-Collapse-of-%20Western-Civilization-2014.pdf

      • Horst Graben said:

        On the one hand, we have the alarmists who want radical hair-shirt energy starvation and/or subsidy of boutique methods that cannot scale.

        On the other hand, we have deniers who don’t want to do anything to improve the environment.

        And doing nothing is not an option, because that is what the “deniers” want to do?

        Has it ever occurred to you that doing nothing might be the appropriate path of action?

        The rhetological fallacy you articulate is what Andrew M. Lobaczewski calls “reversive blockade.” A prerequisite for it to hold, however, is that there exists such a thing as factual reality or truth, so relativists and constructionists will dismiss it out of hand.

        Here’s how Lobaczewski explains it:

        Reversive blockade: Emphatically insisting upon something [e.g., CAGW] which is the opposite of the truth blocks the average person’s mind from perceiving the truth. In accordance with the dictates of healthy common sense, he starts searching for meaning in the “golden mean” between the truth and its opposite, winding up with some satisfactory counterfeit. People who think like this do not realize that this effect is precisely the intent of the person who subjects them to this method.

      • David Springer,

        Insiders have less flattering terms for “parameterization”.

        A kludge is a crock that works. Take out the parameters, and you get a crock.

      • Steven Mosher

        “Mr. Mosher, you should stick to wandering in your own weed patch. Policy based on wild speculation is bad (and expensive) policy.”

        Ah yes, I get your point. There is no data supporting a push for nuclear. That’s just wild speculation.

      • dogdaddyblog

        Mr. Mosher, since my point had nothing to do with nuclear, I again commend you to your own weed patch.

      • brandongates: The kludge is the only tool we have. Nothing better is on the horizon. We are in a GMT depressive cycle for the next 15+ years and it will take another 10 to 15 years of the next GMT increase cycle to have enough field data to confirm Nic Lewis versus Gavin Schmidt.

        That is my expert judgement.

        Therefore, we have to plan on using existing technology and feathering in new tech as time marches on.

        Doing nothing is just not an American value, it’s mental ma$turbation using intellectualism as a prophylactic placebo.

      • the kludge is the only tool we have.

        We agree, Horst, well said.

      • David Springer

        Horshy

        “Therefore, policy will have to be developed using the analytical tools we have, not wishing or waiting for something we would like to have to come along and save the day.”

        Policy does not “have to be developed” dummy. There are a million more important things to worry about. Things we actually know are harmful and how much. You’re a consensus freak. Here’s one for your much needed edification:

        http://www.copenhagenconsensus.com/

        Copenhagen Consensus Center

        The Copenhagen Consensus Center is a think tank that researches and publishes the smartest solutions for the world’s biggest problems. Its studies are conducted by more than 100 economists from internationally renowned institutions, including seven Nobel Laureates, to advise policy-makers and philanthropists how to spend their money most effectively.

        “Copenhagen Consensus is an outstanding, visionary idea and deserves global coverage” – The Economist

        The Center’s advocacy for data-driven prioritization was voted into the top 20 campaigns worldwide in a think tank survey conducted by University of Pennsylvania.

        ———————————————————————–
        You can thank me by becoming better informed.

      • David Springer

        @Mosher

        You mention damages from coal. In modern coal burning power plants the damages from particulates and aerosols have been addressed by smokestack scrubbers. What damages are you talking about?

        You mention reasons for nuclear power too. Nuclear power carries its own baggage including high cost, nuclear weapons proliferation, and long term disposal of spent fuel that is highly radioactive. After 60 years of trying it has yet to become competitive with the best fossil fuel electrical production. Not even close. And it doesn’t even begin to solve the biggest energy problem which is transportation fuel.

        What lame reasoning do you have to support the need for more nuclear power?

      • David Springer

        With an as yet undetermined appendage Horst Graben writes:

        “Doing nothing is just not an American value, it’s mental ma$turbation using intellectualism as a prophylactic placebo.”

        No, asshat. Playing Chicken Little, almost literally, is not an American value. You have NOTHING in the way of data that shows CO2 emission is something of concern. On the other hand the benefits of abundant energy from fossil fuel are legion. Warming, whatever little there is, is happily being delivered preferentially to high latitudes, in the winter, and at night. Just where we actually want it to be warmer. There’s no detectable increase in severe weather events. And then there’s atmospheric fertilization from CO2 that is accelerating plant growth (greening the planet) and making them more drought tolerant at the same time.

        Where’s the downside, bozo?

      • David: I have supported the ideas of Bjørn Lomborg since he came out with the Skeptical Environmentalist which, in the areas of my direct experience, were absolutely spot on. The Copenhagen Consensus (I’m a Skoal man, myself) supports investment in green energy among other priorities such as malnutrition, disease and development.

        So we are most likely in agreement. Based on my reading of your posts, my biggest disagreement with you is regarding the need for mitigating air pollution in India and China.

      • David Springer

        What does “green” energy mean other than the color of money?

        I guess you mean the kind of energy that helps plants grow by fertilizing the atmosphere. That’s green. I think your idea of green is actually something else altogether. Try being honest about it.

      • What does “green” energy mean other than the color of money?

        R&D is the American Way, David S. ‘Twould be a shame to miss out on being the main innovators in the Green Revolution.

        I guess you mean the kind of energy that helps plants grow by fertilizing the atmosphere.

        Don’t forget that plants also fix nitrogen: http://extension.psu.edu/agronomy-guide/cm/sec2/sec28

      • David Springer

        Now we’ve gone from green energy to green revolution. Still the only green in it is the color of money. We might explore its connection with the color of envy as well.

      • You’re right, David. Green ____________ (fill in blank) means Bull$hit. Am disappointed Copenhagen Consensus uses that term, but when in Rome…

        This guy nailed it 16 years ago:

        http://www.amazon.com/Hard-Green-Environment-Environmentalists-Conservative/dp/0465031137

      • Here’s something a little more current than 16 years ago, Horst:

        http://www.bp.com/en/global/corporate/energy-economics/energy-outlook-2035/energy-outlook-to-2035.html

        Renewables continue to grow rapidly

        Renewables are projected to be the fastest growing fuel, almost quadrupling (6.6% p.a.) over the Outlook.

        The EU continues to lead the way in the use of renewable power. However in terms of volume growth to 2035, the EU is surpassed by the US, and China adds more than the EU and US combined.

        The rapid growth in renewables is supported by the expected pace of cost reductions: the costs of onshore wind and utility-scale solar PV are likely to fall by around 25% and 40% over the next 20 years.

        Your stereotypical “green” might only call this lip service. I call it smart business and forward-thinking. YMMV.

      • Renewables are projected to be the fastest growing fuel, almost quadrupling (6.6% p.a.) over the Outlook.

        […]

        The rapid growth in renewables is supported by the expected pace of cost reductions: the costs of onshore wind and utility-scale solar PV are likely to fall by around 25% and 40% over the next 20 years.

        BP is engaging in highly over-optimistic (from their POV) low-balling.

        Solar PV has been growing at an average rate of around 41% p.a., almost 7 times greater than BP’s forecast (doubling capacity roughly every 2 years). IIRC wind has been growing even faster. (Solar thermal is probably not going to compete, IMO)

        If that capacity growth continues, and IMO there’s every reason to think it will, Wright’s “Law” suggests that “the costs of […] utility-scale solar PV are likely to fall by around […] 40%” not over the next 20 years but in 5-8 years.

        These guys have no idea!
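        AK’s growth arithmetic is easy to check: 41% a year compounds to a doubling in almost exactly two years, and Wright’s “Law” then maps cumulative doublings to cost decline. A sketch, where the 20% learning rate is a hypothetical illustrative figure, not a measured one:

```python
import math

# Check the claimed growth arithmetic: 41% per year compounding.
growth = 0.41
doubling_years = math.log(2) / math.log(1 + growth)
print(doubling_years)      # ~2.0: capacity doubles roughly every two years

# Wright's "Law": cost falls by a fixed fraction per doubling of
# cumulative production. Hypothetical 20% learning rate assumed here.
learning = 0.20
years = 8
doublings = years / doubling_years
cost_ratio = (1 - learning) ** doublings
print(cost_ratio)          # fraction of today's cost remaining after 8 years
```

        Under these assumptions the cost ratio drops to about 0.41 after eight years, i.e. a 40% decline arrives well inside the 5-8 year window claimed above rather than in 20 years.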

      • BP is engaging in highly over-optimistic (from their POV) low-balling. Solar PV has been growing at an average rate of around 41% p.a., almost 7 times greater than BP’s forecast (doubling capacity roughly every 2 years). IIRC wind has been growing even faster. (Solar thermal is probably not going to compete, IMO)

        Their projections don’t include solar of any kind, AK, good catch. At least not explicitly; they have a “renewables” category, but they also have hydro as its own bucket. From this I might infer that “renewables” means liquid fuels, and I’ve never seen it as in big oil’s best interest to lag in development of those — they will be the ones best positioned to manufacture them when they become competitive. Way I see it, it just hasn’t been in their best interests to hasten the development because obviously their near- and mid-term margins are going to be better on fossil fuels. All they need be is slightly ahead of the curve of everyone else.

        It might be an interesting exercise to see how bullish/bearish they are on their projections vs. say, Exxon. But it’s Sunday and I’m feeling lazy … and May Day to boot. Surely there’s something neopagan going on outside for me to be doing instead ….

  27. Judith says:

    I regard it to be a top priority for the IPCC to implement formal, objective procedures for assessing consensus and uncertainty. Continued failure to do so will be regarded as laziness and/or as political protection for an inadequate status quo.

    Cooke says:

    The IPCC does not do research and cannot commission uncertainty studies; it can only report on what has been done by others. However, the semantics of uncertainty that the IPCC has adopted and published as guidance for lead authors is unhelpful and ultimately insufficient. Even though uncertainty qualifiers, such as “likely” and “confident,” are given a precise meaning, they cannot be propagated through a chain of reasoning and, more importantly, they encourage defective reasoning under uncertainty.

    So let’s recognize that there’s a systemic problem, assign blame to a part of the system that isn’t empowered to solve the problem, and declare an impossible standard to be met by that part of the system.

    And thus, we can walk away feeling justified in concluding “laz[iness]” and/or “political protection[ism].”

    Confirmed biases are confirming.
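    Cooke’s complaint that qualifiers such as “likely” cannot be propagated through a chain of reasoning has a simple numerical illustration. The IPCC’s calibrated language defines “likely” as probability of at least 66%; assuming independent steps combined by the product rule, two “likely” links already drop below the threshold:

```python
# IPCC calibrated language: "likely" means probability >= 0.66.
# If A is likely, and B given A is likely, the labels alone only
# guarantee P(A and B) >= 0.66 * 0.66, which is no longer "likely".
# (Assumes the steps combine by the product rule.)

LIKELY = 0.66

p_two = LIKELY * LIKELY
p_three = LIKELY ** 3

print(round(p_two, 4))    # 0.4356: two chained "likely" steps, below 0.66
print(round(p_three, 4))  # 0.2875: three steps and the bound is under 30%
```

    The verbal label gives no rule for the chain; a numerical distribution does, which is the case for formal uncertainty quantification.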

    • dogdaddyblog

      The IPCC can set standards for research it uses.

    • Steven Mosher

      That’s a fair enough comment.

      On one hand the IPCC states that it doesn’t do science, just summaries.
      On the other hand it produces consensus opinions. That is a science, and here I use science in its broadest sense.

      There seems to be a choice on the table.

      A) Stop making consensus statements and just do literature reviews
      (you can see examples in many fields where science or studies are just reviewed or canvassed).
      B) Keep making consensus judgments and adopt some framework (repeatable and traceable).

      I would opt for B.

      • I would opt for B’ – along the lines of incorporating and formalizing the input of people whose area of specialty is assessing risk in the face of uncertainty. To the extent that repeatable and traceable can be achieved, so be it. I’m not sure that either is fully practical – in particular “repeatable,” or that focusing on “repeatable” always brings the best return on effort.

        http://fivethirtyeight.com/features/failure-is-moving-science-forward/

        http://fivethirtyeight.com/features/fivethirtyeight-roundtable-how-scientific-scandal-can-become-scientific-progress/

      • David Springer

        People with expertise in assessing risk in the face of uncertainty.

        Oil company executives? Only highly successful ones of course that have navigated through the uncertain waters of the energy business and come out on top.

        Thanks for your support!

      • I would opt for B’ –

        (1) “Keep making consensus judgments and adopt some framework (repeatable and traceable)”

        (2) “along the lines of incorporating and formalizing the input of people whose area of specialty is assessing risk in the face of uncertainty.”

        Establishing clear metrics to accurately summarize whether the “climate” is improving or worsening should be a priority. Generally, temperature is not the primary driver.

      • The problem is that no one will be given any options.

    The IPCC states that it does not perform any research – and that might be right. I agree with Mosher that the IPCC is doing science. By selecting, assessing, interpreting and summarizing scientific work, the IPCC does science. No matter what they say, or whether the method is proper or not.

    Producing a consensus opinion is not a scientific method – neither is producing low, medium and high levels of confidence. Nobody has been able to quantify a relationship between consensus or confidence and the probability that a hypothesis or theoretical system is true or accurate. The striving for consensus is a political method – not a scientific method. In the guise of science, the IPCC follows a political method.

    Without saying I would vote for A, I would certainly not vote for option B. But I have not been given the possibility to vote – nobody has. The choice of method has been made for us all by unelected, appointed bureaucrats in the United Nations – and unelected, appointed leaders of the IPCC. Among many others, I’m not happy with that choice.

    Hence, besides the scientific problem with their choice of method, I find it ironic given that the United Nations is supposed to be concerned about human rights; the Universal Declaration states:
    Article 21. (3) The will of the people shall be the basis of the authority of government; this will shall be expressed in periodic and genuine elections which shall be by universal and equal suffrage and shall be held by secret vote or by equivalent free voting procedures.

    The United Nations and its climate panel, the IPCC, have an enormous influence on governments. However, the United Nations has not given me the right to express my will on the method which has formed the basis for huge changes for humanity – there have been no alternatives and no elections.

  28. David Wojick

    Speaking of experts, we have two new scary reports from NAS, one abrupt and the other extreme:
    http://us4.campaign-archive2.com/?u=eaea39b6442dc4e0d08e6aa4a&id=8f53ae6725&e=1fb63d8b69
    What you get depends on the experts you pick. Every lawyer knows this.

    Also an interesting webinar and report on needed research toward seasonal forecasts.

  29. David Wojick

    Regarding some of the discussion above, this blog could be mined to build a pretty good issue tree of the scientific issues. But the pieces are scattered here and there. In addition we have the policy debate, the energy debate, the political debate, the debate debate (why it exists, etc.), plus a lot of general bad mouthing, all mixed up together. A grand dynamic show, all things considered.

  30. Those who believe that the climatariat doesn’t falsify the data, or that its arguments “are made in good faith,” remind me of this passage from Reinhold Niebuhr:

    The inevitable hypocrisy, which is associated with all of the collective activities of the human race, springs chiefly from this source: that individuals have a moral code which makes the actions of collective man an outrage to their conscience.

    They therefore invent romantic and moral interpretations of the real facts, preferring to obscure rather than reveal the true character of their collective behavior.

    Sometimes they are as anxious to offer moral justifications for the brutalities from which they suffer as for those which they commit.

    The fact that the hypocrisy of man’s group behavior…expresses itself not only in terms of self-justification but in terms of moral justification of human behavior in general, symbolises one of the tragedies of the human spirit: its inability to conform its collective life to its individual ideals. As individuals, men believe that they ought to love and serve each other and establish justice between each other. As racial, economic and national groups they take for themselves, whatever their power can command.

    — REINHOLD NIEBUHR, Moral Man and Immoral Society

    • How is it possible to have any sort of objective discussion, and not “romantic and moral interpretations of the real facts,” if one leaves out these facts?

      Letter To President Obama, Attorney General Lynch, and OSTP Director Holdren from Twenty Climate Scientists: Investigate Deniers Under RICO
      http://scienceblogs.com/gregladen/2015/09/19/letter-to-president-obama-investigate-deniers-under-rico/

      20 Attorneys General Launch Climate Fraud Investigation of Exxon
      http://ecowatch.com/2016/03/30/climate-change-investigation-exxon/

    • Steven Mosher

      “Those who believe that the climatariat doesn’t falsify the data, or that its arguments “are made in good faith,” remind me of this passage from Reinhold Niebuhr:”

      That’s funny. A quote from a nut proves a factual case of falsifying data.

      “Those who believe that the denialists don’t falsify the data, or that their arguments “are made in good faith,” remind me of this passage from Reinhold Niebuhr:”

      “The inevitable hypocrisy, which is associated with all of the collective activities of the human race, springs chiefly from this source: that individuals have a moral code which makes the actions of collective man an outrage to their conscience.

      They therefore invent romantic and moral interpretations of the real facts, preferring to obscure rather than reveal the true character of their collective behavior.

      Sometimes they are as anxious to offer moral justifications for the brutalities from which they suffer as for those which they commit.

      The fact that the hypocrisy of man’s group behavior…expresses itself not only in terms of self-justification but in terms of moral justification of human behavior in general, symbolises one of the tragedies of the human spirit: its inability to conform its collective life to its individual ideals. As individuals, men believe that they ought to love and serve each other and establish justice between each other. As racial, economic and national groups they take for themselves, whatever their power can command.”

      Look mom! I can prove something by quoting a guy

      • Stephen Mosher,

        Reinhold Niebuhr “a nut”?

        Is that what passes for argumentation on Planet Green these days?

        There’s a tad more to intelligent and informed debate than ad hominem.

      • Steven Mosher

        Glenn its pretty simple.

        1. Either Reinhold Niebuhr thought his comment applied to climate science, in which case
        a) you are using his quote fairly
        b) he is a nut.
        2. Reinhold Niebuhr never intended his comment to be applied to climate science, in which case
        a) you are abusing his quote
        b) he is a great thinker

        Choose:

      • Stephen Mosher,

        The ad hoc rescues don’t cut it with me.

        But I do agree on one thing, it is “pretty simple.”

        Either the climatariat falsified data, suppressed data, manipulated data, massaged data, and made arguments that were in bad faith, or it didn’t.

        These charges are of an objective nature. Why some people deny them is a topic of interest, but it has nothing to do with whether the climatariat committed them or not.

      • Steven: Glenn Stehle has the brain power of Mike Flynn with better google skills. I am surprised you actually read his posts sufficient to Fisk them.

      • Horst Graben,

        More ad hominem?

        That seems to be pretty much the depth and the breadth of argumentation on Planet Green.

      • It’s not ad humperdink if it’s true. Nice to see you using your own words, Glenn. Please give us greenies examples of objective nature evidence regarding your climatariat conspiracy theory. However, I thought objective nature was a science cult construct, but it is hard to keep up with the latest jibber-jabber from word-slayers.

        You do know that Mosher had pretty much the same suspicion of the catastrophists regarding GMT during the previous decade, right? Then he got the data hissown self and fingered it out to be pretty much right. Lots of folks have done this, it is old news.

      • David Springer

        Horsh, get a clue. Mosher is a dipsh*t without a graduate degree of any sort, an English BA of all things, who wants to play scientist. Richard Muller is entertaining that fantasy because Mosher works for him for free. You’re a real tool yourself, by the way.

      • David Springer

        *That* is ad hominem above. Plus some ad hockeystick thrown in on the side. And it doesn’t stop being ad hominem just because it’s true. Start using a dictionary dummy.

      • David: Giving a $hit about a college degree is the type of eurotrash class-envy logic that keeps those countries dependent on American elbow grease and know-how. One comes to expect that sniveling dribble from the Willard’s and Tol’s of the blogosphere. Real men don’t need no stinking merit badges.

        How’s that miracle biofuel project you are working on going? Still mulling it over in your head after a couple Ballchain Blasters?

      • > Giving a $hit about a college degree is the type of eurotrash class-envy logic that keeps those countries dependent on American elbow grease and know-how. One comes to expect that sniveling dribble from the Willard’s […] of the blogosphere. Real men don’t need no stinking merit badges.

        I don’t think you could find a quote from me that would meet your expectation, Geo. Glad to see that real men need to snivel at eurotrash class-envy.

        An ad humperdink is still an ad humperdink even if it’s true, BTW.

      • David Springer

        Horstchit, you mental midget. Fisk should not be capitalized when used as a verb.

      • davideisenstadt

        geez mosh.
        your girlfriend leave you?
        the snark is just sad.

      • Sorry Willard, you are correct. Evoking your name for effect (a blog boogieman to get a reaction from the sceptics) without looking at the twenty-seven 8 by 10 colored glossy pictures with circles and arrows and a paragraph on the back of each one explainin’ what each one was. I used Tol’s the same as well. However, I do recall he likes to mention working with the best people, Nobel Prize winners, etc.

      • David Springer

        Horsh I agree about the merits or lack thereof conferred by a doctorate. That said you don’t get to use the title “scientist” without the sheepskin. This is just a common measure of respect for the work that went into earning the doctorate. Mosher wants the respect without putting in the work. I object and will continue to object. Mosher merits no respect, has not earned the title, yet he is using it. He’s a tool’s tool.

      • David Springer,

        Anybody can Google “who is Steven Mosher sales and marketing” and get the skinny on Mosher.

        What is interesting is that Mosher felt it was necessary to cultivate the pretense of being a scientist, and then Horst Graben, in defending Mosher, turns around and charges “Giving a $hit about a college degree is the type of eurotrash class-envy logic.”

        With guys like Mosher and Graben you can’t win, because they constantly move the goal posts, changing their philosophy to whatever happens to be expedient at the moment to win the argument.

        It’s an orgy of cognitive dissonance.

      • Glenn, like the warmunistas, is mystified, befuddled and confused by multiple working hypotheses. There are no goal posts; these are just imaginary anchors used as pacifiers by errand boys sent by grocery clerks to collect a bill.

      • This morning Washington journalists most likely feel the same way.

        http://www.breitbart.com/video/2016/04/30/obama-to-press-you-have-responsibility-to-question-thanks-for-working-side-by-side-with-me/#disqus_thread

        Judging from the comments there are lots of people who also feel the same way but different.

      • David Springer

        Ballchain Blaster? I had to look it up. I haven’t had a drink in months. It appears I’m living rent-free inside of Horst’s pointy little head. There’s evidently a party happening in there and Horst is tending bar. LOL

  31. This may be a silly question – but anyhow. If anyone could enlighten me on the following it would be appreciated.
    “Bayesian updating is the correct way to learn on a likelihood and prior distribution, but it does not mean that the result of the learning is valid. ”

    I would believe that without data the prior distribution is just a guess and hence of little worth. And whenever data are available, the prior distribution is also of little worth. Why bother with prior distributions?
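The standard answer can be shown in a few lines. A minimal sketch (coin-flip numbers invented; conjugate Beta-Binomial updating, not anything specific to the paper under discussion): with little data the prior matters, with much data its influence washes out.

```python
# Beta-Binomial conjugate updating: a hypothetical coin with unknown
# bias p. Two very different priors converge to nearly the same
# posterior mean once data accumulate.

def posterior_mean(alpha, beta, heads, tails):
    # Beta(alpha, beta) prior + Binomial likelihood -> Beta posterior,
    # whose mean is (alpha + heads) / (alpha + beta + heads + tails).
    return (alpha + heads) / (alpha + beta + heads + tails)

# Few data: the prior visibly shifts the answer.
print(posterior_mean(1, 1, 7, 3))     # flat prior
print(posterior_mean(10, 10, 7, 3))   # strong prior centred on 0.5

# Much data: the two priors give nearly identical posteriors.
print(posterior_mean(1, 1, 700, 300))
print(posterior_mean(10, 10, 700, 300))
```

With 10 flips the two priors give noticeably different posterior means; with 1,000 flips they nearly coincide. That is the usual justification: the prior is a starting guess whose weight shrinks relative to the data, so "bothering" with it costs little when data are plentiful and is unavoidable when they are not.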

  32. Re: Expert judgement & consensus, 4/28/16:

    Questioning expert judgement and the consensus seeking process are most promising signals. These are signposts, not on the road to fixing their shortcomings, but on the road to the realization that these notions play no significant role in real science.

    Just ahead in the road is a fork. Straight ahead is Popper’s path of the deconstruction of science. He formally deleted definitions (relying on his own set as he needed). He deleted causation and causality. He replaced strict objectivity, meaning models with predictive power, with his three-prong intersubjectivity of peer review, publication, and consensus, all within a closed community. He restored induction which Bacon had replaced with true induction, i.e., deduction via Cause & Effect, and probabilistic to boot.

    To be generous from the real science perspective, two versions of science occupy center stage today. It’s Popperists vs. Baconites.

    And to be sure, don’t forget Popper’s fantastic falsification criterion. He created this out of a misperception that scientific propositions were logic valued, that is, true/false statements. More particularly, he deduced that they were universal generalizations. He was not alone in this mistake, it being shared in his time by Wittgenstein, Schlick, and the body of the Vienna Circle. Real scientific statements are specifications of experiments, mapping of facts onto facts, of all relevant existing measurements onto future measurements.

    Expert judgment and consensus are about voting, the equivalent of Academy Awards and recognition in other mutual admiration societies.

    • “He replaced strict objectivity, meaning models with predictive power, with his three-prong intersubjectivity of peer review, publication, and consensus, all within a closed community.”
      From where in his writings do you get that Karl Popper was a proponent of peer review, publication and consensus? This is what he actually wrote:
      “a subjective experience, or a feeling of conviction, can never justify a scientific statement, and that within science it can play no part except that of an object of an empirical (a psychological) inquiry. No matter how intense a feeling of conviction it may be, it can never justify a statement. Thus I may be utterly convinced of the truth of a statement; certain of the evidence of my perceptions; overwhelmed by the intensity of my experience: every doubt may seem to me absurd. But does this afford the slightest reason for science to accept my statement? Can any statement be justified by the fact that Karl Popper is utterly convinced of its truth? The answer is, ‘No’; and any other answer would be incompatible with the idea of scientific objectivity.” – Karl Popper

      “He restored induction which Bacon had replaced with true induction, i.e., deduction via Cause & Effect, and probabilistic to boot.”
      From where do you get that Karl Popper restored induction? To my knowledge his method does no such thing:
      “The theory to be developed in the following pages stands directly opposed to all attempts to operate with the ideas of inductive logic. It might be described as the theory of the deductive method of testing, or as the view that a hypothesis can only be empirically tested—and only after it has been advanced.” – Karl Popper

      “And to be sure, don’t forget Popper’s fantastic falsification criterion. He created this out of a misperception that scientific propositions were logic valued, that is, true/false statements.”
      From where do you get this? This is what Karl Popper actually wrote:
      “Every scientific theory implies that under certain conditions, certain things will happen. Every test consists in an attempt to realize these conditions, and to find out whether we can obtain a counter-example even if these conditions are realized; for example by varying other conditions which are not mentioned in the theory. This fundamentally clear and simple procedure of experimental testing can in principle be applied to probabilistic hypotheses in the same way as it can be applied to non-probabilistic or, as we may say, for brevity’s sake, ‘causal’ hypotheses. Tests of the simplest probabilistic hypotheses involve such sequences of repeated and therefore independent experiments – as do also tests of causal hypotheses. And the hypothetically estimated probability or propensity will be tested by the frequency distributions in these independent test sequences. (The frequency distribution of an independent sequence ought to be normal, or Gaussian; and as a consequence it ought to indicate clearly whether or not the conjectured propensity should be regarded as refuted or corroborated by the statistical test.)” – Karl Popper

      • No such thing as the innocent eye or passive learning.
        Neuro-scientists (David Eagleman video) and art
        historians (Ernst Gombrich) demonstrate it; human
        laughter implies it (Henri Bergson). Whenever we
        receive a visual impression, we react by classifying it,
        one way or another. We – just – can’t – help – it.

        So it seems there’s no such thing as inductive
        learning (Hume’s problem), the passive reception of
        sense data, as Karl Popper argues in his book,
        ‘Objective Knowledge: An Evolutionary Approach.’
        (Oxford 1979.) Popper calls induction ‘the bucket
        theory of learning,’ and formulates instead, a
        deductive ‘search light theory’ of active learning.

        Popper questions the ‘bucket theory’ assumption in
        theory of knowledge and scientific method; the postulate
        of an unbiased eye demands the impossible. The
        problem of what comes first, the conjecture / hypothesis
        or the observation is somewhat of a chicken and egg
        scenario. Bucket theory asserts observation first, search
        light theory asserts the disposition to act comes first. An
        observation, says Popper, is always preceded by a
        particular interest or question, however basic, or by
        some problem within the horizon of our expectations, for
        example, a baby picking up a piece of paper from the
        floor and putting it in its mouth, the unspoken question
        being ‘is this part of my food stuff, yes – no?’ Even
        Pavlov’s dog responding to the dinner bell it’s heard before.

      • “Good tests kill flawed theories; we remain alive to guess again.” – Karl Popper

      • If there is one thing unifying those imposing flawed theory on others – it is not left vs. right, conservatism vs. liberalism, greens vs. anything else – my guess is that noble reasons and inductivism are the root of it.

      • dogdaddyblog

        Science or Fiction: I assume you are saying that [place your malefactor’s name], using “noble reasons,” adjusts “objective facts” such that an unsuspecting outsider (powerful decision maker?) using “inductivism” would come to a “flawed theory” that would benefit [place your malefactor’s name]’s agenda.

        Dave Fair

      • That’s definitely one version. But I even think it can happen in an environment where only noble ideas are present.

      • dogdaddyblog

        Science or Fiction: Your “environment where only noble ideas are present” may seem innocuous, but it might get me burned at the stake at the extreme. The French Revolution should be a caution.

      • Fixated on Popper, Science or Fiction, 5/3/16 @ 1:04 am, shares this overworked, twice counterfeit gem:

        “Good tests kill flawed theories; we remain alive to guess again.” – Karl Popper.

        While superficially representing Popper’s falsification, he derived that notion from this erroneous presumption:

        Scientific theories are universal statements. Popper (1934/1959) LogSciDisc p. 37.

        Valid or not, the Good tests quotation is Popper out of context, and likely he never said it. The best Wikiquote could do was attribute it to My Universe : A Transcendent Reality (2011) by Alex Vary, but that’s a dead end since Vary attributes it to Popper with no citation.

        The second counterfeit part is that in science good tests reinforce theories, which is contrary to Popper as well as his rival for the admiration of the Vienna Circle, Wittgenstein. What Popper actually did say was this:

        I am an opponent of pragmatism as a philosophy of science … . Popper, K., “Objective Knowledge: An Evolutionary Approach, 1979, Ch. 8, A Realist View of Logic, Physics, and History, §4 Realism in Logic, p. 311.

        Also in the category of things actually said is this from Ludwig W.:

        The events of the future cannot be inferred from those of the … present. Superstition is the belief in the causal nexus. Wittgenstein Tractatus (1922) ¶5.1361.

        Feynman, always a favorite here, actually said this:

        We must, and we should, and we always do, extend as far as we can beyond what we already know those things, those ideas that we’ve already obtained; we extend the ideas beyond their [domain]. Dangerous, yes; uncertain, yes; but the only way to make progress. It’s necessary; it makes science useful, although it’s uncertain. It’s only useful if it makes predictions. It’s only useful if it tells you about some experiment that hasn’t been done; it’s no good if it just tells you what just went on. So it’s necessary to extend the ideas beyond where they’ve been tested. Feynman (11/19/64), Seeking New Laws, p.6/10.

        Popper frequently recognized predictions to be a part of science. He just found them useless. He observed correctly, though a bit exaggerated,

        The ‘principle of causality’ is the assertion that any event whatsoever can be causally explained—that it can be deductively predicted. Popper LogSciDisc (1934/1959) p. 39.

        Couple that with this passage quoted previously

        I shall, therefore, neither adopt nor reject the ‘principle of causality’; I shall be content simply to exclude it, as ‘metaphysical’, from the sphere of science. Popper (1934/1959) p. 39.

        In revising science, Popper thus cast out predictions.

        All this Popper stuff has proved a distraction here. It is merely background for those who believe that Publish or Perish is part of the genuine scientific method. We can leave Popper out if we recognize two things:

        • Science is solely about making predictions that are better than chance.

        • The criteria of (1) falsification, (2) peer review, (3) publication, (4) consensus forming, and (5) societal concerns, in any combination, do not require valid predictions.
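The "better than chance" criterion can be made operational. A hedged sketch (hypothetical forecaster and numbers, a one-sided binomial test; nothing here is from the comment itself): count correct yes/no forecasts and ask how likely that hit rate would be under pure chance.

```python
from math import comb

def p_value_better_than_chance(hits, trials, p_chance=0.5):
    # One-sided binomial test: probability of observing at least
    # `hits` successes in `trials` attempts if the forecaster were
    # merely guessing with success probability p_chance.
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical forecaster: 16 correct out of 20 yes/no forecasts.
# A small p-value suggests skill beyond chance; a large one does not.
print(p_value_better_than_chance(16, 20))
print(p_value_better_than_chance(10, 20))
```

Sixteen hits out of twenty yields a p-value under one percent, while ten out of twenty is entirely compatible with guessing; this is the simplest sense in which a prediction record can be checked against chance.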

      • That was indeed a good test, my idea that the quote could be traced to Popper has ceased to be.

        I think there is more to say about science than “Science is solely about making predictions that are better than chance”.

        What I would be interested in hearing is how do you deal with
        – the problem of induction
        – quantification of the probability that a hypothesis is true
        And most important, and maybe the reason we are here, what’s wrong with this document:
        Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on
        Consistent Treatment of Uncertainty

        These questions are independent of Karl Popper. The reason why I’m fixated on Karl Popper is that he provides thorough arguments which are relevant to these questions.

        I would say transparency and scrutiny are important to the scientific method; publishing and peer review do not ensure that.

      • dogdaddyblog

        Science or Fiction: The real issue is trust.

        One cannot understand all of the other guys’ businesses, especially as it applies to nebulous concepts of falsification of what they are up to. For most of us everything was all well and good in the climate business(es) until contrary information came in, as opposed to the fictions the other guys were saying it was, is and will be (pick your metric). We don’t need to understand all the ins and outs of climate science to apply reason in detecting falsehoods.

        For some, clearly not a majority, trust in the climate guys is gone. Assuming the climate continues to vary as in the past, until there are impartial and non-punitive “truth commissions” the climate guys will continue their narrative to an ending in open and destructive conflict. Trust is gone. Argue truth, not logic.

        Dave Fair

      • @ Dave Fair. Sorry that it took me so long. Sometimes I take pleasure in thinking things through a bit.

        I think logic and scientific method will continue to be relevant until the day the food and drug administrations around the world will adopt the methods from the climate industry to ensure safety of medical treatment.

        The United Nations is about to treat 7 billion people prophylactically against the effects of hypothesized climate change; the side effect is to deprive many of affordable energy. Energy poverty is already a huge problem, exemplified by this article: 54 million Europeans must choose between eating and heating.

        At the moment the scientific methods of the climate industry are far from becoming a standard. Here is one example of an existing standard the climate industry fails to meet:
        §314.126   Adequate and well-controlled studies
        (b) An adequate and well-controlled study has the following characteristics:
        (5) Adequate measures are taken to minimize bias on the part of the subjects, observers, and analysts of the data. The protocol and report of the study should describe the procedures used to accomplish this, such as blinding.

        The Principle governing IPCC is to strive for consensus (Article 10). That is certainly not an adequate measure to minimize bias. Groupthink is a well-known fallacy. And this article shows that indeed: IPCC was heavily biased from the very beginning.

      • Among my own writings I found the following summary quote:
        “Knowledge on the other hand is characterized by the ability to repeatedly predict a particular range of outcome for a particular set of conditions.”
        Seems to me that we are pretty much in line on the aim of science at least.

      • Science or Fiction, 5/4/16 @ 5:14 pm, said he

        What I would be interested in hearing is how do you deal with [¶] – the problem of induction, [¶] – quantification of the probability that a hypothesis is true and what’s wrong with this document: [¶] Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainty

        Of induction, Popper says, amongst many other almost consistent expressions,

        In the eyes of the upholders of inductive logic, a principle of induction is of supreme importance for scientific method: “… this principle … determines the truth of scientific theories. To eliminate it from science would mean nothing less than to deprive science of the power to decide the truth or falsity of its theories. Without it, clearly, science would no longer have the right to distinguish its theories from the fanciful and arbitrary creations of the poet’s mind.” Bold added, Popper (1934,1935) LogSciDisc, p. 4.

        Scientific propositions are neither true nor false. In Modern Science, scientific propositions relate facts to facts statistically, so the second question re the quantification of the probability that a hypothesis is true is meaningless. In Post Modern Science, the question is moot since all that is required is subjective agreement within the relevant, closed community. In short, induction doesn’t apply to science.

        IPCC’s Guidance Note includes this:

        5. Consider that, in some cases, it may be appropriate to describe findings for which evidence and understanding are overwhelming as statements of fact without using uncertainty qualifiers. Id., p. 2.

        This reveals two things. (1) IPCC views uncertainty not just as subjective uncertainty, but exclusively so. And (2), it views objectivity as overwhelmingly certain.

        Modern Science tolerates no subjectivity in its models, and as stated earlier, it views all facts as noisy, that is, every fact lies within a quantifiable uncertainty cloud. Those clouds are the source of objective confidence limits.

        Uncertainty is certain in the Real World. It is manifest in the law where jurors are instructed to decide criminal conduct on the basis of beyond a reasonable doubt. But that is impossible to define explicitly for jurors because it is subjective. It is manifest in scientific decision making under the principles of minimizing risk, where risk is the expected value of the result of a decision. In those cases each possible outcome requires quantification.

        Risk expressions are like any other scientific proposition. They state values for dependent variables as standing in certain determinate, measurable relationships to independent variables. However, a principle of science is that every measurement has an error, an uncertainty cloud. Consequently, input measurement noise conveys an uncertainty cloud upon the dependent variables. That is what science does to achieve what S or F, 3:21 am, recently discovered in his notes:

        “Knowledge on the other hand is characterized by the ability to repeatedly predict a particular range of outcome for a particular set of conditions.”
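The point that "input measurement noise conveys an uncertainty cloud upon the dependent variables" can be illustrated numerically. A Monte Carlo sketch (the model y = a·x² and all numbers are invented for illustration): sample the input within its measurement error and watch the cloud propagate to the output.

```python
import random
random.seed(0)  # deterministic for reproducibility

# Hypothetical model: y = a * x**2, with x measured as 2.0 with a
# standard deviation of 0.1. Propagate the input uncertainty cloud
# through the model by sampling.
a = 3.0
samples = [a * random.gauss(2.0, 0.1)**2 for _ in range(100_000)]

mean_y = sum(samples) / len(samples)
sd_y = (sum((y - mean_y)**2 for y in samples) / (len(samples) - 1)) ** 0.5

print(mean_y)  # close to a * (2.0**2 + 0.1**2) = 12.03
print(sd_y)    # close to the first-order estimate 2*a*x*sigma = 1.2
```

The spread of the output samples is the "uncertainty cloud" of the dependent variable, and its standard deviation matches what first-order propagation of the input error predicts.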

      • Thank you very much – there are many takeaways in your reply. I think the following quote sums it up nicely:
        “Modern Science tolerates no subjectivity in its models, and as stated earlier, it views all facts as noisy, that is, every fact lies within a quantifiable uncertainty cloud. Those clouds are the source of objective confidence limits.”

        I would like to build on that quote. There is an international Guideline on expression of uncertainty – Guide to the expression of uncertainty in measurement (measurement = estimate if you like). This is the only internationally recognized standard on expression of uncertainty. Anyhow, my point is that the guide does not open up for «expert judgement». The guideline opens up for «scientific judgement» but it is quite well defined what is meant by «scientific judgement»:

        «For an estimate that has not been obtained from repeated observations, the associated standard uncertainty is evaluated by scientific judgement based on all of the available information on the possible variability. The pool of information may include
        ⎯ previous measurement data;
        ⎯ experience with or general knowledge of the behaviour and properties of relevant materials and instruments;
        ⎯ manufacturer’s specifications;
        ⎯ data provided in calibration and other certificates;
        ⎯ uncertainties assigned to reference data taken from handbooks.»

        Understandably there are quite strict guidelines for reporting of uncertainty:

        Ref.: 7.1.4
        «Although in practice the amount of information necessary to document a measurement result depends on its intended use, the basic principle of what is required remains unchanged: when reporting the result of a measurement and its uncertainty, it is preferable to err on the side of providing too much information rather than too little. For example, one should
        a) describe clearly the methods used to calculate the measurement result and its uncertainty from the experimental observations and input data;
        b) list all uncertainty components and document fully how they were evaluated;
        c) present the data analysis in such a way that each of its important steps can be readily followed and the calculation of the reported result can be independently repeated if necessary;
        d) give all corrections and constants used in the analysis and their sources.

        A test of the foregoing list is to ask oneself “Have I provided enough information in a sufficiently clear manner that my result can be updated in the future if new information or data become available?”»

        I’m uncertain about pretty much everything, but I know for sure that whenever I’m tempted to quantify uncertainty by referring solely to my expert judgement I’m about to commit a fallacy – professional suicide.
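For what it is worth, the GUM's "Type A" evaluation of standard uncertainty from repeated observations (GUM §4.2: the standard uncertainty of the mean is u = s/√n, with s the experimental standard deviation) is a few lines of arithmetic. The readings below are invented:

```python
# GUM Type A evaluation: standard uncertainty of a mean obtained from
# repeated observations. Hypothetical instrument readings.
readings = [20.12, 20.08, 20.15, 20.09, 20.11, 20.14]

n = len(readings)
mean = sum(readings) / n
# Experimental standard deviation (n - 1 in the denominator).
s = (sum((x - mean)**2 for x in readings) / (n - 1)) ** 0.5
# Standard uncertainty of the mean.
u = s / n**0.5

print(f"{mean:.3f} +/- {u:.3f}")
```

This is the fully documentable, repeatable kind of uncertainty statement the Guide's reporting list (a–d above) asks for; "expert judgement" enters the GUM only through the tightly circumscribed "scientific judgement" pool of information quoted earlier.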

      • Science or Fiction, 5/6/16 @ 7:20 pm, referenced JCGM 100:2008, Evaluation of measurement data – Guide to the expression of uncertainty in measurement, as background for uncertainty in science. JCGM is an organization of eight separate metrological organizations, including familiar agencies like ISO (International Organization for Standardization), and BIPM which provides this far-reaching definition:

        Metrology is defined by the International Bureau of Weights and Measures (BIPM) as “the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology.” Wikipedia>Metrology.

        Metrology is concerned with accuracy, precision, reliability, and traceability of measurements, all critical to the practice of measurements in science. Metrologists are the people ultimately responsible for the calibration of scientific instruments, as when a scientist visits the strange man in the dark basement room for a tune up of his instrument. Metrology is the art behind calibration of instruments, for example, that measure the concentration of CO2 in a gas sample, of thermometers, or of chronometers.

        However, metrology is not the calibration of global CO2 measuring stations, or of various proxy measurements investigators use to bring records into some preconceived agreement with some specially chosen baseline. Metrology is responsible for those four attributes of measuring, but it is only indirectly responsible for prediction, the crux of Modern Science, abandoned in Post Modern Science in the deconstruction of MS.

        Some have used metrology to predict, as in facial metrology to estimate gender, and conversely, investigators have used the art of prediction to aid metrology, e.g., Mestre, M., H. Abou-Kandil, Linear prediction of signals applied to dimensional metrology of industrial surfaces, Measurement, V.11, No.2, April, 1993. These are relatively recent considerations, which perhaps accounts for this claim:

        The concept of uncertainty as a quantifiable attribute is relatively new in the history of measurement … . JCGM 100:2008, ¶0.2.

        A midpoint for uncertainty in science is the statistics of R.A. Fisher (1890–1962), knighted for his work in genetics, and who invented and settled much of modern statistics in order that he might account for what he was observing. He is famous for his 1935 book The Design of Experiments, for inventing the concept of maximum likelihood estimation, the probability distribution that bears his name, and much more.

        Metrology is truly ancient, predating Aristotle by more than a couple of millennia. It thrives today, all but immune to Bacon’s 17th C creation of Modern Science, to Popper’s 20th C abortion of it, and to the evolution of Fisher’s work into today’s Detection and Estimation Theory, the theory that brought us radar, computers, satellite communications, and cell phones, based not just on Fisher’s work but with two centuries of most notable contributions from Bayes, Legendre, Gauss, Neyman, Pearson, Kolmogoroff, and Wiener.

        Metrology bears the same relationship to science as do language, logic, and statistics. These are legs of the stool of the scientific method that defines the existence of science, but none is science in the sense of predicting the real world.

        The uncertainty surrounding scientific facts is a different species than the uncertainty in metrology. It begins with measurement errors, but it travels a very long way from that point. It accumulates background noise, interference, distortions, approximations, data abuse, and in the end often is the uncertainty in a parameter that can only be estimated and not directly observed. As in temperature and entropy, encoded signals hidden in noise, parameters of pieces of subatomic particles, or on the grander scales, the fate of the universe. Oh, and global climate.

      • «Metrologists are the people ultimately responsible for the calibration of scientific instruments, as when a scientist visits the strange man in the dark basement room for a tune up of his instrument.»

        Measurement is a wider concept than indicated by this quote. My thesaurus provides the following synonyms for measurement: «quantification, quantifying, computation, calculation, mensuration; estimation, evaluation, assessment, appraisal, gauging; weighing, sizing»

        The introduction to the “Guide to the expression of uncertainty in measurement” (JCGM 104:2009) propounds that:
        «A statement of measurement uncertainty is indispensable in judging the fitness for purpose of a measured quantity value. .. Measurement uncertainty is a general concept associated with any measurement and can be used in professional decision processes as well as judging attributes in many domains, both theoretical and experimental. ..
        Measurement is present in almost every human activity, including but not limited to industrial, commercial, scientific, healthcare, safety and environmental. Measurement helps the decision process in all these activities. Measurement uncertainty enables users of a measured quantity value to make comparisons, in the context of conformity assessment, to obtain the probability of making an incorrect decision based on the measurement, and to manage the consequential risks.»

        It is also clear that the IPCC in general presents uncertainty ranges. This is clearly seen in the Summary for Policymakers, where the IPCC presents uncertainty ranges for more or less every figure. But the IPCC never refers to the Guide to the expression of uncertainty in measurement, does not present uncertainty in accordance with this Guide, and does not fully report the information that would be relevant to its uncertainty estimates and how it arrived at them.

        Here is one (laughable) example of how the IPCC regards uncertainties in models:
        «Box 12.1 | Methods to Quantify Model Agreement in Maps
        The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability. ..There is some debate in the literature on how the multi-model ensembles should be interpreted statistically. This and past IPCC reports treat the model spread as some measure of uncertainty, irrespective of the number of models, which implies an ‘indistinguishable’ interpretation.»
        To me this is indistinguishable from inductivism.
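        The “model spread as uncertainty” interpretation in Box 12.1 is easy to make concrete. A minimal sketch in Python, using entirely hypothetical warming values for four made-up models (the names and numbers are illustrative only):

```python
import statistics

# Hypothetical end-of-century warming projections (deg C) from a made-up ensemble
model_runs = {
    "model_a": 2.1,
    "model_b": 2.8,
    "model_c": 3.4,
    "model_d": 2.5,
}

values = list(model_runs.values())
ensemble_mean = statistics.mean(values)     # the "average response"
ensemble_spread = statistics.stdev(values)  # model spread, read as uncertainty
```

        Note that nothing in this calculation distinguishes genuine physical uncertainty from mere disagreement among models, which is precisely the interpretive problem the Box concedes.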

        The argument I wish to put forward is that the Guide to the expression of uncertainty in measurement is relevant for the IPCC; that it is the only internationally recognized standard for the expression of uncertainty; that the IPCC failed to report uncertainty in accordance with this standard; and that the IPCC failed to provide all the information that would be relevant to its uncertainty estimates. The IPCC should base its estimation and reporting of uncertainty on the Guide rather than on its own practice, which seems to have been made up in haste.

        Without saying anything in this reply about the other legs the IPCC’s work rests on, I will say there are serious shortcomings in the way the IPCC reports uncertainty for its estimates.

      • Science or Fiction, 5/8/16 @ 6:24 pm said, Measurement is a wider concept than indicated by [my] quote about metrology, equating metrology to the whole of measurement. S or F’s view is shared by Professor Eran Tal, Dept. of History and Philosophy of Science, U. Cambridge, author of the recent Stanford Encyclopedia of Philosophy entry, Measurement in Science, 6/15/2015. Supporting the conclusion that he is a metrologist are the facts that he authored Why Philosophers Care about Metrology and The Science of Measurement: Towards an Epistemology of Metrology, and that he relies on a document from the Joint Committee for Guides in Metrology (JCGM), Vocabulary in Metrology (VIM). Tal SEP (2015) §6. Also, he boasts,

        Information-theoretic accounts of measurement were originally developed by metrologists with little involvement from philosophers. Id.

        Perhaps so, but with a lot of involvement from scientists wherever it might be happening.

        JCGM first published the VIM in 1993, and its companion, the Guide to the expression of uncertainty in measurement (GUM), in 1995. These were commissioned by the highest authority in metrology, the Comité International des Poids et Mesures (CIPM), in 1977. The work replaced what had been metrology’s Traditional Approach, aka the Error Approach or True Value Approach to measurement error, with the Uncertainty Approach. Not only did these original publications postdate the development of information theory by half a century, but neither document uses the term information theory nor cites any of its proponents or founders, for example Claude Shannon. To be sure, no reference to metrology is known in any curriculum or text in Communication Theory, the field encompassing Information Theory and Detection and Estimation Theory. Moreover, depending on the parsing of the quotations, the GUM appears to contradict Tal where it says,

        The concept of uncertainty as a quantifiable attribute is relatively new in the history of measurement … . Id. §0.2.

        Tal (2015) confesses to confusion among philosophers, and presumably metrologists, as to the definition and application of measurement, but the entry still places measurement as a hallmark of the scientific enterprise and a privileged source of knowledge relative to qualitative modes of inquiry. Tal SEP (2015) 1st ¶. That importance of measurement to science, and its unrealized importance to climatology, is reflected in Wiley’s down-to-earth description of David Middleton’s book with the esoteric title, Non-Gaussian Statistical Communication Theory (2012):

        The book is based on the observation that communication is the central operation of discovery in all the sciences. In its “active mode” we use it to “interrogate” the physical world, sending appropriate “signals” and receiving nature’s “reply”. In the “passive mode” we receive nature’s signals directly. Since we never know à priori what particular return signal will be forthcoming, we must necessarily adopt a probabilistic model of communication. This has developed over the approximately seventy years since its beginning, into a Statistical Communication Theory (or SCT). Here it is the set or ensemble of possible results which is meaningful. From this ensemble we attempt to construct in the appropriate model format, based on our understanding of the observed physical data and on the associated statistical mechanism, analytically represented by suitable probability measures.

        Since its inception in the late ’30’s of the last century, and in particular subsequent to World War II, SCT has grown into a major field of study. As we have noted above, SCT is applicable to all branches of science. The latter itself is inherently and ultimately probabilistic at all levels. Moreover, in the natural world there is always a random background “noise” as well as an inherent à priori uncertainty in the presentation of deterministic observations, i.e. those which are specifically obtained, à posteriori.

        These citations devolve the problem of distinguishing between scientific measurement and metrology to its most primitive state, to the ultimate, Clintonesque problem inherent in language, the existence of undefinables. As the VIM says,

        In some definitions, the use of non-defined concepts (also called “primitives”) is unavoidable. In this Vocabulary, such non-defined concepts include: system, component, phenomenon, body, substance, property, reference, experiment, examination, magnitude, material, device, and signal [plus noise]. Bold added, VIM (2010) p. xii.

        Prof. Middleton recognizes this problem by offsetting his candidate primitives in quotation marks. The VIM helps with this observation:

        It should be recalled, however, that a given concept may be describable by many characteristics and only essential delimiting characteristics are included in the definition. VIM (2008) Annex A, p. 54.

        A signal then is anything that conveys any measurable (objective) knowledge relating to the characteristics of a known or an unknown thing – an object, an event, an energy, a process, or, especially, a pattern – to our senses or to an instrument.

        Again quoting from the VIM, with a few linguistic touch-ups and inserts drawn from other parts of metrology:

        In the GUM, [] definitional uncertainty is considered to be negligible with respect to [] other components of measurement uncertainty. The objective of measurement is then to establish a probability that this essentially unique value lies within an interval of measured quantity values, based on the information available from measurement [Type A or analysis, vs. Type B]. VIM (2008) Introduction.

        Science would interpret a probability as the probability distribution, metrology’s Type A as à posteriori knowledge, and its Type B as à priori knowledge.

      • @ Jeff Glassman

        I find your post very interesting. I was not aware of the field called Statistical Communication Theory. Also, it seems like I will find the works of Professor Eran Tal to be of interest.

        Your reference to Type A uncertainty as a posteriori knowledge and Type B uncertainty as a priori knowledge makes very much sense to me. It makes me think that the Guide to the expression of uncertainty in measurement might be improved by using that terminology rather than «Type A» and «Type B» uncertainty estimates (I always forget which is which). Perhaps the standard should refer to a Type A analysis as an «a posteriori uncertainty estimate» and a Type B analysis as an «a priori uncertainty estimate». There is a huge difference between these two types of uncertainty estimates. In my view an a posteriori estimate of uncertainty is based on a comparison of predictions with a sufficiently accurate reference, while a priori estimates of uncertainty are based on a mixture of experience, beliefs and hopes – I know that mine are :). I think that difference is somewhat concealed by the existing terminology.

        (For those interested – here is the distinction between the two types as explained in Guide to the expression of uncertainty in measurement: «Thus a Type A standard uncertainty is obtained from a probability density function derived from an observed frequency distribution, while a Type B standard uncertainty is obtained from an assumed probability density function based on the degree of belief that an event will occur [often called subjective probability]. Both approaches employ recognized interpretations of probability.»)
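        To make the Type A route concrete: the GUM’s standard recipe is the experimental standard deviation of the mean of repeated observations. A minimal sketch, with purely hypothetical readings:

```python
import math

# Hypothetical repeated readings of the same measurand (say, a voltage in volts)
readings = [10.02, 9.98, 10.05, 10.01, 9.97, 10.03]

n = len(readings)
mean = sum(readings) / n

# Experimental standard deviation of a single observation (n - 1 in the divisor)
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# Type A standard uncertainty of the mean
u_type_a = s / math.sqrt(n)
```

        A Type B estimate, by contrast, would assign a probability density up front – from calibration certificates, manufacturer specifications, or plain belief – with no repeated observations at all.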

        A few sources of uncertainty I find particularly worth emphasizing: there is no guarantee that the proponent of the model has defined the input and output variables properly, and there is no guarantee that the proponent has arrived at a proper model. Even an erroneous model can provide reasonable outputs – for a long while – under a limited range of conditions. Recall Alan Greenspan’s 2008 testimony: the congressional committee’s Democratic chairman, Henry Waxman, pressed him: “You found that your view of the world, your ideology, was not right, it was not working?” Greenspan agreed: “That’s precisely the reason I was shocked, because I’d been going for 40 years or so with considerable evidence that it was working exceptionally well.”

        I think that’s where ethical guidelines enter into science – and I think that is where scientific morals differ from everyday morals. Contrary to what one might believe, there is no help in looking for evidence that our ideas are correct; such evidence can easily be found even when our ideas are wrong. On the contrary, we should expose our pet theories to a struggle for survival and keep only the survivors.

      • Re Science or Fiction, 5/13/2016 @ 6:51 pm:

        Safe to say, the designations Type A and Type B are not used in science. They are metrology terms, referenced solely for comparison with the scientific terms à priori and à posteriori, respectively from reason and from experience.

        In science, à posteriori distributions are empirical. À priori distributions are reasoned from scientific principles and other models, from conjectures and hypotheses (tentatively) to theories and laws. Beliefs and hopes are the subjective junk that science displaces.

        Reliance on probability density is discouraged in favor of probability distribution. That’s pedantry, for sure, but it is supposed to remind people that mathematics can show that as the number of data points increases, the empirical probability distribution will almost always converge to the underlying distribution. No such theorem exists for the density. Going one step further, investigators should never match histograms to densities because that introduces binning errors. The preferred method is to fit distributions to the cumulative data less than or equal to each datum (i.e., bin-free). These observations provide a useful check when reviewing papers: the presence of histograms and fits to density functions is a signal to doubt the results.
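        The convergence claim is essentially the Glivenko–Cantelli theorem, and it is easy to exhibit numerically. A minimal sketch with uniform samples, where the true CDF is F(x) = x, showing the bin-free sup-distance between the empirical distribution and the truth shrinking as the sample grows (the sample sizes here are arbitrary):

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def ecdf_max_gap(n):
    """Kolmogorov-Smirnov sup-distance between the ECDF of n uniform
    samples and the true CDF F(x) = x, computed bin-free."""
    xs = sorted(random.random() for _ in range(n))
    # The ECDF jumps from i/n to (i+1)/n at the i-th order statistic
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

gap_small = ecdf_max_gap(100)     # roughly O(1/sqrt(100))
gap_large = ecdf_max_gap(10_000)  # roughly O(1/sqrt(10000)), much smaller
```

        No analogous guarantee holds for a histogram estimate of the density, which is the point of distrusting histogram fits.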

        Re properly defined inputs and outputs, science provides no criteria for selecting variables. The only test is whether the model has predictive power for what has been selected, given, of course, that the model is consistent with all facts in its domain. Models need have no particular fidelity to the real world. However, when someone’s model is shown to be invalid, such as the AGW failure to predict ECS (dT/d[CO2]_g), the model is fair game for criticism for infidelity. If carbon emissions had predicted global average surface temperature, it would have been a match point for AGW. Instead, since the empirical ECS matches the prediction at about the 3% confidence level, the model is open to criticism for such major goofs as failing to represent dynamic cloud cover (the most powerful feedback in climate, positive with respect to the Sun and negative with respect to surface warming) and failing to include the ocean’s variable carbon pump (which renders man’s puny CO2 emissions insignificant).

    • Science or Fiction, 4/29/16 @12:53 pm asked,

      From where in his writings do [I] get that Karl Popper was a proponent of peer, preview, publication and consensus?

      Not peer, preview, but peer review, a tried but untrue, mid-20th C phenomenon, post-dating most of Popper. See, for example, Fyfe, Peer review: not as old as you might think, timeshighereducation.com. Popper’s deconstruction expressly switched science from objectivity to subjectivity, saying preposterously

      I shall therefore say that the objectivity of scientific statements lies in the fact that they can be inter-subjectively tested [fn 1]:

      fn 1: I have since generalized this formulation; for inter-subjective testing is merely a very important aspect of the more general idea of inter-subjective criticism, or in other words, of the idea of mutual rational control by critical discussion. Bold added, Popper, The Logic of Scientific Discovery (1934/1959), p. 22.

      Note his use of the word control in the last phrase in bold, a power play variation on his theme not to trust scientists working alone. This is today peer review. He also wrote,

      These considerations … also throw some light, to come back to my main criticism, on the important role which co-operation, inter-subjectivity, and the publicity of method play in scientific criticism and scientific progress.

      That publicity today is called publication, and applies only to certified journals.

      Popper wrote,

      To sum up these considerations, it may be said that what we call ‘scientific objectivity’ is not a product of the individual scientist’s impartiality, but a product of the social or public character of scientific method; and the individual scientist’s impartiality is, so far as it exists, not the source but rather the result of this socially or institutionally organized objectivity of science.

      Popper urged distrust of individual scientists. In fact, S or F’s quotation (a subjective experience … scientific objectivity) is merely Popper’s rare but fair application to himself of his trust-no-one-scientist model. Instead, Popper placed the acceptability of scientific work in the hands of social organizations or institutions. This translates today as the vaunted consensus. Popper was a crank. In the terms of the late Australian philosopher David Stove, he was the lead character in an opera of recent Irrationalists in philosophy.

      Those are all things Popper actually said, too. One must be careful reading him. Popper’s writings are as loaded with contradictions as any other scripture.

      Science or Fiction Questions:

      From where do you get that Karl Popper restored induction?

      and

      From where do you get that Popper misinterpreted scientific propositions as true/false statements?

      Answer: Direct from Popper. These twin conclusions follow immediately from his falsification idea. His logic begins with his recognition, as a practicing logician, that universal generalizations could only be falsified, never proved empirically. And as he also knew, they resist empirical proof because they require infinite regression, that is, induction. In conclusion, to make falsification sensible, Popper had to claim that scientific propositions were universal generalizations:

      It is usual to call an inference ‘inductive’ if it passes from singular statements (sometimes also called ‘particular’ statements), such as accounts of the results of observations or experiments, to universal statements, such as hypotheses or theories. … The problem of induction may also be formulated as the question of the validity or the truth of universal statements which are based on experience, such as the hypotheses and theoretical systems of the empirical sciences. Popper (1934/1959 LogSciDisc) p. 4.

      Popper was not above using boot-strap logic. Scientific statements were UGs because his criterion of demarcation, or intersubjective testing, required it:

      [S]cientific statements, since they must be inter-subjectively testable, must always have the character of universal hypotheses. Popper LogSciDisc (1934/1959) p. 23.

      Thus Popper restored Aristotle’s childish induction (Bacon).

      Per S or F’s quotation, Popper at the same time was at odds with inductive logic. If he were rational, his requirement for falsification clauses would seem to be a little joke he was playing on science. The hunt is on for the first scientific model that contains its own falsification clause. Nominations are in order.

      Science or Fiction quotes Popper for his dismissal of instances of a hypothesis and his elaborations on probability, all under an introduction that twice mentions experimental certainty, once on initial conditions and the other on outcome. An axiom or principle of science is that every measurement has an error. Responsible scientific statements are never statements about certainty. In science, statements analogous to Popper’s famous All ravens are black prototype for scientific propositions are definitions, excluded by Popper:

      They all asserted that there cannot be such a thing as the correspondence between a statement and a fact. This is their central assertion. They say that this concept is meaningless (or that it is undefinable, which, incidentally, in my opinion does not matter, since definitions do not matter). Popper, Objective Knowledge (1972) p24/31.

      This is analogous to his silly dismissal of causation:

      I shall, therefore, neither adopt nor reject the ‘principle of causality’; I shall be content simply to exclude it, as ‘metaphysical’, from the sphere of science. Popper (1934/1959) p. 39.

      The sole, essential test of validity for a scientific model is its statistical predictive power, predictions which follow from candidate Cause & Effect propositions by deduction (Bacon).

      • However you arrived there, I agree with your concluding statement that:
        “The sole, essential test of validity for a scientific model is its statistical predictive power, predictions which follow from candidate Cause & Effect propositions by deduction”

        Popper said something like:
        “what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems but … exposing them all to the fiercest struggle for survival.”

        Based on Popper I would say:
        What characterizes the striving for knowledge is the manner of trying to prove wrong, in every conceivable way, the system to be tested. The aim is not to save the lives of untenable systems but to expose them all to the fiercest struggle for survival. A system is corroborated by the possibility of proving it wrong and by the severity of the tests it has been exposed to and survived – and not at all by inductive reasoning in favor of it.

      • Science or Fiction and Jeff Glassman,

        Great discussion between you two. I especially liked this passage, and believe it is true of any person who has almost achieved divinity status:

        Those are all things Popper actually said, too. One must be careful reading him. Popper’s writings are as loaded with contradictions as any other scripture.

        There seem to be two issues:

        • objectivity vs. subjectivity

        • certainty vs. uncertainty

        It seems to me that, historically speaking, there is a positive correlation between certainty and subjectivity.

      • “Popper’s deconstruction expressly switched science from objectivity to subjectivity, saying preposterously “I shall therefore say that the objectivity of scientific statements lies in the fact that they can be inter-subjectively tested”.”

        I understand this completely differently from you. I think this is best explained by Karl Popper himself:
        “The words ‘objective’ and ‘subjective’ are philosophical terms heavily burdened with a heritage of contradictory usages and of inconclusive and interminable discussions. My use of the terms ‘objective’ and ‘subjective’ is not unlike Kant’s. He uses the word ‘objective’ to indicate that scientific knowledge should be justifiable, independently of anybody’s whim: a justification is ‘objective’ if in principle it can be tested and understood by anybody.” – Karl Popper – The logic of scientific discovery

      • “Science or Fiction quotes Popper for his dismissal of instances of a hypothesis and his elaborations on probability, all under an introduction that twice mentions experimental certainty, once on initial conditions and the other on outcome. An axiom or principle of science is that every measurement has an error. Responsible scientific statements are never statements about certainty.”

        I know that, you know that and Karl Popper knew that. I think he just used the wrong word, or assigned another meaning to it. I did not understand the word “certain” the same way as you did.

        «I never quarrel about words»
        – Karl Popper

      • “This is analogous to his silly dismissal of causation”.

        Maybe that isn’t so silly. He does not dismiss cause and effect. What he seems to dismiss is the assertion that any event whatsoever can be causally explained – that it can be deductively predicted. That seems reasonable to me. There seem to be stochastic events in nature; there seems to be some randomness. What he dismisses as metaphysical is the claim that, if we knew enough, we could predict every event, as in weather or climate.

        “The ‘principle of causality’ is the assertion that any event whatsoever can be causally explained—that it can be deductively predicted. According to the way in which one interprets the word ‘can’ in this assertion, it will be either tautological (analytic), or else an assertion about reality (synthetic). For if ‘can’ means that it is always logically possible to construct a causal explanation, then the assertion is tautological, since for any prediction whatsoever we can always find universal statements and initial conditions from which the prediction is derivable. (Whether these universal statements have been tested and corroborated in other cases is of course quite a different question.) If, however, ‘can’ is meant to signify that the world is governed by strict laws, that it is so constructed that every specific event is an instance of a universal regularity or law, then the assertion is admittedly synthetic. But in this case it is not falsifiable, as will be seen later, in section 78. I shall, therefore, neither adopt nor reject the ‘principle of causality’; I shall be content simply to exclude it, as ‘metaphysical’, from the sphere of science.” Popper LogSciDisc (1934/1959) p. 39.

      • Science or Fiction, 4/30/16 @ 4:33 pm defends Popper with his own statements and rationalizations. That’s begging the question. He can’t be shown right because he said he was. His utterances are valid only in a negative critique.

        Again at 5:17 pm, S or F claims that Popper knew that every measurement has an error, saying that Popper may have just used the wrong word or assigned another meaning to it. My criticism is not about anything Popper might have said. My problem is that he, like his contemporaries who shunned him, tried, in part, to bring science back into the philosophical fold by rendering scientific propositions as true or false. That belief, extended to universal generalizations, is the origin of Popper’s falsification notion.

        Science is not about truth, or proofs, or, impliedly, certainty or always. It is about prediction from noisy facts to future noisy facts, where facts are observations reduced by measurements and compared with standards. Popper, Wittgenstein, Schlick, the Logical Positivists and the Vienna Circle, and apparently even Ernst Mach, an accomplished scientist and a founding inspiration for the Circle, didn’t get it.

      • I know that the following is both an invalid argument and an indecent proposal, however – it seems to me that you and Popper are made for each other – may I suggest that you have another look – through another set of glasses? I understand his writings very differently from what you do. The first 26 pages of The logic of scientific discovery did it for me.

      • Science or Fiction, 4/30/16 @ 7:00 pm

        Maybe that isn´t so silly. He [did] not dismiss cause and effect. What he seem[ed] to dismiss[] is the assertion that any event whatsoever can be causally explained – that it can be deductively predicted. That seems reasonable to me. There seem to be stochastic events in nature. There seem to be some randomness. What he dismiss[es] as metaphysical is the claim that if we know enough we can predict every event like in weather or climate.

        Per my quotation earlier, Popper excluded the principle of causality as metaphysical. I share at least this much with Popper: metaphysics is the trash heap of philosophy. If philosophy is chewing gum for the brain, then metaphysics is where the wrappers go. But how could he preserve Cause and Effect without causality or causation? (Usage varies. I use causation to mean the scientific principle that every effect has a cause, and causality to mean the principle that every cause must precede all its effects.) I must conclude that he dismissed C&E by dismissing his causality, baby, bathwater, and all. If elsewhere he preserved C&E, then we have him skewered for yet another contradiction.

        Neither stochastic events nor randomness exist in nature. Those concepts are either defined with respect to human knowledge, as in predictability or pattern perception, or with respect to numbers, as in probability, and the latter, too, are human inventions. This may seem like picking a nit, but it is the identical problem people, especially IPCC-types, have with chaos and nonlinearity. These are all concepts relating to man’s models of the real world, not to the real world. The real world is neither random nor chaotic nor nonlinear. Those are all excuses for the inability of IPCC-types to predict the origin of much more than their next meal.

        In reality, climate is not that hard to predict. Given enough tolerance, almost anything practical is predictable. It’s just proving quite impossible to predict that man has any measurable effect on climate.

        P.S. That is an unfair accusation because prediction is a requirement from Modern Science. Prediction is not a requirement in IPCC climatologists’ world of Post Modern Science.

      • Thank you for a good discussion; I need to look more into Popper on causality. Regarding these statements of yours:
        “Neither stochastic events nor randomness exist in nature…In reality, climate is not that hard to predict. Given enough tolerance, almost anything practical is predictable.”

        By the following meaning of stochastic – having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely – how do you know that neither stochastic events nor randomness exists in nature? How do you know that stochastic events at some microscopic level aren’t in part the cause of uncertainty in some measurements or predictions, like in trying to predict the position of every particle in a turbulent fluid flow?

        “The real world is neither random nor chaotic nor nonlinear.” Looking at my wife right now – I would say she looks nicely curved.

      • Science or Fiction, 5/1/16 @ 3:28 am asks two questions:

        How do you know that neither stochastic events nor randomness exists in nature. How, do you know that stochastic events at some microscopic levels isn´t in part the cause for uncertainty in some measurements or predictions, like in trying to predict the position of every particle in a turbulent fluid flow.

        That knowledge comes from definitions. Nature, I presume to stipulate, means absent human or human-like influence. Stochastic and random are defined in two ways. The first we can dismiss because it involves humans, or at least sentient beings, as in predictability and observed patterns. Birds and apes use tools, requiring prediction. A big cat predicts where its prey is going to be in the next few seconds. S or F’s example of predicting in turbulent flow is a modeling problem, not a natural world problem. But outside of sentient beings, predictions and pattern observations have yet to be discovered in nature. Maybe we could stipulate that stochastic and random require intelligence. Continuing, the second part of the definitions involves numbers and probabilities. Those seem clearly human inventions. No one rational even bothers to look for numbers in nature, notwithstanding the label natural numbers.

        Some offer as an exception Darwin’s Natural Selection. It has the power to collect incremental changes to help life survive. It has the power to recognize the pattern of where life is going, and to give evolution direction. It has the power to coordinate mutations and adaptations into genetics. The answer is that Darwin’s Natural Selection does not fit the offered definition of science. There is an alternative, which I call Darwin 2.0. See

        https://wattsupwiththat.com/2015/03/17/hot-news-evolution-cools/

        https://wattsupwiththat.com/2016/01/30/climate-alarmists-discover-natural-selection/

        Of course, if anyone can stipulate definitions free of intelligent beings, the answer would be reversed.

      • Please excuse me, I’m only concerned with the human understanding of nature; I have no idea how to discuss nature per se.

  33. Do you run comparisons between model outputs for key parameters? For example, how much of a spread do they have for the total amount of water vapor in the atmosphere in the tropics, in grid format? I have never sat in a shop where these models are run, but if I had to judge how well they work I would start by checking to see how they compare at the nuts and bolts level.

    These comparisons between models are fairly easy to put in a visual format so they can be viewed in a hive environment.
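
    A comparison of this sort is straightforward to script. The sketch below computes the inter-model spread of one key parameter on a shared grid; the ensemble values are synthetic placeholders, not output from any real model archive:

```python
import numpy as np

# Hypothetical ensemble: 5 models on a coarse 4 x 8 tropical grid,
# column water vapor in kg/m^2 (illustrative values only).
rng = np.random.default_rng(0)
ensemble = 45.0 + 5.0 * rng.standard_normal((5, 4, 8))

# Inter-model spread at each grid cell: range and standard deviation.
cell_range = ensemble.max(axis=0) - ensemble.min(axis=0)
cell_std = ensemble.std(axis=0)

# A single headline number: ensemble spread in the tropical mean.
tropical_means = ensemble.mean(axis=(1, 2))          # one value per model
print("per-model tropical means:", np.round(tropical_means, 1))
print("inter-model spread (max - min):",
      round(tropical_means.max() - tropical_means.min(), 2))
```

    The per-cell arrays are exactly what could be plotted side by side for the kind of visual, nuts-and-bolts comparison suggested above.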

  34. Public media has a critical role in how expert judgement is presented to the general public.
    Ten years ago, expert judgement according to the BBC’s science department was:
    The Sun has a critical effect on climate change; the Little Ice Age was caused by the Sun at the time of the Maunder Minimum.
    http://www.bbc.co.uk/iplayer/episode/b0074s96/the-sun
    spool forward to 33.00 min.
    Spool forward 19 years: expert judgement is not only that the Sun didn’t do it, but even the existence of the Little Ice Age is questioned.
    What I found confirms the BBC wisdom that was prevailing 10 years ago:
    CET follows all the ups and downs in the rate of change of the solar activity’s longer-term average. One exception is the second half of the 1700s, the time of a number of powerful Icelandic volcanic eruptions.
    http://www.vukcevic.talktalk.net/GC.htm
    So what has changed in 10 years?
    – The prediction that solar cycle SC24 was going to be one of the strongest ever proved to be very wrong. (I am not known for expert judgement, so when I presented my ‘lowest since 1900’ SC24 projection to the NASA expert featured in the video, the response was “not possible”, but it turned out to be correct.)
    – The global temperature pause has strengthened.
    A question to the BBC: was the science department aware of the relevant comments in the program re-transmitted last night? (Normal practice is that all programs are thoroughly checked for technical and content compliance before transmission.)
    I would like to think that the BBC was aware of the ‘non pc’ content, and that the ‘softer’ attitude regarding AGW is a sign of things to come.
    The ‘expert judgement’ of a decade ago has been rejected by the ‘expert judgement’ of today, but nature may force re-establishing the ‘expert judgement’ of decades ago.

  35. nobodysknowledge

    I think Evan Jones hit the nail on the head
    “From my wargame design perspective, Dr. Curry has hit the nail on the head. For A to be true, the components of A must add up. If they exceed A, then there is something wrong — sometimes with A, more often, with the components. For a historical game to have any “accuracy” at all, it needs to be designed top-down.”
    Looks like many people have a poorer understanding of climate change when they depend on models. A long history of wild speculations based on models that have been wrong. Could it be time to learn from that? All the people trying to say something about the future without knowing the whole picture of the past. I think that the only solution is to use all the statistics available to build a storyline.

  36. As an expert within my field I can assure you that expert judgement is crap: without data, I know nothing.

  37. “Identifying the most important uncertainties and introducing a more objective assessment of confidence levels requires introducing a more disciplined logic into the climate change assessment process. A useful approach would be the development of hierarchical logical hypothesis models that provides a structure for assembling the evidence and arguments in support of the main hypotheses or propositions.”

    Regarding uncertainty, I have the following to say:
    There is only one internationally acknowledged standard for the expression of uncertainty in measurement (estimation if you like) and that is:
    Guide to the expression of uncertainty in measurement

    The following seven organizations* supported the development of that Guide, which is published in their name:
    BIPM: Bureau International des Poids et Mesures
    IEC: International Electrotechnical Commission
    IFCC: International Federation of Clinical Chemistry
    ISO: International Organization for Standardization
    IUPAC: International Union of Pure and Applied Chemistry
    IUPAP: International Union of Pure and Applied Physics
    OIML: International Organization of Legal Metrology

    More details here: This is how the climate industry should have reported uncertainty!
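
    The Guide’s core recipe, the law of propagation of uncertainty, combines component standard uncertainties through sensitivity coefficients. A minimal sketch, with a made-up two-input example rather than anything from the Guide itself:

```python
import math

def combined_standard_uncertainty(sensitivities, uncertainties):
    """GUM law of propagation of uncertainty for uncorrelated inputs:
    u_c(y) = sqrt( sum_i (df/dx_i)^2 * u(x_i)^2 )."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, uncertainties)))

# Made-up example: y = f(x1, x2), with sensitivity coefficients df/dx_i
# evaluated at the input estimates and standard uncertainties u(x_i).
c_i = [1.0, 2.5]       # sensitivity coefficients (illustrative)
u_i = [0.3, 0.1]       # standard uncertainties of the inputs

u_c = combined_standard_uncertainty(c_i, u_i)
U = 2.0 * u_c          # expanded uncertainty, coverage factor k = 2 (~95 %)
print(f"u_c = {u_c:.3f}, U (k=2) = {U:.3f}")
```

    For correlated inputs the Guide adds covariance terms; the uncorrelated case above is the usual starting point.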

    Regarding expert judgement I have the following to say:
    “a subjective experience, or a feeling of conviction, can never justify a scientific statement, and that within science it can play no part except that of an object of an empirical (a psychological) inquiry. No matter how intense a feeling of conviction it may be, it can never justify a statement. Thus I may be utterly convinced of the truth of a statement; certain of the evidence of my perceptions; overwhelmed by the intensity of my experience: every doubt may seem to me absurd. But does this afford the slightest reason for science to accept my statement? Can any statement be justified by the fact that Karl Popper is utterly convinced of its truth? The answer is, ‘No’; and any other answer would be incompatible with the idea of scientific objectivity.”

    And regarding “probability of hypothesis”:
    “All this glaringly contradicts the program of expressing, in terms of a ‘probability of hypotheses’, the degree of reliability which we have to ascribe to a hypothesis in view of supporting or undermining evidence.”

    – Karl Popper – The logic of scientific discovery

    I think it will be a challenge to build a strong case on expert judgement, or probability of hypothesis – too many experts have been dead wrong – and Karl Popper has a strong case against probability of hypothesis.

    • > I think it will be a challenge to build a strong case on expert judgement, or probability of hypothesis – too many experts have been dead wrong – and Karl Popper has a strong case against probability of hypothesis.

      Conflating expert judgement with probability might be suboptimal.

      Here’s the extent of Sir Karl’s case:

      What, then, is the source of the widespread conflation of truthlikeness with probability? Probability — at least of the epistemic variety — measures the degree of seeming to be true, while truthlikeness measures the degree of being similar to the truth. Seeming and being similar might at first strike one as closely related, but of course they are very different. Seeming concerns the appearances whereas being similar concerns the objective facts, facts about similarity or likeness. Even more important, there is a difference between being true and being the truth. The truth, of course, has the property of being true, but not every proposition that is true is the truth in the sense of the aim of inquiry. The truth of a matter at which an inquiry aims is ideally the complete, true answer to its central query. Thus there are two dimensions along which probability (seeming to be true) and truthlikeness (being similar to the truth) differ radically.

      http://plato.stanford.edu/entries/truthlikeness/

      But objective Bayesians, I know, I know.

      • @ Willard – long time no see!

        «And how do you falsify falsificationism exactly, Fiction?»

        As far as I understand, Popper regarded his method as an ethical way of conduct in the striving for truth; he did not think that his method was falsifiable. I think that he might have underestimated the power of his method.

        Based on Popper´s writings I would say:
        What characterizes the striving for knowledge is the manner of trying to prove wrong, in every conceivable way, the system to be tested. The aim is not to save the lives of untenable systems but to expose them all to the fiercest struggle for survival. A system is corroborated by the possibility of proving it wrong and by the severity of the tests it has been exposed to and survived – and not at all by inductive reasoning in favor of it.

        In accordance with Popper´s method I will not state that something is true or even probably true. I will state that for this particular theoretical system, under this particular set of conditions, I predict that the following set of inputs will result in the following output value(s) within the following uncertainty limits. Rather than stating that something is true or probably true I will submit information about the tests that this particular theoretical system has been exposed to. If the outcome is not as predicted, all I will know is that there is something wrong with the theoretical system or the test of it.

        You can simply prove that method wrong by providing information about a case where this method will prove wrong something which isn´t wrong.

      • > You can simply prove that method wrong by providing information about a case where this method will prove wrong something which isn´t wrong.

        That’s one way to circumvent the falsifying falsificationism quandary, Fiction: replace “falsify” with “prove wrong.”

        Sir Karl did the opposite of this verbal defense in the 50s:

        We’ve already been over the fact that he did not “regard” his method as a method.

        Think about verisimilitude, please – that’s where Sir Karl had to backtrack because of his objectivist conception of probability.

      • “In point of fact, no conclusive disproof of a theory can ever be produced.”

        Without seeing the context, I suspect that statement is related to the following quote:
        “For it is always possible to find some way of evading falsification, for example by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such procedure is possible.” – Karl Popper, The Logic of Scientific Discovery

        That’s why a high ethical standard is required of a scientist: a scientist must avoid using these stratagems to evade falsification. If these stratagems are used, it is nearly impossible to disprove a theoretical system.

      • “That’s one way to circumvent the falsifying falsificationism quandary, Fiction: replace “falsify” with “prove wrong.””

        The only reason I replaced falsify with prove wrong is that falsify has at least two completely different meanings:
        1. falsification – any evidence that helps to establish the falsity of something
        2. falsification – a willful perversion of facts

        As Karl Popper clearly meant meaning number 1, I wanted to be more precise by replacing “falsify” with “prove wrong”. I really don’t understand why that should be a problem.

      • “We’ve already been over the fact that he did not “regard” his method as a method.”

        It seems to me that Popper certainly did regard his method as a method:
        “The theory to be developed in the following pages stands directly opposed to all attempts to operate with the ideas of inductive logic. It might be described as the theory of the deductive method of testing, or as the view that a hypothesis can only be empirically tested—and only after it has been advanced.” – Karl Popper – The logic of scientific discovery

      • “Think about verisimilitude, please – that’s where Sir Karl had to backtrack because of his objectivist conception of probability.”

        May I kindly ask that you provide some more information in support of your position.

      • > I really don´t understand why that should be a problem.

        One does not simply falsify falsificationism, Fiction. Think about it: the experimentum crucis that would refute falsificationism once and for all would itself prove its truth! Trying to directly falsify falsificationism simply leads to a pragmatic inconsistency.

        Even Sir Karl acknowledged somewhere that his doctrine wasn’t falsifiable. You have read him more than I have; I’m sure you can find it all by yourself. Falsifiability is neither a method nor the hallmark of every single scientific statement.

        There’s nothing wrong with falsification – it’s most probably required in some way. It’s just a principle according to which the scientific rubber should meet the road. However, as a doctrine, there are two very big problems with it. The first is that it doesn’t describe how we practice science very well. The second is that even in theory (as a logical reconstruction, say) it doesn’t work – hypotheses cannot be tested in isolation. The other problem I underlined earlier: verisimilitude has yet to garner enough popularity to supplant statistical hypothesis testing.

        Hope this helps,

        w

      • David Springer

        Theory: All swans are white.

        Observation: A black swan.

        Theory disproved.

        Theory: Willard is an imbecile.

        Observation: Says stupid, wrong things.

        Theory not disproven.

        «Even Sir Karl acknowledged somewhere that his doctrine wasn’t falsifiable.» I remember having seen that as well – I think I will look more into his considerations.

        «There’s nothing wrong with falsification – it’s most probably required in some way. It’s just a principle according to which the scientific rubber should meet the road. However, as a doctrine, there are two very big problems with it. The first is that it doesn’t describe how we practice science very well.»

        I agree that falsificationism does not describe how science is practiced. Here is one example of how science is practiced:
        «Be prepared to make expert judgments in developing key findings, and to explain those judgments by providing a traceable account: a description in the chapter text of your evaluation of the type, amount, quality, and consistency of evidence and the degree of agreement, which together form the basis for a given key finding… A level of confidence is expressed using five qualifiers: “very low,” “low,” “medium,” “high,” and “very high.” It synthesizes the author teams’ judgments about the validity of findings as determined through evaluation of evidence and agreement.»

        For this particular example I would say that the problem is how science is practiced. And I think the document these quotes are taken from deserves some scrutiny: Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties. I have a few articles on this document at my site.

      • ” there are two very big problems with it… The second is that even in theory (as a logical reconstruction, say) it doesn’t work – hypotheses cannot be tested in isolation.”

        If a hypothesis cannot be tested in isolation, it cannot be corroborated in isolation. That isn’t a problem with the method of empirical falsification; it is a problem for the proponent of the theoretical system of which the hypothesis forms a part. A climate model will typically consist of many hypotheses, assumptions, approximations, parameters, variables, initial conditions, software code, etc. Systematic errors in individual constituents of that model may be hard to detect; errors may camouflage each other. And, by the central limit theorem, a combination of several systematic errors will tend to partly cancel rather than simply add, as the errors may go in opposite directions.
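
        The partial-cancellation point can be illustrated with a toy Monte Carlo: independent component errors of comparable size combine roughly in quadrature, growing like sqrt(n), rather than adding their magnitudes and growing like n. The error sizes below are arbitrary:

```python
import random

random.seed(42)
n_components = 9          # independent error sources in the model
error_size = 1.0          # each component error is +/- 1.0 (arbitrary units)
n_trials = 20000

# Each trial: draw a sign for each component error and sum them.
totals = []
for _ in range(n_trials):
    total = sum(random.choice((-error_size, error_size))
                for _ in range(n_components))
    totals.append(abs(total))

rms = (sum(t * t for t in totals) / n_trials) ** 0.5
worst_case = n_components * error_size     # all errors aligned, no cancellation

print(f"RMS combined error ~ {rms:.2f} (sqrt(n) = {n_components**0.5:.2f})")
print(f"worst case (no cancellation) = {worst_case:.1f}")
```

        With nine components of size 1, the typical combined error is about 3, not 9; which is exactly why individual systematic errors can hide inside a model that looks roughly right overall.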

    • Steven Mosher

      Yes. Strong cases are built on random opinion.
      No wait.. strong cases are built on the opinions of the criminally insane. No wait..read entrails.

      • I should have used the word argument rather than case – his argument seems valid – falsify it!

      • > I should have used the word argument rather than case – his argument seems valid – falsify it!

        And how do you falsify falsificationism exactly, Fiction?

      • Read the emails, instead of the entrails.

      • David Springer

        Steven Mosher | April 29, 2016 at 6:43 pm | Reply

        Yes. Strong cases are built on random opinion.
        No wait.. strong cases are built on the opinions of the criminally insane. No wait..read entrails emails.

        ——————————————————–

        So close! Fixed it for ya!

  38. I think an issue that needs more thought in the context of this post is “What is an expert?” Many so-called experts have poor records in practice. For instance, Fatih Birol is the Executive Director of the International Energy Agency (IEA). His surface qualifications look impressive, but his ability to predict oil and energy supplies has been abysmal. In a 2008 interview with Monbiot he said: “we have said is that this year, compared with past years, we have seen that the decline rates are significantly higher than what we have seen before. But our line that we are on an unsustainable energy path has not changed.” http://www.theguardian.com/business/2008/dec/15/oil-peak-energy-iea

    Joe Romm called him the “world’s top energy economist” and quoted Birol as stating “We have to leave oil before oil leaves us.” http://thinkprogress.org/climate/2009/08/03/204444/eia-faith-birol-peak-oil/ However, this so-called expert, along with the likes of Paul Ehrlich and John Holdren, has been pantsed by Julian Simon, who was a professor of advertising and marketing, and who has been by far the best environmental futurist. (Simon accurately stated that human ingenuity outpaces raw material scarcity.)

    I would add that in my workers’ compensation practice, the worst mistake I ever saw was a Cleveland Clinic so-called specialist who claimed that my client had a mental problem when the client (as time proved) actually had scleroderma, which had been diagnosed by an internist.

    In the climate context, I would avoid the term “expert” and simply say that an individual has a certain amount of experience working in a particular area.

    JD

    • > Simon has accurately stated that human ingenuity outpaces raw material scarcity.

      That “but Simon” meme again:

      Ehrlich lost the bet, as all five commodities that were bet on declined in price from 1980 through 1990, the wager period. However, economists later showed that Ehrlich would have won in the majority of 10-year periods over the last century, and if the wager had included all important commodities instead of just five metals, or if it was extended by 30 years to 2011, Ehrlich would have won.

      https://en.wikipedia.org/wiki/Simon%E2%80%93Ehrlich_wager

      • Willard you have outdone yourself :) If the bet was different he would not have lost :)

      • Even you should be able to admit that the extrapolation from “Julian predicted 5 commodities between 80 and 90” to “Simon has accurately stated that human ingenuity outpaces raw material scarcity” is a bit farfetched, Cap’n.

        You should also agree because losing just about any other bet, besides making Julian a little less famous, may not have refuted human ingenuity.

      • Steven Mosher

        Yes captain.
        Willard is shifting the goalposts.
        Number 4 trait of denialism.

        Also note contra Andy west. I didn’t need domain expertise to see the goal posts move.

      • Willard. Simon’s predictions haven’t turned out to be 100% accurate, but they were far more accurate than the prophecies of doom made by Holdren & Ehrlich and many other environmentalists.

        JD

      • Willard, a wager is a wager, nothing more. The biggest thing I get from the wager is that Ehrlich would suck as a commodities trader. He wouldn’t make much of a diplomat either.

      • I think you’re missing the extrapolation, Mosh.

      • btw, copper peaked at nearly $4.50 a pound about when the “economists” made their claim, it is now close to $2.00 a pound and that fluctuation has squat to do with human ingenuity or material scarcity. It is due to herd mentality and idjots trying to predict the future.

      • > Willard, a wager is a wager nothing more

        Of course, which is why very few put their money where their mouths are and go bet with JamesA.

        And yet from that wager we get that Simon has “accurately” stated that human ingenuity outpaces raw material scarcity.

        Talk about shifting goalposts.

      • Willard: “And yet from that wager we get that Simon has “accurately” stated that human ingenuity outpaces raw material scarcity. Talk about shifting goalposts”

        The wager was just an example. Simon’s points made in The Ultimate Resource have been proven true. Humanity, notwithstanding many more people, is doing much better than it was in 1980, with large reductions in poverty occurring. Meanwhile, a stupendous failure like Holdren is appointed as Obama’s science advisor. In fact, the precipitous drop in the price of computing can be viewed as very close or analogous to a commodity price drop. Environmentalists are such delusional and irrational flat-earthers that they continue prophesying doom and catastrophe although the opposite is occurring.

        JD

      • > The wager was just an example. Simon’s points made in The Ultimate Resource have been proven to be true.

        Simon recited the usual GRRROWTH axiom, JD. Whether you use ingenuity or infinity (the Zeno-like argument in the book), it’s not even an empirical proposition. There’s no engineer-level formal derivation of it either. Even on population growth Simon wasn’t the Galileo that freedom fighters would like him to be:

        http://www.jstor.org/stable/2807977

        (Note where the author underlines that Simon’s modulz -gasp!- assume both linearity and continuity.)

        One does not simply “prove” a vacuous proposition like GRRROWTH. It’s more effectively used in libertarian claptrap by cottage industries like CATO, where Simon was incidentally a Senior Fellow.

        ***

        > Meanwhile, a stupendous failure like Holdren […]

        Look, a temporal squirrel!

    • good point, but the actual ‘expert’ situation is even worse — so-called experts chiming in on the attribution problem don’t have experience in this particular area, but have worked in peripheral areas

    • “However, global climate models are not developed with these user needs in mind, and they typically operate at resolutions that are too coarse to provide information that could be used to support regional and local decisions.”

      Gee, who would have thought that modellers were developing models for themselves, and not with the needs of possible users in mind?

      And how much has been spent on these potentially useless models?

      Steven Mosher, many thanks for pointing this out. Which set of experts do you advise we listen to?

      Cheers.

      • Steven Mosher

        The question is not who to listen to.
        The question is who to discount.
        1. Discount Flynn.
        2. Discount people who have no experience deciding under deep uncertainty.
        3. Discount people who have no experience with modelling.
        4. Discount people who have no experience with data.

        As for downscaling. I worked on one study. It’s interesting work. Requires a lot of cross disciplines.

      • Steven Mosher said:

        2. Discount people who have no experience deciding under deep uncertainty.

        Well, I guess that leaves Bush in the running.

      • Steven Mosher said:
        2. Discount people who have no experience deciding under deep uncertainty.

        If there ever was a criterion which would not exclude anyone, that would be it. I believe every living organism able to make a decision at all has experience with deciding under deep uncertainty.

  39. It’s been a good thread so far, IMO. A few home truths to ponder for peeps of all persuasions. Debates, IMO, should be done with peeps listening. Not enough listening is happening, and what we have seems to be a constantly revolving door.

  40. Geoff Sherrington

    This topic here is getting far too finessed. With thanks to those differing over Popper quotes, can we please take it back to major issues?
    It is about hard data versus belief.
    At one extreme we can choose the example of everyday religion. Few scientists would claim there is hard evidence for the major pillars like Life After Death, a place called Heaven, Angels are real, Miracles happen and so on. Yet a significant % of many civilisations have guided their lives with religious beliefs in mind, even dominant. What is worse, they have often persecuted those who oppose the prevailing fashionable beliefs. Into the future, that way lies madness and warfare.
    OTOH, there are some activities where science still prevails. My main occupation, the discovery and development of new mineral deposits, just happens to be a handy case and I will concede that I am coloured by that experience. Many people seek to persecute me or my ilk.
    Imagine this scene from the minerals sector. There is a plot of ground where geological observation, geochemical and geophysical measurement and objective interpretation show a good probability of a mine being there. The usual procedure is to assemble those on hand who have done this before, to decide if a drill hole is warranted by the data, what the simple hypothesis to be tested is. Money in hand, a drill hole might be put down. I call that a science-based procedure. Alternatively, with a belief based procedure the sky is the limit. We can have sages voting their subjective probabilities that a hole is worth the cost. We can have formal Bayesian analysis with different priors and different outcomes, to be weighed again to guess which is the best.
    Farewell, suckers, you are in the same clueless group as those epitomised by Steven’s quote, spending your money (or ours!) before there is more than just guesses.
    https://eos.org/meeting-reports/climate-modeling-with-decision-makers-in-mind
    Uri Geller, who claims to bend spoons, asked us to fly our bizjet at 40,000 ft while he wrote on a map where new mines would be. For a fee. This is obviously a belief-based example and we declined his offer.
    If scientists are not comfortable sticking to hard science without belief inputs, they are free to take lotto tickets, knowing that the House will win overall, but they will still be driven by the belief “You can’t win it if you are not in it”. Sure, scientists can exercise personal beliefs, but away from the workplace is best.
    Likewise, there is no place for belief inputs in climate studies. If the stage is reached where someone calls for a squirt of belief, that should disqualify the project from appearing anywhere near a decision maker. The need for belief is a lazy need, an admission that the science is incomplete – and incomplete science should not ever inform policy. A policy might still be needed, but the science part of the matter should be excerpted. Decide to start a war, but don’t use science as a reason for it.
    In climate work, belief enters the flow of information at the stage of the initial work on a subject, where the worker believes e.g. that tree rings can be used for proxies. It enters again, when the paper is written, when the author leaves out the inconvenient bits or fails to report a hypothesis dashed. It comes again in peer review, by selecting reviewers known to have similar beliefs. And again in the press release. And again in the press where reporters might write “Global greening from CO2 believed to have hidden dangers” or the like. It happens again when voters go for parties that share their belief in that topic. It intrudes again when decision makers adopt a stance that they believe in, as is usually necessary because they do not understand the science.
    It is not turtles all the way down, it is beliefs all the way down.
    The biggest problem lies right at the start. Scientists should not allow decision makers near any data that is affected by belief.

    I have never studied Popper. These thoughts are my own and they go back 40 years or more. A small team of us, over 20 years, used principles like this to discover a dozen mines and to see them bring in sales of tens of billions of dollars. What a shame that some of those dollars have gone, unwished, from taxes to those who are actively working to destroy scientific systems and credibility.

    • The need for belief is a lazy need, an admission that the science is incomplete – and incomplete science should not ever inform policy. A policy might still be needed, but the science part of the matter should be excerpted.

      I hear gut feeling works well.

      • brandongates,

        “Gut feeling” is all that informs the majority of climate science, that and the need to stay bellied up to the government feeding trough.

        So you should be all for it.

      • > So you should be all for it.

        I’m for rational belief, Glenn — an option dear Geoff left out of his strawmanning, upon which you have doubled down. What’s your solution to this epistemological problem?

      • David Springer

        Oh how cute! An R. Gates family photo!

      • Oh how cute! An R. Gates family photo!

        Oh how cute, another content-free “comment” from David Springer.

        For the record, I am not, to my knowledge, related to the real R. Gates. I don’t exactly mind the association because, in my view, he was a master of his art, whereas I may only be worthy of polishing his boots, if that.

        Not that it really matters, but I am related to another R. Gates, from whom I derive my middle initial. Same first name as the ex-Sec Def and ex-DCI, not the same guy. Nor am I related to anyone famous given the name of William, (un)popularly called Bill. Alas.

        I believe that I have now been properly introduced, and we may carry on as we will.

      • David Springer

        brandonrgates | April 30, 2016 at 11:36 pm |

        “Oh how cute, another content-free “comment” from David Springer.”

        Yet it inspired several paragraphs in response from you.

        Non-sequitur, Dummy.

      • Glenn –

        ==> “Gut feeling” is all that informs the majority of climate science,

        802 comments (and a post from Judith) from which you can test your assertion for selectivity:

        https://judithcurry.com/2012/09/15/bs-detectors/

      • Non-sequitur, Dummy.

        Fluff follows fluff, Springer.

    • > It is about hard data versus belief.

      I thought it was about “some of those dollars have gone, unwished, from taxes to those who are actively working to destroy scientific systems and credibility.”

      Is this a matter of hard data too?

    • Geoff Sherrington, 4/29/16 @9:45 pm may have missed the point when he remarked

      With thanks to those differing over Popper quotes, can we please take it back to major issues?

      I introduced Popper solely to illuminate the major issue of expert judgement (in the title) and consensus seeking (in the first paragraph). These concerns are extremely rare in industrial science, which practices Modern Science. There they are last resorts, used when science has not offered an acceptable answer (implying an authority). Example: the consensus vote to overrule the expert who said not to proceed with the launch of Challenger.

      Reliance on expert opinion and consensuses is routine in academic, Publish or Perish, science. It is the practice of Post Modern Science (my term). That brand of science has five characteristics as formally recognized by the US Supreme Court for expert testimony in federal trials, four in the affirmative and one in the negative. Daubert v. Merrell Dow 1993. The Supreme Court recognized only one as due to Popper, but all five (taken in the affirmative) are in fact traceable to him. Collectively they are demonstrably Popper’s philosophical deconstruction of Modern Science.

      Popper, widely recognized as an expert on the philosophy of science, managed to pollute science and most of academia. The companion major topics of this thread are but one of many current examples in science, including medicine, physiology, and evolutionary biology. He is the major, transcending issue. His name is offered as a relevant response, and by way of an explanation, all in the distant hope of a reversal in academia, and the promise of our host’s ongoing evolution.

      • > He is the major, transcending issue.

        Sir Karl Popper is the major, transcending issue?

        That would deserve due diligence.

        Alas, so much to do, so little time.

      • Willard, 4/30/16 @ 3:07 pm was dubious about Popper being the major, transcending issue.

        Today’s climatology is a vast mobile, filled with branches and links, but no one else seems to notice that it is not connected to a ceiling. An upside down tree with no roots. We have a surfeit of discussions about how to measure temperatures or gas concentrations, regional phenomena in all dimensions, aerosol effects, or the validity of proxies. These supposedly feed into a model supported by an orchestra of rationalized computer models which (1) individually and collectively have no predictive power, the ultimate criterion of science, all caused by the (2) omission of critical elements of the climate system.

        Topping the list of those omissions has to be dynamic cloud cover. Because it gates the Sun on and off, it comprises the two most powerful feedbacks in the entire climate system. IPCC parameterizes cloud cover albedo, making it static, extinguishing the positive feedback to TSI that amplifies the Sun, and at the same time turning off the negative feedback that mitigates surface warming from all causes. One candidate for second on the list is current climatology’s omission of the carbon pump. It is caused by Henry’s Law of Solubility acting on the immense (15 to 50 Sv), slow circulation (~1 millennium) of the surface layer. The circulation moves seasonally poleward, absorbing CO2 as it cools, then plunging to the bottom of the ocean, maximally loaded with CO2, to be sucked back to the surface by the Ekman transport, there to be heated by the Equatorial Sun, and there to outgas orders of magnitude more CO2 than previously estimated for the ocean, whereupon the warm, humid gas enters the Hadley cells to cool and descend upon, for example, Mauna Loa. A little bit of this carbon pump comprises the MOC/THC. And there’s more.

        The point of all this is that fatal errors exist in the AGW model at the very top. That is why science favors top-down analyses whenever resources count. Don’t bother with the small stuff when the model is doomed from first considerations.

        So it is with analysis of the scientific method and its implications for expert judgement and consensus seeking. It’s worrying over whether you’re sitting in your assigned pew when you’re in the wrong church. Post Modern Science (Popper), and PMS alone, is about falsification clauses, peer review, publication in journals, consensus forming, and political correctness. Modern Science (Bacon), and MS alone, is about predictive power. The two are orthogonal.

      • > Post Modern Science (Popper), and PMS alone, is about falsification clauses, peer review, publication in journals, consensus forming, and political correctness. Modern Science (Bacon), and MS alone, is about predictive power. The two are orthogonal.

        This means that if I search for “Popper” and “predictive power,” I should get no hit where predictive power is not part of Popper’s philosophy of science. Here’s a quote from the first resource I get:

        However, Popper stresses that we ascertain whether one theory is better than another by deductively testing both theories, rather than by induction. For this reason, he argues that a theory is deemed to be better than another if (while unfalsified) it has greater empirical content, and therefore greater predictive power than its rival. The classic illustration of this in physics was the replacement of Newton’s theory of universal gravitation by Einstein’s theory of relativity. This elucidates the nature of science as Popper sees it: at any given time there will be a number of conflicting theories or conjectures, some of which will explain more than others. The latter will consequently be provisionally adopted. In short, for Popper any theory X is better than a ‘rival’ theory Y if X has greater empirical content, and hence greater predictive power, than Y.

        http://plato.stanford.edu/entries/popper/#SciKnoHisPre

        As you can see, your “Post-Modern Science” theory has not much predictive power.

      • Steven Mosher

        nice willard.
        I have noticed that when folks can’t discuss the science they go philosophical. As if.

      • I think we could agree that most people can’t discuss most of the science, Steven Mosher. I will gladly put myself in that bucket. Since this is a Willard thread, I might note that perhaps identifying good heuristics is optimal.

      • Willard, 4/30/16 @ 10:38 pm gets +1 for cross-checking claims. To make his counterpoint, Willard lifts 140 words from Thornton, Karl Popper, Stanford Encyclopedia of Philosophy (2013). Those 140 words are from Thornton’s 648-word explanation of a 132-word passage Popper revised in 1959.

        How much Thornton (2013) changed Popper (1959) is for some other blog. But, Thornton refers to Newton’s gravitation model as universal, seemingly attributing it to Popper. Popper didn’t do that, at least in those 132 words. Did Newton ever make that claim? This is not a triviality because Popper’s model of science turns on it containing universal propositions, claimed above and confirmed next.

        In the first sentence following Willard’s excerpt, Thornton summarizes his expansion of Popper (1959) with this:

        The general picture of Popper’s philosophy of science, then is this: Hume’s philosophy demonstrates that there is a contradiction implicit in traditional empiricism, which holds both that all knowledge is derived from experience and that universal propositions (including scientific laws) are verifiable by reference to experience. Thornton (2013) p 11/33.

        First, to correct if not finesse Hume, empiricism today holds that all objective knowledge is derived from experience. Science is the objective branch of knowledge. So, all scientific knowledge is derived from experience. History, Popper’s historicism, metaphysics, art, harmony, and psychology are all examples of knowledge, just not objective knowledge.

        The second part of Popper’s philosophy, that concerning universal propositions (including scientific laws) is erroneous. Scientific models, including laws (in the conjectures, hypotheses, theories, laws schema) are never universal propositions. Scientists cannot observe the universal nature of phenomena.

        Five sections later, Thornton says,

        While it cannot be said that Popper was a modest man, he took criticism of his theories very seriously, and spent much of his time in his later years trying to show that such criticisms were either based upon misunderstandings, or that his theories could, without loss of integrity, be made compatible with new and important insights. Thornton (2013) p24/33.

        Plus one for Popper. Every person has a perfect right to change his mind. That is a virtue. But to be taken seriously, a scholar who changes his mind and just ignores his previous pronouncements is guilty of self-contradiction, a fault. He has a professional duty explicitly to admit his error, and then to explain his new reasoning.

        So in his 1959 summary of the scientific method, what happened to Popper’s intersubjective testing? The property he contends all scientific statements must have? The rule that still appears in Popper (1959)? Quoted earlier.

        Again, this is neither a triviality nor merely an observation of a Popper self-contradiction. Also as reported earlier, Popper’s intersubjective testing is collective subjectivity replacing objectivity. His IT is a triad comprising peer review, publication, and consensus (all in a certified, closed, and hence unscientific community). Popper’s intersubjective testing is the traceable origin of the two-pronged subject of the subject article and of Post Modern Science.

      • Jeff Glassman said:

        This is not a triviality because Popper’s model of science turns on it containing universal propositions….

        Also as reported earlier, Popper’s intersubjective testing is collective subjectivity replacing objectivity.

        Stephen Toulmin articulated a similar criticism of Popper. Here’s how he explained it:

        Paul Feyerabend has followed up his earlier work on rationalism, ‘Against Method,’ with a new collection of essays called ‘Farewell to Reason’; yet the “reason” that Feyerabend bids farewell to is not the everyday ideal of being “reasonable” or “open to reason,” which Montaigne and the humanists embraced.

        Rather, it was what he calls “scientific rationalism”: i.e., the 17th-century dream of a logical rationality, shared by philosophers from Descartes to Popper….

        The ideals of reason and rationality typical of the second phase of Modernity were…intellectually perfectionist, morally rigorous, and humanly unrelenting. Whatever sorts of problem one faced, there was a supposedly unique procedure for arriving at the correct solutions. That procedure could be recognized only by cutting away the inessentials, and identifying the abstract core of “clear and distinct” concepts needed for its solution.

        Unfortunately, little in human life lends itself fully to the lucid, tidy analysis of Euclid’s geometry or Descartes’ physics….

        Rationally adequate thought or action cannot, in all cases equally, start by cleaning the slate, and building up a formal system: in practice, the rigor of theory is useful only up to a point, and in certain circumstances.

        Claims to certainty, for instance, are at home within abstract theories, and so open to consensus: but all abstraction involves omission, turning a blind eye to elements in experience that do not lie within the scope of the given theory, and so guaranteeing the rigor of its formal implications.

        Unqualified agreement about these implications is possible, just because the theory itself is formulated in abstract terms.

        Once we move outside the theory’s formal scope, and ask questions about its relevance to the external demands of practice, however, we enter into a realm of legitimate uncertainty, ambiguity, and disagreement….

        All the same, rationalism died hard… [A]t Karl Popper’s insistence, sharp criteria were used to “demarcate” genuinely scientific issues from other, irrelevant, or superstitious questions about ideology and metaphysics.

        In a rationalist spirit, the “demarcation criteria” were timeless and universal demands of a “critical reason” that operated above or apart from the changes and chances of history….

        Karl Popper’s insistence that the criteria of scientific rationality are universal implies that we can decide, here and now, what it is “scientific” to consider anywhere and at any time. According to him, all “scientists” worthy of that name serve the same timeless interests everywhere and always. Others may conclude that we can master the scientific ideas of earlier times fully, only if we look at them in their original contexts.

        — STEPHEN TOULMIN, Cosmopolis

      • > Willard, 4/30/16 @ 10:38 pm gets +1 for cross-checking claims.

        Only one claim, Jeff.

        And the claim doesn’t hold up.

    • Uri Geller, who claims to bend spoons, asked us for our bizjet to fly at 40,000 ft while he wrote on a map where new mines would be. For a fee. This is obviously a belief-based example and we declined his offer.

      Nope.

      A real scientist would have made a counter-offer: the fee is contingent on successful mines. A perfectly good scientific experiment.

      People who reject the opportunity for such experiments because they’re counter to consensus “science” are as belief-based as the IPCC “climate science”.

      • AK,

        Well yeah, even a stopped clock is right twice a day.

        Dad Joiner, who drilled the discovery well for the East Texas Field, had a “geologist” by the name of Doc Lloyd.

        Lloyd took a map with all the major existing oil fields in the world at the time and drew lines through them. Where these lines intersected he called “the apex of the apexes.” And that’s where they drilled the Daisy Bradford No. 1, the discovery well for the East Texas Field.

        Dad Joiner billed himself as a wildcatter and an oil man, but he was really a con artist and a ladies’ man. He had oversold the working interest in the Daisy Bradford No. 1 by several hundred percent to a group of gullible, elderly widows. This presented quite a problem for Joiner when the Daisy Bradford No. 1 struck oil.

        And this is one of the reasons why Joiner was inclined to sell out to H. L. Hunt. It took Hunt’s lawyers and landmen years to clear up the title.

        As one of the old Hunt landmen always used to tell me, “There’s a great deal to be said for serendipity.”

        Or as another old oil man used to say: “I’d rather be lucky than skillful any day.”

      • Dad Joiner, who drilled the discovery well for the East Texas Field, had a “geologist” by the name of Doc Lloyd.

        Well, according to this account, Joseph Idelbert Durham, AKA ‘“Doc” Lloyd’, had a good previous track record:

        Taking the name “A.D. Lloyd,” Durham proclaimed, “I’m not a professional geologist…but I’ve studied the earth more, and know more about it, than any professional geologist now alive will ever know.”

        Joiner believed in “Doc” Lloyd and his confidence was reinforced when Lloyd accurately located the rich Seminole oilfield. Joiner drilled to within 200 feet of discovering this previously untapped reserve — but stopped short when his money ran out. Empire Gas & Fuel Company brought in the field’s discovery well on a nearby lease.

        After a similar near miss in Oklahoma’s Cement field and a stretch of bad luck, the broke but optimistic Joiner headed to Dallas, where oilmen and oil money were plentiful. Meanwhile, A.D. Lloyd was off to Mexico, promoting new oil ventures.

        It would appear Joiner ended up losing confidence in “Lloyd”, perhaps because he hadn’t made any money off the latter’s finds. (Or maybe he oversold out of desperation.)

        Like Geller’s spoons, there’s a reasonable chance something was going on, although probably not what the proponents were claiming.

      • AK,

        The story about the “apex of the apexes” is what one of the old Hunt land men told me.

        Here’s how the account you linked describes Doc Lloyd’s “geology,” which seems about as scientific as the “apex of the apexes” account I was told:

        Promoting oil certificates in an area largely dismissed by professionals called for a slick pitch, and Joiner’s self-taught geologist friend, “Doc” Lloyd, could help.

        While Humble Oil Company geologists and geophysicists were reporting that Rusk County offered no possibilities, Joiner was mailing his own report to potential investors: “Geological, Topographical and Petroliferous Survey, Portion of Rusk County, Texas, Made for C.M. Joiner by A.D. Lloyd, Geologist and Petroleum Engineer.”

        Using clear and correct scientific terminology, “Doc” Lloyd’s document described Rusk County anticlines, faults, and a salt dome — all geologic features associated with substantial oil deposits and all completely fictitious. Equally imaginary were the “Yegua and Cook Mountain formations” and the thousands of seismographic registrations ostensibly recorded.

        The impressive looking but fabricated report was accompanied by a map depicting a “salt dome” and a fault running squarely through the widow Daisy Bradford’s farm, the exact site of the 500 acre Syndicate lease block that “Dad” Joiner was promoting.

        And as to your assertion that Doc Lloyd “had a good previous track record,” the passage you cite to justify this conclusion doesn’t support the conclusion:

        Joiner believed in “Doc” Lloyd and his confidence was reinforced when Lloyd accurately located the rich Seminole oilfield. Joiner drilled to within 200 feet of discovering this previously untapped reserve — but stopped short when his money ran out. Empire Gas & Fuel Company brought in the field’s discovery well on a nearby lease.

        Lloyd did not “accurately locate the rich Seminole oilfield.” Empire Gas & Fuel did. They’re the ones who drilled the well that discovered the field.

        This only goes to confirm another bit of common sense that is well known to those in the oil business: “Close only counts in horseshoes and hand grenades.”

      • Lloyd did not “accurately locate the rich Seminole oilfield.”

        So if Joiner hadn’t “stopped short when his money ran out” but had drilled 300 feet further, would his well have been the discovery well? Seems to me you’re making a semantic quibble here.

        Regardless of his fake “geology”, Lloyd had pointed to several good fields. Claiming that it “didn’t count” because the wildcatter ran out of money before he reached oil is pure sophistry.

        Of course, Joiner had never benefited from Lloyd’s “geology”, and perhaps he believed he never would. From a less sympathetic account:

        The first discovery in the East Texas field came in Rusk County. It was there in the summer of 1927 that Columbus Marion (Dad) Joiner, a sixty-seven-year-old promoter from Ardmore, Oklahoma, took mineral leases on several thousand acres of land with the intention of selling certificates of interest in a syndicate. Joiner presented a kind and gentle appearance, which belied his shrewd ability to use the labor, property, and money of others in his ventures. Joiner transferred 500 acres of mineral leases into the syndicate and offered a one-acre interest in the lease block and a pro-rata share in a drilling test for twenty-five dollars. He mailed copies of a misleading report, prepared by a nonprofessional geologist, to hundreds of names on his sucker list. The duplicitous report promised the existence of oil-productive geological structures in Rusk County and incorrectly claimed that major oil companies were actively leasing there. Joiner’s real motivation, as evidenced by a later lawsuit, was the sale of his syndicate shares. Because he needed a drilling test to serve as a prop for impressing potential investors, Joiner commenced a lackadaisical poor-boy drilling operation with rusty, mismatched, and worn-out equipment. But, Joiner’s unstable rig and his oil-rich promises appealed to the generosity and dreams of the hard-scrabble farmers and townspeople who donated their labor and traded supplies for syndicate certificates.

        Joiner and driller Tom M. Jones spudded the Bradford No. 1 on an eighty-acre tract belonging to Daisy Bradford in the Juan Ximines Survey of Rusk County. After drilling for six months without finding a show of oil, the hole was lost to a stuck pipe. Joiner abandoned the well at a depth of 1,098 feet. By April 14, 1928, he formed a second syndicate from another lease block of 500 acres and sold certificates of interest through another mail promotion. A second well, the Bradford No. 2, was spudded by Joiner and driller Bill Osborne at a site 100 feet northwest of the original well. Joiner’s certificates again were bartered for supplies and labor for the well, becoming an accepted medium of exchange in the poor economy of Rusk County. After eleven months of intermittent drilling, the Bradford No. 2 reached a depth of 2,518 feet, where the drill pipe twisted off and blocked the hole. Before abandoning the site, Osborne returned to test a shallower horizon at 1,437–1,460 feet where gas had been reported. When no evidence of production was found, the well was abandoned. After abandoning the Bradford No. 2, Joiner formed a third syndicate and managed to oversell the shares of interest in a third test as he had in the first two. On May 8, 1929, driller Ed Laster skidded the rig to a new location, 375 feet from the second site, and spudded the Bradford No. 3. After two days of drilling, the well reached a depth of 1,200 feet, using the same rachitic rig, weary equipment, and farmers as rig hands. The boilers were fed green wood from the beginning and old automobile tires at the end. In late August Laster and farmer-rig hand, Jim Lambert, were seriously burned when the boiler exploded. The well was shut down until the driller recovered. By January 1930 the well reached a depth of 1,530 feet, and drilling was suspended until the spring. 
        Laster resumed work by late March, when Joiner sent word that he was bringing potential investors to the site and wanted Laster to take a core for their benefit. Drilling proceeded through the summer and into the fall. On September 5, 1930, after the well reached a depth of 3,592 feet in the Woodbine sand, it flowed live oil and gas on a drill stem test. Its initial production was 300 barrels of oil per day, and no one appeared more surprised than Joiner, who had oversold his shares. The well was completed on October 5, 1930.

        I’m not interested in the specifics of who got “credit” for discovery wells, I’m interested in whether “Doc” Lloyd had some way of finding oil that was better than random stabs on a map.

        It’s worth remembering that “geology” in that day knew nothing about plate tectonics. And humans have over 200,000,000 years of evolutionary incentive for visual pattern recognition. (Most of that evolution probably with annual generations.)

        It can’t be ruled out that, somehow, Lloyd was able to spot productive points on a map by “filling in the blanks” in a pattern his brain recognized, although the “professional geologists” of the time were too busy with their pre-plate tectonics ideas to look. Just how many predictions of oil did “Doc” Lloyd ever make? And how many turned out in the end to have oil?

        BTW, did you ever read the Fountainhead by Ayn Rand?

      • °°°°°AK said:

        So if Joiner hadn’t “stopped short when his money ran out” but had drilled 300 feet further, would his well have been the discovery well? Seems to me you’re making a semantic quibble here.

        No, I’m talking about what did happen, not what would have happened if “Joiner had drilled 300 feet further.”

        My view is empirical. Yours is purely speculative.

        °°°°°AK said:

        Regardless of his fake “geology”, Lloyd had pointed to several good fields.

        Other than the East Texas Field, can you name any?

        If Lloyd had this ability, why do you believe he was poor at the time of the East Texas Field discovery?

        °°°°°AK said:

        I’m interested in whether “Doc” Lloyd had some way of finding oil that was better than random stabs on a map.

        It’s worth remembering that “geology” in that day knew nothing about plate tectonics. And humans have over 200,000,000 years of evolutionary incentive for visual pattern recognition. (Most of that evolution probably with annual generations.)

        It can’t be ruled out that, somehow, Lloyd was able to spot productive points on a map by “filling in the blanks” in a pattern his brain recognized, although the “professional geologists” of the time were too busy with their pre-plate tectonics ideas to look.

        Trust me, as long as I’ve been in the oil business I’ve heard it all.

        One of the most common services offered in the oil field was by those who claimed they could “dowse” or “witch” a well. It is also billed as a way to find water and minerals such as gold and silver. Here’s a web page that describes how it works:

        dowsing (a.k.a. water witching, radiesthesia)
        http://skepdic.com/dowsing.html

        We now, however, have technologies that help determine where to drill a well that have proved to be far more reliable.

        One of these is called seismic, and it has been in use in the oil field since the 1930s.

        My view is empirical. Yours is purely speculative.

        Nope.

        Speaking Empirically: “Doc” Lloyd said oil was there, and oil was, indeed, there.

        Pointless sophistry about whose well did and didn’t get to it doesn’t change that fact.

      • Other than the East Texas Field, can you name any?

        According to the first account I linked:

        •       “the rich Seminole oilfield” and

        •       “Oklahoma’s Cement field

        I don’t know anything more about these events than is in the account, but it would appear likely that he actually located three separate oil deposits. As I already asked:

        Just how many predictions of oil did “Doc” Lloyd ever make? And how many turned out in the end to have oil?

        How many truly random stabs on a map would he have had to make to get those three good calls?

      • AK,

        Sorry, but like I said earlier, “Close only works with horseshoes and hand grenades.”

        Your logic is like that of a poker player who says, “Gosh, I should have bet more on the turn of a friendly card,” after the friendly card has already turned.

      • Your logic is like that of a poker player who says, “Gosh, I should have bet more on the turn of a friendly card,” after the friendly card has already turned.

        Which may be why, in East Texas, Joiner kept trying till he struck oil. “Third time’s the charm, as they say.”

        But my logic isn’t about the poker player, it’s about the guy (“Lloyd”) who kept telling him to bet more. How did he know?

      • AK said:

        How many truly random stabs on a map would he have had to make to get those three good calls?

        In Oklahoma, given that much of the state is covered by oil and gas fields, the odds of drilling on two random spots where there would eventually be oil discovered were fairly high.

        The odds would be even higher if one were drilling near previously discovered fields, which is almost always the case.

      • AK,

        In East Texas, the odds of drilling on a spot where there would eventually be oil discovered were much lower.

        Nevertheless, I still mark the discovery up to blind luck. Even a blind squirrel finds an acorn once in a while.

        Being typical human beings, however, people don’t like to mark their success up to how lucky they were, but to how good they were.

        But I also understand that a great many major scientific discoveries were made the same way: by the discoverer stumbling into something by luck.

        Like they say here in Mexico, Perro que no anda no topa con hueso. (“A dog that doesn’t roam doesn’t find a bone.”)

      • Nevertheless, I still mark the discovery up to blind luck. Even a blind squirrel finds an acorn once in a while.

        Well, three out of three, even if two of them are fairly high-probability, is worth thinking about.

        OTOH, 3 out of 300 is a different matter. For all I (and you?) know, this “Doc” Lloyd could have made a business of producing likely looking geological surveys for dozens of con-men, perhaps even under dozens of different names.

        At this remove, I doubt any trace would have survived of all the other surveys (if any). For that matter, we hear about the two “near misses” with Joiner, but I wonder whether we’d have heard about the others that never panned out. If such existed.

        And it also can’t be ruled out that the first two “near misses” may not have been quite the way Joiner described them: they might have been part of the con.

        But I’m always interested in the way “consensus science” dismisses things outside its own box. It isn’t so much that I think something was there, as that I’m pretty sure normal “consensus science” would have missed it if there were.
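The “three hits versus three hundred stabs” question traded in this exchange is at bottom a binomial base-rate calculation. A minimal sketch, with a per-site success probability that is purely invented for illustration:

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(at least k successes in n independent tries, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Invented base rate: suppose a random stab on the map has a 1-in-5
# chance of sitting over an eventually productive field.
p_3_of_3 = prob_at_least(3, 3, 0.2)      # three for three: 0.2**3 = 0.008
p_3_of_300 = prob_at_least(3, 300, 0.2)  # three hits somewhere among 300 tries
```

Three for three on random stabs would be a 0.8% event, while three hits among 300 tries is a near-certainty, which is why the number of forgotten surveys matters so much to the argument.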

  41. Before probabilistic inversion is applied, this might be a warm-up exercise. From the article:

    A professor at Princeton University has published a CV listing his career failures on Twitter, in an attempt to “balance the record” and encourage others to keep trying in the face of disappointment.

    Johannes Haushofer, who is an assistant professor of psychology and public affairs at the university in New Jersey, posted his unusual CV on Twitter last week. The document contains sections titled Degree programs I did not get into, Research funding I did not get and Paper rejections from academic journals.

    http://www.theguardian.com/education/2016/apr/30/cv-of-failures-princeton-professor-publishes-resume-of-his-career-lows
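For readers curious what the probabilistic inversion discussed in the head post does mechanically, here is a minimal sketch in the spirit of the iterative proportional fitting approach associated with Cooke’s work: reweight an ensemble of model samples so that their quantiles match expert-judged quantiles. All numbers here are invented for illustration and are not from the Oppenheimer et al. paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical model-ensemble samples, e.g. cm of 21st-century ice loss
samples = rng.normal(30.0, 20.0, size=10_000)

# Hypothetical expert-judged 5th/50th/95th percentiles
expert_q = {0.05: 15.0, 0.50: 40.0, 0.95: 90.0}

edges = sorted(expert_q.values())                          # bin boundaries
target = np.diff([0.0] + sorted(expert_q.keys()) + [1.0])  # mass per bin
bins = np.digitize(samples, edges)                         # bin index per sample

# Iterative proportional fitting: rescale each bin's weight to its target mass
w = np.full(samples.size, 1.0 / samples.size)
for _ in range(25):
    for b, t in enumerate(target):
        mask = bins == b
        s = w[mask].sum()
        if s > 0:
            w[mask] *= t / s
    w /= w.sum()

# Weighted quantiles of the reweighted ensemble now track the expert values
order = np.argsort(samples)
cdf = np.cumsum(w[order])
median = samples[order][np.searchsorted(cdf, 0.5)]
```

With a single set of quantile constraints the fit is exact after one pass; the iteration earns its keep when several experts’ (possibly mutually inconsistent) constraints must be reconciled at once.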

  42. In the past the IPCC has established uncertainty/certainty levels for conclusions of its working groups, which on the face of it gives an appearance of objectivity. The IPCC instructions to the working groups, when I last looked, asked the working groups to use their own methods in producing the certainty levels, but further required the groups to have a documented procedure that could be provided to interested parties. When I made a formal request to a working group in which I had an interest, my request was ignored. I have never heard or read of anyone obtaining this requested information from the IPCC.

    From my experience here I can only conclude that the IPCC instructions and using the certainty levels were more for PR than transparency.

  43. Pingback: Weekly Climate and Energy News Roundup #224 | Watts Up With That?

  44. Harry Twinotter

    “I regard it to be a top priority for the IPCC to implement formal, objective procedures for assessing consensus and uncertainty. Continued failure to do so will be regarded as laziness and/or as political protection for an inadequate status quo.”

    Climate change dissidents lost the consensus “debate” a long time ago – get over it.

    I guess that just leaves “shoot the messenger” as the only tactic left.

    • dogdaddyblog

      To me “shoot the messenger” means forcefully hammering away publicly at the model’s failures on every metric, every chance we get.

      Dave Fair

  45. Harry Twinotter

    “Obviously, a ragtag motley assortment of wannabes, and their gullible associates, agree they are in agreement.”

    Oh boy, talk about prejudice. Putting the “D” into denier.

  46. Harry Twinotter

    dogdaddyblog.

    You can criticise the model projections if you like, it’s a free(ish) country.
    I still think you will find the projections get more right than wrong.

    Then there are all the other lines of evidence. The legal people call that a “preponderance” of evidence.

    • dogdaddyblog

      Mr. Twinotter:

      Green talking points are not arguments.

      Please provide specific examples of model “projections” that got things right more than wrong. Of particular interest would be basin-by-basin ocean SST and recent heat content at depth. It appears to me that model “projections” from their early beginnings through the latest in IPCC AR5 got it more wrong than right on the metrics I’m aware of, especially on regional bases.

      What are “…all the other lines of evidence.” (all!) you propose? I’m not aware of any definitive evidence (your “preponderance” or otherwise) that anthropogenic emissions of various gasses drive what little warming has taken place, especially given the wide variance in temperature trends over different lengthy time periods. A recitation of temperature-driven metrics to “prove” AGW won’t cut it.

      Dave Fair

      • Harry Twinotter

        dogdaddyblog.

        “Green talking points are not arguments.”

        And there you have it, the Conspiracy Theory surfaces sooner or later. What do they call it these days, the “Green Blog”? Makes a change from Reds Under the Bed I guess.

        The general circulation models predict global average temperature will increase as CO2 concentration increases. All the credible global average temperature observations support this prediction. Take the rising CO2 out of the models and they show no increase. It sounds like a triumph of the models to me.

        Ignore the other lines of evidence if you will, it just makes you more wrong.

        Anyway I will leave you to your conspiratorial ideation, I have work to finish.

      • Mr. Twin Otter, are you an Egg-head scientist in the eyes of others?

  47. Pingback: Weekly Climate and Energy News Roundup #225 | Watts Up With That?