Structured expert judgment

by Judith Curry

Any attempt to impose agreement will “promote confusion between consensus and certainty”. The goal should be to quantify uncertainty, not to remove it from the decision process. –  Willy Aspinall

Context

The framing of climate change as a human-caused problem, together with the UNFCCC's adoption of the Precautionary Principle, has led to a framework in which scientists work in a politicized environment of 'speaking consensus to power' – a process in which consensus is manufactured to identify carbon emissions stabilization targets.

Many times, I have voiced my concerns about the consensus-seeking approach used by the IPCC:

The biases that the UNFCCC and IPCC have introduced into the science and policy process are arguably promoting mutually assured delusion. There has to be a better way to assess the science. Structured expert judgment presents a much better strategy for building a rational consensus (as opposed to a political consensus) that realistically accounts for uncertainties and diversity of opinions/perspectives.

Rational consensus under uncertainty

The challenge is laid out in a document by Cooke et al.: Rational consensus under uncertainty.  Excerpts (JC bold):

Governmental bodies are confronted with the problem of achieving rational consensus in the face of substantial uncertainties. The area of accident consequence management for nuclear power plants affords a good example. Decisions with regard to evacuation, decontamination, and food bans must be taken on the basis of predictions of environmental transport of radioactive material, contamination through the food chain, cancer induction, and the like. These predictions use mathematical models containing scores of uncertain parameters. Decision makers want to take, and want to be perceived to take, these decisions in a rational manner. The question is, how can this be accomplished in the face of large uncertainties? Indeed, the very presence of uncertainty poses a threat to rational consensus. Decision makers will necessarily base their actions on the judgments of experts. The experts, however, will not agree among themselves, as otherwise we would not speak of large uncertainties. Any given expert’s viewpoint will be favorable to the interests of some stakeholders, and hostile to the interests of others. If a decision maker bases his/her actions on the views of one single expert, then (s)he is invariably open to charges of partiality toward the interests favored by this viewpoint.

An appeal to ‘impartial’ or ‘disinterested’ experts will fail for two reasons. First, experts have interests; they have jobs, mortgages and professional reputations. Second, even if expert interests could somehow be quarantined, even then the experts would disagree. Expert disagreement is not explained by diverging interests, and consensus cannot be reached by shielding the decision process from expert interests. If rational consensus requires expert agreement, then rational consensus is simply not possible in the face of uncertainty. If rational consensus under uncertainty is to be achieved, then evidently the views of a diverse set of experts must be taken into account. The question is how? Simply choosing a maximally feasible pool of experts and combining their views by some method of equal representation might achieve a form of political consensus among the experts involved, but will not achieve rational consensus. If expert viewpoints are related to the institutions at which the experts are employed, then numerical representation of viewpoints in the pool may be, and/or may be perceived to be influenced by the size of the interests funding the institutes.

We collect a number of conclusions regarding the use of structured expert judgment.

1. Experts’ subjective uncertainties may be used to advance rational consensus in the face of large uncertainties, in so far as the necessary conditions for rational consensus are satisfied.

2. Empirical control of experts’ subjective uncertainties is possible.

3. Experts’ performance as subjective probability assessors is not uniform; there are significant differences in performance.

4. Experts as a group may show poor performance.

5. A structured combination of expert judgment may show satisfactory performance, even though the experts individually perform poorly.

6. The performance based combination generally outperforms the equal weight combination.

7. The combination of experts’ subjective probabilities, according to the schemes discussed here, generally has wider 90% central confidence intervals than the experts individually; particularly in the case of the equal weight combination.

We note that poor performance as a subjective probability assessor does not indicate a lack of substantive expert knowledge. Rather, it indicates unfamiliarity with quantifying subjective uncertainty in terms of subjective probability distributions. 
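
To make conclusions 5–7 concrete, here is a minimal sketch (in Python) of equal-weight versus performance-based pooling. It is not Cooke's actual classical model, whose weights come from a calibration score multiplied by an information score; the experts, quantiles, and weights below are all hypothetical.

```python
import numpy as np

# Each expert states 5th, 50th and 95th percentiles for an uncertain
# quantity, e.g. a time-to-failure in years. All values are hypothetical.
experts = {
    "A": [2.0, 5.0, 9.0],
    "B": [1.0, 3.0, 4.0],    # narrow interval: very sure, maybe overconfident
    "C": [3.0, 10.0, 40.0],  # wide interval: cautious
}

# Hypothetical performance-based weights, standing in for the scores a
# real elicitation would derive from seed questions.
perf_weights = {"A": 0.50, "B": 0.05, "C": 0.45}

def pool(quantiles, weights):
    """Weighted linear pool of the experts' quantile vectors (a crude
    stand-in for pooling their full distributions)."""
    q = np.array([quantiles[k] for k in quantiles])
    w = np.array([weights[k] for k in quantiles])
    w = w / w.sum()
    return w @ q

equal_weights = {k: 1.0 for k in experts}
print("equal weights     :", pool(experts, equal_weights))
print("performance based :", pool(experts, perf_weights))
```

Either combination spans a wider 90% central interval than overconfident expert B alone, illustrating conclusion 7; the performance-based pool differs from the equal-weight pool by discounting the expert with the poor seed-question record.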

Some examples of the implementation of Cooke’s structured expert judgment strategy are described in Procedures Guide for Structured Expert Judgment.

There is also a relevant .ppt presentation by Tim Bedford (Management Science, University of Strathclyde) entitled Use and Abuse of Expert Judgement in Risk Studies, with a good discussion of biases and framing problems.

Geophysical applications

A 2010 article published in Nature [link] provides some interesting geophysical applications.

A route to more tractable expert advice

Willy Aspinall

Abstract. There are mathematically advanced ways to weigh and pool scientific advice. They should be used more to quantify uncertainty and improve decision-making, says Willy Aspinall.

When a volcano became restless on the small, populated island of Montserrat, West Indies, in 1995, there was debate among scientists: did the bursts of steam and ash presage an explosive and deadly eruption, or would the outcome be more benign? Authorities on the island, a British overseas territory, needed advice to determine warning levels, and whether travel restrictions and evacuations were needed. The British government asked me, as an independent volcanologist, to help reconcile differing views within the group.

As it happened, I had experience not only with the region’s volcanoes, but also with a unique way of compiling scientific advice in the face of uncertainty: the Cooke method of ‘expert elicitation’. This method weighs the opinion of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The approach isn’t perfect. But it can produce a ‘rational consensus’ for many hard-to-assess risks, from earthquake hazards to the probable lethal dose of a poison or the acceptable limits of an air pollutant. For broader questions with many unknowns — such as what the climate will be like in 50 years — the method can tackle small specifics of the larger question, and identify gaps in knowledge or expose significant disparities of opinion.

As a group, ten other volcanologists and I decided to trial this methodology. We were able to provide useful guidance to the authorities, such as the percentage chance of a violent explosion, as quickly as within an hour or two. More than 14 years on, volcano management in Montserrat stands as the longest-running application of the Cooke method.

Faced with uncertainty, decision-makers invariably seek agreement or unambiguous consensus from experts. But it is not reasonable to expect total consensus when tackling difficult-to-predict problems such as volcanic eruptions. The method’s originator Roger Cooke says that when scientists disagree, any attempt to impose agreement will “promote confusion between consensus and certainty”. The goal should be to quantify uncertainty, not to remove it from the decision process.

Of the many ways of gathering advice from experts, the Cooke method is, in my view, the most effective when data are sparse, unreliable or unobtainable. There are several methods of such expert elicitation, each with flaws. The traditional committee still rules in many areas — a slow, deliberative process that gathers a wide range of opinions. But committees traditionally give all experts equal weight (one person, one vote). This assumes that experts are equally informed, equally proficient and free of bias. These assumptions are generally not justified.

Another kind of elicitation — the Delphi method — was developed in the 1950s and 1960s. This involves getting ‘position statements’ from individual experts, circulating these, and allowing the experts to adjust their own opinions over multiple rounds. What often happens is that participants revise their views in the direction of the supposed ‘leading’ experts, rather than in the direction of the strongest arguments.

Cooke’s method instead produces a ‘rational consensus’. To see how this works, take as an example an elicitation I conducted in 2003, to estimate the strength of the thousands of small, old earth dams in the United Kingdom. Acting as facilitator, I first organized a discussion between a group of selected experts about how water can leak into the cores of such ageing dams, leading to failure. The experts were then asked individually to give their own opinion of the time-to-failure in a specific type of dam, once such leakage starts. They answered with both a best estimate and a ‘credible interval’, for which they thought there was only a 10% chance that the true answer was higher or lower.

I also asked each expert a set of eleven ‘seed questions’, for which answers are known, so that their proficiency could be calibrated. As is often the case, several experts were very sure of their judgement and provided very narrow uncertainty ranges. But the more cautious experts with longer time estimates and wider uncertainty ranges did better on the seed questions, so their answers were weighted more heavily. Their views would probably have been poorly represented if the decision had rested on a group discussion in which charismatic, confident personalities might carry the day. Self-confidence is not a good predictor of expert performance, and, interestingly, neither is scientific prestige and reputation.

Sometimes an elicitation reveals two camps of opinion. This usually arises from ambiguity in framing the problem or because the two sub-groups have subtly different backgrounds. In this case, a few of the experts with longer time-to-failure estimates were working engineers with practical experience, rather than academics. Highlighting such clear differences is useful; sometimes it reveals errors or misunderstandings in one of the groups. In this case it triggered further investigation.

Uncertainty in science is unavoidable and throws up many challenges. On one side, you may have scientists reluctant to offer their opinions on important societal topics. On the other, you may have decision-makers who ignore uncertainty because they fear undermining public confidence or opening regulations to legal challenges. The Cooke approach can bridge these two positions. It is most useful when there is no other sensible way to make risk-based decisions — apart from resorting to the precautionary principle, or, even less helpfully, evading an answer by agreeing to disagree.
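
As a minimal sketch of the calibration step Aspinall describes: each expert states a 90% credible interval for seed questions with known answers, and we score how often the truth lands inside. Cooke's real calibration score is a more involved statistical test on the full quantile pattern, and the experts, intervals, and 'true' values below are all hypothetical.

```python
# Seed questions have known answers; each expert gave a 90% credible
# interval (low, high) for each. All numbers are invented.
true_values = [12.0, 0.7, 150.0, 3.2]

intervals = {
    "expert_1": [(10, 15), (0.6, 0.8), (100, 120), (3.0, 3.1)],  # narrow
    "expert_2": [(5, 30), (0.3, 1.5), (80, 300), (1.0, 8.0)],    # wide
}

def hit_rate(ivals, truths):
    """Fraction of seed questions whose true value falls inside the
    expert's stated 90% interval; a well-calibrated expert scores ~0.9."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(ivals, truths))
    return hits / len(truths)

for name, ivals in intervals.items():
    print(name, "hit rate:", hit_rate(ivals, true_values))
# expert_1 (very sure, narrow ranges) misses half the answers; expert_2
# (cautious, wide ranges) captures them all and would be weighted more,
# the same pattern Aspinall reports from the dam elicitation.
```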

JC reflections

I think Cooke’s ideas, and Aspinall’s case studies of geophysically relevant applications, show great promise for assessing the state of knowledge on a range of climate change topics.

I can particularly imagine applying the method of structured expert judgment in the context of applying influence diagrams and 3-valued logic (Italian flag) to represent uncertainty in complex climate change problems [link to post].

In applying this to something like the 20th century climate attribution problem, the challenge would be twofold:

  • assembling a sufficiently diverse group of scientists, in the highly politicized IPCC process
  • identifying appropriate ‘seed questions’ and weighting the experts

I was particularly struck by this statement:

We note that poor performance as a subjective probability assessor does not indicate a lack of substantive expert knowledge. Rather, it indicates unfamiliarity with quantifying subjective uncertainty in terms of subjective probability distributions.

Hence, a premium on scientists who understand uncertainty and can quantify subjective uncertainty.  I really like the idea that better training of scientists on how to assess and think about uncertainty is key to better expert judgment.

Uncertain T. Monster is pleased.

 

176 responses to “Structured expert judgment”

  1. “Our universities shape young men’s and women’s sensibilities, and our professors are supposed to serve as guardians of authoritative knowledge and exemplars of serious and systematic inquiry. Yet our campuses are home today to a toxic confluence of fashionable ideas that undermine the very notion of intellectual virtue, and to flawed educational practices and procedures that give intellectual vice ample room to flourish.” (Peter Berkowitz, “Climategate Was an Academic Disaster Waiting to Happen,” WSJ, 2010)

  2. “We note that poor performance as a subjective probability assessor does not indicate a lack of substantive expert knowledge.”

    Well from my frame of reference, atmospheric teleconnections are solar forced down to daily scales rather than internal and chaotic, driven predominantly by solar wind variations. And the Arctic and AMO cool with increased forcing of the climate. That is quite a different picture to their “substantive expert knowledge”.

  3. When observational data doesn’t support the promoted supposition, resort to statistics.

  4. David Wojick

    Uncertainty, like love, cannot be quantified. There is nothing to measure.

    • Well, not quite correct. Statistical uncertainty can be quantified (think coin tosses). Ignorance most definitely cannot be quantified. Sometimes we can put bounds on a range, or can assess an expected sign or trend. The challenge is identifying when we have conditions of statistical uncertainty (which can be quantified), scenario uncertainty (perhaps some sense of likelihood), recognized ignorance, ambiguity, or total ignorance.
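
      For the coin-toss case, a minimal sketch of what “quantifiable” means here (a standard binomial confidence interval via the normal approximation; the counts are invented):

```python
import math

# Statistical uncertainty: estimate a probability from n independent
# trials and attach a 95% confidence interval (normal approximation).
n, heads = 100, 58                       # hypothetical counts
p_hat = heads / n
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the estimate
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"p_hat = {p_hat:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# No analogous calculation exists for recognized ignorance: there is no
# sampling process to attach a standard error to.
```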

      • David Wojick

        This post is not about probabilistic sampling theory or coin tosses. It is about quantifying expert judgement. There is no reasonable way to do that, because there is nothing to measure. We can measure votes, however, and often do.

      • David Wojick: This post is not about probabilistic sampling theory or coin tosses. It is about quantifying expert judgement. There is no reasonable way to do that, because there is nothing to measure. We can measure votes, however, and often do.

        I think that you are wrong about that. Investment houses and insurance firms tote up their dollar gains and losses, and those combine both kinds of uncertainty. You could do the same with predictions of rainfall, days with above and below average temperature. Instead of $$, you could assign points (or poker chips, or “Monopoly money”) to errors of different sizes (nobody would be exactly correct very often), and see who accumulates the most year-by-year.

        In medical practice, plans or “strategies” can be compared via “Quality Adjusted Life Years”: when is it a good idea to stop chemotherapy for adult victims of leukemia, for example; what are the trade-offs between open-heart surgery and mere lifestyle changes for patients of a certain age. The Union Pacific Railroad published its $$$ and survival comparisons for CPAP vs No Treatment in the cases of sleep apnea. Hospitals can be compared to each other on the good and bad outcomes of treating appendicitis and other diseases. Those assessments combine all of the aspects of uncertainty: disagreements among experts and statistical uncertainty in the estimated effect sizes.
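
        A minimal sketch of the point-scoring scheme described above, with invented thresholds, forecasts, and outcomes:

```python
# Award points for each forecast by error size, then total them per
# forecaster over the season (the thresholds are arbitrary choices).
def points(error):
    e = abs(error)
    return 3 if e < 1 else (1 if e < 5 else 0)

forecasts = {"house_A": [10.0, 22.0, 5.0], "house_B": [11.5, 20.5, 8.2]}
actual = [11.0, 20.0, 8.0]   # observed values, hypothetical

for name, preds in forecasts.items():
    total = sum(points(p - a) for p, a in zip(preds, actual))
    print(name, "season total:", total)
# house_B's consistently small errors accumulate more points: the kind
# of track record that could be tallied year by year.
```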

      • None of these statistical approaches are pre-event quantifications of expert judgment in the context of uncertainty. We can certainly quantify how things turned out.

    • There’s always the remote possibility of fate, and still not knowing what’s coming.

    • David L. Hagen

      David Wojick
      Re: Quantifying Uncertainty
      The international metrology community has established detailed quantitative standards for quantifying uncertainty in measurement. See
      GUM: Guide to the Expression of Uncertainty in Measurement, by the Bureau International des Poids et Mesures (BIPM), e.g., Evaluation of measurement data – Guide to the expression of uncertainty in measurement, JCGM 100:2008. This explicitly includes both Type A errors (statistical) and Type B errors (all others, including bias).
      See also the previous NIST: Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, Barry N. Taylor and Chris E. Kuyatt, NIST TN1297 PDF and NIST’s Uncertainty Web Site
      Expert “judgement” includes making estimates of all the Type B unknowns and then evaluating how to reduce those uncertainties.
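
      As a minimal sketch of what GUM-style quantification looks like: a Type A (statistical) and a Type B (judgment-based) component are combined in quadrature, with a coverage factor k = 2 for roughly 95% coverage. The readings and the certificate limit are hypothetical.

```python
import math
import statistics

# Type A: standard uncertainty from repeated readings of one quantity.
readings = [20.11, 20.14, 20.09, 20.12, 20.10]          # hypothetical
u_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: judgment-based, e.g. a calibration certificate quoting +/-0.05,
# assumed uniform (rectangular), so divide by sqrt(3) per the GUM.
u_b = 0.05 / math.sqrt(3)

u_c = math.sqrt(u_a**2 + u_b**2)   # combined standard uncertainty
U = 2 * u_c                        # expanded uncertainty, k = 2 (~95%)
print(f"mean = {statistics.mean(readings):.3f} +/- {U:.3f} (k=2)")
```
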
      PS Most IPCC and Climate science members appear oblivious to this standard or avoid applying it for fear of their jobs. e.g. a search of IPCC.ch yields NO hits for “Guide to the expression of uncertainty in measurement” nor for “JCGM” nor for “TN1297”. Instead the IPCC concocted its own “Box 1.1: Treatment of Uncertainties in the Working Group I Assessment”. It nominally gives assent to

      “Two primary types are ‘value uncertainties’ and ‘structural uncertainties’.”

      But then the IPCC (2007) defined:

      The standard terms used in this report to define the likelihood of an outcome or result where this can be estimated probabilistically are:
      Terminology – Likelihood of the occurrence/outcome
      Virtually certain > 99% probability
      Extremely likely > 95% probability
      Very likely > 90% probability
      Likely > 66% probability
      More likely than not > 50% probability
      About as likely as not 33 to 66% probability
      Unlikely < 33% probability
      Very unlikely < 10% probability
      Extremely unlikely < 5% probability
      Exceptionally unlikely < 1% probability

      Rather than objective evaluation like the “Structured Expert Judgement” above, the IPCC’s 2013 political consensus method judged (with “95%” certainty) that:
      D3 Attribution of climate change

      “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to
      2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period. {10.3}”

      This IPCC “political consensus” method produces astonishingly laughable 95% certainty results. e.g., when tested against reality, the mean of 35-year predictions (1979–2014) of the latest CMIP5 models for tropical tropospheric temperatures is 400% greater than actual satellite-measured temperatures. (John Christy, May 13, 2015 testimony to Congress.)
      That bears NO resemblance to the scientific method, where if the models do not match the data they are WRONG.
      (Politely = “Cognitive Dissonance”. Politically incorrect = “Lemming circus” etc.)
      Redress/Restoration
      For an international body to explicitly ignore the international standards for uncertainty raises the major question WHY? Cui bono? (viz. Shukla’s Gold)
      To restore the scientific method and a small start at credibility, the IPCC needs to begin by acknowledging and applying the BIPM’s international standard
      GUM: Guide to the Expression of Uncertainty in Measurement quantitatively for EACH SECTION and Parameter. ALL Type B Uncertainties MUST be included to at least encompass the very large divergence of CMIP5 models from the reality of satellite temperature measurements backed by balloon measurements etc.
      Replace “political consensus” with “structured expert judgment”
      Then methodology such as Cooke’s “structured expert judgement” needs to be objectively applied WITHOUT political coercion, sufficient to provide credible evaluations over the FULL range of scientific evidence, methodology, and perspectives. To do so, skeptical minority opinions MUST be sought out and included. See Climate Change Reconsidered and The Right Climate Stuff.
      Without such massive corrective action, the IPCC’s models and credibility will continue to wander off into the wild blue unknown, causing severe harm to the global scientific community, and untold devastation and harm to the poor and our economies.

      • David L. Hagen

        Errata: IPCC’s 2013 D3 Attribution of climate change

      • Geoff Sherrington

        David,
        Agreed. For some years I have been advocating via blogs that the BIPM procedures should at least be examined for suitability, then used. Like you, I have found no reference to their use in global warming papers.
        Because I started in Analytical Chemistry, where procedures such as those the BIPM suggests were commonplace, I was drawn to wonder why others did not use them. Blogger Pat Frank shares these views.

      • Some people still don’t get it, that the satellites don’t measure surface temperature.

        It’s simple, you need to compare surface temperature to surface temperature.

      • David L. Hagen

        bobdroege – Perhaps you could explain the divergence between surface and satellite temperature trends (besides the offset due to the atmospheric lapse rate)?

      • The satellites’ failing attempts at surface accuracy are at odds with all surface observations: ice melt and sea level rise and SST and thermometers.

      • David L. Hagen

        JCH
        Any evidence for “failing accuracy” (besides your appeal to authority) – when the satellite temperature trends are validated by numerous balloon measurements etc.?
        Have you ever studied “systemic bias”? Ever thought to ask why ALL “adjustments” to the surface temperature records result in GREATER WARMING trends? See the analysis of statistics expert and Physics Prof. Brown (rgbatduke), who finally came to:

        I no longer consider it remotely possible to accept the null hypothesis that the climate record has not been tampered with to increase the warming of the present and cooling of the past and thereby exaggerate warming into a deliberate better fit with the theory instead of letting the data speak for itself and hence be of some use to check the theory.

      • David L. Hagen

        In light of other comments to that thread, maybe not “ALL”, but sufficient to strain credibility, especially seeing the large divergence between satellite and surface temperature trends beyond reasonable statistical bounds.

      • David,
        Perhaps I will take a crack at it after the satellite guys release their data and methods.

        But you are asking something completely different from your assertion that models and data are diverging when they are not modeling and measuring the same thing.

        You are looking at trends that are too short.
        For 30 year trends, GISS UAH and RSS are in agreement, here’s the data.
        0.170 +/- 0.053
        0.132 +/- 0.088
        0.163 +/- 0.086

        pop goes the weasel
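
        A minimal sketch of the overlap check implied by these numbers, taking each +/- value as the half-width of the stated interval (a more careful test would combine the two uncertainties in quadrature):

```python
# Pairwise consistency of the three 30-year trend estimates quoted above.
trends = {"GISS": (0.170, 0.053), "UAH": (0.132, 0.088), "RSS": (0.163, 0.086)}

def consistent(a, b):
    """Crude check: two estimates agree if their stated intervals overlap."""
    (va, ua), (vb, ub) = a, b
    return abs(va - vb) <= ua + ub

names = list(trends)
for i, m in enumerate(names):
    for n in names[i + 1:]:
        print(m, "vs", n, "->", consistent(trends[m], trends[n]))
```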

      • Some people still don’t get it, that the satellites don’t measure surface temperature.

        It’s simple, you need to compare surface temperature to surface temperature.

        True dat.

        However, models predict:
        greater warming in the lower trop than at the surface, and,
        greater warming still in the middle trop than in the lower trop.

        What has happened is just the opposite:
        lesser warming in the lower trop than at the surface, and,
        lesser warming still in the middle trop than in the lower trop.

      • The divergence is an indication something is wrong.

        Does sea level rise, much of it measured by satellites, support the UAH/RSS surface temperature record? No, not even close.

        Looks like three centimeters up since the 2011 trench bottomed out.

        http://sealevel.colorado.edu/files/2015_rel3/sl_ns_global.pdf

      • David L. Hagen

        JCH perhaps you should look at the ADJUSTMENTS to satellite altimetry vs tidal gauges before jumping to conclusions. Show you can search and find – see scholar.google.com and wattsupwiththat.com for “satellite altimetry adjustments” etc. and show you can understand the major satellite “adjustments” and why they strain credibility when compared to 100 years of tidal gauge trends.

      • David L. Hagen

        bobdroege re: “models and data are diverging when they are not modeling and measuring the same thing.”
        Christy documented TROPICAL tropospheric model projections running 400% of actual tropospheric temperatures over 35 years (Fig 2). He separately documented GLOBAL mid-tropospheric model predictions running 300% higher than global mid-tropospheric temperatures over that 35-year period. That’s an apples-to-apples comparison, with models far outstripping actual temperatures.
        The surface/satellite divergence I referenced was between RSS and GISS since 1997.

        “The above graphic shows RSS having a slope of zero from both January 1997 and March 2000. As well, GISS shows a positive slope of 0.012/year from both January 1997 and March 2000.”

        On the GISS, UAH and RSS temperature trends you listed, compare IPCC’s AR5 0.501 C/30 years (1.67 C/century) with your stated:
        GISS “0.170 +/- 0.053”
        UAH “0.132 +/- 0.088”
        RSS “0.163 +/- 0.086”
        For those trends by themselves, I grant you they overlap within the statistical bounds listed. rgbatduke’s primary complaint was

        In general one would expect measurement errors in any given thermometric time series, especially when they are from highly diverse causes, to be as likely to cool the past relative to the present as warm it, but somehow, that never happens. Indeed, one would usually expect them to be random, unbiased over all causes, and hence best ignored in statistical analysis of the time series. . . .
        All it takes to introduce bias is to correct for all of the errors that are systematic in one direction, and not even notice sources of error that might work the other way.

        That objection still stands. Why no corrections in the other direction?

      • The adjustments… the same load of rubbish every time.

      • The biggest reason there are few corrections in the other direction is physics… the globe is warming.

      • The divergence is an indication something is wrong.

        Does sea level rise, much of it measured by satellites, support the UAH/RSS surface temperature record? No, not even close.

        Looks like three centimeters up since the 2011 trench bottomed out.

        Well, I don’t think the SLR budget items are known very well.

        But irrespective of that, SLR wouldn’t tell you anything about profiles of atmospheric warming rates.

        If one assumed that most of SLR is from thermal expansion, that would tell you whether the oceans were warming, but that wouldn’t have a bearing on the middle or upper troposphere. Those two trends are not mutually exclusive.

      • Nonsense.

        Name one physical observation of the surface, which is what thermometers are, that supports the notion that global temperature has been flat since 1998.

        The satellite SAT record is BS. It does support a political viewpoint, and they aren’t going to let go of it. Fine. The longer this goes on, the worse they will look.

        AMO – spiking upward.
        PDO – has gone from negative to solidly positive
        SST – surging upward
        Thermometers – surging upwards.
        2015 – already a given to be the warmest year by a country mile.

      • Name one physical observation of the surface, which is what thermometers are, that supports the notion that global temperature has been flat since 1998.

        You’re ignoring the truism quoted above that surface temperatures and satellite measurements are for different levels.

        There is no physical law which necessitates that warming is the same, or even present everywhere.

        Indeed, this winter, there may well be snow on the roof but fire in the furnace at your house.

      • The satellite SAT record is BS. It does support a political viewpoint, and they aren’t going to let go of it. Fine. The longer this goes on, the worse they will look.

        The satellite record, like the surface record, has uncertainties and a seemingly endless chain of ‘corrections’.

        However, the satellite has better coverage than the surface and co-located verification from an independent measurement set (sonde data).

      • “There is no physical law which necessitates that warming is the same, or even present everywhere.”

        I should add a caveat to that wrt MSU data.

        The measurements indicate a trend toward instability ( more warming at surface / less warming aloft ). This would reach a limit at some point.

        But the existing trends are not implausible.

      • Turbulent Eddie,

        The problem is the atmospheric bands reported by Christy and Spencer are not tight enough to compare to the model predictions.

        Not to mention that the uncertainties in those measurements prevent you from declaring that the trends are as exact as you say they are.

        Even though, the last time I looked the mid tropospheric trends were about the same as the surface, definitely not less.

        The problem is the atmospheric bands reported by Christy and Spencer are not tight enough to compare to the model predictions.

        ?

        Here’s what the trends look like. You decide:
        http://climatewatcher.webs.com/HotSpot.png

        Even though, the last time I looked the mid tropospheric trends were about the same as the surface, definitely not less.

        For 1979 through 2014 ( the last complete year ), degrees C per century:

        0.7 – mean of UAH & RSS middle troposphere
        1.2 – mean of UAH & RSS lower troposphere
        1.5 – mean of ( GISS, CRU, NCDC ) surface land/ocean index

        opposite of what is modeled.

      • David Wojick

        David Hagen, uncertainty in measurement can indeed be treated statistically, provided the same measurement is made repeatedly with the same instrument. It is a simple case of sampling theory. This has nothing to do with quantifying human judgement, which is the topic of this post.

      • David Wojick

        JCH, you say “Name one physical observation of the surface, which is what thermometers are, that supports the notion that global temperature has been flat since 1998.” I cannot name them but many station thermometers show cooling over this period (and over much longer periods as well).

        You folks seem to be confusing the surface statistical models like GISS, HadCRU, BEST, etc., with observations, which they are not, by any means. They are crude estimation methods at best. The only instruments we have for actually trying to measure atmospheric heat content, hence temperature, are the satellites. They were designed for that job.

      • David L. Hagen

        JCH Try APPLYING some Expert Judgment.
        No Warming
        RSS shows NO WARMING for 18 years 8 months (despite the Sierra Club’s mantra on 97%)
        re a station showing NO WARMING
        See the summer average temperature for Baffin Island

        the weather station located at Clyde, Northwest Territory, which is located on Baffin Island very near the site of the lake. There is no trend here from 1943 to 2008, the period of available data. 

        Temperature Adjustments
        As an example of questionable warming “adjustments” to five stations see: Darwin Zero Before and After WUWT Dec 12, 2009

        We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

        Sea Level Rise Rate
        On sea level rise, see NOAA’s highly linear records of tidal gages: Latest NOAA Mean Sea-Level Trend Data through 2013 Confirms Lack of Sea-Level Rise Acceleration
        For details on trends versus adjustments see:
        Nils-Axel Mörner, “The Great Sea-Level Humbug – There Is NO Alarming Sea Level Rise!”, SPPI

      • Turbulent Eddie
        Look at fig 1
        http://www.remss.com/measurements/upper-air-temperature

        Not tight enough to do what your charts say they do, wherever your charts came from.

      • David L. Hagen

        David Wojick Re: “uncertainty in measurement . . . has nothing to do with quantifying human judgment”
        I recommend you actually study uncertainty analysis using the BIPM GUM and NIST TN1297, especially focusing on Type B errors. The GUM Intro states:

        4.6 Knowledge about an input quantity Xi is inferred from repeated indication values (Type A evaluation of uncertainty) [JCGM 100:2008 (GUM) 4.2, JCGM 200:2008 (VIM) 2.28], or scientific judgement or other information concerning the possible values of the quantity (Type B evaluation of uncertainty) [JCGM 100:2008 (GUM) 4.3, JCGM 200:2008 (VIM) 2.29].

        Similarly see NIST Evaluating uncertainty components: Type B

        A Type B evaluation of standard uncertainty is usually based on scientific judgment using all of the relevant information available, which may include:
        previous measurement data,
        experience with, or general knowledge of, the behavior and property of relevant materials and instruments,
        manufacturer’s specifications,
        data provided in calibration and other reports, and
        uncertainties assigned to reference data taken from handbooks.

        NIST 2.5.4 Type B evaluations

        Type B evaluations can apply to both random error and bias. The distinguishing feature is that the calculation of the uncertainty component is not based on a statistical analysis of data. The distinction to keep in mind with regard to random error and bias is that:
        random errors cannot be corrected
        biases can, theoretically at least, be corrected or eliminated from the result.

        Type B errors are ALL about “scientific judgment”, NOT statistics of repeated measurement. The IPCC’s greatest failure is in NOT addressing ALL Type B errors – especially the LEMMING FACTOR (political herd bias).
        Dan Burke, Alan Morrison observe:

        As with any mass movement or business trend, the lemming factor around technology (global climate models) and the Internet (climate science) is extreme. Regardless of the number of carcasses accumulating in plain sight (model failures) . . .

        When 95% of 35 year model predictions lie OUTSIDE of the data = FAILURE.
        Go back to basics.

      • Station thermometers showing cooling is simply situation normal. How could it be otherwise?

      • David Hagen, the NIST type B quantification method is just guesswork dressed up. There is no way to accurately or even reasonably quantify these sorts of uncertainty.

      • David L. Hagen

        David Wojick Examine NIST’s quantitative evaluation of Type B uncertainty components. See 17 quantitative items in Slide 48 “Case 1: Summary of Type B Uncertainties, 5 MPa oil, gauge mode” of: SIM Metrology School: Pressure
        Try actually reading, studying, learning about Uncertainty Analysis and citing professionals who do it! e.g. see
        “Depth Accuracy in Seabed Mapping with Underwater Vehicles”, Bjørn Jalving

        A complete DTM depth error budget has been identified. Modeling and quantification of the individual error sources reveals that a DTM depth accuracy of 0.13 m (1σ) can be achieved for 300 m UUV depth, 50 m altitude and 30° MBE beam angle (see Fig 2)

      • David L. Hagen
        Sorry for my late arrival in this discussion; I realize the party might be coming to an end.
        Thank you for drawing attention to the Guide to the Expression of Uncertainty. I think it demonstrates glaring ignorance by the IPCC to omit this standard. A standard published in the name of: BIPM: Bureau International des Poids et Mesures; IEC: International Electrotechnical Commission; IFCC: International Federation of Clinical Chemistry; ISO: International Organization for Standardization; IUPAC: International Union of Pure and Applied Chemistry; IUPAP: International Union of Pure and Applied Physics; OIML: International Organization of Legal Metrology.

        As several others here, I have searched for references to this standard in the works of the IPCC, but have not found any so far. Unbelievable – as this is the only internationally recognized standard for the expression of uncertainty. Even though the IPCC urgently needed a standard to express uncertainty, they failed to identify or recognize this guideline. They made up their own in a hasty way. Their guideline is largely a joke, called: “Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties”. https://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf

        As this post is on expert judgement I would like to draw attention to the section “7 Reporting Uncertainty” in the “Guide to the Expression of Uncertainty in Measurement”. Here are some relevant extracts:
        “7.1.1 In general, as one moves up the measurement hierarchy, more details are required on how a measurement result and its uncertainty were obtained. Nevertheless, at any level of this hierarchy, including commercial and regulatory activities in the marketplace, engineering work in industry, lower-echelon calibration facilities, industrial research and development, academic research, industrial primary standards and calibration laboratories, and the national standards laboratories and the BIPM, all of the information necessary for the re-evaluation of the measurement should be available to others who may have need of it.”

        7.1.4 Although in practice the amount of information necessary to document a measurement result depends on its intended use, the basic principle of what is required remains unchanged: when reporting the result of a measurement and its uncertainty, it is preferable to err on the side of providing too much information rather than too little. For example, one should

        a) describe clearly the methods used to calculate the measurement result and its uncertainty from the experimental observations and input data;
        b) list all uncertainty components and document fully how they were evaluated;
        c) present the data analysis in such a way that each of its important steps can be readily followed and the calculation of the reported result can be independently repeated if necessary;
        d) give all corrections and constants used in the analysis and their sources.
        A test of the foregoing list is to ask oneself “Have I provided enough information in a sufficiently clear manner that my result can be updated in the future if new information or data become available?”

        My point is that an expert judgement cannot be in the form: “As an expert I believe the uncertainty is: …..”

        Without data no one can be an expert. Scientific judgement cannot be “just guesswork dressed up”. Scientific judgement must be based on data and observations. Such judgement, such data and such observations should be reported in accordance with the recommendations in Guide to the Expression of Uncertainty section 7.

        The reporting of uncertainty by IPCC fails more or less completely to meet the recommendations of this international guideline.

      • What is really noteworthy and even more unbelievable is that the IPCC was partly misguided by the recommendations from the 2010 independent review. The points from the independent review, with relevance to expression of uncertainty, are included on the last page of their own document “Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties” https://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf

        “The 2010 independent review of the IPCC by the InterAcademy Council (IAC)3, released on August 30, 2010, included six recommendations related to the evaluation of evidence and treatment of uncertainty in IPCC reports. These recommendations are listed below, with brief summaries explaining how the AR5 guidance addresses their key elements.”

        But even the independent review, by InterAcademy Council, fails completely to identify, recognize and comply with the internationally recognized “Guide to the expression of Uncertainty in Measurement”.

        I only include one of the recommendations by the independent review here, the one which considers quantitative probabilities, or in other words uncertainty:

        “Recommendation: Quantitative probabilities (as in the likelihood scale) should be used to describe the probability of well-defined outcomes only when there is sufficient evidence. Authors should indicate the basis for assigning a probability to an outcome or event (e.g., based on measurement, expert judgment, and/or model runs).”

        I find the two following observations particularly noteworthy:

        The independent review advised that it is sufficient to “indicate” the basis for assigning a probability!

        The word “Indicate” can be interpreted in many ways, IPCC seems to have interpreted it in the slackest way possible.

        The independent review opens the door to basing the probability of an outcome on expert judgement and model runs!

        So much for an independent review. Science seems to struggle to find a safe haven even in the InterAcademy Council. For the next time, may I suggest close scrutiny, rather than a friendly review.

    • Robin Hanson has campaigned for betting markets as ways to quantify confidence and uncertainty. There is some evidence that these do better than other methods of aggregating opinions and information. Charles Manski has pointed out some theoretical limitations of these markets, however.

  5. Sometimes I find the air here at Climate Etc. mighty rarified. Of course I enthusiastically applaud the hard work Dr. Curry does to bring fresh air and light to the dark, fetid swamp that is the climate change industry. To boil it all down, we simply do not know what the climate’s going to do in the future. We know it will change….or continue to change I should say….but no one…not even Michael Mann knows how.

    Even if we assume that it will actually warm, no one short of God knows whether that will be a good thing or a bad. About the only thing we can say for sure, it seems to me, is that the earth is greening under the influence of increased CO2. Funny how we never hear about that from the greens. You’d think, given their name, they’d want to celebrate this development.

    But of course, the warmists by and large won’t even consider such things. The President of the United States is an embarrassment. So is his party. The vested interests, the obvious corruption, the shoddy science, the hypocrisy continues unabated.

    As our old friend Jim Cripwell used to say, “who will bell the cat?”

    (aka pokerguy)

    • Yep. And don’t hold your breath for coastal South Carolinians to move inland. Even in the very teeth of a ONE THOUSAND YEAR FLOOD.

      • About 1.4 million people live on the barrier islands and they aren’t moving, despite expert testimony. Why are they called “barrier” islands?

        Paper: Barrier Island Population…yada yada…

        http://www.jcronline.org/doi/full/10.2112/JCOASTRES-D-10-00126.1

      • jim2 wrote, ” And don’t hold your breath for coastal South Carolinians to move inland. Even in the very teeth of a ONE THOUSAND YEAR FLOOD.”

        Why would they move inland … into the flood?

      • You have a point roving. They should all move to the barrier islands. Yep.

      • This is the land of Porgy and Bess.

        Strawberry.

        Summertime.

        I got plenty of nothing.

        Bess You is my Woman.

        There’s a boat that leaving soon for NY.

        Why would they leave? These people know how to adapt and deal and thrive. Leave for what? The PC culture of NY or Frisco or LA or Seattle?

        Weather happens. Live with it!

      • The PC part reminded me of something, but it will have to wait for the next Energy and Policy post.

  6. An interesting post. I had considerable professional exposure to this class of problems during my business strategy consulting career. One-expert-one-vote ‘committees’ are sample-selection and sample-size dependent. Delphi methods were originally developed by the US military as an improvement. We developed variants on two others that worked better than Delphi (and yes, we ran head-to-head comparisons, and yes, we learned what was better). In my business world, if three years later you were wrong, you did not get invited back for another engagement.
    One was convening a diverse panel of experts, then facilitating a ‘bounded uncertainty’ version of Raiffa’s decision analysis (probabilistic decision trees). Works great for eliminating the improbable. Not so good for a rational consensus on the most probable. But very useful for developing time-staged ‘option plays’: if this by then, we do x, else not.
    The other was a repurposed version of conjoint analysis, originally developed to elucidate consumer preferences for future product attributes, repurposed to elucidate expert future-uncertainty attributes. Very good for saying x is a big unknown, y is not, so we will just run with whatever the experts say the result is supposed to be. Of course, you have to have the right attributes in the question set for this to work.
    Both processes helped improve decision-making quality on possible multi-billion-dollar, multiyear initiatives. We got invited back for more engagements. Sounds like Cooke has articulated a more public-policy-friendly process.
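
    A minimal sketch of the fold-back logic behind a Raiffa-style probabilistic decision tree, as described above; the probabilities and payoffs are invented, where a real engagement would elicit them from the expert panel:

```python
# Fold back a one-stage decision: commit now vs. buy an option to decide
# later, under an expert-elicited probability (all numbers invented).
p_good = 0.4                      # elicited chance the future turns out 'good'

payoff = {
    ("invest", "good"): 100.0,
    ("invest", "bad"): -60.0,
    ("option", "good"): 70.0,     # smaller upside: option cost plus delay
    ("option", "bad"): -10.0,     # walk away, losing only the option cost
}

def expected_value(decision):
    return (p_good * payoff[(decision, "good")]
            + (1 - p_good) * payoff[(decision, "bad")])

for d in ("invest", "option"):
    print(d, "EV:", expected_value(d))
# 'invest' EV = 0.4*100 - 0.6*60 = 4; 'option' EV = 0.4*70 - 0.6*10 = 22.
# The staged play wins here, matching the 'if this by then, we do x' logic.
```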

    • David Wojick

      Unfortunately the improbable (according to the experts) has a nasty way of happening. Think South Carolina floods. Think preparing to fight the last war instead of the next. Keep in mind that believing a false forecast is often worse than believing none. Having also spent lots of time in and around the Pentagon, I prefer none.

      • DW, I have a limited amount of empathy. My Dad is buried at Arlington with the other neck order, awarded for his ‘peacetime’ work there.
        But to proceed with no sense of likely possible futures is to proceed blind. That is possible tactically. Train for everything, respond situationally. What special forces do.
        It is not possible strategically. The time frames are too long, the resource commitments too great. Look at the climate change bill California’s Gov. Moonbeam just signed into law for an example.
        Like Judith says, a wicked multidimensional problem.

      • A larger flood happened there in SC in 1908, IIRC.

      • Moonbeam is really shining…the shoes of his political pals. A lot of dough at steak …;)

      • David L. Hagen

        David Wojick
        Instead of alarmists, try the real expert on flooding, Demetris Koutsoyiannis. He finds conventional statistics severely underestimate extreme events. His extensive analysis found better distribution shapes, e.g.,
        The underestimation of probability of extreme rainfall and flood by prevailing statistical methodologies and how to avoid it, Koutsoyiannis, D., EU COST Action C22: Urban Flood Management, 2nd meeting, Athens, University of Athens, 2006. See especially slides 15-19.

        “It can be shown that the distribution tail of flood is of the same type as that of rainfall. The EV1 distribution, which has been the prevailing distribution in rainfall underestimates risk significantly”
        “The shape parameter k of EV2 is very hard to estimate on the basis of an individual series, even in series with length 100 years or more. However, the results of the analysis of 169 long series of rainfall maxima allow the hypothesis that k is constant (k = 0.15) for all examined zones”
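
        To see the tail effect quantitatively, one can compare exceedance probabilities under EV1 (Gumbel) and a heavy-tailed EV2-type GEV. A sketch using SciPy, whose genextreme shape parameter c has the opposite sign convention to the k quoted above, so k = 0.15 maps to c = -0.15 (an assumption about conventions worth checking against Koutsoyiannis's notation); the location and scale are hypothetical:

```python
from scipy.stats import genextreme, gumbel_r

loc, scale = 50.0, 15.0          # hypothetical annual-max rainfall (mm)
ev1 = gumbel_r(loc=loc, scale=scale)
ev2 = genextreme(c=-0.15, loc=loc, scale=scale)  # heavy upper tail (EV2)

for x in (100, 150, 200):        # large rainfall thresholds
    print(f"x={x}: P_exceed EV1={ev1.sf(x):.2e}  EV2={ev2.sf(x):.2e}")
# EV2 assigns far higher probability to the extremes, i.e. EV1
# 'underestimates risk significantly' in the tail, as quoted above.
```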

      • David L. Hagen: Demetris Koutsoyiannis. He finds conventional statistics severely underestimate extreme events.

        Thank you for the link to his presentation.

        It goes into my metaphorical pile related to: “natural variability has hardly been studied at all.”

      • Matthew

        I think I would change your statement to:

        ‘natural variability has not been studied in any great detail since the advent of computers.’

        Go back to the days of such as Lamb and Budyko and much of the research was of a practical nature that centred on the extent of natural variability.

        Lamb’s protégé, Phil Jones, realised that knowledge of the past climate had been rather forgotten when he acknowledged in his 2006 paper – concerning the great warming of the 1730s, which came to a shivering halt in the winter of 1740 – that natural variability was greater than had hitherto been realised.

        Tonyb

      • TonyB, quoting me with a correction: ‘natural variability has not been studied in any great detail since the advent of computers.’

        I think there is a problem with my phrase “in any great detail”. I think it has become clearer since 1985 that we do not have enough detailed knowledge about natural variability to estimate the climate sensitivity to CO2. But people have been studying it, and study it still. Consider the discussion of “SCC15” at WUWT.

      • oops!

        “in any great detail” was TonyB’s phrase.

        My phrase “has hardly been studied at all” is clearly too extreme, and he got closer to my intended meaning than I was.

    • David L. Hagen

      ristvan – Any good docs on these methods?

    • I used to moderate strategy development sessions, and whenever possible steered them to land in phased approaches which emphasized having the ability to gather more information and change gears/direction on the fly.

      But I met a lot of resistance from individuals who were used to all-or-nothing, on-off projects. One of them, a very senior VP, complained to those gathered in the conference room that I was always trying to get him pregnant in stages, the first time I ran him through a phased strategy.

  7. Our nation’s capital is teeming with experts
    a great many in the employ of tax payers
    the war colleges, World Bank, Pentagon, CIA, DIA, NASA, the entire alphabet is covered
    they produce mountains of papers
    no doubt consensus here and there
    who thought it a good idea to rebuild Afghanistan?
    who couldn’t see the last real estate bubble coming from a mile away?
    the health care system is becoming a bigger mess

    human affairs may be just like the climate
    the forcings are not well understood
    and we may be beyond our collective control
    to be honest, I hope so

  8. The only problem, well not the ONLY problem, but one problem is the Hockey Team will try to shut out experts that don’t toe their alarmist line.

    Is there a way around it? How do you prevent gaming of the system? We know they will try their best to game it.

    • The data is not on their side. The actual data will be their downfall.

    • ‘They’ cannot control the blogosphere, although some (SKS, ATTP) try. And it is indelible. Eventually enterprising MSM reporters trying to make their way will latch on. That is already starting to happen. Do not despair. I am hopeful. Stuff is happening. Increasingly. COP21 is guaranteed to fail.

      • The Masque of Paris nigh,
        Will snow fall with a sigh?
        Red Death viral,
        Downward spiral,
        The lies, the tools, the dies.
        ================

      • I was all over comments on no less than three “1000 year” flood articles on CNBC. Posted Spencer’s blog article on it.

      • Call upon the
        pathetic fallacy
        of sighing snow
        or angry blizzard
        to put an end
        to alarm driven
        pathetic policy.

      • I certainly hope you are right Rud, the incredible amount of weather reporting in the media is getting really loud and it is obviously being done with an eye to Paris.

        Tony Abbott has been bounced out and replaced with a believer. Stephen Harper is doing better in the run-up to next week’s election but it still looks tight. That leaves China, Russia and India to hold the line against some very tricksy folk.

  9. An appeal to ‘impartial’ or ‘disinterested’ experts will fail

    In climate science, as in any science, there are no ‘impartial’ or ‘disinterested’ experts. If they were ‘disinterested’ they would not have become experts. They form opinions and cease to be ‘impartial’. No one gets into any science while ‘disinterested’.

    There are no ‘impartial’ or ‘disinterested’ experts.

  10. I didn’t study Cooke in detail and will do so, but following JC’s comments I am happy to see this discussion headed in the direction of having some kind of framework to bring rigor and oversight to statements of scientific proof or evidence of global warming, of the kind we have all seen for years, including in the IPCC reports. Think of any talk of “catastrophic risk of … the oceans rising, increased hurricanes, …” or the president making affirmations that “… this is happening, … this is real”. A set of guidelines on what is required to reasonably support such statements, and on the uncertainty bounds surrounding them, could be very helpful. Let’s all understand the meaning and justification of rhetorical and broad statements. Perhaps it needs to be codified?

  11. Posted this on energy and policy. Looks like cold fusion, but has lots of references, FWIW.

    http://judithcurry.com/2015/10/03/week-in-review-energy-and-policy-edition-15/#comment-735542

  12. Dr. Curry

    A piece to this story that I apparently missed: how comfortable are scientists with their own uncertainty? Does the scientist require closure, finality, a definitive outcome?

    Does the scientist’s ego demand that their opinion be held in high regard?

    It seems to me that people who live their lives comfortably with uncertainty, i.e., who cannot predict the future but have sufficient self-confidence that they will weather the future storm if it comes, are the ones to listen to, to be the counselors, although they do not profess to be clairvoyant.

    Overconfidence seems to be one of the seven deadly sins. The person may be right, but I would need a lot more supporting input before acceding to their point. Can they stand being wrong?

    • We can aspire to the adaptability of the cockroach and the durability of the termite.
      ==============

      • kim

        I agree that the last creature on earth, before our sun overwhelms the surface a couple of billion years from now, will be the cockroach. The cockroach, whose eggs are transported from New York City tenements in cardboard box linings to faraway and oft-times remote locations, has adapted and will continue to adapt.

        Before their time of planetary extinction, I believe that a few of us, maybe in much different forms, will escape earth’s gravity and travel to the stars, dragging as it will, the cockroach with us in such travels.

        As for the termite, besides their secretions/excretions constructing mounds of epic proportion, their legacy, I will surmise, depends more on our constructions and our rousting of the environment than on their adaptability to a changing world. Maybe I’m wrong.

        Most of the creatures I see who claim certainty, mostly climate scientists and their cabal, are not long for this world; their insights and values in particular are destined to be exposed as worthless as time marches on. The test of time. The inability to model the future. The pronouncements of certainty giving way to the tide of relentless change, an ever-changing world and its climate, changing.

      • Moderator or some sympathetic ear.

        I am in moderation, a tomb that collects dissidents and non-conformers. Am I such a creature to live without the light of reason?

    • RiH

      You ask an interesting question regarding the quality of Expert opinion. Obviously, the ultimate answer is ‘it depends on the Expert’, but I would make the following three comments based on my own experience working as and against Expert Witnesses in engineering-related cases:

      1 – Experts are expected to provide a lot of high-quality information, and are typically appointed because they are particularly clever / well-informed. Such people are not inherently inclined to say ‘I don’t know’.

      2 – Following from #1, ego and self-confidence play a big part in the Expert’s role, and they often act in opposing directions. An Expert’s ego will normally lead to over-certainty in the Expert opinion, while only an Expert with high self-confidence (and high self-awareness and integrity) will be happy to say ‘I don’t know’, or better still ‘this is not known at this time’, and consider that they still have Expert credibility (when in fact this should increase credibility, because it indicates integrity and honesty).

      3 – Experts can get attached to unproven and sometimes even nonsensical ideas, and it can be very difficult to shift such ideas, especially if they originated with that Expert.

  13. Hank Zentgraf

    Interesting, but it sounds a bit tortured to me. The reality is that our understanding of climate science is too weak even to calculate the uncertainties. We passed a guilty verdict on CO2 while in a state of deep ignorance. Time to get rid of the CO2 bias and open up our minds.

  14. I think this ties in here. Apparently Ted Cruz desiccated, yes desiccated :), the Sierra Club President. This is some good reading!


    Ten different times in his testimony, Mair claimed that “97 percent of the scientists concur and agree that there is global warming.”

    “The problem with that particular statistic, which gets cited a lot,” Cruz rebutted, “is it’s based on one bogus study.”

    “Is it correct that the satellite data over the last eighteen years demonstrate no significant warming?” Cruz asked again.

    “No,” Mair responded.

    “How is it incorrect?” Cruz asked.

    At this point, Mair leaned back from his chair on the witness table and listened to a bespectacled aide, who whispered in his ear.

    “Based upon our experts, it’s been refuted long ago,” Mair asserted confidently. “And it’s not up for scientific debate.”

    “I do find it highly interesting that the President of the Sierra Club, when asked what the satellite data demonstrate about warming, apparently is relying on staff. The nice thing about the satellite data is these are objective numbers,” Cruz stated.

    “Correct,” Mair answered.

    “Are you familiar with the phrase, ‘the pause’?” Cruz asked.

    After a long pause of his own (presumably to consult with his staff), Mair responded.

    “The answer is yes, and essentially, we rest on our position,” Mair said.

    “To what does the phrase ‘the pause’ refer?” Cruz asked.

    Once again, Mair leaned back in his chair to hear from his staff.

    “Essentially it’s the slowing of global warming during the ‘40s, sir,” Mair finally responded.

    “During the 40s? Is it not the term that global warming alarmists have used to explain the inconvenient truth—to use a phrase popularized by former Vice President Al Gore—that the satellite data over the last eighteen years demonstrate no significant warming whatsoever? Global warming alarmists call that the pause because the computer models say there should be dramatic warming, and yet the actual satellites taking the measurements don’t show any significant warming,” Cruz asked.

    http://www.breitbart.com/big-government/2015/10/07/ted-cruz-destroys-sierra-club-presidents-global-warming-claims-senate-hearing/

    • Weather don’t mean a thing if it ain’t got that climate schwing.
      ===============

    • If Cruz makes it through the primary morass, he has my vote.

    • Thanks for the link jim2. The Sierra club is a terrorist organization that needs to be taken down.

      • bedeverethewise

        Don’t be silly, the Sierra Club is not a terrorist organization. It’s a confidence scheme.

      • A confidence scheme appeals to greed; terrorism to fear and guilt.

        So we’ve got both; the best of all possible worlds. Every day in every way, things are getting greener and greener.
        ==========================

    • bedeverethewise

      Personally, I was not terribly impressed with Cruz’s knowledge of the topic; he was throwing out some simple talking points and sound bites to score points (he was very good as far as politicians go, but that bar is really low). But the guy from the Sierra Club was an embarrassment, what a dope. It was clear he had no idea what he was talking about. He kept repeating “I believe the 97% of scientists”, like a creationist repeating “It’s in the Bible” over and over. I’m sure if he were asked what the 97% were agreeing to, he would have no idea. And to top off the foolishness, he says “I would rely on the Union of Concerned Scientists….” There is nothing scientific about the Union of Concerned Scientists.

  15. Seems to play right into the old Progressive meme.

    Get a bunch of the smartest and brightest together and all problems have a solution.

    Works sometimes, but not for wicked problems or those with heavy political overlays.

  16. If only Marx were alive, he’d have all this sorted in no time.

  17. “Uncertain T. Monster is pleased.”

    Good. I keep thinking about Feynman’s thinking about verifying models and not fooling ourselves. We seem to be good at fooling ourselves, especially when we are rewarded for fooling ourselves.

    I’m not certain about uncertainty but we certainly should be aware of it. I tell you what, it certainly isn’t stopping the state of California from diving off the cliff, and nobody knows how deep the water is.

  18. Judith,

    Interesting post, thank you. I agree that Probabilistic Safety Assessment as used by the nuclear industry would be a great improvement over the IPCC process. However, there are massive problems with bias and/or ideological/political influence in the regulation of the nuclear industry too. The nuclear industry is overly cautious – unjustifiably so. The safety regulations for nuclear are out of balance with those for other electricity generation technologies. Electricity generation with nuclear power is two orders of magnitude safer than with fossil fuels, yet the safety regulations for nuclear are far more stringent than for fossil fuels. The impediments to nuclear prevent it being cost-competitive with fossil fuels, so it cannot replace them. Therefore the impediments to nuclear are causing much lower safety in the electricity industry than if they were reduced or removed – thereby allowing nuclear to become cheaper so it could replace fossil fuels and avoid around 60 deaths per TWh of electricity supplied.

    The point is that the PSA process is good, but it is overridden by politics and the nuclear paranoia. The same will be the case with climate change. So this is only a small part of the solution.

    I believe we need to expose the costs and benefits of the $1.5 trillion ‘Climate Industry’ and the costs and benefits of all the proposed policies. The proposed policies need to be compared, on a return-on-investment basis, with all other uses for the public funding. AR3 needs to be rigorously debated and explained in layman’s terms to the public. We need to expose that the advocated abatement policies will cost more than the benefits for all this century and beyond (as the red line here shows:
    http://catallaxyfiles.com/files/2014/10/Lang-3.jpg
    Explained here: http://catallaxyfiles.com/2014/10/26/cross-post-peter-lang-why-carbon-pricing-will-not-succeed-part-i/

    And here: http://catallaxyfiles.com/2014/10/27/cross-post-peter-lang-why-the-world-will-not-agree-to-pricing-carbon-ii/

    I’d also point out that climate scientists can talk about science but are not qualified to talk about policy, economics, engineering, financing, or international negotiations and diplomacy. So the scientists should be seen as just one contributor – just one cog in the wheel.

  19. Here’s your problem. You’re asking the wrong experts. From the article:

    Harvard’s Prestigious Debate Team Loses to Group of Inmates

    Months after winning a national title, Harvard’s debate team has fallen to a group of New York inmates.

    The showdown took place at the Eastern New York Correctional Facility, a maximum-security prison where convicts can take courses taught by faculty from nearby Bard College, and where inmates have formed a popular debate club. Last month, they invited the Ivy League undergraduates and this year’s national debate champions over for a friendly competition.

    http://www.cnbc.com/2015/10/07/harvards-prestigious-debate-team-loses-to-group-of-inmates.html

  20. Hence, a premium on scientists who understand uncertainty and can quantify subjective uncertainty. I really like the idea that better training of scientists on how to assess and think about uncertainty is key to better expert judgment.

    Has the Uncertain T. Monster read “The Drunkard’s Walk – How Randomness Rules Our Lives”? If so, she might encourage other scientists to read it too :)
    http://www.amazon.com/The-Drunkards-Walk-Randomness-Rules/dp/0307275175

    The Drunkard’s Walk: How Randomness Rules Our Lives is a 2008 popular science book by American physicist and author Leonard Mlodinow, which became a New York Times bestseller and a New York Times notable book.

  21. Bringing hard-number certainty to uncertainty smacks of hubris in humility. Were you really feeling and acknowledging the uncertainty in the first place? Climate still not fantastically complex? Is there still that old turn-of-century confidence in climate as kiddie-console where you can pick between the CO2 or SO2 buttons to make things go warmer or cooler?

    But if even more wobbly numbers are your thing, and if the preservation and expansion of the climatariat is the dream you dream, this would be a grand undertaking. Of course, some dilution of warmist messaging would result; one would need to be more modern-dress Anglican than fundamentalist mullah…but there will still be a Church of Climate to make pronouncements every time the weather sucks. And there will still be those stupendously expensive white elephants called “climate solutions”.

    Of course, I’d like to see the utter obliteration of the climatariat. But that’s just me.

    • … not jest you.

    • Profound ignorance served up with absolute certainty.

      Love the smell of hubris in the morning.

    • mosomoso’s native suspicion of attempting to quantify slippery things is understandable but misguided. It is only the requirement for regulators to try to quantify costs, risks, and benefits that provides any point of purchase for skeptical citizens. Prior to this “academic” approach, the regulators just said “X is bad, use the best available technology to get rid of X” and there was no argumentative or legal recourse. I remember when the greenies viciously attacked anyone who even suggested “putting a price on pollution” or applying any quantitative analysis to the costs of regulation relative to benefits. And under that pre-quantification regime they were able to enact extremely strong regulations (admittedly in areas where there was plenty of low-hanging fruit, like putting electrostatic precipitators in coal-plant smokestacks).

      By contrast, as the cost-benefit, risk-analysis, market-mechanism approach has gained purchase in the policy community, action on nebulous threats such as CO2 has been delayed and diluted considerably. Pretty much the only policies implemented in the U.S. on greenhouse gases have been those that created pork for producers of ethanol, solar panels, etc, and we have lots of policies like those at all times.

  22. The term “carbon emissions” means soot, like what comes out of a Diesel engine. Are we too lazy/lowbrow to use the proper term, which is “CO2 emissions”? Do we not want to distinguish between black dirty carcinogens and mostly harmless gases? Harumph.

  23. In the long term, there is nothing uncertain about a catastrophic anything. That mega-flood, mega-drought, mega-explosion has to come. It’s only in the short term that anything is unlikely or less certain. Want to go into the business of warning us when, based on percentages and extrapolations from pretty narrow and recent data? Better be right…because I did say “mega”.

    The problem is not that climate is more complex than I like to think. The problem is that climate is more complex than climate experts like to think. The people who should never act surprised like to be most surprised of all, because that’s how to get handed a cocktail or get invited to Paris. Dangerous stuff.

    Not knowing about the precedents of something doesn’t make it unprecedented. In August of 1849 Melbourne had its only true, full-scale snowstorm in recorded history. We’re very lucky to know that. (Less than three months after the snow, searing late-spring heat was followed, within a day, by a disastrous flood.) Why should we be in such a hurry to bury or minimise such info? Because it’s in a newspaper that’s faded and there were no photos? Because it’s too long ago to fuss about? Because climate or climate change only began in earnest in 1980? Or is it that nothing must spoil the “narrative” meant to prompt us into “climate action”?

    Necessary to remind Americans, of all people, of what a certain great river did in 1927?

    Every number has to come up. Instead of predicting the dice, assume snake eyes even while you enjoy double sixes. The whole point of being rich and having money is the ability to engineer and insure. So get rich, and, when you do, spend big on long term things. (Sorry, small government purists.) Ask yourself if a poor country could have responded to 1927 the way America did. (And you don’t have to tell me that the response was not entirely effective and equitable…but think China four years later!)

    Nobody feels like spending on big mitigation projects during nature’s truces. And it’s hard to find a climate expert who thinks of drought in the midst of flood, or vice versa. Farmers, foresters and fishermen are more likely to do that. My region had to wait half a century after the 1890s for its next big flood, in 1949. Trouble is, it only had to wait another year, till 1950, for the one after that. We had nearly four decades to relax after 1963’s flood, and deluge was the last thing on our minds after the 1990s…but then came the whopper of 2001. Fortunately, we’d learned some lessons from 1949-50.

    Listen to experts, and hope they are experts. But insure the premises, fix the roof, practise evacuating. Put money in thy purse.

    • Put thy money, honey, grain,
      in yer purse, cupboard, pouch,
      fer naychure is dangerous, ouch,
      and ‘vary variable…like Fontaine
      sayeth in fable, grasshopper,
      ayant chante tout l’ete,
      se trouva fort depourvu
      quand la bise fut venu
      … as it does.

      A serf.

      • Children begged at olde English doorways with this one for Hallowed Eve:

        The Lane is very dirty
        And my shoes are very thin.
        I’ve got a little pocket
        To put a penny in.
        If you haven’t got a penny,
        A ha’penny will do.
        If you haven’t got a ha’penny,
        May God bless you.
        ==============

    • mosomoso

      If there is one thing I have learnt from leafing through a thousand years of our climate records, it is that, as regards natural disasters, the word ‘unprecedented’ means only ‘until the next unprecedented event’, which may come as early as a generation later.

      But of course these texts we refer to are only ‘anecdotal’ and much less reliable than the completely reliable numbers that the data gatherers like to crunch.

      tonyb

      • Greg Cavanagh

        Rare events within a continent confuse the unwary. What happens is, you have many things that are considered a rare event, for example: hot, cold, windy, wet, dry, snow depth, flood depth. Then you have very many locations where any one of these could happen. If any one of these 7 items I’ve listed had a 1-in-1000-year event in a town, it would make news headlines. But how many towns are there in the States? Google says there are 3007 counties and 19,354 designated places in the US.

        So for any given year you’ve got at least 19,354 chances of any one of 7 rare events happening, which makes them not as rare as expected; a rough expected-value calculation is sketched below. But if you choose one specific location and wait for a single type of rare event, then you’d be waiting 1000 years :).
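
        A rough back-of-the-envelope version of the argument above, as a minimal Python sketch (the 19,354 places and the 1-in-1000-year probability are the figures quoted above; independence of events is assumed purely for illustration):

        ```python
        # Expected number of "1-in-1000-year" events per year across many
        # places and event types, assuming (for illustration) independence.
        n_places = 19354          # designated places in the US (figure quoted above)
        n_event_types = 7         # hot, cold, windy, wet, dry, snow depth, flood depth
        p_per_year = 1.0 / 1000   # annual chance of one such event at one place

        trials = n_places * n_event_types
        print("expected events per year:", trials * p_per_year)    # ~135

        # Chance that at least one happens somewhere in a given year:
        print("P(at least one):", 1 - (1 - p_per_year) ** trials)  # effectively 1
        ```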

      • Greg Cavanagh

        Thanks for a brilliant take on rare events :)

  24. Great insight into the measurement of uncertainty, and at the same time I think the root cause transcends (descends?) logic to roll around in the limbic system.

    The AGW issue is a psychological problem. At any time in mankind’s history, when confronted with uncertainty we turned to belief in unknown forces that might save us: if only we performed the right rituals, made the right sacrifices, believed unerringly in the forces we prayed to and the priests that represented them.

    Not much has changed, it seems; only the names are different. The deep-rooted collective OCD in our neurology demands that we (most of us anyway!) perform the ritual sacrifices, and our rightness is confirmed when we have some nice heretics to burn!

    Sound depressingly familiar?

  25. There are a couple of issues in this area that we need to think about before we get ahead of ourselves.

    What we are interested in from a policy perspective is risk. Now risk is the impact of uncertainty on the achievement of objectives.

    So the first point is that we are not interested in the science telling us just the first moment; we are interested in the second and third moments, which tell us more about the uncertainty and therefore the risk. (A toy numerical illustration follows at the end of this comment.)

    The second point is that climate change, in the main, is a slowly evolving risk. We are getting plenty of warning. The management of such risks is quite different from what is needed for sudden, major impulse risks. For example, slowly evolving risks offer the luxury of exercising options to wait for more information rather than acting immediately.
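
    As a minimal sketch of why the higher moments matter, here are two invented ensembles of projections (all numbers hypothetical) with identical means but very different spread and skew:

    ```python
    # Two invented ensembles of warming projections (degrees C) sharing the
    # same first moment (mean) but differing in the second and third moments.
    import statistics

    a = [1.8, 1.9, 2.0, 2.1, 2.2]
    b = [1.2, 1.4, 1.6, 1.8, 4.0]

    def moments(xs):
        n = len(xs)
        mean = statistics.fmean(xs)
        var = statistics.pvariance(xs)      # second central moment: spread
        sd = var ** 0.5
        skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)  # standardized third moment
        return mean, var, skew

    for name, xs in (("a", a), ("b", b)):
        m, v, s = moments(xs)
        print(f"{name}: mean={m:.2f} variance={v:.3f} skewness={s:.2f}")
    # Same mean, but ensemble b carries far more spread and tail risk.
    ```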

  26. This kind of algorithm usually works by weighting experts according to past performance with some convex function. I am not sure whether this can be adequately simulated with ‘seed questions’.
    With no prior knowledge, equal weighting will be best.
    All this is provided the judgements are independent.

    But the idea at least demonstrates, in a rational way, what a rational process might look like, whether and how far the actual process complies with it, what its flaws are, and how those flaws might be quantitatively evaluated. A toy sketch of performance-weighted pooling follows below.
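
    As a minimal sketch (not Cooke’s full classical model) of what performance-based weighting might look like, assuming purely hypothetical calibration scores from seed questions:

    ```python
    # Toy linear opinion pool: experts' subjective probabilities over three
    # outcome bins, combined with weights from hypothetical seed-question scores.
    import numpy as np

    expert_pdfs = np.array([
        [0.2, 0.5, 0.3],   # expert 1
        [0.1, 0.3, 0.6],   # expert 2
        [0.4, 0.4, 0.2],   # expert 3
    ])

    scores = np.array([0.9, 0.2, 0.6])   # hypothetical performance scores

    weights = scores / scores.sum()      # convex combination, as noted above
    pooled = weights @ expert_pdfs
    print("weights:", np.round(weights, 3))
    print("pooled: ", np.round(pooled, 3))

    # With no prior knowledge of performance, equal weighting is the default:
    print("equal-weight pool:", np.round(expert_pdfs.mean(axis=0), 3))
    ```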

    • If the experts disagree, then there is no rational basis for creating a probability distribution, the form of which will just be a function of which experts are included. The IPCC is a fine example of this. A room full of alarmist experts and a room full of skeptical experts will generate opposite distributions. All we really know is that the experts disagree.

  27. “The method’s originator Roger Cooke says that when scientists disagree, any attempt to impose agreement will “promote confusion between consensus and certainty”. The goal should be to quantify uncertainty, not to remove it from the decision process.”

    This is Judith’s ongoing confusion – conflating imposed agreement with consensus and then confusing that with certainty.

    • Michael: This is Judith’s ongoing confusion – conflating imposed agreement with consensus and then confusing that with certainty.

      Possibly it’s her “confusion”. The history of the IPCC suggests however that a “consensus” or “agreement” was imposed from the start that (a) climate change was human-caused; (b) bad; (c) preventable by human action; and (d) CO2 accumulation was an “urgent” problem.

      Public attempts to “impose agreement” have been plentiful: a list could start with the call by Sen Whitehouse for a RICO investigation of corrupt support for dissenters and their undefined supporters and with the letter in its favor by the RICO20. It is a long list.

      In short, I do not agree with your assertion that it is “Judith’s ongoing confusion”.

      • The history of the IPCC suggests however that a “consensus” or “agreement” was imposed from the start that (a) climate change was human-caused; (b) bad; (c) preventable by human action; and (d) CO2 accumulation was an “urgent” problem.

        I believe that the consensus was already forming along those lines before the formation of the IPCC based on the science. Otherwise, why would the IPCC have been formed in the first place?

        Joey, you’re right: their minds were made up and they are sticking with it.

        Now why did they pick that young Mann?

      • matthew,

        You’re confusing options on policy responses with the scientific consensus.

        Another measure of Judith’s confusion is her ‘JC’s reflections’ on this, about the “great promise” of the approach – you can go back 20 years and find examples of studies using ‘expert elicitation’ in climate science.

      • Michael: This is Judith’s ongoing confusion – conflating imposed agreement with consensus and then confusing that with certainty.

        Michael: You’re confusing options on policy responses with the scientific consensus.

        You are shifting your ground. And where did I exhibit that confusion anyway?

        Joseph: I believe that the consensus was already forming along those lines before the formation of the IPCC based on the science.

        The scientific “consensus” changed from a consensus on cooling to a consensus on warming after just a few warm years, and without any additional information (scientific research) on natural variability or the energy transfers in the oceans and atmosphere. The consensus was then enforced via public denunciations of dissenting scientists and behind-the-scenes campaigns to deny them publications, grants, and even (in at least one case) advanced degrees. The evidence accumulated since Hansen’s warning and since the creation of the IPCC has substantially undercut the case that there is an “urgent” need to reduce fossil fuel use.

      • matthew,

        You’re re-writing history.

        There was no “consensus” on cooling.

      • Michael: There was no “consensus” on cooling.

        That’s rewriting history. The definition of “consensus” changed after the early warnings on warming got “enforced” and the degree of dissension and disagreement was hidden from the public.

      • BS.

      • Greg Cavanagh

        Michael: you might have missed the ’70s, but there were constant warnings on the nightly news that the world was heading into a new ice age. The warnings were prevalent. Then the world started warming and they all changed their tune to warming. Not sure if it was a consensus, but it was the equivalent of today’s hype on global warming.

  28. It looks like strategizing how to be considered a science at all, the doubt having been raised. Both sides of the argument continue to want funds.

    If it isn’t a science, then what?

    Curiosity is still left, if anybody wants to take it up.

    I’d start by taking the word “science” out of the name of the field. Anything with “science” in its name, isn’t.

    • If it isn’t a science, then what?

      It’s a cult – the CAGW Cult.

      Kylie is an example of one of the followers at the bottom of the cult’s hierarchy: she swallows everything the cult tells her. She can’t debate (doesn’t understand) what’s relevant, so she resorts to calling those who don’t accept her cult’s beliefs “Deniers”.

    • rhhardin | October 8, 2015 at 7:32 am |
      “I’d start by taking the word “science” out of the name of the field. Anything with “science” in its name, isn’t.”

      Like Risk Science?

      As in “Willy Aspinall is Cabot professor of natural hazards and risk science at Bristol University”

      • There’s no probability space. No measure on the space.

        Fat tails wasn’t radical enough. There’s no tail at all because there’s no distribution. No Borel measure.

        All the rigorous stuff goes: “If such and so follows this distribution, then you’ll be wrong only x times out of y with this statistic.” You don’t get the “if” condition to get started.

        I’d recommend “winging it science,” or better “winging it.”

        There’s good common sense, but it’s not a science without a condition that’s not satisfied.

      • “As in … professor of natural hazards and risk science at … University”

        Is he held accountable if his risk assessments are incorrect? Skin in the game?

      • Like the Italian vulcanologists??

      • No, like the geologists in Galeras.

      • Turistas too.
        ==========

      • Awesome idea.

  29. One of those old sayings which societies ignore at their own risk: “save your money for a rainy day”.

    This does not mean consume your money, borrow more and then consume that too, while trying to eliminate all risks of known or unknown probability. Save first; then you will have the luxury of the capacity to avoid and accommodate the unavoidable.

    “Western” civilization is well down the rabbit hole of consuming its accumulated wealth and good will in search of avoiding all risks. At this rate it soon will be incapable of surviving any.

    The first consensus should be that each state shall encourage per-capita wealth accumulation, should measure it, and should be judged by it. All future obstacles and calamities will be more easily avoided or met when each citizen has wealth in the “bank” which can be capitalized or contributed as required without undue hardship.

    Knowledge is one kind of wealth to seek, but massive spending on solutions with very low productivity is wasteful. Borrowing to avoid the unknowable is a fool’s luxury. And finally, any decision-maker who stands to gain by appropriating the wealth of another has a conflict of interest.

    The first task for our society is to become solvent again as quickly as possible. Meanwhile deal pragmatically with known and present problems and hope that our storehouses of accumulated resources can help us find productive and effective solutions as needed to meet the known and present problems of the future.

  30. I’m a geologist in the oil and gas exploration industry. Uncertainty is almost the middle name of what we do. Regardless of how much we study the situation, we are NEVER certain of what we will find until we not only drill and complete a hydrocarbon-bearing zone, but have several months of production. There are too many unknowns.

    I call it a Limits to Knowledge problem. Regardless of how much effort or time you put into some problems, there are issues and there are interrelationships between the issues that are beyond your ability to control, predict or anticipate. There comes a point where you waste time and resources in further “work”: what you are doing is polishing the chrome on a car you don’t know will run.

    In meetings, maps and economic runs become more perfectly presented with time. Uncertainty disappears. After about 18 months, failure is simply impossible to imagine. “Everything” has been carefully considered. I have heard geologists in the office arguing with the geologist on the drilling rig when he reported the target zone wasn’t “there”. The in-office geologist had his maps, you see, and they were unequivocal.

    The problem is not, however, in the behaviour of the staff member. It lies in the behaviour – the demands – of management. Management insist on telling themselves and, more so, the next level of command above them, that they are in control of an uncontrolled situation. It is a perverse effect of the 19th-century English and American belief that God would never throw a problem at a man (or woman, these days) that He did not give the man (etc.) the ability to solve. (Otherwise, God is malign.) Uncertainty in this light is a condition of insufficient work or thought, not a fundamental of a complex, interconnecting reality.

    Perhaps philosophies other than those of the West handle uncertainty better, but I don’t know of any. Those that invoke Fate, or God’s Will, on the surface do, but they then fall down by saying that a lot of effort is pointless, as the outcome is indeterminable not because data is insufficient, but because a major player’s input – the Universe or God – is not determinable until it happens. Drill or not drill: either the oil exists there or it does not. I suspect human thinking as a species has an inability to rationally handle uncertainty, flipping between false comfort and throwing itself to chance.

    In case you are wondering, I have argued about Limits to Knowledge in a number of technically represented meetings. I argued that further work was inappropriate and would not increase chances of success, only emotional feelings of comfort that, should failure result, enough were “on-side” that no one is to blame. Never made a convert at the meetings, only made the group visibly uneasy. Outside the meetings on an individual case, yes, but not in public view.

    The English have a culture of “muddling through”. It is a way of exerting minimal effort when the problem may be exaggerated, go away by itself or later become relatively insignificant. It is one way of dealing with uncertainty, but, of course, it can be disastrous if the problem turns out to be serious and real. The COP meetings have an aspect of muddling through. If, in the next five years (as some skeptics have predicted), global temperatures start to actually decline, reflecting a 60-year temperature cycle lying on a very minor, CO2-based temperature climb, all the agreements that have NOT been put in cement will become moot. The participants will have had a good career in the service of the public good (yeah, right), some minor positives will have been achieved, and no long-term damage would have been done. I suspect that going slow is a behind-the-scenes strategy going on, hoping that time will demonstrate no real threat actually existed, or, if it does, it exists only for people “we” don’t like.

    • The 60-year cycle is a mirage.

    • Unfortunately, GISS shows a cycle of at least 200 years.

      What’s that saying? If the evidence and your pet theory don’t agree, then you put your theory in the dust bin.

      Interesting perspective. Thanks for the comparison to underground petroleum investigations. Having spent over five years in the field, one gets humbling training in predicting complex situations when data are lacking and interpretations of observations are complex. It reminds one of the surface temperature and abyssal temperature estimates made without much spatial data coverage. Hard to accept the 95% certainty of projections. Or the 97% consensus, although most sentient humans recognize that is bogus.
      Scott

      • +100
        I work in mobile data comms; the “data comms” part and the “mobile (UHF radio)” part are both somewhat black arts, inherently unpredictable in the field (i.e. the real world), at least for anything finer than rules of thumb. Having worked alongside a high number (even percentage-wise) of “experts”, I have come to believe that anyone who says they know what works in every situation is either lying or deluded.
        Climate is this situation squared.
        The only thing one can do is go by experience and “suck it and see”. Alas, for climate we have no seasoned experts; and the ones we do have, who are suggesting caution because of lack of knowledge/certainty, are swept off the stage by self-confident lime-lighters.

  31. We note that poor performance as a subjective probability assessor does not indicate a lack of substantive expert knowledge. Rather, it indicates unfamiliarity with quantifying subjective uncertainty in terms of subjective probability distributions.

    It could indicate either. To conclude as the author does, you would need some way to rule out that the “expert” did indeed lack “substantive” expert knowledge. (A toy calibration score is sketched at the end of this comment.)

    See also “The Well-Calibrated Bayesian” (A. P. Dawid, JASA, 1982), and the published comments and follow-up.

    Also consider the book by F. J. Samaniego titled “A Comparison of Bayesian and Frequentist Methods of Estimation”, in which Samaniego shows that Bayesians are unlikely to “perform” better than frequentists unless the Bayesian priors are sufficiently accurate. How could it be known that the Bayesians’ priors were sufficiently accurate? In all other cases, Bayesian inference would be a method that retards learning from the evidence.

    Structured expert judgment, I think, is less enlightening in the long run than a structured assessment of what the next important details to be explored are. Everyone agrees, I think, that clouds and other aspects of the hydrologic cycle are the next important details to be explored. What the structured expert opinions on the diverse quantitative effects might be hardly matters at all compared to what remains to be learned.
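
    As a minimal illustration of scoring a subjective probability assessor against realized outcomes, here is a toy Brier-score computation (forecasts and outcomes are invented, and this is one simple score, not the full calibration machinery of the paper):

    ```python
    # Brier score for an assessor's stated probabilities of binary events.
    forecasts = [0.9, 0.7, 0.8, 0.3, 0.6]   # stated P(event occurs)
    outcomes  = [1,   0,   1,   0,   1  ]   # 1 = event occurred

    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
    print(f"Brier score: {brier:.3f}")  # 0 is perfect; 0.25 is a constant 0.5 forecast

    # Note: a poor score cannot, by itself, distinguish weak substantive
    # knowledge from unfamiliarity with expressing knowledge as probabilities.
    ```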

    • David Wojick

      I think ocean circulation and indirect sun-climate mechanisms are the next things to be studied, and they are not details. We do not know enough for there to be important details.

      • stevenreincarnated

        Ocean circulation matters. At least according to this model.

        “A coupled atmosphere-ocean-sea ice general circulation model (GCM) has four stable equilibria ranging from 0% to 100% ice cover, including a “Waterbelt” state with tropical sea ice. All four states are found at present-day insolation and greenhouse gas levels and with two idealized ocean basin configurations.”

        http://onlinelibrary.wiley.com/doi/10.1002/2014JD022659/abstract

        Everything from snowball earth to an ice free earth with today’s external forcings and the difference is ocean heat transport.

      • David Wojick: I think ocean circulation and indirect sun-climate mechanisms are the next things to be studied, and they are not details.

        meh

        You want a discussion of what constitutes a “detail”?

  32. ‘Ignorance most definitely cannot be quantified’

    The context for this assertion is crucial, but as a general proposition, E. T. Jaynes and others would not agree. Jaynes built an entire career on (Shannon) entropy as a measure of ignorance/non-commitment. As a wild conjecture, his ideas on constrained maximum entropy may even have relevance to climate uncertainty. The method provides probability distributions which are noncommittal except where there is reasonably precise information; a toy example follows below.
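
    A toy example of the constrained-maximum-entropy idea, using Jaynes’s dice problem: among all distributions on faces 1–6 with a prescribed mean of 4.5, the maximum-entropy solution is an exponential tilt, and the tilt parameter can be found by bisection (illustrative only):

    ```python
    # Maxent distribution on faces 1..6 constrained to mean 4.5.
    # The solution has the form p_i proportional to exp(lam * i).
    import math

    faces = range(1, 7)
    target_mean = 4.5

    def mean_for(lam):
        w = [math.exp(lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)

    lo, hi = -5.0, 5.0
    for _ in range(60):                  # bisection; mean_for is increasing in lam
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid

    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in faces]
    p = [wi / sum(w) for wi in w]
    print("lam =", round(lam, 4))
    print("maxent pmf:", [round(x, 4) for x in p])
    # Noncommittal everywhere except where the mean constraint binds.
    ```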

    • Maximum entropy doesn’t fare well with large numbers of variables. It’s one thing to deduce the center of gravity of dice and another to handle the Navier–Stokes equations.

      It’s basically a chi-square test. It doesn’t help with the model you’d need and can’t construct.

  33. Steve Fitzpatrick

    Judith,
    I am not at all impressed with the described approach. If one had God-like knowledge, one could plausibly come up with suitable questions to ‘test’ the knowledge of individual experts, and then appropriately weight the contributions of the individual PDFs generated by the experts. But nobody has that knowledge. Who will decide the ‘correct’ answers? My guess is that the individual experts would disagree about which answers are correct, and whoever chooses the ‘right’ answers will control the output risk PDF.

    Seems to me that you COULD, at least in theory, weight experts by comparing their past projections/predictions to factual reality. But that would require an awful lot of clear past projections/predictions, and not the weasel-word-laced projections (could, may, possibly, might, potentially, etc.) so common in climate science. IMO, empirical estimates based as much as possible on physical measurements (like Lewis and Curry) are the best way to make rational PDFs. ‘Experts’ are just too often wrong.

  34. David Wojick

    If one actually constructed a mathematical model that captures human reasoning under uncertainty, there is no reason to believe that it would look like standard probability theory. That it does is an assumption with no basis, and with lots of evidence against it. For example, Pascal’s wager does not work in human reasoning.

    • I am not aware of any mathematical model, but there has been lots of research on decision making under uncertainty. Rife with bias and heuristics…

      • David Wojick

        Yes, in fact decision making under uncertainty is one of my fields of research, begun when I was at Carnegie Mellon years ago and worked with Herb Simon. Not sure what you mean by rife with bias and heuristics. There are certainly a lot of necessary heuristics in human reasoning. A typical complex issue involves thousands of considerations, so one is always dealing with incomplete information. But simplification is not bias. Bias implies faulty reasoning.

      • Good salesmen find it fun, David.

      • The red one requires more cleaning. The black then?
        Next please.

  35. My seed question for climate scientists is: make model runs from 1979 to present with ECS at 1, 2 and 3 °C, with actual GHG forcings as they occurred. Plot upper- and lower-troposphere temperature against satellite data. Which model will have the lowest MSE? (A toy version of the scoring step is sketched below.)
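
    A toy version of that scoring step, with invented placeholder series standing in for real model runs and satellite data (the ECS values here are just nominal labels for hypothetical runs):

    ```python
    # Compare hypothetical model-run temperature series against a
    # hypothetical satellite series by mean squared error (MSE).
    import numpy as np

    years = np.arange(1979, 2016)
    rng = np.random.default_rng(0)
    satellite = 0.010 * (years - 1979) + rng.normal(0, 0.1, years.size)  # placeholder

    model_runs = {                        # placeholder trends keyed by nominal ECS (C)
        1.0: 0.008 * (years - 1979),
        2.0: 0.015 * (years - 1979),
        3.0: 0.025 * (years - 1979),
    }

    mse = {ecs: float(np.mean((run - satellite) ** 2)) for ecs, run in model_runs.items()}
    print({k: round(v, 4) for k, v in mse.items()})
    print("lowest-MSE run:", min(mse, key=mse.get))
    ```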

    • John Carpenter

      ECS is not something you plug into the models. It’s an emergent property of model runs.

  36. David Wojick

    Senator Ted Cruz questions Sierra Club President Aaron Mair on Climate Change. Mair kept spouting the “97% of scientists believe…” nonsense.
    https://www.youtube.com/watch?t=587&v=Sl9-tY1oZNw


  37. Shocked, just shocked that a politician acts like one and a lawyer has a shallow understanding of climate.

    But the Sierra Club knows better. That was an embarrassment. Not as crooked or corrupt as Greenpeace, but moving in that direction. How hard would it be to say 97% once and then acknowledge that the Sierra Club looks at observations and develops conclusions from them?

    Still waiting for the results of the George Mason University and NSF reviews of the double payments for the same 40 hours of work by the climate green blob leaders who defrauded one of the two organizations.
    Scott

  38. Climate Change Jabberwacky …from the article:

    A recent spate of shark attacks in the United States and in Australia has sparked a controversial debate over whether to cull the predators to protect swimmers and surfers.

    One reason sharks have been swimming closer to shore and coming into contact with humans is that they prefer coastal waters that are warming as climate change accelerates.

    Ironically, if politicians decide to kill sharks, new research published in the journal Nature Climate Change shows that one unexpected consequence would be an increase in the carbon dioxide emissions that contribute to global warming.

    http://news.yahoo.com/sharks-attacking-climate-change-230111772.html

    What the Jabberwacky says about it:
    How to prevent global warming.
    Why is the ozone layer gone?
    What is ozone.
    Certainly that was NOT the question.
    Why is ozone dangerous.
    Dangerous but fun, presumably.
    Why are sharks attacking.
    Who were you attacking?

    http://www.jabberwacky.com/

  39. “…Another kind of elicitation — the Delphi method — was developed in the 1950s and 1960s. This involves getting ‘position statements’ from individual experts, circulating these, and allowing the experts to adjust their own opinions over multiple rounds. What often happens is that participants revise their views in the direction of the supposed ‘leading’ experts, rather than in the direction of the strongest arguments….”

    That description of Delphi is incomplete or outdated. It is now known that Delphi studies should take care to shield panelists from indicators of majority opinion, and should maintain panelist anonymity, precisely to avoid this sort of bias.

  40. “Hence, a premium on scientists who understand uncertainty and can quantify subjective uncertainty.”

    There are many scientists who understand uncertainty. There is an internationally recognized guideline on the theme, the Guide to the Expression of Uncertainty in Measurement. Unfortunately, there are no signs that the IPCC or their independent reviewers are aware of this guideline; no references to it can be found in their work.
    More on that here: http://judithcurry.com/2015/10/07/structured-expert-judgment/#comment-736233

    Regarding the “premium on scientists who can quantify subjective uncertainty”, I think you can promise a high reward for that capability with no risk of ever paying it.
    Here is one of Popper’s numerous takes on subjective statements:
    “a subjective experience, or a feeling of conviction, can never justify a scientific statement, … within science it can play no part except that of an object of an empirical (a psychological) inquiry. No matter how intense a feeling of conviction it may be, it can never justify a statement. Thus I may be utterly convinced of the truth of a statement; certain of the evidence of my perceptions; overwhelmed by the intensity of my experience: every doubt may seem to me absurd. But does this afford the slightest reason for science to accept my statement? Can any statement be justified by the fact that Karl Popper is utterly convinced of its truth? The answer is, ‘No’; and any other answer would be incompatible with the idea of scientific objectivity.”
    (Ref.: The Logic of Scientific Discovery)

    Hence, I think that real scientists would never attempt to quantify subjective uncertainty.

  41. David, there are so many activists writing pages that I have just been trying to save space. I think most people with a thoughtful disposition ‘get it’. People need to think for themselves.
