UK Parliament: IPCC 5th Assessment Review

by Judith Curry

The UK House of Commons Energy and Climate Change Committee has invited submissions to an inquiry on the IPCC 5th Assessment.

The Committee requested submissions that address the following questions:

  • How robust are the conclusions in the AR5 Physical Science Basis report? Have the IPCC adequately addressed criticisms of previous reports? How much scope is there to question the report’s conclusions?
  • To what extent does AR5 reflect the range of views among climate scientists?
  • Can any of the areas of the science now be considered settled as a result of AR5’s publication, if so which?  What areas need further effort to reduce the levels of uncertainty?
  • How effective is AR5 and the summary for policymakers in conveying what is meant by uncertainty in scientific terms? Would a focus on risk rather than uncertainty be useful?
  • Does the AR5 address the reliability of climate models?
  • Has AR5 sufficiently explained the reasons behind the widely reported hiatus in the global surface temperature record?
  • Do the AR5 Physical Science Basis report’s conclusions strengthen or weaken the economic case for action to prevent dangerous climate change?
  • What implications do the IPCC’s conclusions in the AR5 Physical Science Basis report have for policy making both nationally and internationally?
  • Is the IPCC process an effective mechanism for assessing scientific knowledge? Or has it focussed on providing a justification for political commitment?
  • To what extent did political intervention influence the final conclusions of the AR5 Physical Science Basis summary?
  • Is the rate at which the UK Government intends to cut CO2 emissions appropriate in light of the findings of the IPCC AR5 Physical Science Basis report?
  • What relevance do the IPCC’s conclusions have in respect of the review of the fourth Carbon Budget?

The submissions are posted publicly on the internet [link].  A total of 41 submissions are posted, from UK government organizations, professional societies, and individuals, including some foreign contributions.

Submissions from UK organizations include the Met Office, Royal Society, Royal Meteorological Society, Department of Energy and Climate Change, Natural Environment Research Council, University of Reading, Grantham Institute for Climate Change, and Grantham Research Institute on Climate Change and the Environment.  These all pretty much recite the ‘party line.’  Of those, I found the submission by the Royal Meteorological Society to be the most thoughtful and interesting.

Submissions from well-known skeptical organizations include:

Several submissions focused on the IPCC organization and process itself; each of these is worth reading:

  • Dr. Ruth Dixon presents a thorough examination of the IPCC Review Process, including the IAC’s recommendations and the IPCC’s response.
  • David Holland addresses the question “Is the IPCC process an effective mechanism for assessing scientific knowledge? Or has it focussed on providing a justification for political commitment?”  His response includes a history of the five Assessment Reports and the controversies associated with each.
  • Donna Laframboise writes a hard-hitting piece drawing on her two books about the IPCC, arguing that science is the lipstick on a pig of a political organization.

Robin Guenier‘s submission critiques the studies that purport to identify a large consensus.

Other submissions by individuals that I found interesting include:

  1. Pierre Darriulat (bacpierre)
  2. John McLean
  3. Barry Brill
  4. Ian Strangeways
  5. Alain Gadian

A number of submissions make scientific arguments that they believe refute the IPCC’s conclusions.  Of these, Nic Lewis‘ submission is a tour de force.  Not surprisingly, his submission is on the topic of climate sensitivity. This is the clearest explanation I’ve seen of the problems with the IPCC’s arguments regarding climate sensitivity.

JC note: I was asked by a Committee member (via email) to submit evidence, but I missed the deadline.  I received another email Monday, and quickly submitted something (it has not yet appeared on the site).  My submission drew from other op-eds/testimony that I’ve written, so there is nothing new that is worth discussing here.

Moderation note:  As a prerequisite for commenting on this thread, please read at least one of the submissions.  Please keep your comments on topic.

288 responses to “UK Parliament: IPCC 5th Assessment Review”

  1. “9. In order to conduct a proper scientific investigation, scientists must first formulate a falsifiable hypothesis to test. An alternative and null hypothesis – the simplest hypothesis consistent with the known facts – must be entertained. The hypothesis implicit though rarely explicitly stated in the IPCC’s work is that dangerous global warming is resulting, or will result, from human-related greenhouse gas emissions. The null hypothesis is that currently observed changes are the result of natural variability.” ~NIPCC

    Very sensible.

    • Why are you making the NIPCC look like a bunch of dummies by implying they believe the warming has to be the result of one or the other, and can’t result from some of both?

    • Because, Max_OK, the null hypothesis is that the observed changes are natural. Those are the known facts.

      One could quite reasonably argue that we are not extraterrestrials and that, therefore, land use as a result of living (be it habitat or farming) is natural, and any effects on climate that result from man’s presence are a result of natural processes, in the same way sheep graze a field, a mole creates a molehill, or algae bloom.

    • There are as many null hypotheses as there are hypotheses to test. No null hypothesis is any more justified than any other. That’s the whole idea of the concept.

      • It’s the other way around. If your hypothesis is that aliens cause global warming, the null hypothesis is still the same.

        Fact: the null hypothesis of AGW (Alienpogenic global warming) theory – that all global warming can be explained by natural variation – has never been rejected.

    • I wrote that every hypothesis has its own null hypothesis. If the hypothesis changes, so does the null hypothesis.

      Testing a hypothesis is answering a question. Thus questions that can be answered by statistical analysis each have their own hypothesis and null hypothesis.

      Discussion of which null hypothesis is the correct one is meaningless.

      • Not so… AGW could be Anthropogenic or Alienpogenic — two different hypothesized causes — but the null hypothesis stays the same. According to Dr. Spencer the null hypothesis of global warming theory — that all observed climate change is natural — has never been rejected.

    • Waggy, there’s no need to explain. After careful review of what the NIPCC said, I can see it’s not your fault. It’s not that you are trying to make the NIPCC look like dummies, they are dummies. You merely quoted what they said.

      The NIPCC doesn’t understand some stuff about hypothesis testing. For statistical testing purposes the hypothesis and its null should be mutually exclusive, which theirs isn’t, since warming may be caused by both man and nature. Obviously, as it’s stated, this hypothesis can’t be statistically tested to begin with. To say the null can’t be rejected because the hypothesis can’t be tested is as dumb as saying you can’t have a fever if you don’t have a thermometer.

      • “Obviously,” as you say, if all warming can be explained by natural causes, there’s no room for an hypothesized AGW as a cause for warming.

    • Pekka Pirilä | December 18, 2013 at 5:41 pm |
      “I wrote that every hypothesis has it’s own null hypothesis. If the hypothesis changes, so does the null hypothesis.”
      _____

      Yes, and a researcher can choose his hypothesis. Your hypothesis could be I can’t taste the difference between Coke and Pepsi, while my hypothesis could be I can taste the difference.

      • What you need to buy into — for the sake of truth — is that we first should look for simple ways to separate fact from fiction. So, for example, you don’t purposefully confuse matters with the possibility of there being 1,000s of possible null hypotheses when there is one simple null hypothesis that is sufficient to challenge the credibility of a theory.

    • Ahhhhh… another placid day at CE, and everybody’s having a great time!

      Speaking personally, I would rather dispense with the null/alternative formulation, and think more in terms of a finite mixture of different causal candidates. This better lends itself to directly addressing “some of” versus “most of” versus “none of” versus “all of,” or else simply banishing all of those in favor of talking in terms of proportions of variance explained by the various causal candidates. Bayesian formulations with some set of undogmatic priors over causal hypotheses would also be more groovalistic.

    • Re Simon Hopkinson’s comment on Dec.18, 2013 at 4:49 pm

      Yes, Simon, all warming is “natural” because humans are part of nature.

      Now, we can move on to the question of how much of the warming has natural causes and how much has supernatural causes.

    • As NW wrote, hypothesis testing is unnecessarily limiting. More often than not it’s better to jump directly to making quantitative estimates with uncertainty analysis. That also gives answers to many binary questions.

      • An undeniable benefit of hypothesis-testing is realizing that theories are a dime a dozen and not worth reading when highly questionable and can never be verified–e.g., sure, sure, maybe aliens cause global warming but, natural causes can explain ALL global warming so… let’s not spend a lot of time looking for alien-causation.

    • True, true, runaway big government is a far bigger threat to the well-being of all humanity than is runaway global warming.

    • Waggy, one of the dumbest mistakes you can make in statistics is to think if the null hypothesis isn’t rejected then you accept it.

      • You can’t reject a null hypothesis that is simply a restatement of reality–e.g., that ALL global warming can be explained by natural causes.

    • Pekka Pirilä said December 18, 2013 at 6:50 pm

      “As NW wrote hypothesis testing is unnecessarily limiting.”
      _______

      Indeed, but that doesn’t keep people from giving in to the urge to find statistical significance in something, anything.

    • Waggy, I have much doubt about your knowledge of statistics. I suspect you will not be able to answer the following four questions correctly:

      1. A tiny number isn’t statistically significant. True or False?

      2. A Type II error equals two Type I errors. True or False?

      3. A mode is a necessary restroom fixture. True or False?

      4. A large P-value means frequent pit stops. True or False?

      You might get all the answers right just by dumb luck. I will keep that in mind.

    • After reading this discussion, I’m wondering: would vitamins be considered a null hypothesis?

      http://www.latimes.com/science/sciencenow/la-sci-sn-vitamin-supplements-waste-of-money-20131217,0,3582353.story#axzz2nsxWbLIe

    • I think the hypothesis would be something like X vitamin provides a specified benefit and the null would be no benefit found. For example, does Vitamin D reduce further bone mass loss in patients who have already experienced some loss? Of course the hypothesis would have to be even more specific.

      I believe I get all the vitamins I need in food, but if a test showed I needed more of a particular vitamin, I would take it.

    • The null hypothesis should be based on known physics rather than denial of it. Even just the no-feedback response is agreed to, and some don’t even dispute that increases in water vapor could enhance this, and that aerosols could reduce it.

    • ” Jim D | December 19, 2013 at 12:30 am |

      The null hypothesis should be based on known physics rather than denial of it.”

      A great one-liner. The skeptics toss out these natural variability terms and then never do the analysis that is staring them in the face. If they actually consider all the terms that they volunteer then they will naturally find a TCR of 2C and ECS of 3C for atmospheric doubling of CO2.
      http://contextearth.com/2013/12/18/csalt-model-and-the-hale-cycle/

    • thisisnotgoodtogo

      The IPCC hypothesis is defined. It’s that the warming since ’51 is mostly man-made.
      It doesn’t matter what the exact percentage is; it’s over 50%, they say.

      So the null to that is available. (A minimal sketch of such a test appears at the end of this thread.)

    • David Springer

      The null hypothesis is that there is no relationship between two measured phenomena. If the hypothesis put forward is that CO2 causes surface warming, the null hypothesis is that CO2 does not cause surface warming. If the hypothesis is that a new drug cures a disease, the null hypothesis is that the drug does not cure the disease.

      Don’t make this more complicated than it needs to be. Saying the null hypothesis can be anything is what a weasel would want. When it comes to climate weasels just say no.

    • “So the null to that is available.”

      95% confidence that 50% of the warming since 1951 is due to some form of human activity is not likely to be falsified using “global” surface temperature, since about 30% of the warming is over land and GISS interprets high-latitude and higher-altitude warming as “surface” warming.

      The way GISS interprets the “surface,” there can be a 0.4C swing in temperatures with no change in actual “surface” energy. We have to wait and see how much the NH high-latitude temperature drops after the internal “AMO” shift, and then they can switch metrics or claim the shift is due to human influences.

      So they have a very defensible position even though it really doesn’t mean much.

    • David Springer,

      If the hypothesis put forward is that CO2 causes surface warming the null hypothesis is that CO2 does not cause surface warming.

      OK, but surely that is falsified by the existence of the greenhouse effect.

    • What you need is a method that can give attribution to the various mechanisms — something like CSALT. Unfortunately for the skeptics, the majority of the natural terms oscillate and contribute little to the warming trend. All that is left is the stadium wave and CO2 control knob, and that provides most of the remaining attribution.

    • Webster, “What you need is a method that can give attribution to the various mechanisms — something like CSALT. ”

      CSALT, and how the operator of CSALT works, are a perfect example of the problem. Some of your “mechanisms” are not causes but effects. Solar, volcanic and orbital forcings cause SOI and dLOD, which are internal settling patterns. So either you have something worth a Nobel or you are fooling yourself. How have the general responses been running so far?

      • William DiPuccio (Falling Ocean Heat Falsifies Global Warming Hypothesis):

        “Albert Einstein once said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” Einstein’s words express a foundational principle of science intoned by the logician, Karl Popper: Falsifiability. In order to verify a hypothesis there must be a test by which it can be proved false. A thousand observations may appear to verify a hypothesis, but one critical failure could result in its demise. The history of science is littered with such examples.

        “A hypothesis that cannot be falsified by empirical observations, is not science…

        “In brief, we know of no mechanism by which vast amounts of “missing” heat can be hidden, transferred, or absorbed within the earth’s system. The only reasonable conclusion – call it a null hypothesis – is that heat is no longer accumulating in the climate system and there is no longer a radiative imbalance caused by anthropogenic forcing. This not only demonstrates that the IPCC models are failing to accurately predict global warming, but also presents a serious challenge to the integrity of the AGW hypothesis.”
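
For readers who want to see what a formal version of “the null to that is available” could look like, here is a minimal sketch of a one-sided test of the IPCC-style attribution statement against the null that half or less of the observed warming is anthropogenic. The best estimate and standard error below are hypothetical values assumed purely for illustration; they are not taken from AR5 or from any submission.

```python
# Minimal sketch: one-sided test of "more than half of the observed warming is
# anthropogenic" (H1) against the null "half or less is anthropogenic" (H0).
# The estimate and its standard error are HYPOTHETICAL, purely for illustration.
from scipy import stats

frac_estimate = 0.7    # hypothetical best estimate of the anthropogenic fraction
frac_stderr = 0.15     # hypothetical standard error of that estimate

z = (frac_estimate - 0.5) / frac_stderr    # distance of the estimate from the null boundary
p_value = 1.0 - stats.norm.cdf(z)          # one-sided p-value under a normal approximation

print(f"z = {z:.2f}, one-sided p-value = {p_value:.3f}")
# A small p-value would mean the estimate is hard to reconcile with "half or less
# is anthropogenic"; a large one would mean that null cannot be rejected.
```

Whether a normal approximation is appropriate, and where such an estimate and its uncertainty would come from, is of course exactly what the thread above is arguing about.
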

  2. thisisnotgoodtogo

    The Royal Society Falsehood

    “6 The Working Group I (WGI) Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC) provides a comprehensive and authoritative analysis of the physical science basis of climate change. The latest report confirms that there is unequivocal evidence for a warming world, largely caused by greenhouse gases emitted by human activities.

    ***The IPCC report is based solely on publicly available, peer-reviewed studies*** by thousands of scientists across a wide range of disciplines. The main conclusions are robust and reflect the range of uncertainty, as well as the established science, according to leading climate scientists in the UK and abroad.”

    • Incidentally, please note that I do not accept the wording of your question “Has AR5 sufficiently explained the reasons behind the widely reported hiatus in the global surface temperature record?” The word “hiatus” implies that warming will resume at some point in time but in fact no-one can say with any certainty if this will be in the near future or distant future. In this instance the word “stopped” would also be unacceptable because no-one can be certain that it has stopped. I prefer the neutral expression “absence of warming” (or if you prefer “absence of statistically significant warming”). ~John McLean

    • “I prefer the neutral expression “absence of warming” (or if you prefer “absence of statistically significant warming”). ~John McLean”

      The only short and neutral word is “plateau,” along with “plateau’d” and “plateau-ing.”

  3. Nic Lewis’s submission is a must-read. For any policy maker, for sure. It is also very worth reading for those who have been following his work and articles, as it is written so clearly and to the point.

    Hats off.

    • Nic Lewis’s submission includes:

      “A particularly robust way of empirically estimating climate sensitivity is the so-called ‘energy-budget’ method, which is based on a fundamental physical law – the conservation of energy. Energy-budget best estimates of ECS fall in a range between 1.5°C and 2.0°C (1.25–1.4°C for TCR), depending on the exact periods chosen for analysis. Using the longest available periods that were free of major volcanism gives a ECS best estimate of approximately 1.7°C (1.3°C for TCR).”

      1.25-1.4C TCR = 0.45C-0.5C warming since 1950 caused by CO2 alone. That’s more than half the warming since 1950 due to CO2. (A rough sketch of this arithmetic appears at the end of this thread.)

      This backs up the IPCC attribution statement “It is extremely likely [95 percent confidence] more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

      Nic Lewis continues:
      “Since AR4 a series of papers have derived estimates of ECS and TCR from observational data….the shorter-term TCR measure [is] 1.4°C.”

      1.4C TCR = 0.5C warming since 1950 caused by CO2 alone (more if you include other GHGs). That’s more than half the warming since 1950 due to CO2.

      Again IPCC attribution statement: “It is extremely likely [95 percent confidence] more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

      Puzzling isn’t it how skeptics, including Dr Curry, claim to have a problem with the IPCC attribution statement but promote Nic Lewis’s TCR figures at the same time!

      Nic Lewis: “If TCR really is 1.35°C then under RCP8.5 – the worst-case, business-as-usual scenario – the end of the 21st century will be approximately 2°C warmer than today”

      http://1.bp.blogspot.com/-72J4J7YtMS8/UYj_QVtNzLI/AAAAAAAAAAc/3P-Ax695p1g/s1600/Climate_last_542mio_years.png

    • So what, Lolwot? Using the IPCC’s theory, the IPCC’s data, and the IPCC’s authors’ best papers, you come to a very different conclusion than the IPCC’s, and most probably you have a non-problem with CO2 emissions. But you are very happy because it backs up one of the IPCC’s iconic statements … which is basically irrelevant because “more than half” of a (too) small quantity does not mean “a problem”.

      “Puzzling isn’t it how skeptics, including Dr Curry, claim to have a problem with the IPCC attribution statement, but promote Nic Lewis’s TCR figures at the same time!”

      I couldn’t say about Dr. Curry, and I don’t promote anything. I just point out the grotesque situation where a non-working theory (it hasn’t advanced a single bit in attribution in more than 30 years) gives you a non-problem when properly worked out. But we are all very alarmed, oh yes, with a consensus.

    • Antonio (AKA "Un físico")

      Hi plaza, have you read all the pdfs submitted to that UK thing? It would be very interesting if you summarized them all on your blog (in a table-like presentation). I have not visited your blog for a while, but if you put together that table, I could check whether some of my ideas in:
      https://docs.google.com/file/d/0B4r_7eooq1u2VHpYemRBV3FQRjA/
      have been shared by any of the submitters. [my ideas are: (1) climate sensitivity value estimation is science fiction, (2) abusing Monte Carlo methods in order to attribute climate change to mankind is incorrect and (3) climatic models are not reliable as they are based on THAT climate sensitivity and as they require at least 900 years of data compilation to work properly].

    • “so what?” says plazaeme

      Well how about admitting the TCR figures cited back the IPCC attribution statement for a start?

      Reading between the lines you clearly accept that, but like all other skeptics on the thread you are loath to admit it.

      Instead you make excuses (which I don’t accept, by the way) that the IPCC attribution statement, though true, is nevertheless a “non-problem.”
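
As a rough check of the arithmetic quoted in the replies above, here is a minimal sketch of the back-of-envelope calculation linking a TCR value to CO2-driven warming since 1950. The CO2 concentrations are approximate values assumed for illustration (roughly 311 ppm in 1951 and 390 ppm in 2010); they are not taken from the submissions, and the exact answer depends on the concentrations and forcings one chooses and on whether other greenhouse gases are folded in.

```python
# Minimal sketch of the TCR back-of-envelope calculation discussed above.
# CO2 concentrations are approximate ASSUMED values, not from the submissions.
import math

c_1951 = 311.0   # ppm, assumed approximate CO2 concentration in 1951
c_2010 = 390.0   # ppm, assumed approximate CO2 concentration in 2010

doublings = math.log(c_2010 / c_1951, 2)   # fraction of a CO2 doubling over the period

for tcr in (1.25, 1.3, 1.4):               # TCR values quoted from Nic Lewis's submission
    warming = tcr * doublings              # transient warming attributable to CO2 alone
    print(f"TCR = {tcr:.2f} C -> roughly {warming:.2f} C of CO2-driven warming, 1951-2010")
```

With these assumed concentrations the sketch gives numbers in the same ballpark as the 0.45-0.5C quoted above; comparing that figure with the total observed warming over 1951-2010 is what connects a TCR estimate to the attribution statement.
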

  4. Wagathon: I believe the null hypothesis should be that currently observed changes are the result of the heat emitted from energy use.

  5. thisisnotgoodtogo

    DOE

    “Have the IPCC adequately addressed criticisms of previous reports?

    There were only a few inaccuracies in the 4th Assessment Report, and none were found in the previous WGI report. These had little bearing on the main messages of the 4th Assessment Report. Nevertheless an independent evaluation of IPCC procedures and processes was undertaken by the Inter-Academy Council. As a result, the IPCC’s processes and management were extensively updated and strengthened. The IPCC has also published a protocol for investigating and if necessary, correcting alleged errors in its reports. We are confident that criticisms following the last report have been adequately addressed.”

    Except for
    Conflict of interest, refusal to fix

    Change of wording: “few inaccuracies in the 4th”; none in WG1 of the previous report. Not to say there were none. Just none admitted.

    • The answer to that question should not be an analysis of the errors in AR4 (or even AR5). It should include a look at institutional responses to IAC, selection of authors and lead authors and a count of grey literature cited in AR5.

      I doubt if they repeated their error about Himalayan glacier melt. I am worried that they may have hunted through lobbyist literature to find other factoids that support their ‘subjective Bayesian priors’ (h/t Nic Lewis).

    • thisisnotgoodtogo

      correcting alleged errors

  6. Dang, she’s figured out I can’t read.
    =====

  7. Wow!!! Now we need to know what the UK House of Commons Energy and Climate Change Committee intends to do with these submissions. All I can hope is that whatever happens, they recognize the need for some form of cross-examination of the evidence presented by those who did so. That, to me, is the only way we are going to get to the scientific truth.

    Will cross-examination be allowed? I doubt it. I was told that the key person on the committee is Peter Lilley, the only member of this committee who has a scientific degree. But whatever happens, I suggest we all “fasten our seat belts”.

  8. Reading through a couple of the comments and skimming a few others, I was left wondering whether many of them will have the slightest influence on the conclusions of the Select Committee.

    Contributions by individuals may perhaps have some effect, when some committee members choose to use them in support of their personal views; otherwise the best-written organizational comments (like that of the RMS) are probably much more effective.

    • All comments should be anonymous, and therefore treated on their scientific merit rather than inflated “me too, me too” authority…

    • RMS’s submission is so passive-aggressive in its avoidance of the issues that it knows motivated the questions that it will be useful only as an unread CYA for status-quo supporters. When you deal with these official bodies you have to formulate your questions like a lawyer in order to force them to confront the sticky substantive issues, such as the ones Nic Lewis surfaced, or the procedural issues raised by most IPCC critics.

  9. Pingback: La clave de la discusión del cambio climático (y el IPCC), en un breve y claro informe al Parlamento Británico. | Desde el exilio

  10. Pingback: La clave de la discusión cambio climático (y el IPCC) en un breve y claro informe al Parlamento Británico. | PlazaMoyua.com

  11. Nic

    ‘Apart from these known shortcomings in GCMs, it is fundamental to the scientific method that when modelled values do not agree with observations then the hypothesis embodied in the model is modified or rejected. The refusal in AR5 to accept the implications of the best observational evidence and of the over-estimation of warming by the climate models and accordingly to either:
    • reject the ensemble of GCM projections;
    • use projections from a subset of GCMs with ECS and TCR values fairly close to the best observational estimates; or
    • scale all GCM projections to reflect those estimates
    is unscientific.’

    #################

    The issue, as Nic points out, is NOT that models disagree with observations. This happens all the time. The issue is the reaction to this fact: there are many choices and paths one can take in response to a model not matching observations. Ignoring the discrepancy or waving it away is not defensible unless the discrepancy is small, or within the uncertainty of the observations.

    The difference between the models and the observations is larger than the uncertainty in the observations. That means something needs to be fixed, modified, improved, or tossed out. That means folks should be listing out all the possible issues, TESTING alternative approaches, and eliminating explanations for the discrepancy.

    • Are you sure you are alright Steve?
      I was expecting you to begin supporting the GCM’s as the best we have and the modelers ‘salt of the Earth’ speech.

    • Steven, Nic points out that “global” estimates of sensitivity to aerosols will require at least an NH-to-SH comparison. The NH land amplification, which remarkably follows SST and not CO2 on the Tmin side, including the switch in DTR trend, looks like an excellent topic for a young post grad.

    • No Doc,

      In a bit I will write up and post our criticism of the models. But, the dolts who imagine that there is some mechanical principle whereby theories and models are simply ‘rejected’ or “falsified’ need to understand that they are just factually wrong. Logically, models and theories don’t get falsified. That is, there is no logical procedure or rule that allows you to automagically reject a theory or model when it disagrees with observations. There is always a practical choice governed by practical, and yes subjective, and yes social, considerations.

    • In my world we do have a system to see if a model is falsified, indeed, the whole of science is based on this fact. If your model cannot reproduce reality, it is crap.
      That you cannot accept this simple postulate, that crap model outputs mean crap models, is telling.

    • Doc, a good post by Mosher. I generally appreciate your comments, but on this occasion I think you should have left your snark at home.

    • thisisnotgoodtogo

      Mosher said
      “But, the dolts who imagine that there is some mechanical principle whereby theories and models are simply ‘rejected’ or “falsified’ need to understand that they are just factually wrong. Logically, models and theories don’t get falsified. That is, there is no logical procedure or rule that allows you to automagically reject a theory or model when it disagrees with observations. There is always a practical choice governed by practical, and yes subjective, and yes social, considerations.”

      Sure there is falsification of hypotheses, unless you’re talking in meaningless babble.
      You claim that I personally robbed the bank at 10 AM, because I left the house at 9 AM and this and that and the other, but I have the court record saying I was in court all that time, with judge and lawyers as witnesses.

      Play your nutty games, Mosher.

    • Mosher is arguing semantics again. We are talking about very coarse numerical simulations with largely guessed inputs and a lot of missing physics, not blanket-bans of all poor models everywhere. We are not talking rejection so much as standard % error calculations. We are talking about falsification of the assumptions that the modelers made, not the idea of modeling and correction of models.

      It is not about models not being perfect. It is about fitness-for-purpose or even basic adequacy. Indeed it is not even the poor models fault so much as the circular logic that went into assumptions and the huge reliance placed on unvalidated results.

      Some models can be made to agree with reality if they use zero positive feedback and non-declining natural variation, but we must avoid using any projections of declining fossil fuel use since that seems impossible for the near future. But crucially, it seems the need to use any models for climate has withered away because there is no catastrophic scenario. The hypothesis was well tested and it failed all tests. The reason is one or all of the physics that the IPCC admitted was “unknown”.

    • Steven Mosher | December 18, 2013 at 6:03 pm |

      Actually, there is. Scientific theories specify their data by implying them. If a theory implies an observation statement that proves to be false then something in the theory must be rejected. Rejection is not automatic and thoughtless. The implied observation statement must pass tests that prove it is false. Because the theory is complex, the particular statement(s) rejected depends on some deliberation (Quine-Duhem). However, something in the theory has to be changed. As a whole, it has been falsified.

      Modelers recognize no relationship between their models and the observation statements. Yet they expect their models to have the same weight as theories. Nonsense. Utter and total nonsense. Until modelers specify the relationship between simulation and world they are simply cheating.

    • Theo, this was my test. The day a model is not fed real data is day zero; the simulations prior to this date are hindcasts and projections after day zero are forecasts.
      Day zero serves as the cut-off between past and future. The model’s output is modeled temperature, and we prepare two series: model minus known (hindcast-period) temperature, and model minus emerging (forecast-period) temperature.
      If the model is as good at forecasting as hindcasting, then the populations of (model-hindcast) and (model-forecast), for the same time period around day zero, should be the same.
      I would even allow the modelers to use the MEASURED values of CO2, aerosols and solar output IF their algorithms were cast.
      So if a model was set on Jan 1st 2000, it would be fair to compare the 12 years and 11 months of (model – temp) with Feb 1st 1987 to Dec 31st 1999.
      The two populations should be the same if the models work equally well at forecasting as they do at hindcasting. (A sketch of this comparison appears at the end of this thread.)

    • DocMartyn | December 19, 2013 at 3:13 pm |

      Your suggestion just might do the job. It certainly improves upon what modelers do now. But the Devil is in the details.

      What I ask is that modelers undertake to prove that their model runs, simulations, bear to reality some logical or methodological relationship that is different from plain old extrapolation from an existing line on a graph. They have not so much as attempted to do this.

    • David Springer

      failure to capitalize is taken from doltish predecessor above who must think it looks cool or something

      dolts who imagine that theories and models are not sometimes simply ‘rejected’ or “falsified’ need to understand that they are just factually wrong.

      the classic example of falsification is presented by karl popper in the black swan example. it was hypothesized that no black swans exist. popper posits this is a valid scientific hypothesis because it may be falsified by the observation of a black swan. eventually one *was* observed and the no black swan model was falsified

      now lets take a hypothesis which states that over 50% of global warming since 1950 is anthropogenic. this may be falsified by finding of a natural variation which caused more than 50% of the warming

      indeed, the null hypothesis for the above case would be “less than 50% of the warming since 1950 is anthropogenic”

      when it comes to climate dolts and weasels don’t let them obfuscate their way out of driving a stake in the ground where that stake’s position is, at least in principle, falsifiable

      dolts and weasels you know who you are

    • Doc, I appreciate what you suggest, but if the model contains quantities that must be estimated from the data prior to day zero, those quantities will contain sampling error, and for that reason alone the forecast error distribution after day zero (out-of-sample fit) must have a higher expected variance than the backcast error distribution before day zero (in-sample fit). Some people call this “shrinkage” (of the quality of fit).

      However the basic idea you have is a sound one, in principle, because one can use the estimated sampling error of the estimated quantities to construct an expected forecast error distribution. That is the thing to compare against the observed forecast error distribution after day zero.

      I agree with Mosher, though, about what one makes of a model rejection. It depends on a lot of things, not least one’s confidence in the measurements, measuring instruments, and other methods brought to bear in the experiment or empirical study, but also the availability of an interesting alternative hypothesis. And really my readers make these judgments, in light of my report on the escapade.
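
To make DocMartyn’s day-zero test and NW’s caveat about shrinkage concrete, here is a minimal sketch of the comparison being proposed. The residual series are synthetic placeholders (random numbers), since the point is only to illustrate the mechanics: collect (model minus observation) residuals for equal-length windows before and after day zero and ask whether the two error distributions look like draws from the same population.

```python
# Minimal sketch of the hindcast-vs-forecast comparison proposed above.
# The residuals are SYNTHETIC placeholders; in practice they would be
# (model minus observed temperature) over equal windows before/after "day zero".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical monthly residuals: 155 months before day zero, 155 months after.
hindcast_errors = rng.normal(loc=0.00, scale=0.10, size=155)   # in-sample fit
forecast_errors = rng.normal(loc=0.08, scale=0.12, size=155)   # out-of-sample fit

# Two-sample Kolmogorov-Smirnov test: same error distribution before and after?
ks_stat, p_value = stats.ks_2samp(hindcast_errors, forecast_errors)
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.4f}")

# Per NW's point, out-of-sample errors are expected to be somewhat larger even for
# a sound model, so a fairer version compares the observed forecast errors against
# an EXPECTED forecast-error distribution that allows for sampling error in any
# quantities estimated from the pre-day-zero data.
```

Nothing in this sketch settles what one should do when the comparison fails; that is the judgment call Mosher, Theo Goodwin and NW are debating above.
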

  12. Patrick Purcell

    The submission from the RMS was actually the weakest, IMO.
    Some of the links in Judy’s post don’t work for me.
    I am left wondering what was the point of asking the questions and inviting submissions if there is no political will to (re)act?
    Overall, the submissions reinforce the impressions of sceptics viz.
    * the IPCC process is politically driven
    * IPCC is still indulging in (uncritical) selection bias
    * IPCC is still giving unjustified credence to the output of computer models
    * IPCC’s handling of statistics is very poor
    * IPCC’s conclusions are not robust
    At least the submissions attest to the fraudulence of the IPCC’s pretense of presenting itself as an objective and impartial assessor of the literature.

  13. May I ask the non-British members here what they think about the way Parliament has handled this?
    I mean here is the legislative body of the UK asking people and groups to submit their views on a very important policy issue.

    Personally, I would love them to do a Royal Commission next.

    • ” …Royal Commission”

      Depends on the Terms of Reference such a Commission may be given

    • I don’t know what to make of it, Doc. Having been in the US a bit, you know we have these public comment periods at many levels, and I can’t tell whether these are serious or just for show.

    • Doc, many in Australia, including me, have for many years been arguing the need for serious examination of the IPCC output and story, with no result. This UK inquiry is far ahead of anything happening here. There has been no equivalent consideration by Oz governments, and all relevant agencies are CAGW-promoters. Let’s hope the UK inquiry has some positive outcomes, and that note is taken of it in Australia.

    • This is the clearest sign I’ve seen that the British are coming to their senses. Sure, it’s just the eyelids fluttering, but they’re trying to open their eyes.
      ============

    • Curious to me, F, is that in Australia hoi polloi caught on before the elite. Is this embryonic Albionic query a start on cleansing UK politics of ‘Green Crap’?
      ==============

    • Bruce Cunningham

      We shouldn’t count our chickens just yet. Remember the Oxburgh and other panels that were commissioned after climategate. Many were quick to approve of them before finding out that they were whitewashes from the git-go. What will you say when the result of these submissions is to have them publicly declare that after a thorough review of all submitted viewpoints, it is the same old —-. More Chicken Little than chicken. Wait till the verdict is in before making vacation plans.

      It is worth trying though.

    • Yes, Bruce, but the questions are revelatory. It’s going to be obvious who passes this ‘open book’ test. And it will be on the record.
      =================

    • kim, the wisdom of the masses. But more likely the fact that anti-GHG emissions policies led to a huge increase in electricity prices, the belief that rain would cease led to costly but unused desal plants, etc.; we are bearing high costs for no benefit. The fact that the governments which imposed those costs (at federal and state level) were clearly incompetent in many other areas helped to turn the tide. Knut couldn’t do it, but, yes we can!

    • The founding fathers of America kept the institution of the Grand
      Jury as a means of CITIZENS bein’ able ter check the excesses of guvuhmint. Trouble is with a Royal Commission, who gits ter set the agenda? Guvuhmint ‘n elite progressivist bureaucrats? Hmph!

    • As Faustino will affirm, I am certainly non-Brit – so here goes.

      I wouldn’t expect too much from an enquiry since so many people are still drawn to what the IPCC represents: the hope of global scientific consensus through a committee which is at last big enough. But beyond the craving for centrality and certainty, it’s now clear that if Big Coal has much to lose, Big Banking, Big Accounting, Big Gas, Big Green and Big Government have much to gain. Those “Bigs” are each a lot uglier and trickier and more coordinated than Big Coal. The IPCC has plenty of cigar-chomping defenders. When our Australian governments state and federal tell us that “BIg Business is on board” with their latest green initiatives, they’re not kidding!

      Where climate is concerned many are still loath to ask the Montaigne question: What do I know? If they did, they might have to face the possibility that there is no such thing as climate science – yet. Just one of a number of reasons to consider that possibility: Those oceans which seem to make their way into every climate theory are largely unvisited and unknown, as is that big hot ball underneath them (called Earth).

      Before the human body was anatomised and before surgeons were required to wash their hands, there was medical science of a sort only. We may now have climate science of a sort only, something well worth pursuing empirically with sticky fingers and wet feet – but not worth all this recent trumpeting. (No, all degrees of advancement are not equal. Medical science is sophisticated and incomplete, climate science is primitive and incomplete.)

      This is why there probably should be no such thing as an IPCC and why publicly raising question about the IPCC in the UK parliament is a very mild measure indeed. Shell, Goldman Sachs, Greenpeace, Maurice Strong, Boone Pickens and the ghosts of Enron may feel otherwise.

  14. I would submit two things:

    http://i.snag.gy/BztF1.jpg

    from http://wattsupwiththat.com/2013/12/17/solar-amo-pdo-cycles-combined-reproduce-the-global-climate-of-the-past/#comment-1505034

    and a summary of the report that shows that windmills consume 2 to 3 times as much energy as they produce over their life-cycle.

  15. A fan of *MORE* discourse

    BREAKING NEWS
    Q: “How robust are the conclusions
           in the AR5 Physical Science Basis report?”
    A: “Very robust!
           THE “PAUSE” IS OVER
    ! David Appell and James Hansen Proved Right AGAIN !
    !!! “ENERGY CONSERVATION RULES SUPREME” !!!

    Global Summary Information – November 2013
    November 2013 global temperature
         highest on record
    Year-to-date global temperature
         ties for fourth highest on record

    The globally-averaged temperature for November 2013 was the highest for November since record keeping began in 1880. November 2013 also marks the 37th consecutive November and 345th consecutive month with a global temperature above the 20th century average.

    Many areas of the world experienced higher-than-average monthly temperatures, including: much of Eurasia, coastal Africa, Central America, central South America, parts of the North Atlantic Ocean, the south west Pacific Ocean, and the Indian Ocean. Much of southern Russia, northwest Kazakhstan, south India, southern Madagascar, parts of the central and south Indian Ocean, and sections of the Pacific Ocean were record warm. Meanwhile, northern Australia, parts of North America, south west Greenland, and parts of the Southern Ocean near South America were cooler than average. No regions of the globe were record cold.

    Aye Climate Etc lassies and laddies, those sounds you hear are scientists rejoicing in the vindication of AR5 … and the death-rattle of willfully ignorant climate-change denialism!


    • NOAA’s November “record temperature” is just more data being tortured to propagate the myth of manmade global warming. UAH has November 2013 as the 9th warmest and RSS has it as the 16th warmest since 1979. Those satellites are certainly inconvenient for Hansen worshippers like yourself. It is a shame when a formerly great organization like NOAA becomes just another propaganda arm of a corrupt Administration

    • You are both off topic and off your head.

    • Dr. Curry, feel free to delete my post since it is technically off-topic but I could not allow FOMD’s post to go unchallenged. I and perhaps others, would be interested in your thoughts regarding the NOAA, NASA, HadCrut temperature databases vs. the RSS and UAH satellite temperature databases. I did read Nic’s submission and am always profoundly impressed by the amazing work done by Nic and the many other citizen-scientists who post here and on other blogs.

    • A fan of *MORE* discourse

      FOMD posts:
      Global Summary Information – November 2013
      November 2013 global temperature
           highest on record
      Year-to-date global temperature
           ties for fourth highest on record

      The globally-averaged temperature for November 2013 was the highest for November since record keeping began in 1880  [considerable data omitted] No regions of the globe were record cold.

      Chuck L perceives a conspiracy: “NOAA’s November ‘record temperature’ is just more data being tortured to propagate the myth of manmade global warming.”

      Fascinating, captain … a field of climate-change denialism so intense that it has fabricated a cognitive “black hole of global-scale conspiracies” … out of pure ideology.

      Seriously Chuck L, wasn’t Roy Spencer right to warn denialists of The Danger of Hanging Your Hat on No Future Warming?

      Now that “The Pause” appears to be ending (as Roy Spencer foresaw), with record-setting global temperatures resuming?

      And doesn’t the Pause End have considerable significance for present (and future) IPCC Assessment Reviews?

      The world wonders!


    • Fan of Sea Rise Hype, take a look at the following and see if you can critique it without an ad hom.

      http://wattsupwiththat.com/2013/12/19/if-manmade-greenhouse-gases-are-responsible-for-the-warming-of-the-global-oceans/#more-99566

    • I’ve long considered Josh Willis to be one of the most agonizingly conflicted scientists on earth. How can he possibly avoid the unknown unknown bias? I suspect it haunts.
      =============

    • Climate Weenie

      The ‘pause’ may very well end.

      But declaring it over now appears to be wishful thinking on your part.

      Degrees C per century trends from 2001 through Oct 2013:

      RSS MT -0.57
      RSS LT -0.57
      NCDC SST -0.29
      UAH MT -0.05
      NCDC -0.10
      GISS +0.11
      UAH LT +0.52

  16. I agree that Nic’s is a tour de force. It’s quite something to answer the questions of a Parliamentary select committee and produce something that is going to be useful to beginners and all comers for years to come, but that, I think, is what Nic has done for the sensitivity debate.

  17. Nice job, Nic Lewis

    I am particularly pleased with his bare-knuckle knockdown of the IPCC’s use of Subjective Bayesian studies of ECS using uniform priors. (Para 10 – 23, 28).

    Parenthetically, he mentions there are a few Subjective Bayesian Expert Priors.

    (from #12) This is important – estimates arrived at using Subjective Bayesian methods – like those used in many IPCC estimates of climate sensitivity – are personal to a single decision maker: the investigator himself. As the Bayesian statistician Dennis Lindley wrote: ‘Uncertainty is a personal matter; it is not the uncertainty but your uncertainty’.

    “personal to a single decision maker.” What if the decision maker is a panel? Why shouldn’t it apply? So here is what you do. Each member of the decision unit (person, panel, team, board) gives their own prior. Treat each member as (a) equally probable, or (b) probability-weighted based upon previous 80% confidence scores, and probabilistically combine all the individual priors into a robust prior for use by the decision unit. I like the dynamics of this approach. One outlier or contrarian in the group still contributes to the prior, but in proportion to (a) their presence or (b) their past prescience. (A toy sketch of combining priors this way appears at the end of this thread.)

    Para 33 needs rewriting. Rather than “The refusal in AR5 to accept [implications of a bunch of stuff] is unscientific”, rewrite it as:
    “It is unscientific of AR5 to refuse to accept the implications of [a list of stuff].”

    I am thankful for Para 38. But I think it should be a stronger point.

    • Nic Lewis’ contribution is great in many ways.

      I’m not so hostile to subjective Bayesian methods if used in a specific manner that actually helps make Nic’s points concrete and clear to decision makers. This specific manner is simple, really: You use a wide range of prior distributions to create posterior distributions, and you report the whole mapping between priors and posteriors. You can include the noninformative prior amongst the prior distributions.

      When the model+data are relatively uninformative, this way of providing information makes it very clear how fragile the posterior is.

    • NW “You use a wide range of prior distributions to create posterior distributions, and you report the whole mapping between priors and posteriors. You can include the noninformative prior amongst the prior distributions.”

      Careful NW, too much common sense at one time may overload some of the faithful, especially since they assumed away half of the possible outcomes.

    • NW, I smell Extreme Bounds Analysis here.
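
Here is a toy sketch of the two ideas in this thread: combining a panel’s expert priors into a mixture, and reporting how the posterior for ECS changes as the prior changes. The likelihood and the priors are invented illustrations, not anything from AR5, Nic Lewis, or the submissions; the point is only to show how strongly the reported result can depend on the prior when the data are not very informative.

```python
# Toy sketch: push several priors for ECS through the SAME likelihood and compare
# the posteriors. The likelihood and priors are INVENTED illustrations only.
import numpy as np

ecs = np.linspace(0.1, 10.0, 2000)      # grid of ECS values (deg C)
dx = ecs[1] - ecs[0]

def normalize(p):
    return p / (p.sum() * dx)           # make the density integrate to ~1 on the grid

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Toy likelihood: the data "prefer" ECS near 1.7 C, but with wide uncertainty.
likelihood = gaussian(ecs, 1.7, 0.8)

priors = {
    "uniform on 0-10 C":      normalize(np.ones_like(ecs)),
    "expert A (peak ~3 C)":   normalize(gaussian(ecs, 3.0, 1.0)),
    "expert B (peak ~1.5 C)": normalize(gaussian(ecs, 1.5, 0.5)),
}
# Panel prior: an equally weighted mixture of the individual experts' priors.
priors["panel mixture (A+B)"] = normalize(priors["expert A (peak ~3 C)"]
                                          + priors["expert B (peak ~1.5 C)"])

for name, prior in priors.items():
    posterior = normalize(prior * likelihood)                   # Bayes' rule on the grid
    median = ecs[np.searchsorted(np.cumsum(posterior) * dx, 0.5)]
    print(f"{name:24s} -> posterior median ECS ~ {median:.2f} C")
```

Reporting the whole prior-to-posterior mapping, rather than a single posterior, is what makes this fragility (or robustness) visible to a decision maker.
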

  18. “This outlined the factors thought most likely to have contributed to recent changes”

    These are the weasel words provided by the RMetS in item 14 to explain the current hiatus in surface temperature. Even if they had added the words “or lack of them” for more accuracy, the sentence would be equally meaningless.

    This gets right to the heart of the matter: the on/off nature of climate change and the failure of the IPCC’s models to replicate this behavior. This exposes the IPCC’s tattered fragments of science, to which they desperately cling. See my website underlined above for an explanation.

  19. I agree: kudos to Nic Lewis. I try strenuously not to use a technique unless I understand exactly what the hell I am doing. I will often code a statistical model myself just to make sure I do understand it. I also like to carefully look at all data myself (wish I could do it like Steve M). The climate community does not seem to exercise such care, and when their poor use of methods is pointed out they just ignore it and carry on (I could give scores of examples, from improper use of principal components, data mining, data snooping, spatial correlation, upside down data, single cause fallacy…and now uniform priors). I have learned enough about distributions and priors to believe that Nic has precisely nailed it. Great job.

    • David Springer

      +1

    • The CSALT model uses the soli-lunar orbital forcings that Scafetta and Loehle recommend, yet they fudge the data to low-ball the sensitivities just like Nic Lewis’ results. How exactly does that work?

      Could it be that they have an agenda that they are pursuing?

    • David Springer

      WebHubTelescope (@WHUT) | December 19, 2013 at 7:30 am |

      The CSALT model uses the soli-lunar orbital forcings that Scafetta and Loehle recommend, yet they fudge the data to low-ball the sensitivities just like Nic Lewis’ results. How exactly does that work?

      Could it be that they have an agenda that they are pursuing?

      Those you mention don’t fudge the data. You’re projecting. Again.

      Say, if CSALT is a model what’s it say global average temperature will be in 2016? Loehle and Scafetta 2011 publish a number for that from their model. It’s only fair you do too so we can compare projection to projection, eh?

    • The CSALT model can generate a temperature estimate for 2016, save for the unpredictability of the SOI value and any volcanic disturbances that could occur.

      As far as the parameters of the CSALT model are concerned — the value of CO2 can easily be extrapolated to 2016, the value of the stadium wave won’t change much, and the TSI value can be projected. I also use the orbital series of sine waves that can be projected.

      So no problem to give an estimate for 2016, SpringyBoy!

  20. DocMartyn says “May I ask the non-British members here what they think about the way Parliament has handled this?”
    I am a UK citizen and frankly would have little faith in a “Royal Commission”. It’ll “play a blinder” I’m sure.
    Sadly, anyway, this will all be irrelevant – the EU is running the whole show and the UK, having squandered its democratic control over its own affairs in most areas, may or may not squeal but it won’t make a blind bit of difference.
    Only when the UK citizens get off their backsides and really start complaining will there be a chance of change. I suspect it hasn’t got painful enough for us yet – but if we are “lucky” enough to have a very bad winter and perhaps experience some form of energy crisis then … who knows?

    • Never mind the winter, look forward to Thursday 22nd May, which is in 155 days!

    • Jonbuk:
      You and Doc Martyn seem to be interested in knowing, in DM’s words:
      “May I ask the non-British members here what they think about the way Parliament has handled this?”
      As a Canadian of South African and Rhodesian extraction, currently retired and living in Kelowna BC, I can only say that I hope the outcome of this exercise turns out to be much better than you expect.
      Certainly the input from Nic Lewis and Donna Laframboise and others seems sufficiently clear about AR5’s defects to avoid being simply brushed aside. And if Judith Curry’s contribution sees the light of day I am sure it will add much weight to the voices of sanity.
      As I have said elsewhere, I find that the scientists who refuse to face the mounting evidence that the CAGW hypothesis was a terrible and costly mistake have simply not done their homework and can only be described as culpably ignorant. They are the ones who are truly blameworthy for helping along a clearcut con job by refraining from speaking out.

  21. Off topic Judith, my apologies. Hope you’ll consider a post on the rising tide of censorship being practiced against global warming “deniers.” I think this is the most important developing story in the climate realm. If I’m not mistaken, the L.A. Times is no longer printing skeptical letters, to give a further example. I can see this snowballing into something quite significant.

    http://blogs.telegraph.co.uk/news/brendanoneill2/100250918/reddit-has-banned-climate-change-deniers-and-ripped-its-own-reputation-to-shreds/

    • Well, it’s not far off topic. I haven’t followed reddit, so I don’t know the details, but I guess the “deniers” were seen as a nuisance. I and some others who post here probably would be regarded the same way at WUWT. Posters banned from reddit could go where they are welcome.

    • pokerguy, I’m interested in that news too. Personally I’m less concerned when one (or a handful of) private media outlet decides this way, but you are right–there are contemporary instances of snowballing intolerance that are downright frightening. I think it would be a good subject for a post by someone who has followed similar things closely.

  22. Nic Lewis’s submission includes:

    “A particularly robust way of empirically estimating climate sensitivity is the so-called ‘energy-budget’ method, which is based on a fundamental physical law – the conservation of energy. Energy-budget best estimates of ECS fall in a range between 1.5°C and 2.0°C (1.25–1.4°C for TCR), depending on the exact periods chosen for analysis. Using the longest available periods that were free of major volcanism gives a ECS best estimate of approximately 1.7°C (1.3°C for TCR).”

    1.25-1.4C TCR = 0.45C-0.5C warming since 1950 caused by CO2 alone (and that’s excluding other greenhouse gases). That’s more than half the warming since 1950 due to CO2.

    This backs up the IPCC attribution statement: “It is extremely likely [95 percent confidence] more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

    Nic Lewis: “If TCR really is 1.35°C then under RCP8.5 – the worst-case, business-as-usual scenario – the end of the 21st century will be approximately 2°C warmer than today”

    That still leaves us quite high, with very likely unprecedentedly fast warming! Let’s hope it’s lower!
    http://1.bp.blogspot.com/-72J4J7YtMS8/UYj_QVtNzLI/AAAAAAAAAAc/3P-Ax695p1g/s1600/Climate_last_542mio_years.png

    • Lolwot,

      Each of 15 clowns measures the heights of 50 women randomly sampled from Manhattan. Each clown independently samples his or her own 50 women. The range of sample average heights recorded by the 15 clowns is 5’2″ to 5’4″. With just the information I have given, can you conclude with 95% confidence that most women in Manhattan are taller than 5’1″?

      Show your work. Extra credit for correctly guessing who the clowns are. Special bonus question: How many clowns did the IPCC ask to get the range they stated?

    • lolwot,
      How exactly do you get that low a TCR? The TCR is actually at least 2C for an analysis that treats CO2 as the control knob.

      By that I mean that CO2 is the indicator of all GHGs, including that of water vapor. So that if the atmospheric CO2 levels go up by a doubling, then the global temperature levels will transiently rise by 2C.
      http://imageshack.com/a/img27/5259/wx7n.gif

      On the other hand, the ECS is clearly 3C just from observations of land temperature alone, as the land reaches a quasi-equilibrium steady state much more quickly.

      I wouldn’t give people like Nic Lewis an inch, as they clearly do not know how to do climate analysis.

    • Web, it’s their own TCR figure they are promoting.

      They are quite obviously and deliberately avoiding the fact that *their own figure* backs the IPCC attribution statement.

      It highlights the dishonesty of skeptics and the game they are playing. They aren’t honestly looking for the truth about how the climate works and man’s influence on it. They are concerned only with denying the link between man and climate.

      Why hasn’t Judith made any comment on what the TCR figures mean for attribution of warming since 1950? I’ll tell you why: because the answer is inconvenient to her agenda.

      They attack the IPCC attribution statement because they don’t want it to be publicly accepted that man is the dominant driver of recent warming. They want to pretend that it’s all uncertain and maybe man only has a bit role in the recent warming.

      Their promotion of Nic Lewis’s figure contradicts that stance. And this contradiction should be hammered home at them again and again.

    • Is “insufficiently doctrinaire” the same as being a “denier” with an “agenda”?

    • Lolwot, so it’s very much like negotiating over a car price, only here we have the objective truth which I will not yield an inch over. With science there is no haggling.

    • David Springer

      Would you buy a used car from a guy named WebHubTelescope?

      Me neither.

      No sale.

    • David Springer

      NW | December 19, 2013 at 1:41 am |

      Lolwot,

      Each of 15 clowns measures the heights of 50 women randomly sampled from Manhattan. Each clown independently samples his or her own 50 women. The range of sample average heights recorded by the 15 clowns is 5’2″ to 5’4″. With just the information I have given, can you conclude with 95% confidence that most women in Manhattan are taller than 5’1″?

      Show your work. Extra credit for correctly guessing who the clowns are. Special bonus question: How many clowns did the IPCC ask to get the range they stated?

      Trust but verify. I’d want to see the data first. Clowns have a tendency to apply adjustments to the data to get the conclusions they want.

      Bonus points if you can tell me when a yardstick is not a yardstick.

    • I haven’t seen any reason to think Nic Lewis’s work is wrong. I’ve seen arguments but none that convinced me it should be dismissed. So I consider his TCR range a valid argument that TCR is lower than thought.

  23. You’ve got to be joking, surely!

    From the Royal Meteorological Society :

    “How robust are the conclusions in the AR5 Physical Science Basis report?

    7. The conclusions of the report are graded in terms of probability and confidence. We believe this provides a clear description of the robustness of the conclusions. The most robust conclusions are indicated by high levels of likelihood and/or confidence. Conversely the least robust are assigned low likelihood/confidence.”

    Do they really “believe” that what they are saying is scientific, or useful in any way?

    To a non-believer, their submission looks like a thinly veiled plea for ever-increasing “research” funding – to infinity, and beyond!

    They seem incapable of accepting that climate is defined by preceding weather events. It is impossible to average weather events which have not occurred yet. Models will not help, and this fact is slowly becoming apparent.

    And don’t call me Shirley!

    Live well and prosper,

    Mike Flynn.

  24. JC, did they ask you to write the questions?
    They’re straight out of the skeptics’ handbook.

  25. Heh, Nic Lewis turns a river into a stable. Now I’ve often seen cars turn into driveways, and wondered at the magic of it all.
    =============

  26. Hah, Ed Caryl has a marvelous neologism, ‘Calamitology’.
    =================

  27. I wonder if Warmism will be as long lasting as Lysenkoism?

    Live well and prosper,

    Mike Flynn.

    • Warmism is proof that Lysenkoism is still thriving.

    • Simon Hopkinson | December 19, 2013 at 3:15 am

      In the sense that CAGW and Lysenkoism are both driven by a common memetic mechanism that is more than capable of perverting science, yes. Many folks instinctively pick up on the similarities between both of these, and religions too, which the common mechanism engenders. Lindzen rates Global Warming worse:

      “Global warming differs from the preceding two affairs [Eugenics and Lysenkoism]: Global Warming has become a religion. A surprisingly large number of people seem to have concluded that all that gives meaning to their lives is the belief that they are saving the planet by paying attention to their carbon footprint.”

  28. “Would a focus on risk rather than uncertainty be useful?”
    Good question. Thoughts? Risk implies risk management strategies. Uncertainty seems to just imply more research before doing anything, without considering any risks associated with delay.

    • Right, let’s consider risks associated with delay, but not consider the risks of spending our limited capital on ineffective strategies. I get it.

    • I also don’t agree with ineffective strategies. Which ones are effective?

  29. Nicholas Lewis’ submission is articulate, clear, succinct and most admirably comprehensive. Quite apart from its devastating content, his writing puts obfuscating academia to shame.

    Do I think this will have much impact? Kim, you could answer that question.

    • Nic Lewis recommends using an energy-budget method as the most robust way of estimating climate sensitivity — see his point #27.

      An energy budget amounts to accounting for all of the free energy terms that are involved with external forcing. Temperature is but the thermal indicator of a change in energy and the other factors involve various kinetic, potential, latent, etc forms of energy.

      So why doesn’t Nic Lewis work out the energy budget as he recommends? Would the truth hurt that much, seeing the other energy terms compensating for the pause and accounting for the natural variability?

      The problem with skeptics and deniers is that they won’t even eat their own dog food.

    • Thanks, I8888, all I know is what I learn from lurking @ the Bish’s, and what I see with my own eyes as I hither from blogger thither to other blogger. A commenter over there remarked that a comment of mine was ‘unusually specific and optimistic’. Heck, I’ve always been optimistic. Nature rules, and it’s not nice to fool Mother Nature. She has a favorite Sun, you know.
      =====================

  30. @ Dr. Curry;
    Off topic but can we please try collapsible comment threading? The first comment on this article and the 50 useless follow-ups just take all the fun out of reading comments. My only criticism of your excellent blog.
    Regards,
    H.

  31. Thanks for the mention! There’s a typo in your link to my evidence – it should be http://data.parliament.uk/writtenevidence/WrittenEvidence.svc/EvidencePdf/4187

    All of the 40-odd submissions can be found here http://www.parliament.uk/business/committees/committees-a-z/commons-select/energy-and-climate-change-committee/inquiries/parliament-2010/the-ipcc/?type=Written#pnlPublicationFilter

    I was amused that the UK Met Office take the opportunity to point out the “clear need for supercomputing capacity and infrastructure.” Truly the IPCC report has something for everyone!

  32. Prof Curry is correct – Nic Lewis’ submission is indeed a tour de force. Demolishes the IPCC view of climate sensitivity totally.

    The UK is in a period of political insanity at the moment. It is proposing to build huge numbers of wind turbines and install huge quantities of solar panels at public expense, paid for by taxing electricity. It will do this despite the fact that this will neither generate any usable electricity (because the supply from both sources is intermittent) nor reduce carbon emissions. The supply being intermittent, it has to be supplemented by conventional generation, and when you add the carbon budget of that backup to the carbon budget of the turbines and panels you probably end up positive.

    The UK is doing this with the aim of, as the politicians say, ‘tackling global warming’, which presumably means reducing it. However, the UK accounts for less than 2% of global carbon emissions.

    So even if the IPCC were right about climate sensitivity, which Lewis’ submission makes clear it is not, and even if the programme were to reduce UK carbon emissions, which it will not, the UK would still be engaging at vast expense in an exercise which will have no effect on its alleged motivation, global warming.

    This is the politics of hysteria. The country has a legislated goal of reducing emissions from the current total of some 500 million tonnes annually to about 120 million. There is currently a debate under way about whether this is insufficiently aggressive, and whether a lower target should be set! No-one has the slightest idea how to get even to 120 million, the Government is not taking the slightest steps to get there, and it must be obvious that reducing global carbon emissions by about 370 million tonnes will have no effect. Reducing them by a further 50 million or so will have even less effect.

    And yet in Dreamland on the Thames, they are debating whether 120 million is low enough! To think that this was the country of Pitt and Gladstone and Churchill. One does not know whether to laugh or weep.

    • Plus one Hilary. ‘Robust’ :(
      … and then there’s MO’s para 35,
      ‘The ‘mantra’ underlying IPCC is …’

      ‘Mantra’ a.k.a. ‘Vedic Hymn?’ :)

  33. As I have just posted at BH …

    I’ve now read most of the submissions. But perhaps the most predictable, IMHO, is that from a “jewel in the crown of U.K. and global science”, aka a founding pillar of the IPCC, the U.K. Met Office.

    As I was reading through this latest of the MO’s efforts to self-importantly pat itself on the back, beginning with their introductory summary …

    … the very nature of science means it is always developing and progressing and the UK, like any other nation, needs the latest robust scientific evidence to underpin its national policy and decision making. The Met Office Hadley Centre supports the UK government in just this way.

    Furthermore, reports compiled by the IPCC depend on the contributions of national capability science programmes, such as the Met Office Hadley Centre Climate Programme (HCCP) and the academic research that it facilitates and integrates; the underpinning science and evidence base must be driven forward by domestic science programmes.

    … I found myself wondering what the writers would do if there were no such word as “robust” – which they used no less than 13 times. Also sprinkled throughout this 3,390 word submission were:

    21 “scientific”
    8 “consisten*”
    6 “consensus”
    5 “comprehensive”

    and 3 “multiple lines of evidence” – two of which were in the same paragraph (3.)

    They also patted themselves on the back when they noted:

    13 […] Met Office scientists have been pro-active in raising awareness of how to participate in the review process, including in social media forums where criticism of climate science and the IPCC is regularly aired.[6] A number of individuals who disagree with previous IPCC conclusions took an active part in the review process[7], and others registered as reviewers, but did not submit review comments.

    […]

    [6]: For example: (unhyperlinked reference to) http://www.bishop-hill.net/discussion/post/1569976 (hyperlink added -hro)

    [7]: (also unhyperlinked) https://ipccreport.wordpress.com/2013/09/26/did-sceptics-take-part-in-the-review/ (hyperlink added -hro)

    I’m sure it must have been mere “space limitations” that precluded the Met Office from noting that some prominent skeptics chose not to “submit review comments” because they believed it would be “pointless” and/or because they were not prepared to agree to the required “confidentiality” agreement.

    And they patted themselves on the back, again, when they noted the invocation of their unique “fog machine” in July of this year (para 25):

    Recognising that the AR5 timescale would preclude a comprehensive discussion of the pause, the Met Office Hadley Centre produced three linked reports on this topic in July 2013

    Speaking of the MO’s use of “social media” and their “fog machine” … in light of Nic Lewis’ July observation that:

    Writing as an author of the study, I think that the Met Office paper 3 factually misrepresents the results of Otto et al (2013) in more than one place.

    It seems to me that – while it could just be coincidence – the MO might well have anticipated Nic’s “tour de force” submission of evidence, which, in turn, might have caused them to turn on the fog machine once again, as we saw most recently, with Richard Betts at the “social media” wheel in Making Fog. But I digress …

    What I found most astounding was the MO’s para. 35:

    The mantra underpinning IPCC is that it should be policy relevant and not policy prescriptive and this philosophy continues to be adhered to in the reports including the AR5 WGI report.[emphasis added -hro]

    which seems to be in sharp contradiction to para. 30’s:

    The report’s conclusions provide robust and strong evidence for the effects of human influence on climate. The report also shows that limiting climate change to the UNFCCC agreed global mean temperature target of 2°C above pre-industrial will require substantial and sustained reductions of greenhouse gas emissions, and that many aspects of climate change, such as sea-level rise, will persist for many centuries even if future emissions of carbon dioxide are reduced or stopped. [emphasis added -hro]

    One can only hope that when the committee reviews the evidence submitted, they will have the perspicacity to recognize the self-serving attempts to perpetuate the status quo – if not elevate to a highly undeserved level – the continued shenanigans of the IPCC and its self-important establishment pillars and supporters, such as the Met Office and the Royal Society.

  34. “The null hypothesis is that currently observed changes are the result of natural variability.” ~NIPCC

    The CSALT model includes the natural variability terms identified by all the skeptics, such as SOI, TSI, Stadium Waves, orbital forcings, and determines the resultant lead CO2 term.

    Recently I did a high resolution study on the solar terms:
    http://contextearth.com/2013/12/18/csalt-model-and-the-hale-cycle/

    Read that and then follow what Judith Lean is doing in terms of the NRL Statistical Climate Model.

    The null hypothesis is clearly null.

  35. Geoff Sherrington

    A perspective from Down Under is that the most basic question, the raison d’être of the IPCC, has not been answered.
    Does extra CO2 in the air cause the air to warm under sunlight or does it not?
    Rider: If CO2 does help warm the air (or the globe in general), does the extra warmth disappear soon after, with no significant effect on climate?
    Many of the simplified examples said to confirm GHG effects have missing data, unjustified assertions or just wishful thinking in their interpretation. It is telling that such deficiencies are widespread in climate science compared to many other branches of science.
    Surely it is folly to make global policy decisions on a matter so lacking in important answers and so replete with poor science.

  36. Do the AR5 Physical Science Basis report’s conclusions strengthen or weaken the economic case for action to prevent dangerous climate change?

    Nicholas Lewis:

    Although the conclusions fail to say so, the evidence in AR5-WG1 weakens the case since it indicates the climate system is less sensitive to greenhouse gases than previously thought.

  37. Donna’s submission is probably a bit too close to polemic to be taken seriously by the committee but contains several uncomfortable truths that many people won’t want to hear.
    Nic Lewis’ entry is excellent but if only one Committee member has a science background the others may fail to understand it, or the importance of what it says.
    The RMS submission reads like a parody of what a politician would write, not a scientific body. For example:
    “How robust are the conclusions in the AR5 Physical Science Basis report?
    7. The conclusions of the report are graded in terms of probability and confidence. We believe this provides a clear description of the robustness of the conclusions. The most robust conclusions are indicated by high levels of likelihood and/or confidence. Conversely the least robust are assigned low likelihood/confidence.”

    Really?
    “Can you confirm that you witnessed the accused murder the victim?”
    “The accused says he didn’t.”
    I think they’re terrified that meteorology in general may go back to being merely useful, instead of deciding the fate of humanity.

    • Donna’s submission is probably a bit too close to polemic to be taken seriously by the committee but contains several uncomfortable truths that many people won’t want to hear.

      Well, certainly there must be some on the committee who would like to hear more. Donna has been invited to present oral evidence. As she wrote in a post today:

      This morning I was formally invited to appear before the committee in person to provide follow-up, oral evidence. Two dates, in late January and early February, have been suggested. Details to follow.

    • “a bit too close to polemic to be taken seriously”

      A bit?? It’s Donna’s usual juvenile ranting.

      ” some on the committee who would like to hear more”

      Politically motivated denialism is always on the lookout for support.

    • The climate war is not going well for you, mikey. Donna LaF. gets to talk about how stupid and corrupt the IPCC is before a UK Parliament committee, while you remain an anonymous nobody spouting inconsequential claptrap on a blog hosted by another lady thorn-in-the-side of your unpopular cause. Pathetic.

    • Don,

      Yet somehow I survive.

      There is not the slightest surprise that certain political types want more of the kind of polemic that Donna produces.

      Science? – nah.

    • John Carpenter

      “Its Donna’s usual juvenile ranting.”

      Didn’t read like a juvenile rant to me. Do you have any substantive arguments against what she wrote?

    • John,

      How ‘bout the title, just for starters.

      “The Lipstick on the Pig”.

      Silly enough?

    • John Carpenter

      Silly to judge the content by the title… If I ignored everything based on titles, well… I guess I would be succumbing to my personal bias and end up being informed only on those things that confirm my bias. Isn’t that frowned upon? I know you read it, so what argument can you muster to counter hers?… the whole IPCC = jury argument…. The whole scientific objective body vs. political advocacy club.

    • John,
      Did I suggest that it was to be ignored based on the title? No. This is the kind of debating tactic that Donna indulges in. Are you friends?
      I said it was ‘juvenile’, and titling your submission to a government hearing “The lipstick on a Pig” is pretty silly.

      Donna’s drivel is replete with stupidity, strained allusions, confused analogies, leaps of logic, trivia and bad faith (and what the hell is with the numbers??).

      Let’s just look at a few.
      “”The IPCC is a scientific body,” proclaims the IPCC’s website. But is this true? Does the mere fact that scientists are involved make an entity a scientific body? Would we describe a chess club as a scientific body simply because its members were scientists? “ – Donna

      Er, no, you f**kwit, we’d call it a chess club. And if we had a body made up of scientists, reviewing the science, and writing summary reports of the science, would we call that a chess club?? Apparently Donna might.

      “IPCC personnel survey the scientific literature and, in the course of writing a multi-thousand-page assessment report, make thousands of judgment calls as to what that literature tells ……..Judgment calls are not science……….IPCC personnel can be compared to members of a jury. …No one considers a jury a scientific body – even when forensic science provides much of the evidence. “ – Donna.

      Poor poor confused Donna.
      In her silly, hopelessly confused analogy, the ‘IPCC personnel’ are actually scientists (I think she knows this, but ‘IPCC personnel’ suits the tone of her polemical style better). They are not the jury; they are the forensic scientists. (Never mind that, in her scientific illiteracy, she doesn’t realise that judgement calls are indeed very much a part of the scientific process.)

      Can we stop flogging this dead horse now?

    • John Carpenter

      “Can we stop flogging this dead horse now?”

      Not yet,

      “Donna’s drivel is replete with stupidity, strained allusions, confused analogies, leaps of logic, trivia and bad faith (and what the hell is with the numbers??).”

      That is not an argument against her ideas, let’s try again on an idea…

      If man made CO2 is on trial, to take her confused analogy, do you agree a fair trial would be more likely if the jury is not already biased against it?

      Try not to descend into ‘denier’ tactics… You know, the type that does not address the ideas.

    • JC,

      Who’s “biased against” CO2??

      Scientists understand its role very well – we’d be in serious trouble without it.

      CO2 is great…..in moderation.

      See what trouble you get yourself into when you take a lead from Donna’s silliness?

    • John Carpenter

      “CO2 is great…..in moderation.”

      Agreed. However, that is highly dependent on what your definition of ‘in moderation’ is. Some authors may think a moderate amount is less than 360 ppm. Others may think less than 400 ppm. Still others may think less than 450 ppm…. So this is not an objective definition. If a large portion of the reviewing authors believe we are no longer at a moderate level now, then don’t you think they will be biased in how they review the literature? This is what Donna is after… who is deciding what goes into the report, and how are their biases being accounted for in the weighing of evidence within the literature? It is a legitimate concern and one that should be examined. Clearly scientists have biases. I do. How are they accounted for?

  38. Does the AR5 address the reliability of climate models?
    Nic Lewis answers:
    “Not adequately. Shorter-term warming projections by climate models have been scaled down by 40% in AR5, recognising that they are unrealistically high. But, inconsistently, no reduction has been made in longer term projections.”

    Well, of course. The anthropowarmists count on rapid warming in the next decades, to close the gap between their models and observations. Their future depends on it (“Roll on the next El Nino!”)
    http://www.drroyspencer.com/wp-content/uploads/CMIP5-global-LT-vs-UAH-and-RSS.png

    Unfortunately for them, the evidence points to multidecadal cooling; the centerline of the observed temperature plateau is around 2005. The discrepancy between models and observations will increase dramatically by ~2020; I predict 1 C by then.

    • Maybe not true. Since April 2012, ENSO has been neutral. ENSO-neutral conditions are forecast to last well into 2014.

      Warming rates, 2012 to present:

      GISTEMP – .7C per decade
      UAH – .7C per decade
      HadCRUT4 – .4C per decade

      (three months of La Niña; the majority of the time in La Niña-leaning ENSO)

      Right now on GISTEMP and NOAA, 2013 is the 4th-warmest January through November in the record. If neutral conditions persist throughout the winter, spring, and summer of 2014, 2014 could easily rank as the hottest year in the instrumental record.

      Why isn’t it kimooling? Just asking. Why is temperature going up?
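      For anyone wanting to reproduce numbers like these, a per-decade rate over such a short window is just an ordinary least-squares trend through the monthly anomalies. A minimal sketch follows; the anomaly values are placeholders, not the actual GISTEMP or HadCRUT4 series.

```python
import numpy as np

# Placeholder monthly anomalies (deg C) standing in for a real series such as
# GISTEMP from January 2012 onward; substitute downloaded data to reproduce
# the per-decade rates quoted above.
anoms = np.array([0.42, 0.45, 0.51, 0.55, 0.58, 0.56, 0.52, 0.57,
                  0.61, 0.63, 0.58, 0.60, 0.59, 0.62, 0.66, 0.64,
                  0.60, 0.63, 0.58, 0.62, 0.69, 0.72, 0.70, 0.77])
months = np.arange(len(anoms))

# OLS slope in deg C per month, converted to deg C per decade (120 months).
slope_per_month = np.polyfit(months, anoms, 1)[0]
print(f"trend ~ {slope_per_month * 120:.2f} C per decade")
```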

    • JCH, The temperature is going up as substantiated by Nic Lewis’ “energy-budget method”.

      You see, Nic Lewis is one of those statisticians who finds everything wrong and prescribes an answer, but then lacks the scientific ability to follow through with his recommendation. In this case he recommends, as outlined in point 27:

      “A particularly robust way of empirically estimating climate sensitivity is the so-called ‘energy-budget’ method, which is based on a fundamental physical law – the conservation of energy.”

      Well, if the CSALT approach that I use and the NRL statistical climate model espoused by Judith Lean are not good examples of energy budgeting, then Nic Lewis will have to think again on what he is proposing. The CSALT approach is giving a 2C for TCR and 3C for ECS.

      If Nic Lewis is a self-described independent self-funded climate researcher, he really ought to be able to defend himself against his peers first.

    • Why is it going up since 2012? I am not sure I understand the question, you mean because ENSO is neutral?

      ENSO has been going up since 2012.
      http://www.bom.gov.au/climate/enso/monitoring/nino3_4.png

      Since 2010, temperature has been going down. Your graph since 2010:
      http://woodfortrees.org/plot/gistemp/from:2010/trend/offset:-0.35/plot/uah/from:2010/trend/plot/hadcrut4nh/from:2010/trend/offset:-0.30

    • Edim,
      By applying Nic Lewis’ recommended approach of energy-budgeting, we can clearly see that the effects of ENSO act as a compensating factor in obscuring the underlying temperature rise.

      Using an approach such as CSALT, we can apply the ENSO SOI pressure term as a compensating energy budget factor and thus reveal the actual TCR of 2C.
      http://imageshack.com/a/img801/3483/bhb.gif
      Voila, we use Nic Lewis’ recommended approach and we get closer to the truth. See how it works?

    • If we don’t want to draw conclusions from one specific model like CSALT, we may try to look at the data in a way where the highs and lows appear to compensate. Looking at the figures of the Otto et al paper (with Nic as one of the many coauthors), the last two decades form a plausible combination for that. Based on that, the best estimate for TCR is about 1.8C, the value proposed in AR5 as well.

    • David Springer

      I’m kind of wondering why such a hot year as 2013 was such a dud for hurricanes. This is the first year since 1964 that no hurricane exceeded category 1. The climate boffin bandwagon position is that global warming results in more severe weather, not less. What’s up with that?

    • David, you write “I’m kind of wondering why such a hot year as 2013 was such a dud for hurricanes.”

      Maybe the simple explanation is that we just don’t understand climate. Mother Nature is just doing what she has been for billions of years, and we have, as yet, not worked out what the fundamentals are.

      No doubt the universe is unfolding as it should.

    • CSALT gives a TCR=1.84C using HADCRUT4 and TCR=2.14C using GISS.

      As a tiebreaker, CSALT gives a TCR of 2.08C using the NCDC land-sea data.

      That is why I do not understand the low-ball estimates of TCR of 1.3C that Nic Lewis keeps proposing. Can he not interpret the empirical data? It is clear that HADCRUT is giving at least 1.8C for a TCR, and with the Cowtan&Way corrections, it is 1.94C according to CSALT.

    • Hi WHUT,
      Like I said, it all comes down to the next few decades. If the temperatures decline as they did in the 1950s/60s (actually, I predict even more cooling), any significant A(CO2)GW will be very unlikely and the CO2-as-control-knob hypothesis completely disproven.

    • No Edim, the CSALT model will fail if that is the case, as the natural variability factors are not predicted to go into a free-fall. Remember that CSALT includes the factors that Team Denier is using as their non-CO2 explanation salvation.

      Unless you are predicting huge volcanic eruptions or the abrupt cessation of CO2 emissions in the near term?

    • I’m not predicting huge volcanic eruptions, just cooling, affected mostly by already weak and declining solar activity and continuation of the oscillatory behaviour of the global temperature indices.

    • “This is the first year since 1964 that no hurricane exceeded category 1. The climate boffin bandwagon position is that global warming results in more severe weather not less. What’s up with that?”

      Climate models overestimate the variability of weather on short-term time scales.

    • Matthew R Marler

      WebHubTelescope: That is why I do not understand the low-ball estimates of TCR of 1.3C that Nic Lewis keeps proposing. Can he not interpret the empirical data?

      It is easy to supply an understanding: your CSALT model has not yet acquired the authority of, say, Newton’s Second Law, such that it should overrule other reasoning. To date, the only “testing” has been successive refittings.

    • A major problem with Nic Lewis’ analysis is that it is an artifice of his assumptions. TCR is defined “as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year.”.

      Note the key qualifier “simulation”. That’s what they call a trick-box, but a rather weak one. All we have to do is define a reality-based TCR and that number can be gleaned from empirical observations, placing it closer to 2C than 1.3C. Forget the simulation aspect and simply look at what the observational data is showing.
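      As a small aside on the definition quoted above: with CO2 compounding at 1% per year, the doubling time and the 20-year averaging window centred on it follow from simple arithmetic (nothing model-specific is assumed here).

```python
import math

growth = 1.01  # CO2 increasing at 1% per year, compounded
years_to_double = math.log(2.0) / math.log(growth)   # ~70 years
window = (years_to_double - 10.0, years_to_double + 10.0)

print(f"CO2 doubles after ~{years_to_double:.0f} years")
print(f"TCR is the mean warming over years {window[0]:.0f} to {window[1]:.0f} of the run")
```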

    • David Springer

      With an as yet undetermined appendage Steven Mosher pleads:

      “Climate models overestimate the variability of weather on short-term time scales.”

      OK, so we can agree they get it wrong on short time scales. What evidence is there that they get it right on long time scales? If we hindcast a couple of thousand years, do they get the Little Ice Age and the Medieval Warm Period right? If we hindcast a couple of tens of thousands of years, do they show us a mile of ice over the Great Lakes?

      Do tell, Steven.

    • David Springer

      Actually Steven, it just occurred to me that your statement is only half right. A dearth of hurricanes is an extreme just as a legion of them is extreme. The climate boffin bandwagon told us the weather would be extreme in the direction we didn’t want. They got the amount of variation right but got the direction wrong. But you go right on defending them. You’re a brave climate warrior after all, risking life and limb… oh wait.

    • Jim,

      “No doubt the universe is unfolding as it should”

      I’m sure it is, with humanity just an observer

      Wasn’t the first line of that piece:

      “Go placidly amid the noise and haste”

      Funny how you remember some stuff, isn’t it ?…

      Chris

    • Stefan Rahmstorf at RC has a recent post on putting the jigsaw together.
      http://www.realclimate.org/index.php/archives/2013/12/the-global-temperature-jigsaw/

      Enough different groups are looking at piecing together the natural variability factors that they will eventually force Team Denier to eat their own dog food. These approaches include those of Trenberth, Foster&Rahmstorf, Kosaka&Xie, Lean’s Statistical Climate Model, Cowtan&Way, and always Hansen in the background.

      The CSALT model is inspired by this focus on explaining the variability in temperature by the known external forcings and free-energy factors that can compensate for temperature. So we take what Scafetta&Loehle, Bob Carter, Clive Best, Wyatt&Curry are feeding us and eat their dog food. The result is that we are seeing values of TCR and ECS that are spot on with the consensus climate science, and a perfectly valid explanation for any pause that has occurred, is occurring now, or is likely to occur in the future.

      So keep scoring own goals Team Denier!

    • Rahmstorf suggests that the PDO will return the monotonic signal.

      Global temperature has in recent years increased more slowly than before, but this is within the normal natural variability that always exists, and also within the range of predictions by climate models – even despite some cool forcing factors such as the deep solar minimum not included in the models. There is therefore no reason to find the models faulty. There is also no reason to expect less warming in the future – in fact, perhaps rather the opposite as the climate system will catch up again due its natural oscillations, e.g. when the Pacific decadal oscillation swings back to its warm phase.

      It may be 5-7 years away, as we need a solar minimum or a volcanic excursion, i.e. a decrease in external forcing; troublesome, to say the least.

      http://www.woodfortrees.org/plot/sidc-ssn/from:1980/mean:12/normalise/plot/jisao-pdo/from:1980/mean:12/normalise

    • Web says
      “Enough different groups are looking at piecing together the natural variability factors that they will eventually force Team Denier to eat their own dog food.”

      Why bother doing the research if the conclusion is already known?

      If you accept the argument in the recent Swanson paper, then the models are running hot because they are trying to match the Arctic amplification. Cowtan and Way have ramped up the Arctic amplification even higher by finding extra heat in the Arctic. If models are now asked to match this new situation, it is only going to further heat the models (sub-polar). Web, you seem to think it’s all coming together, but there is a possibility that by trying to solve one problem, the mismatch between observed and modelled global mean temperature, you end up with bigger headaches. Finding more heat in the Arctic is only going to further stretch the gap between models and obs in many metrics other than the GMT.

    • The Cowtan&Way correction points out the holes in the HadCrut geospatial map. That is easily reconciled by using the GISS temperature series.

      If you think there are headaches in store, real scientists don’t shy away from the reality of the situation. Insight brings on further insight and the collective understanding improves.

    • Web, do you think Nic Lewis’ insights are going to be integrated into the collective understanding? After all, he’s only really highlighted what was available to all; the problem is there is the option of ignoring some insights.

    • Matthew R Marler

      WebHubTelescope: Forget the simulation aspect and simply look at what the observational data is showing.

      Nice try, but all of the estimates of TCS and ECS are model-based, like with your model.


    • Matthew R Marler | December 19, 2013 at 10:22 pm |

      WebHubTelescope: Forget the simulation aspect and simply look at what the observational data is showing.

      Nice try, but all of the estimates of TCS and ECS are model-based, like with your model.

      I know it is hard for those of you that don’t actually do the analysis and simply pontificate on comment sections of blogs, but a model is different than a simulation.

      The TCR is again defined “as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year.”

      The CSALT model is not increasing the amount of CO2 at 1%/year and then running a Monte Carlo simulation to determine the temperature outcome. Instead, it is determining the contributing factors to the observed data based on a linear model of forcing terms. Whatever the increase of CO2 is, I am using that and not an artificial 1%.

      This is also what Lean and Foster&Rahmstorf have done in the past.

      Whoever came up with the TCR as defined in the italics had a GCM simulation in mind, which is a different beast than a free energy minimization model such as CSALT.

      Yet if someone thinks that these numbers will pop out based on data with no computational support, they are quite naive.
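      For readers unfamiliar with this kind of decomposition: the general idea, though not the actual CSALT code, is an ordinary least-squares regression of the temperature series on a handful of forcing and variability proxies, with the coefficient on ln(CO2) converted to an effective response per doubling. The file name and column names below are hypothetical placeholders.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly input: a temperature anomaly column plus proxy columns
# for the factors a CSALT-style decomposition uses (log CO2, SOI, aerosols,
# LOD, TSI). Neither the file nor the column names are the real CSALT inputs.
df = pd.read_csv("monthly_climate_proxies.csv")

y = df["temp_anomaly"].to_numpy()
X = np.column_stack([
    np.log(df["co2_ppm"].to_numpy()),  # C: log of CO2 concentration
    df["soi"].to_numpy(),              # S: Southern Oscillation Index
    df["aerosol_od"].to_numpy(),       # A: stratospheric aerosol optical depth
    df["lod"].to_numpy(),              # L: length-of-day proxy
    df["tsi"].to_numpy(),              # T: total solar irradiance
    np.ones(len(df)),                  # intercept
])

# Ordinary least squares: temperature as a linear combination of the proxies.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The coefficient on ln(CO2) times ln(2) gives an effective warming per doubling.
print("effective warming per CO2 doubling ~", round(coef[0] * np.log(2.0), 2), "C")
```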

    • David Springer

      The pause continues.

      http://woodfortrees.org/plot/rss/from:2000/trend/plot/rss/from:2000/to:2010.33/trend/plot/rss/from:2010.33/to:2012/trend/plot/rss/from:2012/trend/plot/rss/from:2000

      WebRubTelescoop continues to confuse land-only warming with ECS. Land temperature rises more because land tends to be dry, especially in winter at higher latitudes, and thus lapse-rate feedback (which is negative) doesn’t play as large a role in reducing CO2 sensitivity. It’s not rocket science, just the physical response of a common substance (H2O) to illumination by back-radiation from CO2.

    • It’s warming just the same. Though it is just one data point, the November temperature hit an anomaly record of 0.77C for the month. This was mainly attributed to a hot land area the size of Russia:
      http://3.bp.blogspot.com/-nMfUu1ets4g/UrOcjeJqi9I/AAAAAAAAEdI/ADgyzHItUGs/s400/2013.png

      Conservation of energy rules. As the CO2 control knob pushes the external forcing ever higher, the average global temperature has to increase to compensate. That is the way that radiative physics works.

    • Why does Russia so often seem to contribute a disproportionate amount to this compensation?

    • Matthew R Marler

      WebHubTelescope: I know it is hard for those of you that don’t actually do the analysis and simply pontificate on comment sections of blogs, but a model is different than a simulation.

      I agree, but you wrote one clause too many when you said look at the data. And in other contexts, we have been advised to take simulation results seriously, so I am happy to read your disparagement of simulation-based results. Isaac Held’s estimate of the TCR of approximately 1.3C was not simulation based, and it is close to the value that Nic Lewis “endorses” (that’s in quotes because it might not be exactly what he meant, but it reads as a sort of endorsement.)

      My writings are certainly not “pontifications”, in case that disparagement was aimed at me personally. They are simply comments and questions aimed at probing the limits of knowledge and (in some cases) my own understanding. To some degree, I judge my comments by the rejoinders they elicit, and when the rejoinder is an epithet I think I have hit upon a weakness in the theory. fwiw. I like your csalt model, but as I said in my “supply an understanding” comment, it hasn’t the authority to override other models.

  39. Pingback: Nic Lewis’ submission to the AR5 inquiry « De staat van het klimaat

  40. Am I correct in believing that the UK parliamentary web site giving access to all these papers is unique, in that the arguments for and against CAGW are all contained in a single reference?

  41. Interesting submission by Nic Lewis.

    The take home message (paraphrased):

    The worst-case, business-as-usual scenario, with realistic climate sensitivity based on observational results (as distinct from models) and no net natural variations either up or down:
    – the end of the 21st century will be approximately 2°C warmer than today.

    The meta-analysis in Tol (2009), of fourteen estimates from economists, suggests that a temperature of 2°C warmer than today is likely to have a negligible impact on welfare.

  42. Pekka Pirilä

    “Looking at the figures of the Otto et al paper (with Nic as one of the many coauthors) the two last decades form a plausible combination for that. Based on that the best estimate for TCR is about 1.8C.”

    Not so. TCR estimated from the last two decades (1990 – 2009) from the Otto et al data is actually 1.4 C, not 1.8 C.

    But the 1990s isn’t a very suitable decade for estimating TCR or ECS because of the major impact of the Pinatubo eruption. I prefer to use 1995-2011, 16 rather than 20 years but avoiding Pinatubo entirely – or the 17 years 1994-2011, which is only to a modest extent impacted by Pinatubo. For both these periods, the Otto et al data imply a TCR of 1.3 C.
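      For reference, the energy-budget estimates being argued over here come down to two short formulas of the kind used in Otto et al.: TCR = F2x*dT/dF and ECS = F2x*dT/(dF - dQ), where dT, dF and dQ are the changes in temperature, forcing and planetary heat uptake between a base period and a recent period. The numbers below are purely illustrative, not those of the paper or of either commenter.

```python
F_2X = 3.44  # W/m^2 per CO2 doubling (illustrative value)

def tcr(dT, dF):
    """Transient response: observed warming scaled to a full doubling's forcing."""
    return F_2X * dT / dF

def ecs(dT, dF, dQ):
    """Equilibrium sensitivity: subtract the heat still flowing into the oceans."""
    return F_2X * dT / (dF - dQ)

# Illustrative changes between a 19th-century base period and a recent period.
dT, dF, dQ = 0.75, 1.95, 0.65   # K, W/m^2, W/m^2
print(f"TCR ~ {tcr(dT, dF):.2f} K, ECS ~ {ecs(dT, dF, dQ):.2f} K")
```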

    • Nic,

      Using the latest decade alone or giving it a dominant weight is an obvious case of cherry picking. If the warm phase of internal variability does not have similar weight as the cool phase has, the estimate is obviously biased. We cannot tell for sure the nature of the phases, but including both full decades seems to be a reasonable guess.

      I wrote my previous comment without access to the paper. Checking the figure again, I see that the correct value based on the two latest decades is significantly lower than 1.8 C, but it’s difficult to believe that it would be as low as 1.4 C. The nonlinearity of the TCR scale makes estimating from the figure difficult, but even so 1.4 C seems too low.

      As a minor comment, I noticed that you wrote

      These figures were unchanged from the previous report save for a slight increase in the lower bound of the range, from 1.5°C to 2.0°C.

      while the change was actually the reverse, i.e. the decrease from 2.0°C to 1.5°C.

      I do also think that you overstate the value of objective Bayesian analysis. It’s not at all obvious that it has any advantage compared to various subjective Bayesian analyses in a problem of this nature. Its advantages are discussed by James Berger, as one example of such discussion. Routinely repeated regulatory use is a good example of where it may really be well justified, but estimating one single parameter of the Earth system is a completely different case. In this kind of application it’s just one alternative among all the others, not any better than the use of plausible subjective priors.

      “Objectivity” of objective Bayesian analysis means that the user of the method cannot affect the outcome by further subjective choices after the basic settings of the analysis are fixed. That’s important in regulatory use, but that does not in any way prove that the analysis could be objectively judged superior to the use of some subjective prior in obtaining one single value.

    • Heh, 1.4 deg C, 1.8 deg C; where’s the catastrophic beef, I mean ‘beat’, I mean ‘heat’?
      ============

    • Pekka

      1. 17 years is close to two decades. It’s not cherry picking to go back as far as one can before hitting a major volcanic eruption.

      2. As I said, I have computed TCR using the Otto at al data for 1990-2009 as 1.4 C. That figure is definitely correct.

      3. My statement about the slight increase in the lower bound of the range, from 1.5°C to 2.0°C applied to AR4, not AR5. You have misread what I wrote.

      4. I disagree with you about objective vs subjective Bayesian analysis. Objectivity is highly desirable in science. But aside from that, objective (but not subjective) Bayesian methods generally lead to parameter estimates similar to those obtained when non-Bayesian methods (such as profile likelihood) are used, and to at least approximate probability matching under repeated sampling. That seems to me a strong argument in their favour.

    • Nic, you write “2. As I said, I have computed TCR using the Otto at al data for 1990-2009 as 1.4 C. That figure is definitely correct.”

      I dispute that anyone has anything more than a guess as to what the value of climate sensitivity is, however defined. What I note is that no-one has measured a CO2 signal in any modern temperature/time graph, so by standard signal-to-noise ratio physics there is a strong indication that the climate sensitivity of CO2 is 0.0 C to one place of decimals, or two significant digits.

    • Nic

      It’s not surprising, and not relevant for this argument, that objective Bayesian analysis leads to results similar to non-Bayesian methods. The problem is fully related to the potential for bias in the setup of the analysis. In subjective Bayesian methods the arbitrariness that applies to all methods is not hidden but presented explicitly. No statistical method can get rid of that problem, and many only hide it, but the subjective Bayesian approach makes it fully open.

      In science the goal is to find the right answer, not to offer a procedure that makes regulatory decisions objective in the sense that every person using the same method gets the same answer, whether it’s right or wrong. Therefore objective Bayesian analysis has no special status.

    • “that no-one has measured a CO2 signal in any modern temperature/time graph, so by standard signal-to-noise ratio physics, there is a strong indication that the climate sensitivity of CO2 is 0.0 C to one place of decimals, or two significant digits.”

      1. The sensitivity to DOUBLING CO2 is given by the formula: watts for doubling CO2 * climate sensitivity. That is a definition.

      2. Climate sensitivity cannot be zero. If climate sensitivity were zero, the temperature of the earth would stay the same when the sun went down. From first principles, climate sensitivity is around 0.4 C per watt. This means that if the sun’s forcing increases by 1 watt, the temperature will go UP by 0.4 C.

      3. Watts for doubling CO2 is 3.71 +/- 10%. This is measured physics. Physics used by guys in operations research.

      #############

      That aside. Describe what you mean by

      ” so by standard signal-to-noise ratio physics”

      DEFINE what you mean. No arm waving. Supply the math.
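      Taking the numbers in points 1-3 above at face value, the implied warming per doubling is a one-line calculation. This is only a sketch of the stated arithmetic, not an endorsement of the values.

```python
forcing_per_doubling = 3.71   # W/m^2 for doubling CO2 (+/- 10%, per the comment)
sensitivity_per_watt = 0.4    # C per W/m^2, the "first principles" value quoted

# Sensitivity to doubling = watts for doubling CO2 * climate sensitivity.
warming_per_doubling = forcing_per_doubling * sensitivity_per_watt
print(f"~{warming_per_doubling:.1f} C per doubling of CO2")   # ~1.5 C
```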

    • Is this Nic Lewis serious? He uses only the last two decades even though we have well over one hundred years of data concerning natural variability in temperature and CO2 concentration to work with?

    • Webster, “Is this Nic Lewis serious? He uses only the last two decades even though we have well over one hundred years of data concerning natural variability in temperature and CO2 concentration to work with?”

      Since the last ~17 years is the only period with known low volcanic forcing and since aerosol forcing is one of the largest unknowns, that makes the last 17 years the longest useful period of that type. Nic et al didn’t use JUST the last 17 years but they did use the last 17 years for a valid reason.

    • Steven, you write “Climate sensitivity cannot be zero”

      I do wish you would, sometime, actually READ what I have written. I never said that climate sensitivity was zero. I said it was 0.0 C to one place of decimals or two significant digits. If you read this as a claim that climate sensitivity is zero, then you don’t understand the basics of Physics 101.

      In standard signal-to-noise ratio physics, the objective is to measure a supposed signal against the background of the noise. If no signal can be measured, it follows that no signal can be detected against the background noise. If no signal can be detected, then the logical supposition is that no signal exists, and the climate sensitivity of CO2 is negligible.

      I said climate sensitivity of CO2, HOWEVER DEFINED. That includes your definition as well as mine.

    • The forcings due to volcanic events are clearly identifiable. The Volcanic Explosivity Index (VEI) is logarithmic and so the eruptions that are on a scale of 5 or 6 are the only ones that really matter and there have only been at most a dozen of those in the past 130 years. A VEI of 4 would be 100x less than a VEI of 6 for example.

      Cappy, you sound very frightened about doing the analysis work. Why is that?

      1. The sensitivity to DOUBLING CO2 is given by the formula: watts for doubling CO2 * climate sensitivity. That is a definition.

      2. Climate sensitivity cannot be zero. If climate sensitivity were zero, the temperature of the earth would stay the same when the sun went down.

      Mosher, that’s nonsense. What’s the temperature of the earth? You mean the effective temperature? You know that’s not the average surface temperature of the Earth? You seem to think that doubling atmospheric CO2 will affect the effective temperature of the Earth, and even more, that the two are proportional?

    • Webster, “Cappy, you sound very frightened about doing the analysis work. Why is that?”

      Not really, I am not going to do your kind of analysis since “global” surface temperature is pretty much a useless reference. I have compared the regional and hemispheric “sensitivities” of all the major data sets and that pretty much agrees with Nic except for the long term secular trend. Remove the long term secular trend and the amplification due to the “Global” surface ambrosia and you have 0.8 to 1.3 C “sensitivity” to atmospheric forcing. You also have an explanation for the DTR shift, a better understanding of the importance of the Stratosphere and a much greater respect for asymmetry.

    • I should add that there is an assumption that the response to additional CO2 is logarithmic over the full range of CO2 concentrations. Beer-Lambert does not necessarily apply once absorption has been saturated. So there is no a priori reason to suppose that CO2 added from current levels has the same sensitivity as CO2 added at much lower concentrations.

    • I wish to make it clear that my own preference for estimating TCR is the approach of Otto et al., i.e. comparing the recent data with temperatures over an interval at a time when human influence was only a small fraction of the present. The approach is not without problems of its own; the influence of natural variability is one of the main ones. To get minimally biased results, it’s important that a best effort is made to compare periods where the effects of the variability are likely to cancel well enough. An alternative is to use some method to remove the bias from internal variability.

      In Otto et al the reference period is 1860-1879, which seems a reasonable choice, but not the only possible reasonable choice. The forcing was rather small before 1990, therefore the data starting from 1990 dominates the conclusions, and the total period since 1990 might be also suitable from the point of view of covering well enough different phases of natural variability.

      One possibility is to use the approaches presented in Lean and Rind, and in Foster and Rahmstorf, and used also by WHUT in his CSALT analysis, to remove internal variability. That might improve the results, but it’s also possible that the consistency of the modified approach suffers, as there’s a risk of some kind of double-counting of some effects. It’s good to try several approaches and compare the results; the differences tell something about the uncertainties.

      While I have emphasized that the objective Bayesian approach is only one of several ways of fixing the prior, and not objectively any better than other priors, my own impression is that it’s not bad in practice either. What seems the most questionable of the commonly used approaches is a uniform prior in climate sensitivity. There are several reasons for introducing a prior that decreases at high climate sensitivity (high meaning much larger than one). One plausible form for the cutoff is inverse proportionality to the square of the climate sensitivity. Annan and Hargreaves have proposed such a prior, as certainly have many others.

    • Matthew R Marler

      Steven Mosher: 1. The sensitivity to DOUBLING c02 is given by the formula
      Watts For doubling C02 * climate sensitivity. That is a definition.

      2. Climate sensitivity cannot be zero. If climate sensitivity were zero

      As to 1., it is assumed that “the sensitivity” is a constant independent of the state of the climate. There is no justification, published anyhow, for that assumption as applied to the climate of the Earth. “A sensitivity” of great interest is the change of mean global surface temperature in response to an increase in CO2; climate scientists use this “sensitivity” in their publications, not confining their usage to “the formula” that you provide.

      “That is a definition”, but it is not the only definition.

      For 2., it is quite reasonable to conjecture that “the” sensitivity of climate to a future doubling of CO2 given the climate as it is now is extremely close to 0, though I personally would probably expect at least an increase in the overall rate of the hydrological cycle.

    • Matthew

      ‘As to 1., it is assumed that “the sensitivity” is a constant independent of the state of the climate. There is no justification, published anyhow, for that assumption, as applied to the climate of Earth. ”

      1. Of course that is assumed.
      2. To test this assumption, you would have to propose a different assumption. So, what different assumption do you propose?
      3. On the assumption that we cannot decide between various assumptions, we can nevertheless calculate results subject to these assumptions.
      4. So, suggest a different assumption, calculate the results subject to that assumption, and then we can see how your assumption holds up.

      Of course all of our work is subject to the assumption that the laws of the universe will continue to hold. Note that we can’t test this assumption, but without it we can’t really do anything. That is to say, ALL methods and approaches have assumptions, so pointing out the assumptions is meaningless unless you have a better or different assumption to test.

      Logically there will always be assumptions. Pointing that out is child’s play. Suggesting a better assumption is your responsibility.

    • Matthew R Marler

      Steven Mosher: 4. So, suggest a different assumption. calculate the results subject to that assumption, and then we can see how your assumption holds up.

      Part way there, but no quantitation yet: of the 3.77 W/m^2 radiated back downward, most goes to an increased rate of evaporation of the water at the surface, and much less goes to an increase in the mean temperature at the surface; hence an increased rate of non-radiative transfer of heat from surface to upper atmosphere, a slight increase in rainfall as the hydrological cycle speeds up, and a slight increase in cloud cover.

      A possible 2% increase in cloud cover following a 1% increase in mean surface temperature has been written about already, but I lack the references to the published instances; I mention them to avoid seeming to claim that the ideas in the paragraph above originated with me.

      For today, it is sufficient that what you wrote is an assumption; occasionally it is worth pointing out that an assumption is not, as I wrote in another post, as well established as some other assumptions like Newton’s laws.

    • The Stratospheric Aerosol Optical Thickness by Sato is here:-

      http://data.giss.nasa.gov/modelforce/strataer/

      The figure shows that from 1960-2000 there was a lot of volcanic activity producing a lot of solar attenuation. Since 2000, the line is flat, and the contribution from aerosols is close to zero.
      Therefore, it is reasonable to focus on this time frame, but this does not mean that the rest of the record is discarded.

    • Here ya go Webster,

      https://lh6.googleusercontent.com/-zwJvX06pUPk/UrNpBAVCZNI/AAAAAAAAK0k/qaXLnpuhAKI/w803-h419-no/great+equatorial+shift.png

      Notice that the Great Pacific Climate shift was actually more like a great equatorial climate shift, just as Toggweiler suggests with his shifting westerlies.

      http://www.woodhous.arizona.edu/geog453013/Toggweiler2009.pdf

      I am not inventing anything, just exploring the range of natural variability. Now model that, beyatch. That shift is not the typical reversion to the mean you are assuming with your nutty ideal-gas-law nonsense in an open dynamic system.

      Since the “global” diurnal temperature range isn’t supposed to be getting larger with more CO2, CO2 didn’t do it. So if you assume CO2 did, and then try to remove SOI as simple noise even though that shift shows up in SOI, you are removing the most interesting part of the signal.

      As Toggweiler said, “The magnitude of the shift seems to have been very large. If there was a response to higher CO2 back then, it paled in comparison.” History seems to be repeating itself yet again.

    • Poor Cappy, strung out on Excel spreadsheet graphs and some mind-blowing belief that we are in the middle of a cataclysmic ocean shift that just happens to coincide with the peak and descent of the oil age.

      Perhaps you are having problems digging out the orbital forcings that all your Team Denier mates are proposing?
      Maybe this will help:
      http://contextearth.com/2013/12/18/csalt-model-and-the-hale-cycle/

    • Here is a very simple model of the HADCRU4 Global 1959-2011.

      http://i179.photobucket.com/albums/w318/DocMartyn/SimpleHADCRU4GlobalSineAersolandCEof14_zps467ba107.png

      I fitted to ln[CO2] (using a CE of 1.4 for 2x[CO2]), Sato’s 550 nm aerosol series multiplied by minus 0.58 degrees, and a sine wave; the fitted sine wave comes out at 57 years +/- 0.122.

      The (over-)fit is rather good. I could drop it still further by letting the aerosol term float by 6-9 months and giving my pseudo-AMO a larger amplitude.

      So quick and dirty gives me a 2x[CO2] value in complete agreement with Nic’s, so I would believe his number over anything >2.
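
      A fit of this general shape is easy to sketch. Below is a minimal illustration (not DocMartyn's actual code) of regressing an annual temperature series on log2[CO2], a scaled aerosol series and a ~57-year sine wave; the input series here are labelled synthetic placeholders and would need to be replaced with the real HADCRUT4, Mauna Loa CO2 and Sato 550 nm data.

```python
# Minimal sketch of a three-term fit of the kind described above (placeholder data,
# not DocMartyn's code): T(t) ~ sens*log2(CO2/CO2_0) + k*AOD(t) + A*sin(2*pi*(t-phi)/P) + c
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1959, 2012)

# --- synthetic stand-ins; substitute the real annual HADCRUT4, CO2 and Sato series ---
co2 = 315.0 * np.exp(0.004 * (years - 1959))                   # stand-in for Mauna Loa CO2 (ppm)
aod = np.where((years > 1963) & (years < 1995), 0.05, 0.01)    # stand-in for Sato 550 nm AOD
temp = (1.4 * np.log2(co2 / co2[0]) - 0.58 * aod
        + 0.117 * np.sin(2 * np.pi * (years - 1975) / 57.0))   # placeholder "observations"

def model(t, sens, k_aer, amp, phase, period, offset):
    """CO2 term + aerosol term + pseudo-AMO sine wave."""
    i = (t - years[0]).astype(int)
    return (sens * np.log2(co2[i] / co2[0])       # 'sens' is the response per CO2 doubling
            + k_aer * aod[i]
            + amp * np.sin(2.0 * np.pi * (t - phase) / period)
            + offset)

p0 = [1.4, -0.58, 0.12, 1975.0, 57.0, 0.0]        # start near the values quoted in the comment
popt, _ = curve_fit(model, years.astype(float), temp, p0=p0)
print(dict(zip(["2xCO2", "aerosol", "amplitude", "phase", "period", "offset"],
               np.round(popt, 3))))
```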

    • Matthew R Marler

      Steven Mosher: Logically there will always be assumptions. Pointing that out is child's play.

      Personally, I think adults should get more engaged in that “play”. Some people write as though some assumptions, e.g. the possibility and relevance of “equilibrium”, are actually to be accepted without examination.

    • Doc, Your model is embarrassingly crude.
      This is a real model based on a careful factoring of forcing contributions. Your arbitrary sine wave is replaced with the Wyatt & Curry stadium wave and Scafetta’s orbital terms.
      http://imageshack.com/a/img513/2702/oa57.gif

      The TCR needed comes out to above 2C per doubling of CO2.

      Stop covering for Nic Lewis’ fudging of the numbers.

    • “Doc, Your model is embarrassingly crude”
      Indeed it is, but then again, as Mosher would say, what is the model for?
      The very crude model is designed to test Nic's postulate, questioned by Pekka, that climate sensitivity is near 1.4 degrees for 2x[CO2].

      For a value of less than 2.2 one needs, at minimum, something other than CO2 which caused global warming between 1974 and 1999 and is causing cooling post-1999.

      The Sato aerosol data does not provide this; there is less aerosol cooling post-1999 than in the period 1974-1999. If anything, any usage of aerosols will tend to increase the apparent climate sensitivity. If one were to model the temperature rise in the years 1974-1999 it would be very easy to overestimate aerosol cooling and so to overestimate the value of CO2 climate sensitivity. Researchers living and working in this period could quite easily come to the conclusion that climate sensitivity was very high, >2.5, if they were to believe that the contribution of aerosols was also high.
      The second possibility explored is that there is some regular cycle whereby heat is not thermalized immediately, but is stored away from the surface, accumulated and then released, altering the SST. Such cycles appear to exist: the AMO and the PDO. In my ’embarrassingly crude’ model the amplitude of a hypothetical cycle is only +/-0.117 degrees, with a periodicity of around 60 years. The ’embarrassingly crude’ model suggests that such a cycle can change the estimate of CO2-induced warming from 2.2 degrees to 1.4 degrees.
      Finally, the ’embarrassingly crude’ model does catch the present temperature data rather well, even though it is run from 1959-2012, whereas Pekka believed that Nic’s value of 1.4 was somewhat cherry-picked as it only used the most recent 17 years of data.

  43. Nic Lewis’s sensitivity only accounts for a maximum of half the warming since 1970 while also ignoring natural variability, so where does he imply the other half comes from? Maybe someone who understands his method can apply it to longer periods like this and come up with the answer. The TCR implied by the warming since 1970 alone appears to be over 2.5 C. Lewis might be suggesting this is an illusion of some sort, making the warming appear more sensitive than it is, but he hasn’t said why this won’t continue to be the case going forward.
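
    Jim D does not show the arithmetic, but a number in that range follows if all of the surface warming since 1970 is attributed to the CO2 rise alone, as in the rough sketch below (round-number inputs assumed; the energy-budget estimates divide instead by the change in total forcing):

```python
# Back-of-envelope check of the "over 2.5 C" figure above: attribute ALL observed
# surface warming to CO2 alone and scale to a full doubling.  Inputs are rough
# round numbers, not the values used in any published estimate.
import math

co2_1970, co2_2013 = 325.0, 395.0   # approximate annual-mean CO2, ppm
dT = 0.7                            # approximate surface warming 1970-2013, deg C

doublings = math.log2(co2_2013 / co2_1970)     # fraction of a CO2 doubling realised (~0.28)
implied_tcr = dT / doublings
print(f"{doublings:.2f} doublings -> implied TCR ~ {implied_tcr:.1f} C per doubling")
# Energy-budget estimates (e.g. Otto et al., Lewis) come out lower mainly because they
# divide the warming by the change in *total* forcing, not by the CO2 term alone.
```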

    • Steve Fitzpatrick

      Nic has laid out his rationale for a transient sensitivity of ~1.4 C. Isaac Held seems to agree with a most likely transient sensitivity of ~1.3-1.4 C, as does GISS Model E-R. On what basis do you conclude it is >2.5 C?

    • Faith, SteveF, JimD is a man of faith :)

    • Jim,
      You could do a ballpark estimate for the 1880-present period, which has a temperature change of ~0.9 C, as we did here.

      Radiative forcing due to GHGs is 2.3 W/m^2. The solar influence over this period is 0.05 W/m^2 (negligible). However, at the surface, the increase in radiation is 4*(0.9/288)*240 = 3 W/m^2 (2.9 if you use Stefan’s law as done in the post). The OHC uptake is 0.6 W/m^2 (Stephens 2012, Levitus 2012). Therefore net forcing at the surface has to be 3.6 W/m^2. This is progress – only inexplicables (or Lindzen) can now deny that there is positive feedback. Specifically, feedback amplifies forcing by 3.6/2.3 ~ 1.6.

      So, if the no-feedback sensitivity is 1.2 C per doubling, with (linear) feedbacks the sensitivity is 1.2*1.6 ~ 1.9 C.

      So, with nonlinear feedbacks you expect something higher. These estimates are subject to revisions in OHC and to aerosol forcing updates.
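
      For anyone wanting to check the steps, the same arithmetic written out (all numbers are those quoted above, not independently derived here):

```python
# The ballpark estimate above, step by step; every input is the commenter's round
# number (forcing 2.3 W/m^2, warming 0.9 C, OHC uptake 0.6 W/m^2, etc.).
dT       = 0.9     # warming since 1880, deg C
T0       = 288.0   # global mean surface temperature, K
olr      = 240.0   # outgoing longwave radiation, W/m^2
f_ghg    = 2.3     # GHG forcing over the period, W/m^2
ohc_rate = 0.6     # ocean heat uptake, W/m^2 (Stephens 2012, Levitus 2012)

dR_surface = 4.0 * (dT / T0) * olr      # linearised Stefan-Boltzmann response, ~3.0 W/m^2
net_needed = dR_surface + ohc_rate      # radiative response plus ocean uptake, ~3.6 W/m^2
gain       = net_needed / f_ghg         # implied amplification of the forcing, ~1.6
ecs_linear = 1.2 * gain                 # 1.2 C no-feedback sensitivity x gain, ~1.9 C

print(round(dR_surface, 2), round(net_needed, 2), round(gain, 2), round(ecs_linear, 2))
# prints: 3.0 3.6 1.57 1.88
```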

    • BTW, that would be an ECS estimate, albeit based on linear feedbacks.

    • RB, “Therefore net forcing at the surface has to be 3.6 W/m^2. This is progress – only inexplicables (or Lindzen) can now deny that there is positive feedback. Specifically, feedback amplifies forcing by 3.6/2.3 ~ 1.6”

      But what is the feedback to? If you take surface Tmin and compare to SST, you get the ~1.6 to 2.0 gain but that started well before CO2 was a factor. Most of the amplification is in the 30N-60N region which has more to consider than just CO2. Then in ~1985 Tmin slowed down while CO2 hit its stride. So you have an answer but maybe not the right question.

    • Capt, I know there is some theory there somewhere that you have which will meet its death-knell with more satellite and ocean observations.

    • RB, “Capt, I know there is some theory there somewhere that you have which will meet its death-knell with more satellite and ocean observations.”

      No, actually the satellite and ocean observations pretty much confirm the theory that there was a Little Ice Age and that the oceans take a while to “recharge”. The long-term secular trend is ~0.06 C per decade, and since increased average SST increases absolute humidity, Tmin has risen in step with SST. That is not really a theory; that is pretty much basic thermo.

      KNMI Climate Explorer allows for simple masking of BEST Tmin so you can play with it yourself.

      https://lh4.googleusercontent.com/2y_agzYkhtdZBInKoQemi7oHa5R-jVFifOnN4ucR8uI=w777-h459-no

      The 0-30S band in that comparison started the warming trend first and has started the stabilizing trend first. BEST made a point of showing that DTR shift in 1985, which most everyone blissfully ignores. WHOI published on the centennial-scale Pacific oscillation, which is also blissfully ignored, and Toggweiler and Brierley have published on the impact of shifting westerlies and zonal/meridional temperature gradients, also blissfully ignored. You are not going to be able to ignore the obvious much longer.

    • Nic’s rationale does not include the possibly important effect of natural variation in his chosen short period, and so his full ECS only gives 0.35 C for the CO2 change between 1970 and 2013. The actual change is nearer twice that, which I would regard as a fail on his part. It also better fits a transient sensitivity near twice his. Therefore I can very simply derive a transient climate sensitivity using a longer period than his that fits the data better.

    • JimD,

      What do you mean by Nic’s short period?

      His own paper is an extension of the work of Forest et al, which covers the whole 20th century. The paper of Otto et al, to which he contributed, covers the period from 1860 to 2009, or more precisely compares the first 20 years of that period with the last four decades, both separately and taken together.

      The author list of Otto et al is really impressive. The approach is simple, but that is well justified, at least for TCR. Some questions remain on the best way of expressing the outcome, but only at a level of little significance. More refined methods lead to somewhat different results. Corrections can be attempted for specific short-term effects, but the applicability of those corrections to this particular comparison must be studied carefully. I believe that the reference period has been chosen to minimize the influence of such corrections.

      The paper that Nic wrote alone is a separate issue. He is perfectly right in his criticism of the interpretation of the outcome of the Forest et al papers. My own preference would be that the results not be presented as a PDF at all, since a PDF can be formed only by combining the likelihood function – the actual result – with a prior. They are, however, commonly combined with a uniform prior in climate sensitivity.

      The uniform prior in climate sensitivity is, indeed, suspect, and several arguments have been presented against it. Reaching high climate sensitivities requires a positive feedback close to the point of instability, and a uniform prior in climate sensitivity corresponds to a prior in feedback strength that diverges when approaching that point. Something along the lines of Jeffreys’s thinking on the noninformative prior for a scale-type parameter such as climate sensitivity seems to be required.

      Jewson, Rowlands and Allen wrote a paper on using in climate forecasts the Jeffreys prior, which is invariant under changes between equivalent sets of parameters of a single model, and Nic used this proposal to reanalyze the case of Forest et al 2006. The Jeffreys prior has the nice property that the results are the same regardless of how a fixed model is formulated mathematically, as long as it is the same model. That leads, e.g., automatically to a specific and often plausible way of handling scale parameters. The Jeffreys prior commonly leads to results that are not badly implausible, but it replaces the dependence on the choice of parameters with a dependence on the model used to define it – in the case of Nic’s paper, the MIT 2DCM.

      As I wrote, the Jeffreys prior is commonly plausible. It is probably the best alternative we have when great inherent value is placed on objectivity, in the sense that a decision made once removes further subjectivity from the analysis. That does not, however, guarantee at all that the original decision to select the Jeffreys prior leads to the most correct results. At best it can prevent the introduction of really bad priors, when the goal is to get the best estimate for a parameter like ECS rather than to minimize subjectivity above all other considerations.

      For a parameter that has meaning outside of any single model, as ECS has, each separate analysis based on a different model would have its own Jeffreys prior. That does not really make sense.
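
      For readers unfamiliar with the term, the Jeffreys prior discussed here is the standard textbook construction from the Fisher information; a brief statement of the definition and of the reparameterization invariance Pekka refers to (general definitions, not anything specific to Forest et al. or Lewis's paper):

```latex
% Jeffreys prior: definition and invariance under reparameterization (standard results)
\[
  \pi_J(\theta) \;\propto\; \sqrt{\det I(\theta)},
  \qquad
  I(\theta)_{ij} \;=\; -\,\mathbb{E}_{y\mid\theta}\!\left[
      \frac{\partial^2 \log p(y\mid\theta)}{\partial\theta_i\,\partial\theta_j}\right].
\]
\[
  \text{If } \phi = g(\theta) \text{ is a smooth one-to-one reparameterization, then }
  \pi_J(\phi) \;=\; \pi_J\!\bigl(\theta(\phi)\bigr)\,
  \left|\det\frac{\partial\theta}{\partial\phi}\right| ,
\]
so posterior inferences do not depend on the parameterization chosen. For a pure scale
parameter $\sigma$ the construction reduces to $\pi_J(\sigma)\propto 1/\sigma$.
```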

    • Pekka, yes, it looks like you are right that a longer period was used, which makes it harder to understand why their sensitivity fails to explain the last 43 years. However, I can’t follow what Lewis is doing to get such a low aerosol forcing, which is the central reason for his low sensitivity.

    • Jim,

      The method that Nic is using, and that Forest et al used previously, has aerosol forcing as one of three free parameters. The other two are climate sensitivity and deep-ocean diffusivity. All three are determined by the method, not constrained explicitly. They are results of a method that is opaque in the sense that it is difficult to follow how the values are obtained. His paper contains more discussion of the deep-ocean diffusivity; joint distributions of aerosol forcing and the other parameters are not discussed.

    • With regard to the Otto paper at least (or in the calculation I showed above), aerosol estimates come from IPCC AR5, which includes expert assessment of both direct and indirect aerosol forcing; for the industrial period each is about -0.45 W/m^2.

    • RB,

      My above comment has no connection to the Otto et al paper, only to the paper of Nic Lewis in which he applied the objective Bayesian approach.

    • Pekka – a period that is dominated either by La Niña or by ENSO-neutral conditions leaning La Niña, which means energy loss from the oceans is damped and the surface air temperature consequently damped. I think a sensitivity estimate based on those conditions is pie in the sky. A momentary happy thought.

  44. David Springer

    corrected attributions

    andrew adams | December 19, 2013 at 7:38 am |

    ds: If the hypothesis put forward is that CO2 causes surface warming the null hypothesis is that CO2 does not cause surface warming.

    aa: OK, but surely that is falsified by the existence of the greenhouse effect.

    What experiment isolates CO2 and establishes how much if any greenhouse effect it causes in vivo?

    • I guess measurements of OLR at the TOA would be one indicator. My understanding is that the contribution of each GHG to the overall GHE can’t be calculated exactly but CO2’s contribution is estimated to be around 20-25%.

      In any case your null hypothesis only addressed the question of whether CO2 caused surface warming, not the actual amount of warming it caused.

  46. “It is sensible to ask for a scientific summary of the IPCC work, not addressing policy makers but as objective as possible a summary of the present status of our knowledge and ignorance about climate science. Such a report must refrain from ignoring basic scientific practices, as the SPM authors blatantly do when claiming to be able to quantify with high precision their confidence in the impact of anthropogenic CO2 emissions on global warming. Statistical uncertainties, inasmuch as they are normally distributed, can be quantified with precision and it can make sense to distinguish between a 90% and a 95% probability, for example in calculating the probability of getting more than ten aces when throwing a die more than 10 times. In most physical problems, however, and particularly in climate science, statistical uncertainties are largely irrelevant. What matters are systematic uncertainties that result in a large part from our lack of understanding of the mechanisms at play, and also in part from the lack of relevant data. In quantifying such ignorance the way they have done it, the SPM authors have lost credibility with many scientists. Such behaviour is unacceptable. A proper scientific summary must rephrase the main SPM conclusions in a way that describes properly the factors that contribute to the uncertainties attached to such conclusions.” ~Professor Pierre Darriulat

    Would that be about 1 in 10 million… (1/6)^11?

    • Naw… that’d only be about 9 aces in a row. Getting 11 aces in a row’d be more like 1 in 362 million. Assuming AGW theory is valid, what is the chance of getting 15 years in a row of no warming?
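
      For reference, the arithmetic behind the correction above (a quick check, nothing more):

```python
# Quick check of the dice arithmetic above: nine aces in a row vs. eleven in a row.
from fractions import Fraction

p9  = Fraction(1, 6) ** 9
p11 = Fraction(1, 6) ** 11
print(f"9 aces : 1 in {p9.denominator:,}")    # 1 in 10,077,696   (~1 in 10 million)
print(f"11 aces: 1 in {p11.denominator:,}")   # 1 in 362,797,056  (~1 in 363 million)
```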

  47. Matthew R Marler

    However the quality of the contributions turns out, I think that a legislative body soliciting technical comments on the IPCC AR5 is a good idea.

  48. Matthew R Marler

    from Nic Lewis: 2. Climate sensitivity is a measure of how much the climate system warms each time the concentration of carbon dioxide in the atmosphere doubles. There are two principal measures used. Equilibrium climate sensitivity (ECS) is the amount of warming once the world ocean has fully warmed up, a process that takes more than a thousand years. ECS is believed to be a fairly stable property of the climate system, but is difficult to estimate accurately. Transient climate response (TCR) measures how much warming will take place over a 70-year period during which the carbon dioxide concentration doubles. TCR is easier to estimate than ECS, but may be less stable, and although more relevant to warming towards the end of the century is less useful than ECS when projecting over a wide range of timescales.

    3. Climate sensitivity is of direct policy relevance since it, and the level of uncertainty as to its value, is a key input into the economic models which drive cost-benefit analyses.

    I think he articulated the problem well. I don’t agree with his estimation of the importance of ECS, but that’s my opinion, and he put in a reasonable description of ECS (i.e., not a literal “equilibrium”).

  49. Matthew R Marler

    another quote from Nic Lewis: Most of the observationally-based estimates of climate sensitivity explicitly adopt a ‘Bayesian’ statistical approach. A Bayesian approach demands that the researcher set out in mathematical terms a starting position for the value of the property of interest [2] – in this case the climate sensitivity. This ‘prior’ is then combined with the data to give a final result – the ‘posterior’.[3] If the data is good quality then the final result will be little affected by the prior. But when data contains a weaker ‘message’ – as when estimating climate sensitivity – the choice of prior can greatly influence the final answer, and therefore be very contentious.

    Prof. Francisco Samaniego (UC Davis), in his book “A Comparison of Frequentist and Bayesian Methods of Estimation”, makes the point that a ‘prior’ has to be at least sufficiently accurate (beyond a “threshold” of accuracy) in order for the Bayesian estimate to be an improvement over the frequentist estimate (“better” in a Bayesian sense that he defines), and has some discussion of when that may be the case.
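
    The point about priors mattering when the data are weak is easy to illustrate numerically. The toy example below (all numbers invented for illustration, unrelated to any published estimate) shows how, when the likelihood is broad, a uniform-in-sensitivity prior and a uniform-in-feedback prior pull the same data to quite different posterior means:

```python
# Toy illustration of prior sensitivity with weak data.  The "sensitivity" grid, the
# likelihood width and both priors are invented for illustration only.
import numpy as np

S = np.linspace(0.5, 10.0, 2000)                  # candidate sensitivity values, deg C

# A broad (weak) likelihood centred on 2 C stands in for noisy observational evidence.
like = np.exp(-0.5 * ((S - 2.0) / 1.5) ** 2)

prior_uniform_S = np.ones_like(S)                 # uniform in sensitivity S
prior_uniform_f = 1.0 / S ** 2                    # uniform in feedback ~ 1/S (Jacobian 1/S^2)

def posterior_mean(prior):
    post = like * prior
    post /= post.sum()                            # normalise on the uniform grid
    return float((S * post).sum())

print("uniform-in-S prior:        mean S =", round(posterior_mean(prior_uniform_S), 2))
print("uniform-in-feedback prior: mean S =", round(posterior_mean(prior_uniform_f), 2))
# With a sharper likelihood (smaller width) the two posterior means converge.
```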

  50. Matthew R Marler

    last quote from Nic Lewis: The meta-analysis in Tol (2009) [22], of fourteen estimates from economists, suggests that a temperature of 2°C warmer than today is likely to have a negligible impact on welfare.

    Prof Curry wrote: Of these, Nic Lewis‘ submission is a tour de force.

    I second that appraisal.

    • David Springer

      too late Craig Loehle seconded it and I thirded it right after him

    • Matthew R Marler

      David Springer: too late Craig Loehle seconded it and I thirded it right after him

      Not a problem: they are all virtually tied.

  51. A cursory review of the submissions so far shows 13 supportive of the IPCC, 23 critical, and 4 that focus only on one of the questions (Dixon on implementing the IAC recommendations, more critical than not; Guenier on consensus, not seeing good evidence for it; the Institute for Science and Society focused on media reporting on AR5 – an interesting read; and James Painter on communication of uncertainty).

    Plenty of material for those on either side of the debate to use to support their cause. Portions of the Met Office submission read a little like what Obama might write in defense of Obamacare – pretty comical. Many supportive submissions call for immediate/urgent/dramatic reductions in GHGs or we are all going to die next week, or something along those lines.

    Many of the critical submissions offer specific examples of problems with the IPCC and the report itself. I personally like Donna LaFramboise’s overall exposure of the IPCC fraud and am pleased to see that she has been invited to provide additional perspective.

    While I am skeptical that Parliament will give much weight to the skeptical perspective, at least the submissions have all been made public, and so far the process looks like it will be transparent, which, if true, will be a refreshing change.

  52. Walt Allensworth

    So even if CO2 drives temperature, I see that China is now producing more CO2 than China and the USA combined were back in 2006, and it shows no indication of slowing the pace of building coal-fired plants to produce cheap electricity.

    What the US does just doesn’t even really matter anymore. We could stop producing CO2 and self destruct as a nation, and the slight bend in the plot of CO2 vs. time would barely be noticeable.

    Do we really think that China is gonna pay big taxes or reparations to the rest of the world over ACO2 or CAGW, or even care what the ROW thinks?

    • True, the BRIC countries don’t give two schitts about what frightens Western academia.

    • The Middle Kingdom, and the rest of the BRiCworld, are happy to let the developed West heap guilt on itself and to whine for reparations from the sinful. The Chinese have figured out that a warmer climate benefits them, and they have their own tree ring study, you know. It’s Tibetan, untouched by Mann.
      ===================

    • Well, of course I meant ‘BRICworld’. No offense to the Anglophonic Mother of Many Tongues.
      =========

  53. Beth:

    We had the same problem with the previous government in the UK: asleep at the wheel, gross overspending and light-touch regulation that we and our kids will be paying for for decades. So much for social engineering.

    I see another possible agenda for this inquiry. The noise of dissent over the climate circus is growing louder by the day, and many MPs suspect that the country has been led by the nose for years by the green lobby, which has consistently over-egged its case. This inquiry could be just the start, an evidence-gathering stage to justify a major change of policy after the next election. We won’t get it from the other side, as they got us into this mess in the first place and will acquiesce without question any time Europe (or the US, for that matter) blinks…

    Chris

  54. Peter Lilley is a smart chap. If you wanted to brief him, which 3 papers do you think he would need, and which 3 people do you think he should question?

    • Doc,

      Not sure if that’s directed at me, but where there is major disagreement in the evidence, perhaps all submissions should have been anonymous, to force the reviewer to filter out personal bias. Any evidence which seems irrational or overstated should be rejected, irrespective of where it comes from. If Mr Lilley remembers and applies his critical faculties without political interference, then he should be capable of reaching his own conclusions. It does appear as though it will be subject to public scrutiny at least.

      I’ve only read a subset of a subset of the papers and am not qualified to comment on much of the science, but where there is so much disagreement, from well qualified people on all sides, then there is obviously a problem in applying any of it to public policy.

      In some ways, the EU has the UK by the short and curlies, in that we are legally obliged to meet the targets while at the same time being taken to task over nuclear power subsidies. So what do they suggest, more useless windmills? Perhaps we should just tell them to sod off and we’ll go our own way…

      Chris

    • I wonder who voted on behalf of the world – e.g., was the poll translated into Chinese and Indian languages, or was it simply based on a sampling of travelers at international airports?

      • Source: USA TODAY/Stanford University/Resources for the Future poll of 810 adults nationwide, conducted by Abt SRBI Nov. 20-Dec. 5. Margin of sampling error: ±4 percentage points.
        By Frank Pompa and Karl Gelles, USA Today

      • It doesn’t look too different (probably on the high side in favor of global warming) from this poll, which had a sample of over 1,500 and shows the sample by political leaning on the second page:

        http://www.people-press.org/2013/11/01/gop-deeply-divided-over-climate-change/

        I’m surprised by these results myself; I thought it would be closer to 50/50, based largely on political leaning.

    • Hello David,

      I followed your link, which led to another link to the ARGO web site, where I found this:

      “* How accurate is the Argo data?
      The temperatures in the Argo profiles are accurate to ± 0.005°C and depths are accurate to ± 5m. For salinity, there are two answers. The data delivered in real time are sometimes affected by sensor drift. For many floats this drift is small, and the uncorrected salinities are accurate to ± .01 psu. At a later stage, salinities are corrected by expert examination, comparing older floats with newly deployed instruments and with ship-based data. Following this delayed-mode correction, salinity errors are reduced further and in most cases the data become good enough to detect subtle ocean change.

      * How much does the project cost and who pays?
      Each float costs about $15,000 USD and this cost about doubles when the cost of handling the data and running the project is taken into account. The array has roughly 3000 floats and to maintain the array, 800 floats will need to be deployed each year. Thus the approximate cost of the project is 800 x $30,000 = $24m per year. That makes the cost of each profile around $200. 28 countries have contributed floats to the array with the USA providing about half the floats.”

      Do I really believe that a $15k float, running unattended and uncalibrated in the open ocean will produce temperature data over an expected temperature range of 0-30 C with 5 millidegree error bounds over its operational life (4+/- years)?

    • You ‘gotta have faith Bob…

    • Bob:

      “Do I really believe that a $15k float, running unattended and uncalibrated in the open ocean will produce temperature data over an expected temperature range of 0-30 C with 5 millidegree error bounds over its operational life (4+/- years)?”

      Probably not. To get that kind of resolution from the temperature-sensor processing, you would need 14 bits of conversion accuracy at a minimum, whereas most data logging of that type typically uses 12-bit A/D converters. Even if you assume that they are using 16-bit converters, you still have to account for sensor scale factor and offset drift over temperature and time. Equipment for such a harsh environment would need extensive temperature compensation to maintain accuracy. Even then, 0.005 C sounds a bit far-fetched, though perhaps not impossible…

      It should all be in the original spec for the float, which should have numbers for required resolution, drift and accuracy, assuming that this has been published somewhere…

      Chris.
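
      The bit-count arithmetic above is easy to check (span and target resolution as quoted in the Argo FAQ; margin for noise, drift and calibration is a separate matter):

```python
# How many ADC bits are needed just to *resolve* 0.005 C over a 0-30 C span?
import math

span_c       = 30.0      # assumed measurement range, deg C
resolution_c = 0.005     # target resolution, deg C

levels = span_c / resolution_c                 # 6000 distinguishable levels
bits   = math.ceil(math.log2(levels))          # 13 bits (2**13 = 8192 levels)
print(levels, bits)
# A 12-bit converter (4096 levels) falls short; allowing any margin for noise and
# drift pushes the requirement to 14 bits or more, as suggested above.
```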

    • One example of how accurate the floats are in their temperature measurements is given by this report, where the performance of three early Argo floats is analyzed after 2-3 years of use.

      It was found that the temperature calibration error was about 0.002C at that point, clearly good enough for the stated accuracy of 0.005C.

    • Another question (actual question, not rhetorical) is: If you had two ARGO floats connected by a rigid 10 meter pole so that they would have to travel together, up, down, and sideways, and the pair traveled the ocean doing the normal ARGO mission, would the data from the two buoys track within +/- 0.005 C?

      In other words, are the temperature gradients between closely spaced collection points small enough to make milli-degree/year trend lines based on data collected from randomly drifting buoys meaningful?

    • Pekka,

      Very interesting paper indeed. In summary, what they found was that the temperature measurements were within the calibration range, while salinity and pressure were out of range, with pressure significantly so? OK, they must be using 16-bit converters to get that resolution and accuracy. Apologies if this is obvious, but the OP was provoked by the fact that digitising an analog signal is only ever an approximation, just as computer floating point is an approximation, limited by the number of bits used to represent the real-world analog value.

      A couple of things I’m not clear about. They appear to set the depth-measurement baseline by measuring sea-surface (atmospheric) pressure, and then use the pressure sensor to measure depth relative to that baseline. If they measure temperature at depth and at various points on the way up or down, how would the temperature measurement be affected by errors in the pressure/depth measurement? For example, if they measure the temperature at 10 ft and it’s really 11 ft, how would that affect the results?

      Of course, they could apply known drift characteristics in the software to compensate for sensor aging, but if not, the temp readings may be degraded significantly subsurface…

      Chris
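
      One way to put a rough number on the depth question: an error in pressure/depth maps into an apparent temperature error through the local vertical temperature gradient. The gradient values in the sketch below are illustrative round numbers, not Argo specifications.

```python
# Apparent temperature error from a depth (pressure) error, assuming illustrative
# vertical gradients -- the gradient values are round numbers, not Argo specs.
depth_error_m = 5.0                              # Argo's stated depth accuracy, +/- 5 m

gradients_c_per_m = {
    "sharp thermocline (~0.05 C/m)": 0.05,
    "weakly stratified deep ocean (~0.0002 C/m)": 0.0002,
}
for label, grad in gradients_c_per_m.items():
    print(f"{label}: apparent error ~ {grad * depth_error_m:.4f} C")
# In a sharp thermocline the 5 m depth uncertainty dwarfs the 0.005 C sensor accuracy;
# in the weakly stratified deep ocean it is much smaller.
```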

    • Chris,

      One point to notice is that these were early floats. Many of the early floats had problems, some so bad that their data is not used at all. From the Argo sites you can learn more about the problems and how they have been resolved.

      There are several manufacturers and many independent groups are operating Argo floats. Under such conditions it’s essential that their technical performance is monitored and reported constantly.

      Bob,

      In the atmosphere, having an accuracy of 0.005 C would probably be of little value, but in the ocean temperatures are much more stable and temperature differentials smaller. For estimating the total ocean heat content (OHC) a lesser precision would probably be almost as good, because the errors of individual measurements cancel to a large extent as long as the floats do not have common systematic errors. For the details of vertical profiles, accurate data on both temperature and salinity are perhaps more important, as buoyancy differentials are sensitive to both, and determining them accurately is valuable for learning more about the oceans themselves.
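
      The cancellation point is the usual one-over-root-N argument; a small simulation (all numbers invented for illustration) makes it concrete:

```python
# Independent random errors across many floats average down roughly as sigma/sqrt(N);
# a shared systematic bias does not cancel at all.  Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_floats, sigma, common_bias = 3000, 0.005, 0.002   # per-profile noise and shared bias, deg C

random_errors = rng.normal(0.0, sigma, n_floats)
print(f"expected sigma/sqrt(N): {sigma / np.sqrt(n_floats):.5f} C")
print(f"mean random error:      {random_errors.mean():.5f} C")
print(f"mean with common bias:  {(random_errors + common_bias).mean():.5f} C")
```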

  55. David Springer

    answer to previous question “When is a yardstick not a yardstick?”

    When it’s a treemometer!

    HAHAHAAHAHAHA – I kill me sometimes.

  56. Lauri Heimonen

    Judith Curry:

    ‘The Committee requested submissions that address the following questions’; an excerpt:

    “Is the IPCC process an effective mechanism for assessing scientific knowledge? Or has it focussed on providing a justification for political commitment?”

    The IPCC was founded, 25 years ago already, to ‘assess scientific knowledge’ in order to ‘provide a justification for political commitment’ to AGW. At present the first priority must be to ask why there is no proper evidence that the recent warming is caused by anthropogenic CO2 emissions. As an answer, e.g. in my comment http://judithcurry.com/2013/12/16/how-far-should-we-trust-models/#comment-426153 I have stated: ”In the comment of mine http://judithcurry.com/2013/12/13/week-in-review-8/#comment-425381 I have proved that the mere recent increase of anthropogenic CO2 emissions can raise the CO2 content of the atmosphere by only 0.005 ppm per year. Instead, the recent total CO2 increase in the atmosphere has been about 2 ppm, where, according to natural laws, the anthropogenic share is 0.08 ppm; the total increase of CO2 content in the atmosphere has been caused by warming of the global sea surface, especially in the areas where sea-surface CO2 sinks are.” Already this can prove that AGW based on anthropogenic CO2 emissions is practically nonexistent.

    In addition to the increasing trend of recent CO2 content in the atmosphere, according to my interpretation, empirical observations prove that the trends of CO2 content in the atmosphere have followed temperature during the last century, during the glacial and interglacial eras, and during the last 100 million years. For instance, the CO2 content of the atmosphere during the Cretaceous Period was about four times the present content, which agrees with assessments calculated using the equation claimed by Lance Endersbee, ”Oceans are the main regulators of carbon dioxide” http://www.co2web.info/Oceans-and-CO2_EngrsAust_Apr08.pdf , and http://jennifermarohasy.com/blog/2009/10/lance-endersbee-1925-2009-civil-engineer-academic-scientific-sceptic-mentor .

    My slogan: as to complex climate problems, as a metallurgist I try to avoid being analogous to an alchemist.

  57. Pingback: Nic Lewis’s submission to the UK government | And Then There's Physics

  59. Pingback: Weekly Climate and Energy News Roundup | Watts Up With That?

  60. Asking for outside submissions is something Parliament should have done before it passed the Climate Change Act. Much before.

  61. Pingback: UK Parliament: IPCC 5th Assessment Review | Climate Etc. | Gaia Tahoe

  62. Pingback: Review of submissions to IPCC Inquiry | The IPCC Report

  63. Pingback: Death(?) of expertise | Climate Etc.