Week in review – science edition

by Judith Curry

A few things that caught my eye this past week.

The Case for Teaching Ignorance. [link]

Whither field hydrology? Need for discovery science & outrageous hypotheses [link]

Crucial ocean acidification papers come up short. [link]

Metal poisoning of plankton perhaps triggered 2nd largest mass extinction event 445 Ma ago [link]

New paper: ENSO cycles of 2000, 565, 65 and ∼22 years [same as the 22-year solar geomagnetic cycle] [link]  …

New paper shows large rise of solar geomagnetic activity over the 20th century [link]

New paper: Robust reef growth in Indian Ocean kept up with sea level rise > 20X faster in past  [link]

Analysis on Chilean megafaunal extinction causes. [link]  …

Great Barrier Reef species more likely to face extinction from climate change than other marine life [link]  …

Determining the likelihood of pauses and surges in global warming [link]  …

“Why does industry fund some of my research?” @WyoWeeds explains:[link]

We found only one-third of published psychology research is reliable – now what? [link] …

University of Florida reallocates Monsanto donation after public threats to Prof @KevinFolta. [link]

Bioethicist Alice Dreger resigns her faculty position at Northwestern over censorship [link]

Climate change: think risk not fact.  “Shun the shoehorn of consensus” [link]

Concerns mount over whale deaths in Gulf of Alaska [link]

The rockfall physics of Wile E. Coyote and the Road Runner [link]

UCSD Scripps gets $5 million to fight climate change [link]  …

Sea-air CO2 exchange in the western Arctic coastal ocean [link]

Walter Munk, ‘Einstein of the Oceans,’ at 97 [link]

#GMO scientist Kevin Folta receives apologies from PLoS [link]

In defense of Stigler (pursuant to Feynman integrity post) [link]

My interview on The Space Show [link]

JC note:  The big issue this week has been the 10th anniversary of Hurricane Katrina.  I am planning two posts (hopefully the first will be ready on Monday).

I just finished a demanding project with a short fuse, breathed half a sigh of relief, then got blindsided by something new (stressful but potentially interesting).  So I am losing the battle at the moment to keep up with things, but I will try to get the Katrina posts out next week.

301 responses to “Week in review – science edition”

  1. Pingback: Week in review – science edition | Enjeux énergies et environnement

  2. Also, sea-level rise rate accelerating according to the latest NASA announcement. Since 1992, it is 50% larger than the 20th century average.
    http://www.huffingtonpost.com/entry/global-sea-levels-climbed-3-inches-since-1992-research-shows_55df8216e4b0c818f61756ba

    • How does NASA know what sea level rise was throughout the 20th century? Tide gauges are not that accurate, because they do not measure sea level (the sea not being level), nor are they a representative sample of the globe. Plus there is pesky tectonics, with plates rising and falling locally and all that. Oh, and how do they compute the global average from this convenience sample? By adjusted area averaging?

      Sounds like more false accuracy to me. We are talking about mm per year, right, globally, in say 1905?

      • Curious George

        They seem to say “the sea level rise as measured by satellites since 1992 is much higher than what tide gauges show.” Possible, but then who is right?

      • David Wojick

        Given the fundamental change in methods the change may well be an artifact. Moreover, trying to estimate global average sea level using tide gauges is a rough guess at best. It is another surface statistical model, except we know the gauges are inaccurate. I used to live in the Seattle area, with 10 foot tides. The tide levels can vary by a foot from one day to the next just due to variations in wind.

    • Look up “Fernando’s Sea Level Rise” by me. I downloaded sea level data into Excel and made a simple graph. Based on that graph, it looks to me like the recent NASA articles are part of the U.S. Regime’s propaganda campaign. They saw the slight acceleration caused by El Niño and cherry-picked the data.

    • Curious George

      I like Jim’s selection of scientific sources. I’ll try it for myself.

    • Nothing that a little “adjusting” can’t fix

      “…over the last decade a slowdown … of about 30%, has been recorded. It coincides with a plateau in Earth’s mean surface temperature evolution, known as the recent pause in warming…We find that when correcting for interannual variability, the past decade’s slowdown of the global mean sea level disappears ”

      Mission accomplished.

      http://www.nature.com/nclimate/journal/v4/n5/full/nclimate2159.html

  3. Link appears wrong for: Bioethicist Alice Dreger resigns her faculty position at Northwestern over censorship [link]

    • That threw me as well. However, the link is interesting and appears to be somewhat connected. Reminds me of the Jack Nicholson contention in A Few Good Men: “You can’t handle the truth!”

  4. Dreger’s reasoning seemed goofy. Especially when there’s a perfect reason for more kids. Affluent middle-class Westerners are dying. The world will be third world in a century.

    (Slightly dramatic but agrees with Buchanan’s work completely)

  5. “Determining the likelihood of pauses and surges in global warming…”

    The continued saga of internal variability nonsense. The oceanic modes act as amplified negative feedbacks to solar wind conditions.

  6. Judy,

    Trying to stay in the belly of the news cycle, while remaining well-informed and relevant, is a Sisyphean challenge not meant for the faint of heart. I have scars all over my passport, so I know a few things about which I speak.

    Although, you seem to be a “prepared spirit,” as Freud put it.

    The Word Merchant

  7. Judith, do your thing. We denizens can wait. You are MUCH more important.

  8. With regard to the Katrina issue, I would like to mention that I looked at the total annual ACE index for all hurricanes 1945–2014 and did not find any trends. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2630932

  9. The “Climate change: think risk not fact” by Joe Smith opens with a quote by Tamsin Edwards at the PLOS blog (Sept 2014): “That pause in warming of the atmosphere surprised the media and public, even though scientists always expected this kind of thing could happen in the short term.”

    Is this correct, or myth-making? Are there mentions of the pause, casual or in the literature, before the pause? I have found none.

    There is a discussion of this in “Pause for thought” by Ed Hawkins, Tamsin Edwards & Doug McNeall in Nature Climate Change, March 2014. They mention that the literature and IPCC discussed natural variability, and that the models showed “decadal periods of cooling embedded within long-term warming…” — but give no examples saying that a pause was possible. In fact they say the opposite:

    Although several papers have estimated the probability of a pause, they were published after it had started. Also note that the IPCC has not included a clear statement of the chance of a slowdown in any of its SPMs.

    The first mentions of a possible pause in warming were greeted with strong denial — not “yes, we know that’s possible”. That suggests that Edwards is wrong, and climate scientists had not “always expected this kind of thing could happen”.

    • ‘Is this correct, or myth-making? Are there mentions of the pause, casual or in the literature, before the pause? I have found none.’

      It is simply a consequence of known variability. Many different things could happen, and there is no reason why anyone would speculate about this particular form in advance. There was a pause in the Dow Jones index for the first decade of this century – a good deal more convincing than the “pause” in GMST. Plenty of people would have thought that could happen, but I doubt you’ll find anyone predicting in advance exactly that phenomenon.

      • Nick,

        Your comment is so confused it’s worth analysis.

        (1) “It is simply a consequence of known variability…. a good deal more convincing than the “pause” in GMST.”

        First you say it exists, then you say you’re not convinced it exists.

        (2) “Many different things could happen, and there is no reason why anyone would speculate about this particular form in advance.”

        So you believe Edwards is incorrect and that a pause was not expected.

        (3) “There was a pause in the Dow Jones index for the first decade of this century”

        Of what relevance is this? The Dow Jones Index traded between 53 and 103 from its creation in the mid-1880s until WWI. There was no widespread belief back then that stocks tended to rise.

        (4) “Plenty of people would have thought that could happen…”

        Can anyone identify that verb tense? Or how Nick knows what climate scientists “would have thought” in the past?

      • “First you say it exists, then you say you’re not convinced it exists.”
        I’m not convinced it is rightly called a pause. But the behaviour of temperatures in the last decade or so is within the range of known variability.

        “So you believe Edwards is incorrect and that a pause was not expected.”
        For an editor, you pay very little attention to getting things right. TE did not say a pause was expected. She said “scientists always expected this kind of thing could happen in the short term”. Just like the myriad things that could happen to temperature (or the Dow) in the next decade.

        “Can anyone identify that verb tense? “
        In describing the Fabian language, they call it a perfect subjunctive.

      • “So you believe Edwards is incorrect and that a pause was not expected.”

        Another example of why this is so silly. Suppose a new wage deal – every week your boss tosses a coin. Heads, you get paid, tails not. Suppose there is a pause – four weeks no pay. So you ask Tamsin who says
        “scientists always expected this kind of thing could happen in the short term”
        It could. That doesn’t mean someone should have been predicting that particular lean period.

      • Fabius,

        Your comment is so confused it’s worth analysis.

        No, it isn’t. Try reading it again, slowly this time.

      • Nick,

        I’ll try this once more. Point 5 is the important one. The first 4 illustrate the futility of the debate, as you wildly toss chaff in every direction.

        (1) “First you say it exists, then you say you’re not convinced it exists.”
        Your reply: “I’m not convinced it is rightly called a pause.”

        It exists, it doesn’t exist, you don’t like its name. You’ve covered all the bases!

        (2) “But the behaviour of temperatures in the last decade or so is within the range of known variability.”

        Yes, that’s a logical explanation of the pause, well-supported by the literature. But it is an explanation of the pause, not an argument about its existence. Conflating these two questions doesn’t help.

        (3) “TE did not say a pause was expected. She said “scientists always expected this kind of thing could happen in the short term”.

        And the difference is …?

        (4) “In describing the Fabian language, they call it a perfect subjunctive.” {about “Plenty of people would have thought that could happen…”}

        No. It’s like a third conditional, but substituting a subjunctive for the initial IF: Would + past perfect subjunctive | would + infinitive.

        It’s a guess about people’s thoughts in a hypothetical (i.e., didn’t happen) past situation. Impressive mind reading about a fictional past!

        (5) “every week your boss tosses a coin.”

        Like your previous analogy to the stock market, this misrepresents the situation. The reaction to early reports of the pause shows that many scientists did not regard a pause in warming as 50-50 chance, like a coin toss — but rather an extremely low probability outcome. Hence no prior mention of it and the pushback to early reports of it.

        The pause was an unexpected outcome, hence its value — and the scores of papers debating its likely causes.

        Thomas Kuhn believed that predictions were the gold standard test for a paradigm. Prediction of the pause would have been powerful validation of the current climate science paradigm. But, despite assertions by skeptics, failure to make such predictions does not refute it.

        The real significance of the pause is, imo, as evidence of the dysfunctional nature of the public policy debate about climate change. Skeptics make bizarrely broad assertions about the pause. Activists ignore scientists and deny the pause exists. It’s all chaff.

        Tolerating this nonsense is a luxury we can no longer afford.

      • Fabius,

        Tolerating this nonsense is a luxury we can no longer afford.

        Yes, that’s why I suggested that you read it again, slowly this time. However, if you want to continue spouting nonsense, there’s probably not much I can do to stop you.

      • Poor kenny. Editor is wiping the floor with nicky.
        CO2 is supposed to be the control knob. Ex post facto excuses for the pause, including blaming it on natural variability are amusing. Your alarmist crowd weren’t making silly analogies with the stock market and flipping coins back in 1998, when you had the big El Nino to wave around. The story was we were headed for imminent disaster. Nobody said anything about natural variability overwhelming the control knobbery of ever increasing CO2. And you people wonder why your credibility with the folks is so low.

        http://data.myworld2015.org/

      • Editor is wiping the floor with nicky.

        I’m pretty sure he thinks he is. The truth, however, is probably very different. Our Editor should probably consider the possibility that reading a few books on philosophy doesn’t one a philosopher make. What I suspect our good Editor is really doing is illustrating that he can’t really be bothered reading what others have said and giving it some thought before responding. He’s probably lucky that Nick is far too polite to really say what he thinks about our good Editor, and will probably simply stop wasting his time discussing this with someone as obtuse as our good Editor. I should probably do the same. There’s only so much time one can spend discussing this topic with someone who understands it as badly as our good Editor does, and who clearly has no great interest in gaining a better understanding.

      • A prediction from 2008, from the article:

        However, records of past climate changes suggest an altogether different scenario for the 21st century. Rather than drastic global warming at a rate of 0.5 °C (1 °F) per decade, historic records of past natural cycles suggest global cooling for the first several decades of the 21st century to about 2030, followed by global warming from about 2030 to about 2060, and renewed global cooling from 2060 to 2090 (Easterbrook, D.J., 2005, 2006a, b, 2007, 2008a, b; Easterbrook and Kovanen, 2000, 2001). Climatic fluctuations over the past several hundred years suggest ~30 year climatic cycles of global warming and cooling, on a general rising trend from the Little Ice Age.

        http://www.globalresearch.ca/global-cooling-is-here/10783

      • Nick Stokes: It is simply a consequence of known variability. Many different things could happen, and there is no reason why anyone would speculate about this particular form in advance.

        Most of this “known” variability was not acknowledged to be at work until after the hiatus was well underway. Even now, the mechanisms of the known variability are only partly “known” (see the droyspencer link provided by David Wojick), or quantified.

        Plenty of people would have thought that could happen,

        A large number of supportive quotes from Hansen, Schneider, Ehrlich, Holdren, Mann, Schmidt and others advising of likely slowdowns in the warming rate would be helpful on this point; quotes from before 2000. The people who “would have thought that could happen” appear to have been completely silent on the possibility until after the hiatus in surface warming was well under way.

      • Oy.

        “Dr. LATIF: Exactly. So – and this is the reason, because we have the short-term climate fluctuation, therefore, it doesn’t make sense to look at short periods to assess the human impact on climate. So you have to consider several decades. Only then you see basically the long-term warming trend, and therefore, we can’t really draw any inferences from this hold in the last 10 years or so, you know, with regard to global warming.”

        NOVEMBER 22, 2009

        http://www.npr.org/templates/story/story.php?storyId=120668812

      • Nick may be correct; it may not be a pause, it may be a peak!

      • “appear to have been completely silent on the possibility”
        Like the Editor, you seem to have no idea about stochastic outcomes. Take the coin example. At any point in time, the probability of the next 4 tosses showing tails (TTTT) is 1/16. So is the probability of THTH, or HHHH. Saying that TTTT is possible is no kind of prediction, and you can’t write papers about it. And there’s no point in saying afterward, well, if you knew it was possible, why didn’t you say it was going to happen? At the time, it was a 1/16 chance. The fact that with hindsight it happened doesn’t change that. All you could have said at the start was that it was a 1/16 chance, as likely as HTTH, no more, no less.

        And suppose you do get a T as the first of the sequence and someone says, well, why were you then reluctant to predict a TTTT pause? There is (would have been) still only a 1/8 chance.

        You won’t find 10 year old predictions of a 2015 El Nino either. But it was always a possibility.
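The 1/16 and 1/8 figures in the comment above are easy to check numerically. A minimal sketch (hypothetical illustration added here, not part of the original thread; `prob_of_sequence` is an invented name):

```python
import random

random.seed(0)  # reproducible runs

def prob_of_sequence(target, trials=100_000):
    """Estimate the probability that a run of independent fair coin
    tosses matches one specific target sequence, e.g. 'TTTT'."""
    hits = sum(
        "".join(random.choice("HT") for _ in range(len(target))) == target
        for _ in range(trials)
    )
    return hits / trials

# Any particular 4-toss sequence -- TTTT, THTH, HHHH -- has the same
# 1/16 = 0.0625 probability; none is specially worth predicting.
for seq in ("TTTT", "THTH", "HHHH"):
    print(seq, round(prob_of_sequence(seq), 3))

# After the first tail has already occurred, the remaining three
# tosses still leave only a 1/8 chance of completing TTTT.
print("TTT", round(prob_of_sequence("TTT"), 3))
```

Saying afterward that TTTT “was always possible” is correct but is not a prediction; before the tosses, it was exactly as likely as any other specific sequence.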

      • Jim D: There were some early attempts at cautioning against cherry-picking starting dates for trends, but that fell on deaf ears. At that point they could see they were up against the statistically illiterate, and what’s the point. None of the skeptics joined in on the statistics argument against cherry-picked start dates and significance, so here we are.

        Does that address something in the threads above it, and if so what?

      • Don’t be silly, kenny. Leave that to your minion, little willy.

        The alarmist story back in 1998 was that CO2 controls climate. Natural variability was out. Now it’s back in style to explain the pause with silly analogies to the stock market and coin flipping. We are not buying that crap, kenny.

      • Nick Stokes: Like the Editor, you seem to have no idea about stochastic outcomes.

        Before the pause was well underway, what climate scientists informed the Congress or the IPCC or anybody that Hansen’s 1988 and other warnings of a constant (or possibly increasing) rate of temperature increase through 2020 might be wrong? Given the truth of your argument about stochastic outcomes (which have been elaborated with provisions for autocorrelation and in other ways), why was the existence of the pause in surface warming so strongly denied at first?

      • Don Monfort | August 29, 2015 at 1:40 pm |
        Now it’s back style to explain the pause with silly analogies to the stock market and coin flipping.

        So little is known about actual real live climate physics that flipping a coin may be the best way of guestimating what the climate may be in the future.

      • Klimate Konsensus scientists behave like a cat falling off a table, looking up at you with that “I meant to do that look”.

      • “Hansen’s 1988 and other warnings of a constant (or possibly increasing) rate of temperature increase through 2020 might be wrong”

        Hansen did not project a constant rate of temperature rise through 2020. His 1988 projection is here:

        You can see that in scenario A, for example, there is actually a “pause”, much like the recent one, between 1998 and 2009. B has a pause between 2010 and 2019. But these are random perturbations. GCM’s like Hansen’s generate essentially random weather, but respond to forcings as shown in the scenarios. Different runs will produce different events, as the published spaghetti graphs show.

      • Nick

        I have already referenced that graph. It is scenario B that most closely matches emissions. The pause or dip is very minor. Essentially it predicts a sharp rise, and should be read in conjunction with his 1988 congressional testimony and, of course, his paper with his colleague, previously cited.

        Tonyb

      • Nick

        I have just re-read Hansen’s congressional testimony and his scenario definition.

        As you can see, Scripps reckons that emission rates have accelerated, and we should therefore be looking at scenario A, not B, when looking at Hansen’s prediction

        https://scripps.ucsd.edu/programs/keelingcurve/2015/02/12/is-the-rate-of-co2-growth-slowing-or-speeding-up/

        Not much sign of a pause.

        Tonyb

      • Tonyb
        “Not much sign of a pause”
        Well, here’s reality:

        Also not much sign of a pause.

        I’m not concerned here about which scenario is best. I’m just showing that they all have variations in the rate of rise, just as measured temps do. Of course, now there are so many GCM runs that we can gather statistics on pauses, almost pauses etc. That doesn’t mean they are predictable. GCM’s get climate trends from random weather responding to forcing.

        However, RealClimate did post Hansen’s scenario numbers a few years ago. Scenario B matched 2012 CO2 ppm almost exactly.

      • Nick

        Here is the Hansen congress testimony again

        http://climatechange.procon.org/sourcefiles/1988_Hansen_Senate_Testimony.pdf

        Please refer to page 48 and also the keeling curve I linked to showing increasing emissions

        The result is that it shows a continuation of the previous 20 years trend ( prior to 1988) which Hansen himself describes as scenario A.

        Also I note you have posted a land and ocean index. Hansen was referring to land only.
        tonyb

      • > Thomas Kuhn believed that predictions were the gold standard test for a paradigm.

        A quote might be nice.

        ***

        > Prediction of the pause would have been powerful validation of the current climate science paradigm.

        Is that notion of paradigm validation in Kuhn too?

        ***

        Telling how Da Paws is related to Da Paradigm might be nice.

        Identifying Da Paradigm might be nicer.

      • > Hansen was referring to land only.

        And Nick was referring to reality.

        ***

        > The result is that it shows a continuation of the previous 20 years trend (prior to 1988) which Hansen himself describes as scenario A.

        Comparing 1999 to 2003 in scenario A undermines this result. Even 2006-2007 looks quite flat.

        ***

        I note that TonyB is referring to scenario A only.

      • Tony,
        Many words are wasted on trying to interpret which scenario Hansen favored when etc. Only two things count. What numbers did he actually use in the scenarios for the calculation shown? And what numbers do we have today?

        The actual numbers are in files here. Scenario A has 410 ppm in 2015. Scen B has 406, and Scen C has 369.5. The differences between A and B mainly lie elsewhere – B allowed for a volcano (much like Pinatubo), and of course there are other gases, including CFC’s, which were still being emitted in 1988. Both A and B overshot a bit on CO2.

      • stevenreincarnated

        AR1 addressed what would happen should the Montreal Protocol be ratified. The only excuse that can be used logically for the failed projections is that they didn’t have a firm grasp of atmospheric chemistry back then. Hardly a resounding endorsement of the state of the science at that time. It may or may not be better now; only time will tell.

      • “AR1 addressed what would happen should the Montreal Protocol be ratified. “
        Hansen submitted his paper in Jan 1988. The AR1 came out in 1990. The Montreal protocol was agreed in Sept 1987, and not ratified by US until Apr 1988. When you design a scenario, you have to use the best information at the time. Assuming CFCs would just go away would have been optimistic. Anyway, as said above, it’s no use second guessing. What matter are the numbers used in the calc, and what actually happened as time unfolded.

      • Nick Stokes: Anyway, as said above, it’s no use second guessing. What matter are the numbers used in the calc, and what actually happened as time unfolded.

        Nick Stokes: It is simply a consequence of known variability. Many different things could happen, and there is no reason why anyone would speculate about this particular form in advance.

        I think we would all be better off now if the many scientists who would have known of the possibility of an epoch of non-warming had explained this as clearly right after Hansen’s congressional testimony as they are explaining it now. Meanwhile, the scientists who confidently predicted warming through the first 2 decades of the 21st century are even more confident now in their long-term expectation of warming than they were when they made their now-falsified expectations of short-term warming known. “Would” they now acknowledge that natural variability might produce another 15 years of non-warming, or are we stuck with every short-term uptick being more evidence of long-term warming, and every downtick being random variation?

        We should not be “second-guessing” Hansen. All we should note is that the model he used with the CO2 concentration change that has occurred modeled a higher rate of temperature increase than has occurred. Because he was wrong we know that he used too high an estimate of sensitivity. Does he now know the correct value for sensitivity, and does it accurately model the past 20 years when given the actual CO2 trajectory?

        Lastly, my oft-repeated question: prior to the pause, who alerted the public that, contrary to Hansen’s public warnings, his predictions for the CO2 trajectory we have experienced might be too high because of not accounting for random variability?

      • “that his predictions for the CO2 trajectory we have experienced might be too high because of not accounting for random variability?”

        That is just completely muddled. Hansen adopted scenarios. One, C was quite low. The fact that there are several clearly indicates uncertainty. Not so much about random variability, but human choices about emissions.

        If you meant predictions for temperature, Hansen had plenty to say in his paper. Eg:
        “Interpretation of Figure 3 [the one above] requires quantification of the magnitude of natural variability in both the model and observations and the uncertainty in the measurements. As mentioned in the description of Figure 1, the standard deviation of the model’s global mean temperature is 0.11°C for the 100-year control run, which does not include the thermocline.”

        “The model predicts, however, that within the next several years the global temperature will reach and maintain a 3σ level of global warming, which is obviously significant. Although this conclusion depends upon certain assumptions, such as the climate sensitivity of the model and the absence of large volcanic eruptions in the next few years, as discussed in Section 6, it is robust for a very broad range of assumptions about CO2 and trace gas trends, as illustrated in Figure 3.”

        etc etc

      • Nick

        Yesterday we were discussing the accuracy of Dr Hansen’s 1988 prediction of temperature and the ‘pause.’ Here is the original Hansen 1988 document

        http://climatechange.procon.org/sourcefiles/1988_Hansen_Senate_Testimony.pdf

        and the scripps co2 data.

        https://scripps.ucsd.edu/programs/keelingcurve/2015/02/12/is-the-rate-of-co2-growth-slowing-or-speeding-up/

        In the 1988 paper, on page 48, Hansen seems to say quite clearly that scenario A is based on a continuation of the previous 20 years trend of co2 to 2020. The scripps data shows this was exceeded.

        Hansen cites a reference 1 in his 1988 testimony relating to a giss 3D model of 1988. It is here. Unfortunately it is pay walled so only parts can be viewed

        http://pubs.giss.nasa.gov/abs/ha02700w.html

        In your 6.35 you linked to co2 data (from climate audit) which you said directly related to this 1988 paper.

        As I am unable to view the original data for myself, you merely need to confirm your 6.35 was directly taken from the data used in Hansen’s 1988 paper, that was compiled at the time and comes directly from reference 1 (paywalled), and I will obviously accept this in good faith.

        Can you confirm the climate audit material is identical and contemporary to the 1988 paper? Thanks

        tonyb

      • Tony,
        The data was posted at RealClimate by Gavin Schmidt some years ago, and Gavin believed that it was the data associated with the 1988 paper. I can confirm that the CA files are a copy of that posted data. Hansen was still head of GISS at the time, so I presume Gavin’s belief was based on word from Hansen. Unfortunately, I can’t now locate the RC reference.

      • Tony,
        Here is a data file at RealClimate that is attributed to Hansen et al 1989. The numbers are slightly different, but still support my argument. In fact Scen B there has 398 ppm CO2 in 2015, which is very close. I think this is the file I originally saw, so the provenance of the CA version is a little less clear.

      • Tony, ps although that file says it relates to Hansen 1989, the filename is H88_scenarios.dat.

      • Nick,
        If memory serves me right, Hansen’s model also had an ECS of 4.2C which is near the upper end of the probable range.

      • Nick

        I accept your good-faith understanding of the situation, which is confirmed by your careful and circumspect phrasing (‘attributed to’ and ‘believed’) and your uncertainty about the CA data provenance. These are hardly ringing endorsements that the data, mostly unavailable to view in Hansen’s paywalled 1988 paper, is the same as that posted by RealClimate and CA.

        I think it would be best if we can unequivocally confirm the data is one and the same, and to do that I will need to read the full version of the paywalled article.

        thanks for your help

        tonyb

      • Tony,
        I don’t think it’s paywalled. It’s here. I’ve also checked and in Section 2 it does indeed say that the Equilibrium Sensitivity for this model is 4.2C.

      • ATTP

        Thanks very much for helping, but I cannot scroll all the way down before there are gaps and then the PDF seizes up, preventing further scrolling. In the abstract it asks for $6 in order to rent it, so I had assumed this was deliberate.

        If you have the time can you see if you can scroll ALL the way through to the end on a non university computer as you will no doubt have subscriptions in place which might enable you to see more than I can!

        Once again, thanks for your help

        tonyb

      • Tony,
        I’m at home and not running a VPN and it works fine for me. No idea why it wouldn’t for you as this seems to be a NASA archive, not from the journal itself.

      • Official responses to the Climategate emails that surfaced six years ago, in late Nov 2009, helped me write this Sequel to Climategate

        https://www.researchgate.net/publication/273132711_Sequel_to_Climategate

      • @tonyb…

        It’s a PDF, but appears to be simply a set of full-page images, often the case for older documents. If your trouble reading it is in your browser, you might try saving it off as a file, and pulling it up in Adobe Acrobat, if the computer you’re working on supports it.

      • Attp and Ak

        Thanks for your help. I do run adobe but now I know that it is not paywalled I will try and access it from a different computer.

        tonyb

      • Weird – works just fine for me.

      • From the chart Nick posted, the temp rise since 1980, according to Hansen A, should have been around 1.5 C. Even using HADCRUT4 global mean, it’s been only half that. I suppose you can argue that’s due to the random component in Hansen’s model and natural variability in the real climate, but policy makers would probably assume, or even be told by some climate scientists, that this is the temperature rise to be expected.

        http://www.woodfortrees.org/plot/uah/from:1980/plot/hadcrut4gl/from:1980

      • Nick Stokes: Hansen adopted scenarios.

        “Adopted”! That’s a good one.

        They were scenarios! How often do people like me have to be reminded that they were only scenarios!? No cause for alarm, or redirecting trillions of dollars of investments, except maybe into movies as exciting as “Mission Impossible” or “Raiders of the Lost Ark”. Do the people attending the upcoming Paris Conference understand that these are nothing but scenarios? Have you explained that to our commentators Friend of More Discourse and John Sidles? How about Dr Marcia McNutt, editor of Science Magazine, who asserted that the time for debate is over — has she accepted that these are only scenarios?

        Nevertheless, the temperature “scenario” for the CO2 “scenario” that most closely approximates the actual CO2 trajectory since then overpredicted the temperature. The “scenarios” that included an 18-year pause in the surface warming were unaccountably omitted from his Congressional testimony.

        You would torture the language rather than admit that the forecasts for the early 21st century were wrong because the natural variability was so poorly known. “[We have to act by 2012, because after that will be too late!]” That message of alarm was based on inaccurate modeling built on incomplete knowledge.

      • Tony,
        More info on the original scenario numbers. I think the file at RC does, as it says, relate to Hansen 89. That would be this paper, on the effect of CFCs. Table 2 in that paper has a selection of numbers from the scenario, and they align with that file.

        I think the numbers at CA are likely from Hansen 88. They are similar but not identical to the RC numbers. You might ask why they differ for papers just one year apart. But as Hansen says in the ’88 paper, those calculations started in 1983, with Scenario A results reported in 1984. So I would expect that the scenarios would have been recalculated with updated data for the 1989 paper.

      • “How often do people like me have to be reminded that they were only scenarios!? “
        Yes, I sometimes wonder about that. Very many, it seems.

        Most science relates to scenarios. If you jump out of a plane, how long does it take to reach the ground? What if? Science can do that calc, but it can’t predict whether you will jump out of a plane. If you put a whole lot of CO2 in the air, how much will it warm? Science can calculate that, given how much you put there. That last, it can’t predict.

        And of course, the people in Paris understand that. They are trying to influence how the scenario will unfold. If science knew unconditionally how temperatures would evolve, they could all stay home.

      • stevenreincarnated

        Nick, it doesn’t just matter what the final numbers are. It matters how the numbers got to where they were. A poor understanding of the carbon cycle and a poor understanding of atmospheric chemistry are not to be dismissed out of hand as unimportant. Why are we even arguing about this? It is clear he was wrong; both sides agree on this. One side says he was nonetheless right, because if we change this and this and this it makes him right. They are first assuming it is a given that they are right, and second, no: if you have to change everything to make something right, then it wasn’t right to begin with. Wouldn’t it be better just to say he was wrong, here is why, and now we think we are better? It wouldn’t look so much like spin that way.

    • It is almost inevitable, after an El Nino like 1998 that was 0.3 C above the trend line, that a long “pause” starting from it would ensue. That perturbation was worth two decades of the warming rate. The “pause” was always constructed with the 1998 step at one end, and is simply invisible if you take 1998 out.
      http://www.woodfortrees.org/plot/gistemp/from:1998.5/mean:12/plot/gistemp/from:1970/to:1997.5/mean:12
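
      The start-date sensitivity described above can be sketched with a toy series (synthetic numbers, not real data): a steady 0.015 °C/yr warming plus a one-off +0.3 °C excursion in 1998, with the least-squares trend fitted from different start years.

```python
import numpy as np

# Toy illustration only (synthetic data, not observations): a steady
# 0.015 degC/yr trend plus a one-off +0.3 degC spike in "1998".
years = np.arange(1970, 2015)
temps = 0.015 * (years - 1970)
temps[years == 1998] += 0.3  # El Nino-like excursion

def trend_from(start):
    """Least-squares warming rate (degC/yr) for years >= start."""
    m = years >= start
    return np.polyfit(years[m], temps[m], 1)[0]

print(round(trend_from(1970), 4))  # ~0.0152, close to the underlying rate
print(round(trend_from(1998), 4))  # ~0.0091: notably flatter, spike at the start
```

      Starting the fit at the spike year flattens the fitted trend substantially, even though the underlying warming rate never changed.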

      • Jim,

        “It is almost inevitable … that a long “pause” … would ensue”

        If only you had written that in 1999! But that doesn’t address Edwards’ statement.

        “The “pause” was always … and is just invisible if you…

        I don’t understand your point. Do you believe the climate scientists who wrote the scores of papers about the pause/hiatus (especially those exploring its causes) were mistaken? Or they were wasting their time?

      • They were addressing the skeptosphere who had been taking the “pause” seriously and in a rising chorus as long as it was not addressed. As you see, it hangs on 1998, and climate is never about one year. Or half a year.
        http://www.woodfortrees.org/plot/gistemp/from:1998.5/mean:12/plot/gistemp/from:1970/to:1998/mean:12

      • This has been linked to on Climate Etc. about a billion times, but hey, what harm is there in one more?

        Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.”

      • richardcfromnz

        Jim D, you have identified a non-MDV spline in GISTEMP that passes through 2015. Similar exists in HadCRUT4.

        The spline is central to this sequence:

        1895 – neutral
        1910 – MDV maximum negative (-ve)
        1925 – neutral
        1940 – MDV maximum positive (+ve)
        1955 – neutral
        1970 – MDV maximum negative (-ve)
        1985 – neutral
        2000 – MDV maximum positive (+ve)
        2015 – neutral
        2030 – MDV maximum negative (-ve)

        This is in respect to HadCRUT4: http://www.woodfortrees.org/plot/hadcrut4gl

        Now see relevant model mean vs observations comparisons:

        IPCC AR5 WGI Figure 10.1 (a)

        Model mean vs GMST (HadCRUT4)

        Model mean trajectory:

        1895 – from below (Yes, valid)
        1925 – from above (Yes, valid)
        1955 – from below (Yes, valid)
        1985 – from above (No, invalid)
        2015 – from below (No, invalid)

        Obviously the CO2-forced model mean does not pass through the MDV-neutral spline after 1955, the trajectory becomes much steeper. This is highly problematic. It implies that CO2 does not drive the secular trend in GMST after MDV is removed.

        More in page 2 of comments at Climate Conversations Group:
        http://www.climateconversation.wordshine.co.nz/2015/08/fatal-deficiencies-destroy-scientific-case-for-climate-catastrophe

      • richardcfromnz

        The respective ST and MDV signals referred to above are as per this paper Macias et al (2014):

        ‘Application of the Singular Spectrum Analysis Technique to Study the Recent Hiatus on the Global Surface Temperature Record’
        Diego Macias, Adolf Stips, Elisa Garcia-Gorriz (2014)
        http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0107222#pone-0107222-g005


        Figure 1. SSA reconstructed signals from HadCRUT4 global surface temperature anomalies.

        The annual surface temperature (gray line), multidecadal variability (MDV, blue line), secular trend (ST, red line) and reconstructed signal (MDV+ST, black line) are indicated. ST represents 78.8% of the total energy of the series; MDV accounts for 8.8% of the energy and the reconstructed signal for 88%. The dashed thin red lines indicate the range of variability of the ST obtained by applying SSA to the temperature time series obtained for each individual month.
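
        For readers unfamiliar with the technique, SSA separates a series into components via an SVD of its lagged trajectory matrix. A minimal sketch (my own illustration, not the Macias et al. code):

```python
import numpy as np

# Minimal SSA sketch (illustrative only): embed the series in a lagged
# (Hankel) trajectory matrix, take its SVD, and rebuild chosen components
# by averaging over anti-diagonals.
def ssa_reconstruct(x, window, components):
    n = len(x)
    k = n - window + 1
    # Columns are successive length-`window` lags of x
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Sum the chosen rank-1 terms of the SVD
    Xr = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in components)
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for col in range(k):               # diagonal averaging (Hankelization)
        rec[col:col + window] += Xr[:, col]
        cnt[col:col + window] += 1
    return rec / cnt

# A trend-plus-oscillation series: the leading component tracks the trend,
# much as the ST signal dominates the MDV signal in the paper's Figure 1.
t = np.linspace(0, 1, 200)
x = 2.0 * t + 0.3 * np.sin(2 * np.pi * 8 * t)
trend_est = ssa_reconstruct(x, 50, [0])
```

        Summing all significant components recovers the original series; picking a few leading ones isolates the slow trend, which is how the ST/MDV split above is obtained.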

      • richardcfromnz

        European Commission’s Joint Research Centre (JRC) report of Macias et al (2014) had this accompanying graph:


        HadCRUT4 Temperature Anomaly 1850-2013 (°C) (blue and red bars). Secular trend (red line), multidecadal variability (green line) and reconstructed signal from the statistical analysis (black line). Hiatus periods are indicated with grey bars in the x-axis.
        Credit: © EU, 2014

      • catweazle666

        Jim D: “As you see, it hangs on 1998, and climate is never about one year.”

        Utter absolute drivel.

        The “pause” was visible as a least squares linear regression trend line.

        As usual, you demonstrate you have absolutely no idea what you’re wittering about.

      • We can also overlay CO2 to get some idea of the sensitivity related to this trend. If these lines are parallel, the transient sensitivity is 2.4 C per doubling (1 deg C per 100 ppm in this range). Skeptics are still unconvinced that these trends are related while the AGW theory explains it rather well with their typical mid-range transient rates.
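
        The “1 C per 100 ppm implies ~2.4 C per doubling” equivalence can be checked directly; the 300–400 ppm interval below is my assumption standing in for “this range”.

```python
import math

# If temperature tracks CO2 at dT degrees over the interval c1 -> c2 ppm,
# the implied transient sensitivity per doubling is dT / log2(c2 / c1).
# The 300-400 ppm interval is an assumption, not from the comment.
def implied_tcr(dT, c1, c2):
    return dT / math.log2(c2 / c1)

print(round(implied_tcr(1.0, 300, 400), 2))  # ~2.41 degC per doubling
```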

      • Regarding the European Commission’s Joint Research Centre (JRC) report of Macias et al (2014) graph, shown above, the natural variability (MDV green line) appears to be a simple sine wave. Not likely. Also it apparently only applies at the surface, not in the atmosphere, so one wonders what mechanism is involved. The satellites show two successive hiatuses, one from 1978 to 1997 and the second from 2000 to now, the second hiatus being slightly warmer than the first.

        Pretty much hiatuses all the way down. But then warmers ignore these global measurements in favor of the dodgy surface statistical models like HadCRU.

      • Jim D | August 29, 2015 at 9:36 am |
        We can also overlay CO2 to get some idea of the sensitivity related to this trend. If these lines are parallel, the transient sensitivity is 2.4 C per doubling (1 deg C per 100 ppm in this range). Skeptics are still unconvinced that these trends are related while the AGW theory explains it rather well with their typical mid-range transient rates.

        As the Hansen discussion illustrated the predicted IPCC and global warmer future CO2 levels are SyFy disaster movie material, the purest of fantasy.

        Hansen was predicting 5.2 PPM/Y CO2 increases. We should be at 492 if 560 PPM in 2028 was going to happen. Obviously not.

        The actual increases since 1988 have averaged 1.78 PPM/Y.

        5.2 PPM/Y starting from 1988 gets you very close to the IPCC 2100 940 PPM.

        Your TSR of 2.4°C is Ftsr = 12.8 ln (C/C0). That is 2.4 times the IPCC CO2 alone forcing and about 20% higher than the IPCC TSR.

        But we will use your 2.4 and a real worst case of 560 PPM by 2100.
        2.4 * ln (560/400) = 0.81°C.

        0.81°C is still in the “beneficial” warming range. It looks like “We have nothing to fear but fear itself (and spiders)”.
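
        The ppm arithmetic above can be checked against an assumed 1988 baseline of ~351 ppm (my assumption, close to the Mauna Loa annual mean); note also that reading 2.4 °C strictly as a per-doubling figure gives a somewhat larger number than the direct-log form used in the comment.

```python
import math

# Checking the comment's projections from an assumed 1988 baseline of 351 ppm.
c1988 = 351.0
print(round(c1988 + 5.2 * (2028 - 1988)))   # ~559: the "560 by 2028" case
print(round(c1988 + 5.2 * (2015 - 1988)))   # ~491: "should be at 492" now
print(round(c1988 + 5.2 * (2100 - 1988)))   # ~933: near RCP8.5's ~940 by 2100
print(round(c1988 + 1.78 * (2015 - 1988)))  # ~399: the observed ~1.78 ppm/yr path

# The comment's warming estimate, taken literally:
print(round(2.4 * math.log(560 / 400), 2))   # 0.81 degC, as stated
# Read as 2.4 degC per doubling instead, the same interval gives:
print(round(2.4 * math.log2(560 / 400), 2))  # ~1.17 degC
```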

      • Hansen was predicting 5.2 PPM/Y CO2 increases. We should be at 492 if 560 PPM in 2028 was going to happen. Obviously not. …

        He never predicted ACO2 would be at 560 in 2028. He did not predict anything in that conversation. Nothing.

        He discussed what a theoretical world would look like in 2040 given a doubling of ACO2. So the first question, if you have an ounce of inquisitiveness, is what did he mean by doubled ACO2. Doubled in 1988, or doubled by 2040?

      • JCH | August 29, 2015 at 12:04 pm |
        Hansen was predicting 5.2 PPM/Y CO2 increases. We should be at 492 if 560 PPM in 2028 was going to happen. Obviously not. …

        He never predicted ACO2 would be at 560 in 2028. He did not predict anything in that conversation. Nothing.

        He discussed what a theoretical world would look like in 2040 given a doubling of ACO2. So the first question, if you have an ounce of inquisitiveness, is what did he mean by doubled ACO2. Doubled in 1988, or doubled by 2040?

        I am going to use Skeptical Science as my source – based on the theory that global warmers can’t be wrong or lying all the time.

        http://www.skepticalscience.com/Examining-Hansens-prediction-about-the-West-Side-Highway.html
        James Hansen reports the conversation as follows:

        “Reiss asked me to speculate on changes that might happen in New York City in 40 years assuming CO2 doubled in amount.”

        The original conversation happened in 1988. So doubled in 40 years is doubled by 2028.

        Now RCP8.5 has 468 PPM of CO2 Eq. and 442 of CO2 in 2028, with 561 PPM of CO2 Eq. and 499 of CO2 in 2042. But RCP8.5 was written in 2011. When Hansen made his prediction, fossil fuel consumption had increased 42% in the 70s (68 to 78) and 17% in the 80s (78 to 88). He could well have meant 560 PPM in 2028. The Chinese cranked the rate back up to 32% in the 00s (98 to 08).

        The 90s (88 to 98) had only an 11% per decade fossil fuel consumption increase. That would have messed up his prediction.

      • Jim D: It is almost inevitable after an El Nino like 1998 that was 0.3 C above the trend line, that a long “pause” starting with that would ensue.

        If only someone had alerted James Hansen and the IPCC to this “almost [inevitability]”! Then the early reports of a hiatus in surface warming might have been received more respectfully, and the mechanisms responsible might have been studied and elucidated earlier.

        But no, this almost-inevitability was totally unpredicted, not acknowledged at the start. But much has been learned during the unpredicted and previously unacknowledged surface warming hiatus. Now what is “almost inevitable”? Another step change like the one that followed the 1998 El Nino? No change in global mean temp (less than 10% of the step change that followed 1998)?

      • There were some early attempts at cautioning against cherry-picking starting dates for trends, but that fell on deaf ears. At that point they could see they were up against the statistically illiterate, and what’s the point. None of the skeptics joined in on the statistics argument against cherry-picked start dates and significance, so here we are.

      • The author was trying to get Hansen to move 2100 to 2040, so doubled in 1988 actually makes more sense.

      • oops, I nested this wrong. I apologize.

        Jim D: There were some early attempts at cautioning against cherry-picking starting dates for trends, but that fell on deaf ears. At that point they could see they were up against the statistically illiterate, and what’s the point. None of the skeptics joined in on the statistics argument against cherry-picked start dates and significance, so here we are.

        Does that address something in the threads above it, and if so what?

      • matthewmarler, that was addressing your point about why people didn’t predict a pause after the 1998 anomaly of 0.3 C. I think the assumption was that no one would cherry-pick trends from 1998 as much as the skeptical community has, but they have made a cottage industry out of it, and that is the factor that was unpredictable.

      • Jim D: matthewmarler, that was addressing your point about why people didn’t predict a pause after the 1998 anomaly of 0.3 C.

        That was not my point. My question was why did no one alert Congress, the IPCC, James Hansen, the public etc that there was a possibility of a pause. Who, among the major climate scientists promoting the theory of catastrophic AGW, first alerted the public that Hansen’s 1988 et seq predictions of a linear or supralinear trend in surface warming through 2020 might not be accurate? And given the possibility that everyone now says everyone acknowledged all along, why was there so much resistance to the pause when it started happening? Granted nothing much can be accurately predicted, why did nobody up through 2000 acknowledge publicly the possibility of near on 2 decades without surface warming?

      • “Granted nothing much can be accurately predicted, why did nobody up through 2000 acknowledge publicly the possibility of near on 2 decades without surface warming?”

        Chicken Littles never mention any caveats as they run around screaming “The sky is falling! The sky is falling!” They are trying to make a sale.

      • matthewmarler, that was the point. You only get a pause by some dodgy statistical methods involving cherry picking a start date, and ignoring that you are starting with a high anomaly. Who could have predicted those methods being used?

      • Jim d: “You only get a pause by some dodgy statistical methods involving cherry picking a start date,”

        Utter drivel.

      • Jim D: Who could have predicted those methods being used?

        Let me try again. Who first acknowledged the possibility that Hansen’s 1988 et seq predictions of linear or supralinear increase in global mean temp through 2020 might be wrong? At least some of those people who, we are now told, knew it all along surely alerted the public? Who, when?

      • matthewmarler, you can take a trend in the 30-year temperature, and not see a pause at all. In fact it is looking quite linear. There are good reasons for that. The pause is insignificant on the time scales that really matter.
        http://www.woodfortrees.org/plot/gistemp/mean:120/mean:240/plot/gistemp/from:1985/trend

      • JCH | August 29, 2015 at 1:20 pm |
        The author was trying to get Hansen to move 2100 to 2040, so doubled in 1988 actually makes more sense.

        It is hard to tell what was going on. Hansen may have implied CO2 would double quickly, the author may have been feeding him a scenario to get a sound bite. The prediction was made in 1988 so it would be from 1988 and would have reflected Hansen’s state of mind and public statements at the time.

        But given the prediction presumed a much faster CO2 increase Hansen is off the hook.

      • Jim D: matthewmarler, you can take a trend in the 30-year temperature, and not see a pause at all. In fact it is looking quite linear.

        So you can’t see on that graph what everyone now admits has happened, and what Nick Stokes has assured us everyone would have expected some time or other, and which you wrote happened without its having been predictable.

        Remind me again, which of the people who already know about natural climate variability first warned the public that it was possible that the world would warm more slowly than Hansen predicted?

      • Yes, the trend in the 30-year temperature didn’t even wobble, which says something about the insignificance of the “pause” in the long-term climate trend. The rise rate is also smack-dab in the middle of the IPCC estimates of transient sensitivity, being near 2 C per doubling.

      • richardcfromnz

        David Wojick

        >”Regarding ….Macias et al (2014) graph, shown above, the natural variability (MDV green line) appears to be a simple sine wave. Not likely.”

        Well, this was the signal extracted by SSA. Similar can be achieved by Empirical Mode Decomposition (EMD). HadSST2/3 in particular.

      • richardcfromnz

        Jim D

        >”Yes, the trend in the 30-year temperature didn’t even wobble, which says something about the insignificance of the “pause” in the long-term climate trend. The rise rate is also smack-dab in the middle of the IPCC estimates of transient sensitivity, being near 2 C per doubling.”

        Problem is, as I’ve demonstrated upthread, the secular trend (ST) from 1955 onwards is NOT smack-dab in the middle of the IPCC CO2-forced model mean i.e. CO2 is NOT the driver of the secular trend in GMST.

      • richardc, you can calculate sensitivities from past observations and those agree with the IPCC. If you want to use the models, you have to allow for the forcing used to drive them not being accurate. For example, the solar lull was not programmed into the forcing and possibly higher emissions from China could have increased the aerosol forcing. Both would cause the models to run hot maybe by a couple of tenths since 2000. On the other hand, the CRUTEM4 land temperatures have risen over 0.7 C since 1983, so those have kept up with the models.

      • richardcfromnz

        Jim D

        >”richardc, ….. If you want to use the models,”

        The IPCC want to use the CO2-forced models Jim, not me. The models are the implementation of man-made climate change theory (proven false – see below), so I don’t see why they should be ignored.

        >”you have to allow for the forcing used to drive them not being accurate.”

        Well yes, the forcing driving them is obviously not only not accurate but much of it (CO2 forcing) is superfluous i.e. I do NOT have to “allow” for that.

        Re man-made climate change theory proven false by the IPCC’s own climate change criteria, go to this thread header:

        https://judithcurry.com/2015/08/28/week-in-review-science-edition-19/#comment-728167

      • Jim D: you can calculate sensitivities from past observations and those agree with the IPCC.

        You can also take Scafetta’s model for temps since the end of the Ice Age as your model for natural variability: if you do that, you can compute an estimate of climate sensitivity to CO2 close to 0. That yields another scenario for the future.

        So many scenarios, so little evidence!

        It is of critical importance to know as much as possible about all of the mechanisms of natural variability and their quantitative relations to measurable surface temp. With all of those scenarios, or sequences of modeled future temperatures, we should be able to tell in 2 or 3 decades whether any of them have been accurate enough against out-of-sample data for us to consider them as forecasts for the subsequent decades.

        For now it is useful to acknowledge that the alarms and warnings about the early decades of the 21st century were based on inadequate knowledge and models that forecast too high.

      • When you look at the timing and rising rate of both warming and GHG forcing and its other signatures, it weighs in favor of GHGs, along with an actual testable physical explanation that your other candidates lack.

      • Jim D: When you look at the timing and rising rate of both warming and GHG forcing and its other signatures, it weighs in favor of GHGs, along with an actual testable physical explanation that your other candidates lack.

        that depends entirely upon your choice of model for the natural variation.

        Going forward, given that (a) the hiatus has lasted until now and (b) everyone would have understood in 1995 that a 15+ year hiatus was not unlikely, can the hiatus continue through 2030? What are the odds? (Note that this is a conditional probability: given that the hiatus has lasted about 15 years, what is the distribution of end-times for the hiatus?) The two earlier 20th-century hiatuses (?) lasted 30+ years, so if the residual about the trend is stationary, this one ought to last at least a while longer, with its expected end after 2028. If the residual about the trend is a cycle plus a stationary trend, then the current hiatus is almost certain to end later than 2025, its expected end being about 2030.
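
        The conditional-probability framing above can be made concrete under an assumed duration prior; the Uniform(10, 40)-year prior below is purely illustrative (my assumption, loosely motivated by the “30+ year” earlier episodes, not a claim from the comment).

```python
# Illustrative only: expected hiatus end under an assumed Uniform(a, b) year
# duration prior, conditioned on the hiatus having already lasted `elapsed`
# years. The Uniform(10, 40) prior below is my assumption.
def expected_end(start_year, elapsed, a, b):
    lo = max(a, elapsed)             # durations below `elapsed` are ruled out
    return start_year + (lo + b) / 2.0

def prob_lasts_past(year, start_year, elapsed, a, b):
    lo = max(a, elapsed)
    need = year - start_year         # duration required to reach `year`
    return max(0.0, min(1.0, (b - need) / (b - lo)))

print(expected_end(1998, 15, 10, 40))           # 2025.5
print(prob_lasts_past(2030, 1998, 15, 10, 40))  # 0.32
```

        Different priors (e.g. one concentrated near 30 years) shift the expected end toward 2028 or later, which is the sensitivity the paragraph is gesturing at.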

      • matthewmarler, the hiatus promoters have now retreated to just the satellite datasets that don’t even show the incipient 2015 El Nino in July which is already prominent in all the surface datasets. Once it shows there, they will have to wave the white flag. It won’t survive 2016.

      • Jim D: the hiatus promoters

        Who is that? Nick Stokes? Kevin Trenberth?

        How long will the hiatus last?

    • Fabius –

      So is your contention that Tamsin is lying, or just that she is confused about what she and her colleagues thought might potentially happen in the short term?

      Also, would you mind clarifying what relevance you attach to answering this question? What difference do you think it makes whether or not Tamsin and her colleagues thought, in the past, that short-term “pauses” might happen within a longer term trend of rising temps?

      • Joshua,

        (1) “So is your contention that Tamsin is lying”

        That’s a too-harsh description. History shows that it’s commonplace for people to describe the past in a favorable light.

        (2) “she is confused about what she and her colleagues thought might potentially happen in the short term?”

        I think most of us are sometimes confused about what we “thought {in the past} might potentially happen {in the future}”. A large body of research shows the fallibility of memory.

        That’s why we have the literature as documentation of what was thought in the past.

        (3) “what relevance you attach to answering this question?”

        That’s a great question! I’ll expand on what I said upstream.

        Thomas Kuhn believed that predictions were the gold standard test for a paradigm. Prediction of the pause would have been powerful validation of the current climate science paradigm.

        But, despite assertions by skeptics, failure to make such predictions does not refute it.

        The failure to predict the pause and the many contradictory explanations are important inputs to the public policy debate, indications about the robustness of the current climate science paradigm — a key factor to evaluate before spending trillions and perhaps even revising the world economy on its forecasts.

      • Somehow missed this answer earlier.

        ==> “I think most of us are sometimes confused about what we “thought {in the past} might potentially happen {in the future}”. A large body of research shows the fallibility of memory.

        That’s why we have the literature as documentation of what was thought in the past.”

        Hmmm. Sure. All memories are essentially reconstructions that are put together in the present. Lots of room for error there, and tons of room for any number of biasing influences to lead to a present-time reconstruction that differs from what actually existed in the past.

        But your assertion here seems highly implausible to me. This isn’t a matter of some more or less random recollection of what she thought in the past. This is a description of the very roots and foundations of her work. I’m quite sure that if I asked you to characterize your thinking about some very fundamental and basic concept that was crucial in your work (and that of your colleagues) some 20 years ago, you would be very reluctant to give much credence to people who had no actual familiarity with what you thought passing judgement on whether you remember what you thought accurately.

        And with good reason. Of course, there would be a possibility that indeed, your recollection would be entirely unreliable, just as it’s possible that Tamsin’s recollection is unreliable and you’ve been able to detect her error. But I’d have to say that the probabilities seem to me that what is far more likely is that you are allowing your own biases to confuse your thinking, where you think that your ability to read the tea leaves is more reliable than Tamsin’s recollection about very basic concepts in her work.

        But it’s a bit hard for me to detect a clear answer to my question. It seems that your conclusion is that Tamsin is confused about her and her colleagues’ thinking about a very basic concept in her work – and that you are better able to see what she really thought than she is. Is that right?

        ==> “The failure to predict the pause and the many contradictory explanations are important inputs to the public policy debate, indications about the robustness of the current climate science paradigm — a key factor to evaluate before spending trillions and perhaps even revising the world economy on its forecasts.”

        Leaving aside the subjective nature of “the failure to predict the pause….” and taking your characterization to be true for the sake of argument….

        I’d say that while people seek to leverage what they perceive as past errors to be meaningfully useful for evaluating (in particular doubting) the validity of scientists’ current theories, in fact, that is usually fallacious thinking that is employed to serve an agenda.

        A good example is the argument that “skeptics” like to make that somehow theories that existed in the 70s that there was significant risk from cooling is meaningful for evaluating the validity of current day scientific theories about the risks of warming.

        If you want to explain why I should be less concerned about long term warming because of a relatively short-term reduction in the longer term trend of accelerating warming in SATs only, then make the scientific argument. Looks to me like you’re trying to leverage fallacious reasoning to take a shortcut.

      • Steven Mosher

        “Thomas Kuhn believed that predictions were the gold standard test for a paradigm. Prediction of the pause would have been powerful validation of the current climate science paradigm.”

        Was there a failure to predict “the pause”?

        That is a hard position to maintain.

        First: do we mean a “pause in general”, or do we mean the “actual” pause that occurred?

        Let’s take the latter first.

        What do we mean by the “pause that actually occurred”? Let’s say for simplicity that the purported pause “began” in 1997 and continues to today. And so our question is this: prior to 1997, did anyone predict that a pause would start in 1997? The answer to this is most likely no.
        And the principal reason for that would be the inability of prediction tools to predict exactly the start dates of any ‘natural cycle’. The tools still suffer from that today. However, you can find model runs (predictions) that come in at or below the actual warming. That is, if you look at all the model runs (predictions) you will find a very small number that underestimate the warming, and so effectively “predict” a pause.

        If, on the other hand, we mean “were pauses in general predicted?”, the answer is yes.

        What was missing was any discussion or attention to these odd patches and singleton runs.

      • So is your contention that Tamsin is lying, or just that she is confused about what she and her colleagues thought might potentially happen in the short term?

        Tamsin Edwards has (very recently) clearly suggested that more CO2 into the atmosphere might result in less sea-level rise. How many people have caught this point, even with a hint here?

        If ten years from now, all the models are starting to show highly variable sea-levels from slightly different patterns of CO2 emission, will people suggest she’s lying if she says scientists studying the issue had known about the possibility?

      • > Thomas Kuhn believed that predictions were the gold standard test for a paradigm.

        Cf.

        https://judithcurry.com/2015/08/28/week-in-review-science-edition-19/#comment-728163

    • Editor, my web search skills are not legendary, but attempting to delve into the origins of the concept of a Pause did not produce what I expected. I set the project aside because it needs more time (and maybe more skill than I have). I was looking into this in support of my post here at Climate Etc on Pause memes a while back:
      https://judithcurry.com/2015/07/03/a-key-admission-regarding-climate-memes/

      I originally thought (I guess like pretty much everyone) that Pause memes were a skeptic invention, and skeptics certainly pressure the Consensus with this concept latterly. However, as my post above demonstrates, Pause memes actually *defend* the Consensus from far worse, and are emotively selected by adherents in preference to other memes like ‘stopped’ or ‘cooling’ (both of which were the skeptics’ preferred pressure before March 2008), and ‘we don’t know’, all of which would produce very negative emotions, considering adherents’ long attachment to rising GST as the main icon before 2008.

      In researching that post, it occurred to me that if Pause memes defend the Consensus, maybe they also arose there. I’m still not sure, because I doubt that what I turned up are the only references to be turned up. However, there seem to be very few mentions of ‘pause’ (or ‘hiatus’, or ‘lull’) that are in context prior to March 2008. And most of the blogs, skeptical or orthodox, don’t have any mention until much later than this, including (afaict from web and site searches) SkS, Tamino’s, Jo Nova, Donna LaF, Climate Audit, and others. Climate Etc wasn’t even around then. What mentions there are in a couple of papers (and using basic search skills I will have missed some) didn’t ‘take off’ as a widespread concept, didn’t spawn a spread of Pause memes.

      What happened in March 2008 was Keenlyside et al:
      http://www.nature.com/nature/journal/v453/n7191/pdf/nature06921.pdf

      As far as I can tell, the discussion around this paper is what appeared to trigger Pause memes. Discussion occurred at WUWT on April 30th of the same year, and at RC on 8th May, and perhaps critically at Revkin’s NYT blog on 1st May, transmitting to a wide audience. The ball seemed to be in play then, with mentions moderate yet increasing. Pause memes then appeared to get a boost about 10 months later from a spat between George Will and Revkin, when Will accuses Revkin of just ‘inventing’ the concept of a Pause without any scientific backup (so Will seems to be attacking the *defensive* role of the Pause for the Consensus). See here:
      http://wattsupwiththat.com/2009/02/27/george-wills-battle-with-hotheaded-ice-alarmists/

      After this time there are too many mentions to track.

      Well, there may be much that I missed. Yet if this is approximately the path of Pause meme origination, whether it is a skeptic invention, a Consensus one, or an emergence from the boundary play between the two, I guess depends on what position one perceives Keenlyside et al and Revkin to occupy in the debate, as they both appear to be critical in the origination of Pause memes. A monster caveat here is that I’m not familiar with the relevant scientific literature, and if a lot of this is hidden from standard internet searches (certainly many papers came up), then there could for instance be an academic story prior to 2008 yet not in the public domain.

      • Andy,

        Great detective work! Keenlyside (2008) is also the earliest I’ve found. Your research showing the debate it sparked is a valuable addition to this history. It also sparked some mention in the news:
        “Next decade ‘may see no warming’”, Richard Black, BBC, 1 May 2008.

        Here’s the first mention I’ve found about the pause: “There IS a problem with global warming… it stopped in 1998”, Bob Carter (retired professor and head of the School of Earth Science at James Cook U), op-ed in The Telegraph, 9 Apr 2006.

        Here’s a listing of early papers (section 9) and newspaper articles (section 7) discussing the pause:
        http://fabiusmaximus.com/2012/10/14/climate-global-warming-44028/

      • Keenlyside is an AMO paper. The AMO doesn’t cause anything. The reference I cited is from 2008, and it states the PDO is capable of masking AGW. And it did. One misconception is that the pause started in 1998. Nothing started in 1998. After 1998, it got warmer. After 2005, the trend weakens because of strong, back-to-back La Niña events. The PDO is a beast. And then the pause went paws up, because the PDO was due to go positive, and it did. The pause is dead. It’s overrated. The Pause has made complete fools of a lot of very intelligent people.

      • Editor of the Fabius Maximus website | August 29, 2015 at 10:58 am

        Thanks, Editor.

        I found the Carter link, plus others from 2006 up to early 2008 that use the terms ‘stopped’ or even ‘cooling’, though several stem from Carter’s article so aren’t independent (and unfortunately I didn’t save the links).

        However, these mentions do not raise up the concept of a ‘pause’, i.e. a mechanism which perforce *must* return to warming again (due to ACO2 impact), which is a fundamental difference. They do not imply that the warming must resume by any means. Such mentions of ‘stopped’ and ‘cooling’ are skeptic challenges to the Consensus.

        However, once Keenlyside and associated discussion on this paper had kicked the concept of a Pause into the game, this appears to develop as a Consensus defense, not a skeptic attack, e.g. George Will referring to Revkin:

        ‘In a story ostensibly about journalism, he simply asserts — how does he know this? — that the last decade, which passed without warming, was just “a pause in warming.” ‘

        It may seem strange to us now, when skeptics use the concept of the Pause to pressure the Consensus (e.g. how long it has gotten), but back in 2008 ‘a pause’ was a handy defense against ‘stopped’ or ‘cooling’.

        The very use of the word ‘pause’ defends the concept that there is still AGW in the system that will return to dominate. This is why the concept spread through the Consensus (as Lew has noted). It is an emotively much more acceptable term than the alternatives. And what is essentially happening now is that the skeptics are using memes that were selected by the Consensus folks themselves; both sides are using the language of ‘a return to warming’ at some point, which is what ‘a pause’ really means.

      • P.S. Lew notes the spread within the Consensus that is, but has the wrong reasons as to why.

      • This is one of the key “pause” articles.

        The contentious part of our paper is that the climate system appears to have had another “episode” around the turn of the 21st century, coinciding with the much discussed “halt” in global warming. Whether or not such a halt has really occurred is of course controversial (it appears quite marked in the HadCRUT3 data, less so in GISTEMP); only time will tell if it’s real. Regardless, it’s important to note that we are not talking about global cooling, just a pause in warming. …

        Probably one of the best, but still, like Mann, AMO guys.

      • andywest2012: Yet if this is approximately the path of Pause meme origination, whether it is a skeptic invention, a Consensus one, or an emergence from the boundary play between the two, I guess depends on what position one perceives Keenlyside et al and Revkin to occupy in the debate, as they both appear to be critical in the origination of Pause memes.

        That’s an illuminating post.

        A thorough history of the “pause” meme and the contexts in which diverse writers made diverse claims about the pause in surface temperature warming would be a good read. For me, a “history” of pause memes consists of my memories. For example, as I recall, Mark Steyn and George Will started incorporating pause memes in their commentaries at a time when nearly all of the peer-reviewed scientific articles were ignoring it; and when the major players (Schmidt, for example) were still denying that there was a pause in the surface warming. Anastasios Tsonis et al published a model that predicted a pause in surface warming at a time when the ongoing pause in surface warming was still unacknowledged by Schmidt at RealClimate. (iirc, he and I discussed this, and he persuaded me that Tsonis et al predicted a pause in warming, not an actual decline in temperature as I first thought.)

        I think you have shown us that it would not be an easy undertaking.

        And while I am at it, I want to thank Editor of the Fabius Maximus website for his post in response to yours.

      • I hope this isn’t a dupe link, but here is a “projection” from 2008. To my way of thinking, if this was put out for public consumption, it’s a prediction, but I’m not too bothered about that one way or another.

        https://judithcurry.com/2015/08/28/week-in-review-science-edition-19/#comment-728083

      • Steve McIntyre

        Prior to 2008 and including 2008, there were a number of articles and posts that temperatures were rising slower than models, with the tropical troposphere featured heavily. Douglass et al 2007 had argued this. Their observation was correct, but their statistical comparison was flawed. Santer et al 2008 purported to rebut them, but cheated on their data, using data only to 1999. McKitrick and I submitted an article that rebutted Santer, pointing out, in passing, that his reasoning would also show that the difference between observations and zero trend was “statistically insignificant” – not a point that we were attempting to argue, merely pointing out the implications of his method. The article was fiercely rejected, with one reviewer saying that such comparisons had no place in science.

      • Matthew

        Here is Dr Hansen’s testimony before Congress in 1988, complete with his estimation of future temperature rise:

        http://climatechange.procon.org/sourcefiles/1988_Hansen_Senate_Testimony.pdf

        Tonyb

      • Steve McIntyre,

        “McKitrick and I submitted an article that rebutted Santer, … The article was fiercely rejected, with one reviewer saying that such comparisons had no place in science.”

        Have you posted this online? It would be a useful marker in the history of the pause: when mention of the pause was still too far out of consensus for some climate scientists.

        Also useful would be the reviewer’s comment, if proper to post.

      • Pause memes seem to be traveling through the Consensus rather like a wave. At the leading edge, some fierce resistance to any concept of a slow-down, as noted by Steve M. But behind that, and as time goes by, an increasing contextual acceptance by many. After all, the bitter pill of a ‘slow down’ is much sweetened if it is defined to be ‘only temporary’, hence ‘a pause’. By mid-2013, likely acceptance by a majority; at least the major orthodox orgs publicly acknowledge ‘the pause’ around this time. And yet there is still resistance from some who promote the competitive meme that ‘warming continues unabated’, for instance Lewandowsky and Oreskes, plus Karl et al. The result is tension within what is still, nevertheless, a Consensus on calamity. Whatever it turns out that the climate is doing, this is a social process, not a reflection of science.

      • Steve McIntyre | August 29, 2015 at 2:21 pm

        Thanks for the pointer Steve; I read the Conclusions and Summary of Douglass et al. I guess with Christy and Singer onboard this would be regarded as a skeptic work. Also the Discussion and Conclusion of your M&M paper. Afaics these make no speculation about the future (other than a note in Douglass et al cautioning on the use of projections from unskilled models). Is this right? The concept of a likely return of warming has to be explicit to qualify as a pause meme.

        The all-too-clear resistance to your submission reveals the emotional defense that appears to be such a powerful and constant feature of the climate Consensus. Such a blunt challenge, ‘models and data disagree’, threatens the very foundations of the climate Consensus, and I think has helped usher in the concept of ‘a pause’, which is emotionally much less challenging.

        Whatever the impact of ACO2, and whether the pause does or doesn’t turn out to be real (in the sense of resumed MM GW), it seems that emotive drives are dominating the output of the Consensus, not the science.

      • climatereason: Here is dr hansens testimony before congress in 1988 complete with his estimation of future temperature rise

        Thank you. I have read it plenty of times. A Friend of More Discourse linked to it repeatedly, as have many others. His testimony, in and out of the Capitol Building, entailed “alarms” and “warnings” of catastrophes, and he has continued issuing “alarms” and “warnings” of catastrophes. He has even issued dates by which time we have to act to reduce CO2, or else the catastrophes will happen for certain. Did he ever say or write the equivalent of “We must act by 2012 to avoid catastrophe, unless the pause lasts through 2030 and then we have to act by 2040” or some such? Did anyone on the catastrophic side of the AGW debates ever seriously consider the consequences should natural variability produce a pause (as in the earlier 20th century) like the pause that we have been experiencing — before the pause in surface warming was indisputably underway?

      • andywest2012: And yet there is still resistance from some who promote the competitive meme that ‘warming continues unabated’.

        A case in point: 2014 was the warmest year ever (33% chance) and 2015 is shaping up to be the warmest year ever (even without an el Nino yet!) as evidence for continuing warming; no hint of them being possibly just transients above a line that has been flat for close on 18 years. Even in a discussion devoted to natural variability the possibility that these high points are results of natural variability unrelated to CO2 increase is discounted. There is no certainty that the hiatus in surface warming will continue till 2030 or later, but a transient spike in 2014-2016 is not much evidence that it won’t.

    • Steven Mosher

      Funny

      ““That pause in warming of the atmosphere surprised the media and public, even though scientists always expected this kind of thing could happen in the short term.”

      Is this correct, or myth-making? Are there mentions of the pause, casual or in the literature, before the pause? I have found none.”

      Absence of evidence is, well, you know, not very convincing. The understanding of those who actually “do the science” is that the increase in temperatures will not be monotonic: translation: there will be pauses, dips, slow-downs, etc. That no-one comments directly or explicitly on these probable events is immaterial. It is, however, indicative of the focus of the science. The science is focused on explaining the increase over long periods, not the blips, stalls, pauses. This is the bias. As far as it goes, it’s not that bad of a bias. There is always and forever an operational choice that every scientist must make: do I spend time on the oddities, in this case natural variation, or do I try to understand the regularities? Is there ‘meaning’ in that outlier, or is it just a waste of time?

      Collectively, with a few notable exceptions, most scientists have not focused on natural variation; for the paradigm it’s a residual. The decision to ignore it is pragmatic. It’s hard to publish a paper on the possibility of a pause before one actually happens. I can see the title: “Possible unicorns in future temperature data”. Compelling shit!

      The pause will pass. It’s presently unclear what changes will be made to “the science” as a result. I can count these.

      1. More attention to temperature records at northern latitudes (Cowtan and Way)
      2. More attention to model/data comparisons (Cowtan et al.)
      3. More attention to volcanic forcing (I forget who)

      One might hope that the “pause” would tempt a few more folks to take an interest in the “residuals” — probably won’t happen.

      • In 2006 Phil Jones published a paper stating that natural variability was greater than scientists had hitherto expected. He came to this conclusion after researching the 1730s (almost as warm as today) and the bone-chilling winter of 1740 that brought the warm decade to a sudden halt.

        Tonyb

      • “…….most scientists have not focused on natural variation.”

        Of course they haven’t. All incentives within the institution of climate science are skewed away from it. The Golden Rule rules: those who have the gold rule. And everywhere you look, the money is in AGW.

        Is it really a surprise how things turned out?

      • Steven M,

        (1) “It’s hard to publish a paper on the possibility of a pause before one actually happens.”

        Perhaps you are correct. However this effect of natural variation seems like a logical thing to have mentioned before the pause, if anyone had expected it. I doubt any resolution of this debate is possible, so let’s move to a different aspect of this topic.

        Equally important for the public policy debate imo was how climate scientists reacted after the pause became visible. Some of the first few papers on the pause (in 2008-09) pointed to natural variation as a cause — let’s ignore their reception to make a different point. However there were few public comments by scientists in the news, unlike those trumpeting every .1°C new record today. (See the papers and news stories here.)

        But the public policy debate quickly went off a cliff. Many skeptics said the pause disproves…. Many activists said anyone seeing the pause is a denier. Scientists tended to condemn the former (rightly so), and tended to ignore the latter. IMO their tolerance for activists who ignore the science set the stage for activists’ later divorcing the IPCC as “too conservative”. It was a large missed opportunity, imo.

        Roughly, so far as I can tell, by 2011 the discussion in the p-r literature turned from existence of the pause to focus on its causes. Natural variation was one of the most frequent explanations — among a long list of candidates. Meanwhile activists were still loudly denouncing anyone who pointed to the pause as a “denier”, with little pushback from scientists.

        (2) “It’s presently unclear what changes will be made to “the science” as a result.”

        You might be right. I prefer to believe that the scores of papers about the pause will advance the science. Time will tell.

      • Steven Mosher

        “Perhaps you are correct. However this effect of natural variation seems like a logical thing to have mentioned before the pause, if anyone had expected it. I doubt any resolution of this debate is possible, so let’s move to a different aspect of this topic.”
        #################################
        In other words, I’m right and you want to “move on”. The simple fact is that some model runs have predicted “pauses”. Go look at them. The simple fact is these have been short and inconsequential. So nobody wrote about them, until they were made into a bigger issue.

        ##########################################
        Equally important for the public policy debate imo was how climate scientists reacted after the pause became visible. Some of the first few papers on the pause (in 2008-09) pointed to natural variation as a cause — let’s ignore their reception to make a different point. However there were few public comments by scientists in the news, unlike those trumpeting every .1°C new record today. (See the papers and news stories here.)

        How is that Equally important? The theory remains the same. The theory has always allowed for natural variation, hence early comments that we had to wait for the climate signal to overwhelm natural variation.

        ############################
        But the public policy debate quickly went off a cliff. Many skeptics said the pause disproves…. Many activists said anyone seeing the pause is a denier. Scientists tended to condemn the former (rightly so), and tended to ignore the latter. IMO their tolerance for activists who ignore the science set the stage for activists’ later divorcing the IPCC as “too conservative”. It was a large missed opportunity, imo.
        ###############
        In general no side in the debate does a good job of keeping their “troops” in line. I could easily argue that Dr. Curry, Dr. Spencer, yourself, and others have not done a good enough job keeping the Goddards and Watts of the world in check when they make allegations of criminal fraud. Ya, we need more nannies. Start on your team, I’ll work on mine.

        ##############################
        Roughly, so far as I can tell, by 2011 the discussion in the p-r literature turned from existence of the pause to focus on its causes. Natural variation was one of the most frequent explanations — among a long list of candidates. Meanwhile activists were still loudly denouncing anyone who pointed to the pause as a “denier”, with little pushback from scientists.

        Natural variation is just code for “we don’t understand”; it’s not at odds with other explanations.

        (2) “It’s presently unclear what changes will be made to “the science” as a result.”

        You might be right. I prefer to believe that the scores of papers about the pause will advance the science. Time will tell.
        ####################
        Name ONE.

      • Fabius,

        However this effect of natural variation seems like a logical thing to have mentioned before the pause

        It was. I’m not doing your homework for you, but if you go through the various IPCC reports you will find discussions of decadal variability. This topic has been covered extensively elsewhere. At best, you’re arguing that scientists should have considered the possibility of some kind of slowdown in the near future, and mentioned this possibility to the public, rather than left it buried in the scientific literature and in IPCC reports. Maybe they should have, but that doesn’t change that decadal variability was discussed in the literature and the lack of a time machine means that they can’t go back and change what they explicitly presented to the public now that they know what actually happened.

        This kind of “they should have” game that you’re playing is remarkably tedious. You should probably familiarise yourself more with this topic before going off half-cocked, but there’s nothing that can be done about that now. However, you do have the opportunity to do better in future.

      • Max, if the scientists had made explicit that we might see short term variability or a “pause” while at the same time the long term trend would be up due to AGW back in the 90’s, would you still be complaining? Do you really think that would have changed things?

    • Tamsin Edwards’ statement is correct. Even before the pause, anyone who had looked at an individual climate model’s global temperature would have noticed it had decadal scale ups and downs, not the steady upward rise displayed by their mean. Was it worth commenting on wiggles that were expected from internal variability? Probably not. This was seen in both the models and observations.
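A toy simulation can illustrate the point above: each synthetic “run” below adds AR(1) internal variability to the same fixed warming trend, so individual runs show flat decades even though the average keeps rising. All parameters (trend, persistence, noise level, run count) are illustrative choices, not values taken from any actual GCM.

```python
import random

# Illustrative parameters only; not tuned to any real model or dataset.
TREND = 0.02   # K/yr forced warming
PHI = 0.8      # AR(1) persistence of the "internal variability"
SIGMA = 0.1    # K standard deviation of the AR(1) innovations
YEARS = 50
RUNS = 100

def simulate_run(rng):
    """One synthetic run: the shared forced trend plus AR(1) noise."""
    temps, noise = [], 0.0
    for t in range(YEARS):
        noise = PHI * noise + rng.gauss(0.0, SIGMA)
        temps.append(TREND * t + noise)
    return temps

def has_flat_decade(temps):
    """True if any 10-year stretch ends no warmer than it began."""
    return any(temps[t + 9] <= temps[t] for t in range(len(temps) - 9))

rng = random.Random(42)
runs = [simulate_run(rng) for _ in range(RUNS)]
flat = sum(has_flat_decade(r) for r in runs)
mean_rise = sum(r[-1] - r[0] for r in runs) / RUNS

print(f"{flat}/{RUNS} runs contain at least one flat decade")
print(f"ensemble-mean warming over {YEARS} yr: {mean_rise:.2f} K")
```

The exact counts depend on the seed and parameters; the qualitative result is the point: most runs contain at least one flat decade, while the ensemble-mean warming still comes out near the forced value of about 1 K over 50 years.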

      • catweazle666

        Jim D: “Even before the pause, anyone who had looked at an individual climate model’s global temperature would have noticed it had decadal scale ups and downs”

        Not downs approaching 20 years, it hadn’t.

        Stop making stuff up, your mummy will be cross.

      • catweazle, and you have clearly not seen the spaghetti plots of individual GCM ensemble members, otherwise you would not be saying that.

      • catweazle666

        ” you have clearly not seen the spaghetti plots of individual GCM ensemble members, otherwise you would not be saying that.”

        Oh, I certainly have, Jimbo – and I’ve even posted links to them on this very site on more than one occasion, they are a fine source of mirth and merriment to anyone who has a true grasp of what is involved.

        Not only do you utterly fail to understand the implications of those “ensembles” of over 100 “spaghetti” graphs with regard to the utility of the computer games climate models that output them, but you utterly fail to grasp that statements such as you have just posted confirm beyond a shadow of a doubt that you are entirely full of smelly brown stuff.

        In fact, you really haven’t the first idea, have you?

        Oh, and by the way, unlike you, I have actually been paid good money to write computer models with a view to understanding real-world phenomena – the first being in 1971 and concerned with subsonic heterodyne effects in motor vehicles. But unlike computer games climate models, mine were for engineering projects, so it was necessary that they were demonstrable in the real world, lives could be dependent on them. Plus, I can assure you that had I presented an “ensemble” of “spaghetti”, I would have been looking for another job very rapidly indeed.

      • catweazle, good, so when you say you can’t see long-term downs, are you not looking carefully enough, or what?

      • catweazle666

        “catweazle, good, so when you say you can’t see long-term downs, are you not looking carefully enough, or what?”

        You really, really don’t get it do you?

        I would do better explaining the finer points of quantum magnetohydrodynamics to my cat than trying to inform you as to the utter ridiculousness of “ensembles” of “spaghetti graphs” as a representation of real-world events.

      • catweazle, you need to retrace the argument. You started by arguing with the statement I supported from Tamsin Edwards. The models said downward trends were possible, and you said they didn’t, and when caught being wrong on that, you started to say some angry stuff about models that was off the original point hoping I would be drawn into your game and forget. That’s the way it goes around here. I am used to it.

  10. Not sure what the Alice Dreger link was supposed to be to, but more information about the resignation and the issues that prompted it is on her site at http://alicedreger.com/resignation_NU

  11. An article in the New York Times proposes a study of “ignorance”. Of course, being the NYT, they don’t really mean it. Humility from Big Smug? Jerk the other one.

    In fact, the study of ignorance just means more word-stews and number-pies crammed with dodgy percentages and probabilities, reinforced by models and push polls, and adorned with many buzz-words (lovin’ “epistemic” this month). Yes, that final frontier territory called Unknown will be mapped, tamed and brought within the fenceline where it can be made to behave.

    Really, a simple dunno might serve better. Especially about climate, where you really dunno.

    • Personally, I think one could learn far more from an independent study of the prevalence of ignorance among the good and the great (aka the NYT’s authors – both the regular and most of the guest kind – and commentariat).

      My guess is that such a study would not reveal a particularly pretty picture of the NYT.

      For some strange reason, almost every other day I receive subscription offers from the NYT. I’ll take their 10 freebies/month. But beyond that?! I’m not sure they could pay me enough to subscribe!

    • Curious George

      There are limits to our knowledge. It is important to be aware of these limits; but to call them “ignorance” is a little too strong for my taste. Goodbye, New York Times. Learn how to use words.

    • Curious George

      [Feynman on a quote from a Brazilian physics textbook] “Triboluminescence is a light emitted when crystals are crushed.” … But if, instead, you were to write, “When you take a lump of sugar and crush it with a pair of pliers in the dark, you can see a bluish flash. Some other crystals do that too. Nobody knows why. The phenomenon is called ‘triboluminescence.’” Then someone will go home and try it.

    • I’m hoping that Dr. Curry will start a class entitled “Introduction to Climate and other ignorance”. May consider crossing the pond to attend.

  12. Crucial ocean acidification papers come up short. [link]

    golly.

    • http://www.nature.com/news/crucial-ocean-acidification-models-come-up-short-1.18124

      Cornwall and Hurd analysed 465 studies published between 1993 and 2014 that manipulated seawater chemistry and found that experiments often failed to implement widely accepted measures to ensure quality.

      Overall, Cornwall and Hurd found that in only 27 cases could they be certain that an appropriate experimental design had been used, and in 278 cases, the design was clearly inappropriate. The remaining studies had insufficient detail on experimental set-up — a problem in itself, note the researchers.

      Only 27 of 465 studies (5.8%) had a chance of giving valid results.

  13. The Case for Teaching Ignorance — It’s been done:

    Like the group’s other albums, Everything You Know Is Wrong is made up of a series of surreal skits that make satiric comments on contemporary culture. This album addresses and parodies pseudoscientific beliefs of the mid-1970s.
    https://en.wikipedia.org/wiki/Everything_You_Know_Is_Wrong

  14. Two papers mentioned in the intro are related to the solar magnetic cycles.
    Spectral analysis of CRUTEM4 shows that the solar magnetic periodicity is the strongest component (by a whisker).

    components in order of increasing periodicity: the ENSO, AMO, Hale Cycle, AMO

    • this is an interesting analysis.

    • But … but … back radiation!!

    • David, thanks for the notice. Fascinating. There is substantial other directional evidence for this in bias corrected radiosonde data, in GPS occultation, and in other satellite data. Essay Humidity is Still Wet, and much more extensively the climate chapter of Arts of Truth.
      Even more fascinating, working backwards from observational effective sensitivity (say 1.7, Lewis and Curry) through Lindzen’s Bode net feedback model (grey earth CO2 at 1.2, IPCC ECS 3 so IPCC Bode net feedback 0.65) gives net Bode feedback of about 0.25. Presuming net cloud feedback about zero rather than modeled ~0.15 (essay Cloudy Clouds) then the WVF is about 0.25, not about 0.5 as in the CMIP5 ensemble average (since IPCC says WVF by itself about doubles CO2 sensitivity). And Spencer’s Figure 2a, eyeball integrated over all latitudes and altitudes, produces about that value (~half of modeled figure 2b), based on the constant RH (IPCC result, AR4 WG1 box 8.1) and constant Q (zero WVF) lines in Spencer’s figure.
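The feedback arithmetic in the comment above can be sketched with the simple single-loop Bode relation, ECS = λ0/(1 − f). The sketch below assumes a no-feedback sensitivity λ0 of 1.2 K per CO2 doubling; slightly different λ0 conventions shift the implied f, which is why these figures land near, rather than exactly on, the 0.65 and ~0.25 quoted in the comment.

```python
# Bode feedback relation: ECS = lam0 / (1 - f)  =>  f = 1 - lam0 / ECS
# lam0 = assumed no-feedback ("grey earth") sensitivity to doubled CO2, in K.

def bode_feedback(ecs, lam0=1.2):
    """Net feedback factor f implied by an equilibrium sensitivity ecs (K)."""
    return 1.0 - lam0 / ecs

for label, ecs in [("IPCC ECS", 3.0),
                   ("Lewis & Curry effective sensitivity", 1.7)]:
    print(f"{label} = {ecs} K  ->  net Bode feedback f ~ {bode_feedback(ecs):.2f}")
```

With λ0 = 1.2 this gives f ≈ 0.60 for ECS 3 and f ≈ 0.29 for an effective sensitivity of 1.7, close to the comment’s values; the gap between the two f values is the room left for water vapour and other feedbacks being smaller than modeled.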

    • It is very hard to draw much conclusion from ten years (2005-2015) when there was not a large El Nino and little tropical ocean trend. We need to see what happens after this next El Nino step.

      • Jim D: It is very hard to draw much conclusion from ten years (2005-2015) when there was not a large El Nino and little tropical ocean trend. We need to see what happens after this next El Nino step.

        No conclusion! (or at least not much conclusion!) I concur! I have been consistently (I think) recommending against drawing conclusions since I have been posting here.

        But previous to this, conclusions about future disasters were drawn when none of this information was available at all! The authors disconfirm the common assumption of a constant relative humidity across temperature changes, and disconfirm yet again the accuracy of the Clausius-Clapeyron relationship.

        We need to see what happens after the next few decades of persistent and dedicated empirical research like this. It would have been wise had some of the prognosticators said “We need to see what happens after this next [large] El Nino step” consistently after the 1998 large El Nino. Instead, the dominant message was that the warming of 1975-1995 would continue unabated or perhaps accelerate.

        This research provides another mechanistic (partial) explanation, to go with the others, of the unpredicted hiatus in surface warming. The hiatus that has led so many scientists and commentators to say and write that they knew about natural variability all along, though they never used to mention it and initially denied the hiatus in a chorus before developing their explanations for it.

        Let me conclude this post by saying Do not draw conclusions until all of the mechanisms have been studied in great detail over long periods of time!

        In the meantime, prepare for larger disasters than the maximum disasters of the last 150 years. Californians might harken back to the extraordinary rainfall of 1863.

      • I said from ten years. From 60 years, yes, there are trends to pay attention to.
        http://www.woodfortrees.org/plot/gistemp/from:1950/mean:12/plot/esrl-co2/from:1950/scale:0.01/offset:-3.3

      • Jim D: I said from ten years. From 60 years, yes, there are trends to pay attention to.

        I concur again! Pay attention to long term trends. Avoid conclusions unless they are confirmed by adequate out of sample data.

        You cherry-picked the starting date, and avoided modeling the natural variability, whose importance everyone now acknowledges.

        Meanwhile, absorb the information that the quoted study (of course it needs confirmation) discredits some of the assumptions commonly made in the global climate models. This is yet another way that the models are wrong.

      • stevenreincarnated

        Jim, you talk about cherry picking start dates and then always start in 1950. The only thing I see special about 1950 is that it was colder than 1940.

      • We could start in 1850 if you want but we don’t have good CO2 measurements from back then, and 75% of the total addition has been since 1950.

      • stevenreincarnated

        I’d rather go back to 1750 and claim most of the warming was due to a change in solar-driven ocean heat transport. It just makes everything so much easier to explain, from the MWP to the LIA to how a small change in solar can bring us out of glaciation.

    • David Wojick: Roy Spencer has an interesting post

      thank you for the link

    • That Spencer name rings a bell. Ah, yes:

      Bottom line: I agree with you that cloud feedbacks are the most poorly-handled of all important feedbacks and that the uncertainty is so large that even its sign is unknown. They’d have to be quite negative, though, to negate the other (more precisely known) net positive feedbacks, though, and that is unlikely.

      Also, Spencer has recently dropped below my credibility threshold so don’t bother citing him here unless the work is corroborated.

      http://blog.chron.com/climateabyss/2011/08/roger-pielke-jr-s-inkblot/#comment-1463

      There are 57 occurrences of “predict” already on this page. All this “prediction” thing has been solved in 2011 by NG.

      INTEGRITY ™ – Inkblots, Inkblots Everywhere

      • Willard, I once again break my general rule not to reply to inanity. Yours just above was too inane. The proffered link at the head of this comment subthread is to an important piece of not-yet-published DOE-sponsored research on water vapor feedback/temperature responses as detected by satellites. A BIG deal. Read AR4 WG1 Box 8.1 for how big.
        Your retort is not to the substance. It is to the researcher, an ad hom. (En passant, the research grant says DOE disagrees with you about them.) And even then you provide a drive-by reference to a blog, climateabyss, objecting to an RPjr post deemed an ‘inkblot’.
        A not-so-friendly suggestion: UP YOUR GAME. If you have something useful to say, how about a guest post on it? If you have useful on-topic commentary, make it. If this is all you have to contribute, why not spend your time on sites that badly need the traffic? Appell, ATTP, Real Climate, a very long list. But please stop polluting the waters here.
        Your posts resemble recent EPA remediation results at the Gold King Mine near Silverton, CO.

      • You’re breaking your general rule just every odd week or so, Sir Rud, and hammering the table with your BIG deal might not help you strong-arm NG into accepting Roy “GW Nazis” Spencer as a credible source.

        Had you paid any attention to that blog, you’d notice that (1) Junior’s “inkblot” remark was not related to NG’s, contrary to what you claim with your silly sideswipe and that (2) Senior appreciated that post enough to go on his usual rope-a-dope of talking points with NG in the comment.

        ***

        There’s very little in your overall comments, and that’s if we count the very little you handwave around in just about every odd post or so at Judy’s. I’d be delighted to pay due diligence to it, but I charge triples to bags of wind like you.

        Oh, and please bear in mind that your alpha-male executive techniques only work in person, and that they mostly make you look ridiculous online.


      • Rud,

        If I may add an element of brevity to your advice to willard?

        Instead of Up Your Game, I believe Up Yours is a more appropriate response to willard.

  15. Data. The drought is increasing in intensity in the US again.

    • The “US” drought is primarily a west-of-the-Rockies phenomenon. The southeastern states are seeing rain from the remnants of TS Erika even as I write. Unless it is the sole landfall of this hurricane season, expect even less “drought” as the map is updated over the next few months.

  16. We found only one-third of published psychology research is reliable – now what? [link] …

    Not surprising.

    “What passes today for Freud bashing is simply the long postponed exposure of Freudian ideas to the same standards of noncontradiction, clarity, testability, cogency, and parsimonious explanatory power that prevail in discourse at large. Step by step we are learning that Freud has been the most overrated figure in the entire history of science and medicine–one who wrought immense harm through the propagation of false etiologies, mistaken diagnoses and fruitless lines of inquiry.”
    The Memory Wars: Freud’s Legacy in Dispute

    • It may not be surprising, but it is deeply depressing, and yet, I’m amped as heck about it.

      • Depressed, ha ha. I’m amped about it. The left is increasingly using pseudo-scientific therapeutic designations to attempt to neutralize their opponents, so any attack on the field pleases me.

      • So, maybe the psychotherapist is wrong about you after all!

  17. A quote from Willis, which is excellent.


    Any natural regulatory system has bounds on the variations it can control, and there are events that could alter or destroy the regulation.

    Second quote from Willis which is excellent.


    “Willis- The sun has no effect whatsoever on climate you are correct I apologize also to L svaalgard

    Willis says below that
    Eliza, I have never said that, nor anything even remotely resembling that. Those are YOUR WORDS, not mine.

    I have to apologize to Willis for not listening to him carefully enough, because if one really listens to what he is saying, he is open to solar, while also saying there are events that can destroy or alter the natural regulatory system of the climate.

    So I have a starting point with Willis, which at one time I thought I did not have.

    Another point we agree on is that if the sun varies enough it will have an impact on the climate. Everyone agrees with this; the disagreement, however, is not whether solar variation will change the climate but whether the sun varies enough to accomplish this.

    This leads to my argument with Willis, which is that the so-called normal 11-year sunspot cycle is not where one is going to be able to find solar/climate connections, because the EXTREMES in solar activity are not strong enough in magnitude or long enough in duration to have a climate effect. In addition, the 11-year cycle going from weak to strong sunspot activity cancels whatever climate effect it may have before any significant impact could come about.

    In other words, thresholds cannot be reached in the climate system due to these 11-year variations in solar activity. This is the wrong place to look if one wants to find a solar/climate connection.

    The place to look is when the sun enters an extreme period of prolonged solar quiet; when one looks at these periods, the data do show a climate/solar correlation to one degree or another.
    The problem is that other factors are superimposed upon even this extreme solar variability, so that although the lower global average temperature trend stays in place, there are periods of rising temperatures within the overall lower temperature trend.

    Why? Because within any global temperature trend initiated by solar variability one has to take into account the following factors:

    1. all solar minima differ, as was the case recently with the 1996 solar minimum versus the 2008-2010 solar lull, each affecting the climate in a different manner.

    2. the stage of where earth is in respect to Milankovitch Cycles is either going to work in concert or against the current trend the solar variability is exerting upon the climate. Right now I would say Milankovitch Cycles are on balance acting in concert with minimum prolonged solar activity.

    3. the geomagnetic field can enhance or diminish given solar activity effects upon the climate, a weaker field compounding given solar effects.

    4. land/ocean arrangements and elevations. Right now acting in concert with reduced solar activity, very favorable for cooling.

    5. the ice dynamic/snow cover which when at a critical stage can enhance or diminish the solar impacts. Right now not that favorable.

    6. the rogue terrestrial event such as a super volcanic eruption or the rogue extra terrestrial event such as an impact could turn things upside down in the climate system.

    7. this being very important: the elusive thresholds, which I think are out there, though I do not know what degree of solar extremes is needed to bring them about; there must be solar extremes that will bring them about. This is also probably tied to the initial state of the climate, for example point 5, which is to say just how far the climate system of the earth is from the interglacial/glacial threshold at the time the prolonged minimum solar conditions commence. I think that goes a long way toward determining the climatic effect the given solar variability will have upon the climate.

    8. the normal earth intrinsic climate factors which superimpose themselves upon the big general climatic trend regardless if they are associated directly with given solar activity or not.

    9. Lunar input- which could possibly enhance or diminish given solar activity.

    My best guess based on the historical climatic record is that the solar extremes needed to have a clear climatic impact, and not one that is obscured, have to be slightly below so-called normal 11-year sunspot minimums, but more importantly the duration has to be longer.

    Once this is in place, combined with the points above, the climate result should come about, unless point 6 were to take place.

    Some possibly important secondary effects due to solar activity which in turn can moderate the climate:

    cosmic ray change moderates cloud coverage.

    ozone changes moderate atmospheric circulation

    geological activity moderation.

  18. http://onlinelibrary.wiley.com/doi/10.1029/2009JA014342/abstract

    This study is the reason why sunspots should not be the tell-all indicator for what effects solar activity may or may not have upon the climate of the earth.

    This study also shows that not all solar minima are alike, and why a long period of sub-solar activity is needed to wind down not only the sunspots but also the associated coronal holes, which can keep the solar wind and/or geomagnetic activity high despite otherwise very low solar parameters, which in turn can diminish the given solar activity’s influence upon the climate. The heliosphere, and earth’s position in relation to it, also has to be taken into account.

    This is why I have a list of solar parameters and a given duration of time associated with them in order to give a guideline as to when I think solar activity changes are enough to have a climatic impact.

    This article explains this, and I will send the solar criteria following this post.

  19. THE CRITERIA

    Solar Flux avg. sub 90

    Solar Wind avg. sub 350 km/sec

    AP index avg. sub 5.0

    Cosmic ray counts north of 6500 counts per minute

    Total Solar Irradiance off .15% or more

    EUV light average 0-105 nm sub 100 units (or off 100% or more) and longer UV light emissions around 300 nm off by several percent.

    IMF around 4.0 nT or lower.

    The above solar parameter averages follow several years of generally sub-solar activity, which commenced in year 2005. The key is duration of time: although sunspot activity can diminish, it takes a much longer time for coronal holes to dissipate, which can keep the solar wind elevated, as was the case during the recent solar lull of 2008-2010, which in turn kept solar climatic effects more at bay. Duration of time is therefore key.

    If these average solar parameters are the rule going forward for the remainder of this decade, expect global average temperatures to fall by 0.5C, with the largest global temperature declines occurring over the high latitudes of N.H. land areas.

    The decline in temperatures should begin to take place within six months after the ending of the maximum of solar cycle 24, if sub-solar conditions have been in place for 10+ years, which we have now had. Again, the solar wind will need to get to an average of below 350 km/sec., which can take time because not only do the sunspots have to dissipate but also the coronal holes.
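    The threshold list above can be encoded as a simple checklist; this is just an illustrative sketch (the parameter names and sample averages are invented for the example, not measured data):

```python
# The commenter's solar-quiet thresholds, encoded as predicates.
CRITERIA = {
    "solar_flux": lambda v: v < 90,          # avg. sub 90
    "solar_wind_km_s": lambda v: v < 350,    # avg. sub 350 km/sec
    "ap_index": lambda v: v < 5.0,           # avg. sub 5.0
    "cosmic_ray_cpm": lambda v: v > 6500,    # north of 6500 counts/min
    "tsi_drop_pct": lambda v: v >= 0.15,     # TSI off .15% or more
    "imf_nt": lambda v: v <= 4.0,            # IMF around 4.0 nT or lower
}

def failed_criteria(obs):
    """Return the names of criteria the observed averages do NOT meet."""
    return [name for name, ok in CRITERIA.items() if not ok(obs[name])]

# Invented sample averages, for illustration only.
sample = {"solar_flux": 85, "solar_wind_km_s": 340, "ap_index": 4.2,
          "cosmic_ray_cpm": 6700, "tsi_drop_pct": 0.2, "imf_nt": 3.8}
print(failed_criteria(sample))  # [] -> every threshold met
```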

  20. Interesting interview. ‘CMs are not cooked ….’
    more like a bad recipe with too much of the wrong ingredient

  21. About the “The Case for Teaching Ignorance”

    “Far too often … teachers fail to emphasize how much about a given topic is unknown. … The time has come to “view ignorance as ‘regular’ rather than deviant.”

    This applies to everything educational. History unfortunately gets relived because “teachers fail to emphasize how much about a given topic is unknown”. Relative to history, ignorance sometimes serves a political purpose; therefore, while I generally agree with the educational insight about teaching ignorance, sometimes ignorance is based on deviance, when something is purposefully not taught.

    My case in point can be tied directly to the green movement and the climate debate today. It’s astonishing how in many ways we’re reliving a “new” manifesto today that was created by what became part of the Nazi movement.

    Select paragraphs:

    Fascist Ecology: The “Green Wing” of the Nazi Party and its Historical Antecedents

    “We recognize that separating humanity from nature, from the whole of life, leads to humankind’s own destruction and to the death of nations. Only through a re-integration of humanity into the whole of nature can our people be made stronger. That is the fundamental point of the biological tasks of our age. Humankind alone is no longer the focus of thought, but rather life as a whole . . . This striving toward connectedness with the totality of life, with nature itself, a nature into which we are born, this is the deepest meaning and the true essence of National Socialist thought.” 1

    At the heart of the völkisch temptation was a pathological response to modernity. In the face of the very real dislocations brought on by the triumph of industrial capitalism and national unification, völkisch thinkers preached a return to the land, to the simplicity and wholeness of a life attuned to nature’s purity. The mystical effusiveness of this perverted utopianism was matched by its political vulgarity. While “the Volkish movement aspired to reconstruct the society that was sanctioned by history, rooted in nature, and in communion with the cosmic life spirit,” 8 it pointedly refused to locate the sources of alienation, rootlessness and environmental destruction in social structures, laying the blame instead to rationalism, cosmopolitanism, and urban civilization.

    The chief vehicle for carrying this ideological constellation to prominence was the youth movement, an amorphous phenomenon which played a decisive but highly ambivalent role in shaping German popular culture during the first three tumultuous decades of this century. Also known as the Wandervögel (which translates roughly as ‘wandering free spirits’), the youth movement was a hodge-podge of countercultural elements, blending neo- Romanticism, Eastern philosophies, nature mysticism, hostility to reason, and a strong communal impulse in a confused but no less ardent search for authentic, non-alienated social relations. Their back-to-the-land emphasis spurred a passionate sensitivity to the natural world and the damage it suffered.

    The philosopher Ludwig Klages profoundly influenced the youth movement and particularly shaped their ecological consciousness. He authored a tremendously important essay titled “Man and Earth” for the legendary Meissner gathering of the Wandervögel in 1913. 18 An extraordinarily poignant text and the best known of all Klages’ work, it is not only “one of the very greatest manifestoes of the radical ecopacifist movement in Germany,” 19 but also a classic example of the seductive terminology of reactionary ecology.

    “Man and Earth” anticipated just about all of the themes of the contemporary ecology movement. It decried the accelerating extinction of species, disturbance of global ecosystemic balance, deforestation, destruction of aboriginal peoples and of wild habitats, urban sprawl, and the increasing alienation of people from nature. In emphatic terms it disparaged Christianity, capitalism, economic utilitarianism, hyper consumption and the ideology of ‘progress.’ It even condemned the environmental destructiveness of rampant tourism and the slaughter of whales, and displayed a clear recognition of the planet as an ecological totality.

    The youth movement did not simply fail in its chosen form of protest, it was actively realigned when its members went over to the Nazis by the thousands. Its countercultural energies and its dreams of harmony with nature bore the bitterest fruit.

    1. Ernst Lehmann, Biologischer Wille. Wege und Ziele biologischer Arbeit im neuen Reich, München, 1934, pp. 10-11. Lehmann was a professor of botany who characterized National Socialism as “politically applied biology.”

    See Raymond H. Dominick, The Environmental Movement in Germany: Prophets and Pioneers, 1871-1971, Bloomington, 1992, especially part three, “The Völkisch Temptation.”

    • Interesting commentary on the “green” side of the NSDAP.

      As for climate, two large areas of ignorance are the effects of tropical thunderstorms and what happens to CO2 once it has been placed in the atmosphere. The former has been talked about by Dr. Curry and others; I haven’t seen much written about the latter, other than that the residence time for a CO2 molecule in the atmosphere is somewhere between 5 and 15 years (the latter figure from C-14).

      A difference between a wise person and a simple well educated person is that the wise person is aware of the boundaries between his/her own knowledge and ignorance.

  22. richardcfromnz

    IPCC climate change criteria: radiative forcing “measured at top of atmosphere” (IPCC AR4 FAQ 2.1, Box 1 – “What is radiative forcing?”).

    FAQ 2.1, Box 1: What is Radiative Forcing?

    [A] – “The word radiative arises because these factors change the balance between incoming solar radiation and outgoing infrared radiation within the Earth’s atmosphere. This radiative balance [‘measured at the top of the atmosphere’] controls the Earth’s surface temperature”

    And,

    [B] – “When radiative forcing [‘measured at the top of the atmosphere’] from a factor or group of factors is evaluated as positive, the energy of the Earth-atmosphere system will ultimately increase, leading to a warming of the system. In contrast, for a negative radiative forcing, the energy will ultimately decrease, leading to a cooling of the system”

    https://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html

    # 0.6 W.m-2 TOA imbalance, trendless (Stephens et al 2012, Loeb et al 2012, IPCC AR5 Chap 2).

    Stephens et al (2012) Figure 1

    Loeb et al (2012) Figure 1

    # 1.9 W.m-2 CO2 “forcing”, trending (dF = 5.35 ln(C/Co), C 400ppm Co 280ppm, IPCC Table of Forcings, same as net anthro).

    Game over. CO2 “forcing” is more than treble the TOA imbalance and increasing; CO2 is an ineffective climate forcing.
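    For what it’s worth, the 1.9 W.m-2 figure does follow from the simplified expression quoted above (a sketch using the standard coefficient 5.35):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing dF = 5.35 ln(C/C0), in W/m2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"dF at 400 ppm: {co2_forcing(400.0):.2f} W/m2")  # ~1.91
```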

    # 0.6 imbalance TOA = 0.6 imbalance Sfc

    Sfc imbalance is global average ocean heat accumulation (around 24 W.m-2 tropics, -11 W.m-2 southern ocean). Therefore, TOA imbalance is simply solar SW going straight into the oceanic heat sink and lagged in energy out at Sfc and LW out at TOA.

    No need to invoke CO2 “forcing” and it is impossible to invoke anyway – it doesn’t fit between Sfc and TOA. IPCC AR5 Chapter 10 Detection and Attribution fails to address this.

    Game over.

    • The imbalance is just what it says. It is the part of the forcing that has not yet been compensated by a surface temperature change. The fact that the imbalance is still positive and remaining robust is highly significant. It means all the warming we have had so far still has not caught up to the forcing change which is primarily supplied by CO2. That is, the forcing change has provided all the warming with more in the pipeline. This concept is often missed when people try to attribute the warming.

      • Are you referring to the imbalance that resides inside the error bars? That imbalance?

      • The imbalance demonstrated by the decadal OHC rise rate for example, or assumed by Lewis for another example.

      • richardcfromnz

        Jim D

        >”The imbalance is just what it says. ”

        Well yes, that is why I’ve highlighted it because it (0.6 W.m-2, trendless) directly contradicts current theoretical CO2 “forcing” (1.9 W.m-2, increasing).

        >”It is the part of the forcing that has not yet been compensated by a surface temperature change.”

        It is not just “part”, it is ALL. 0.6 W.m-2 and trendless is ALL of the forcing. There is no other forcing at TOA.

        >”The fact that the imbalance is still positive and remaining robust is highly significant.”

        Robust? Significant? It is minimal and trendless i.e. it is NOT increasing as per CO2 forcing theory. It is also to be expected with solar levels the highest in about 11,000 years (Usoskin 2013).

        >”It means all the warming we have had so far still has not caught up to the forcing change”

        Correct. This is planetary thermal lag via the ocean: change in solar input => oceanic thermal lag => change in atmospheric temperature. It takes decades (look up “relaxation time constant”).

        >”which is primarily supplied by CO2.”

        Incorrect. The only energy input to the system is solar energy, the major fluctuation of which is on a millennial time scale (Grand Max to Grand Min). CO2 provides no energy to the system. CO2 is a passive energy transfer medium (Coolant by definition – refrigerant code R744). Yes, there is an atm-sfc flux but the net flux is sfc-space. There is no surface heating effect from the atm-sfc flux because that would violate the Clausius statement of the Second Law of Thermodynamics:

        “Heat will not of itself move from a cold object [atm] to a hot object [sfc]”

        >”That is, the [solar] forcing change has provided all the warming with more in the pipeline. This concept is often missed when people try to attribute the warming.”

        Yes and no. As per Macias et al (2014) upthread, there is now a negative inflexion in the secular trend of GMST i.e. the ST will peak some time just after 2020 due to planetary thermal lag in the sun-ocean-atmosphere system. So yes, there is still warming to come in the secular trend.

        However, and again as per Macias et al, MDV is going negative (-ve) from 2015 to 2030, so the MDV signal must be SUBTRACTED from the ST signal in order to arrive at the GMST profile. This will offset warming in the ST, i.e. the hiatus will continue for at least another 4 years or so.

        Once both ST and MDV signals of GMST are in negative phase in the 2030s the hiatus will turn to cooling.

      • richardcfromnz

        Should be:

        Once both ST and MDV signals of GMST are in negative phase in the [2020s] the hiatus will turn to cooling.

      • richardc, you seem to not get what forcing really is. The dominant forcing change factor is CO2. The forcing change since preindustrial times is estimated to be over 2 W/m2, of which nearly 2 W/m2 is CO2, perhaps 80% or more. Changes in solar irradiance amount to something around 0.1-0.2 W/m2, plus or minus at various times. Volcanoes occasionally contribute, and aerosols have a masking effect. The warming is the Earth’s attempt to catch up with the still-rising (due to emissions) positive forcing, while the imbalance is positive, showing it has failed to catch up so far even despite all the warming. The Earth’s surface has warmed, which has offset some of the forcing change, but 0.6 W/m2 is the amount that the warming has not canceled yet. There is a formula
        dF = dH/dt + lambda*dT
        When the forcing dF increases, the response is split between dH/dt, a rising heat content, and lambda*dT, a rising surface temperature. dH/dt is also the imbalance term. Some ballpark numbers are dF = 2.3, lambda = 2, dT = 0.85, leaving the imbalance at 0.6 W/m2. As you can see, a rising forcing and rising temperature can leave the imbalance constant, which is no cause for comfort because the temperature is rising.
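        Jim D’s bookkeeping can be checked directly with his own ballpark numbers (a sketch; the values are the comment’s estimates, not measurements):

```python
# Energy balance: dF = dH/dt + lambda * dT,
# so the imbalance dH/dt = dF - lambda * dT.
dF = 2.3    # forcing change since preindustrial, W/m2
lam = 2.0   # climate feedback parameter, W/m2 per K
dT = 0.85   # surface warming so far, K

imbalance = dF - lam * dT
print(f"TOA imbalance: {imbalance:.2f} W/m2")  # ~0.60
```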

      • richardcfromnz

        Jim D

        >”Richard, you seem to not get what forcing really is.”

        I know EXACTLY what forcing really is, Jim. I posted the IPCC definition of it – remember?

        >”The dominant forcing change factor is CO2″

        Yes, but only in THEORY, Jim. It is perfectly clear from the TOA imbalance that the theoretical CO2 “forcing”, which is increasing and currently 1.9 W.m-2 at TOA, is having no effect whatsoever on the trendless 0.6 W.m-2 imbalance at TOA. This is the IPCC’s criterion for a climate change agent – CO2 fails the criterion.

        >”Changes in solar irradiance amount to something around 0.1-0.2 W/m2, plus or minus at various times.”

        Jim, you are looking at a very small picture, the 11-yr solar cycle. It is completely irrelevant and negligible in the big picture – forget it.

        Grand Maximum to Grand Minimum and vice versa is the big picture i.e. relevant to climate change in the 20th – 21st century is solar change from Maunder Minimum (17th century) to Modern Maximum (1958ish – 2005ish) and then solar change from Modern Maximum to mid-21st century (an uncertain recession).

        There has already been 0.3 – 0.4 W.m-2 solar change after 2005 in PMOD:

        http://www.woodfortrees.org/graph/pmod

        By how much the change might be by 2050 can only be estimated from indicative historical scenarios e.g. Gleissberg (least case) Dalton (mid range), Maunder (worst case).

        The worst case scenario is 6 W.m-2 Maunder Minimum to Modern Maximum indicated by Shapiro et al (2011):

        ‘A new approach to long-term reconstruction of the solar irradiance leads to large historical solar forcing’
        A. I. Shapiro, W. Schmutz, E. Rozanov, M. Schoell, M. Haberreiter, A. V. Shapiro, and S. Nyeki
        http://arxiv.org/pdf/1102.4763.pdf

        IPCC AR5 Chapter 8 Radiative Forcing discounts Shapiro et al completely via, and in favour of, Jones, Lockwood and Stott (2012) which is least of least case scenarios. Stott, Jones and Lockwood also happen to be Chapter 10 Detection and Attribution Lead and Contributing authors i.e. the gatekeepers.

        I have Mike Lockwood on email record as stating he “does not understand” Shapiro et al’s methodology. On this basis they threw out Shapiro et al anyway.

      • You have to scale the solar irradiance by a quarter and then by 0.7 to account for its effect at Earth’s surface. That brings it into the range I said. As for 6 W/m2, it looks plain wrong. 0.6, maybe, as an extreme upper limit.

      • JimD, “You have to scale the solar irradiance by a quarter and then by 0.7 to account for its effect at Earth’s surface. That brings it into the range I said. As for 6 W/m2, it looks plain wrong. 0.6, maybe, as an extreme upper limit.”

        Actually, you scale solar by 4 to get a rough estimate. Average power is tsi/pi for another rough estimate and you can closer estimate with more computer time. Incoming solar though is absorbed at various layers including about 10% subsurface which would have different degrees of amplification. We all should be well beyond the simplified BS by now.

      • richardcfromnz

        Jim D

        >”As you can see, a rising forcing and rising temperature can leave the imbalance constant, which is no cause for comfort because the temperature is rising.”

        The “rising forcing” you allude to, CO2, is only theoretical. It is not actual. If it were actual, the TOA energy balance would move synchronously and commensurately with CO2 forcing – but it doesn’t. This is really the inferred man-made climate change hypothesis falsified.

        But as you appear to be disagreeing with the IPCC on what causes temperature rise (they say the energy imbalance at TOA), I suggest you take your disagreement to them Jim – not to me.

      • richardc, you can check with the IPCC. When the forcing rises, it is manifested as a combination of a surface temperature rise and a heat content rate of change. The temperature rise offsets some of the forcing rise, and that is why the imbalance is not as high as all the forcing change since 1750. This part is understood by most of the skeptics here.

      • captd, the net solar forcing at the Earth goes as 0.25*(1-A)*TSI where A=0.3=albedo. I think Arrhenius knew this.

      • richardcfromnz

        Jim D

        >”As for 6 W/m2, it looks plain wrong. 0.6, maybe, as an extreme upper limit.”

        No Jim, read the paper (I linked it for you). It is 6 W.m-2. Judith Curry has highlighted this IPCC omission in previous posts. But just for you here is the quote from the paper (page 4 pdf Shapiro et al 2011):

        The difference between the current and reconstructed TSI during the Maunder minimum is about 6 ± 3 W/m2 (equivalent to a solar forcing of F_P−M ∼ 1.0 ± 0.5 W/m2) which is substantially larger than recent estimates (see Sect. 1).

        Remember solar forcing is real power that does work (think solar power); LWIR does not do work (no-one bothers to try harnessing the 400 W.m-2 DLR 24/7 in the tropics). If there were to be a 6 W.m-2 solar power reduction in the not too distant future, the earth would experience some radical cooling. Extreme worst case of course, but being predicted by the likes of Dr Habibullo Abdussamatov.

      • JimD, “captd, the net solar forcing at the Earth goes as 0.25*(1-A)*TSI where A=0.3=albedo. I think Arrhenius knew this.”

        That is what Arrhenius used as a simplification. Arrhenius and models also tend to simplify by not including subsurface absorption. There are lots and lots of simplifying assumptions required for a problem of this scale. That would be the reason for spending money on complex models.

        Modeling though on this complex a scale has issues. You can invest in a few very complex models then try to learn from where they miss. You can also invest in lots and lots of simple models that don’t all use the same simplifying assumptions and allow them to converge on a solution. Call it the Law of Big Model Numbers.

        This has been discussed a few times by our hostess.

      • richardcfromnz

        Jim D

        >”richardc, you can check with the IPCC.”

        I did and you are disagreeing with them Jim (TOA balance “controls” temperature – IPCC FAQ 2.1, Box 1: What is Radiative Forcing?). As I said previously, take your disagreement with the IPCC to the IPCC – not to me.

        It is not for me to be the mediator between you and the IPCC.

      • Arrhenius btw did revise his original estimate to 1.6 (2.1 with water vapor) which agrees well with Callendar which agrees well with Manabe, which agrees well with current estimate via energy budget analysis. That would be 4 different approaches converging on a common value.

      • captd, 0.25 comes from geometry and 0.7 comes from albedo, so you have to decide which of those you don’t believe and make an argument based on that. This is how we get 255 K as an equilibrium temperature, remember, so if you have a different value, have at it, because you have not stated this yet as far as I know.

      • richardc, is your confusion that you don’t know the difference between the imbalance and radiative forcing? I have tried to explain this, but I think you are not distinguishing these at all.

      • JimD, “captd, 0.25 comes from geometry and 0.7 comes from albedo, so you have to decide which of those you don’t believe and make an argument based on that.”

        tsi/pi comes from engineering solar ponds which have to work. Then you don’t have to assume 255K. See how that works? Instead of assuming a zero heat capacity “surface” you start with something just a tad more realistic. Of course you need one serious kick butt ocean model to move forward, but what the heck.

      • richardcfromnz quotes Shapiro et al. 2011: “The difference between the current and reconstructed TSI during the Maunder minimum is about 6 ± 3 W/m2 (equivalent to a solar forcing of FP−M∼ 1.0±0.5W/m2) which is substantially larger than recent estimates (see Sect. 1).”

        So, they are agreeing with Jim D and disagreeing with you. The TSI variation of about 6W/m2 yields a solar forcing of about 1W/m2. How so? Because 6*0.25*0.7=1.05. The factor of four is the ratio of the Earth cross section to its surface. This is the average solar power received over the whole Earth surface, not just the one small part of it directly facing the Sun at any given time.

      • JimD, “captd, the net solar forcing at the Earth goes as 0.25*(1-A)*TSI where A=0.3=albedo. I think Arrhenius knew this.”

        No sorry, there weren’t any satellites around to measure TSI or albedo in 1896 when Arrhenius wrote his flawed paper.

        Two biggest errors of Arrhenius:
        1) Falsely assuming 15 micron LWIR (equivalent to a “partial blackbody” emitting temperature of 193K) from CO2 can do any work on the much warmer 288K Earth black body (as richardcfromnz has pointed out above)
        2) Not even considering convection and the fact that convection dominates radiative-convective equilibrium by a factor of about 10 times.

      • captd, Earth resembles a semi-reflective rotating sphere illuminated by a single distant point energy source more than a solar pond. It’s plain physics.

      • richardcfromnz

        Pierre-Normand Houle

        >richardcfromnz quotes Shapiro et al. 2011: “The difference between the current and reconstructed TSI during the Maunder minimum is about 6 ± 3 W/m2 (equivalent to a solar forcing of FP−M∼ 1.0±0.5W/m2) which is substantially larger than recent estimates (see Sect. 1).”

        >”So, they are agreeing with Jim D and disagreeing with you.”

        What ARE you on about Pierre-Normand? The Shapiro et al solar change estimate is 6 W.m-2 as above, i.e. I correctly represented their estimate. Jim D disagreed, saying, quote:

        ”As for 6 W/m2, it looks plain wrong. 0.6, maybe, as an extreme upper limit.”

        Well, 6 W.m-2 was plain right. Jim D is probably confusing change (6 W.m-2) with forcing (1.0 W.m-2).

        I should point out that a constant (over about 50 years, 1958 – 2008) 1 W.m-2 solar forcing is very close to the current trendless TOA 0.6 W.m-2 imbalance 2000 – 2010. Certainly better than the theoretical and increasing 1.9 W.m-2 CO2 “forcing”.

      • JimD, “captd, Earth resembles a semi-reflective rotating sphere illuminated by a single distant point energy source more than a solar pond. It’s plain physics.”

        It is too plain physics. With 70% of the surface being water you would start at that surface using the appropriate thermodynamics. If you have a plain, as in ideal zero heat capacity surface, you would use tsi/4. Both are valid for basic approximations, one more reliable for a semi-transparent liquid surface, i.e. solar pond. Neither would be exact. When you compare the two you have a useful range.

        Thermodynamics has the luxury/curse of allowing various frames of reference. The more frames you use, i.e. more models, the more independent estimates you have. It isn’t really that complicated jimmy.

      • captd, you probably are surprised then that the earth’s outgoing IR really does match the 255 K equivalent temperature that is derived from the simple physics I mentioned.
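        The 255 K figure can be checked with the same two numbers plus the Stefan–Boltzmann law; a quick sketch, assuming a TSI of roughly 1361 W/m2:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
TSI = 1361.0            # approximate total solar irradiance, W/m2
ALBEDO = 0.3            # round value for Earth's albedo

absorbed = 0.25 * (1.0 - ALBEDO) * TSI  # ~238 W/m2 absorbed, global mean
t_eff = (absorbed / SIGMA) ** 0.25      # effective emission temperature, K
print(round(t_eff))  # 255
```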

      • richardc, that 1 W/m2 solar change is twice anything anyone else has suggested, but still only 25% the forcing effect of doubling CO2.

      • JimD, “captd, you probably are surprised then that the earth’s outgoing IR really does match the 255 K equivalent temperature that is derived from the simple physics I mentioned.”

        No, I am not surprised at all that with an average ~240 W.m-2 in/out you could create an idealized surface, the Effective Radiant Layer, that would be approximately 255K. You are not measuring a real surface at 255K though, you are “estimating” a surface at 255K. There is a more real surface at, wait for it.. da surface, where you can fairly accurately measure a temperature and then calculate an effective energy. Neither of the temp-to-energy conversions is 100% accurate. The TOA energy though, thanks to space age technology and some serious math, does currently provide the direct energy measurement with the least margin of error, and the “surface” measurements, both at the surface and via satellite, provide the best temperature estimate. Using two models allows you to take the greatest advantage of what ya got.

        Now that is how your basic dumb redneck engineer looks at a complex problem.

      • Jim D | August 29, 2015 at 10:23 pm |
        You have to scale the solar irradiance by a quarter and then by 0.7 to account for its effect at Earth’s surface. That brings it into the range I said. As for 6 W/m2, it looks plain wrong. 0.6, maybe, as an extreme upper limit.

        This statement is nonsense.

        Water vapor is the primary greenhouse gas and at high humidity CO2 is pretty much irrelevant.

        Solar irradiance penetrates the ocean to depths of up to 100 meters. Downwelling IR just increases latent heat loss.

        With the equatorial ocean in the 30°C range there is a high multiplier for tiny changes in external forcing. There is perhaps a 3x Water Vapor multiplier for real heating as opposed to CO2 “virtual” heating. Further, the atmosphere absorbs about half as much incoming radiation as the surface does.

        The 6 W/m2 may have 3 W/m2 effect at the surface.

        What doesn’t get discussed much but must come up in the literature is that absolute humidity is higher during the day, and any place there is vegetation, CO2 drops to 200 PPM at the peak of the day. CO2 wouldn’t be expected to have a lot of direct daytime effect on temperatures – it is mostly indirect by reducing nighttime heat loss.

        CO2 forcing probably has only 1/4 the value of incoming radiation on a watt-for-watt basis in affecting temperature.

        Perhaps someone could point out some literature that addresses this…

        The real number is probably closer to 3 W/m2.

      • Richardcfromnz, TSI is defined thus: “Total solar irradiance (TSI), is a measure of the solar radiative power per unit area normal to the rays, incident on the Earth’s upper atmosphere.”

        In order to convert a change in TSI into a forcing change, you must account for the fact that the average power per unit area received by the Earth at the TOA is 4 times smaller than the power received across a plane perpendicular to the incident radiation (as TSI is defined). You must also account for the fact that some of this energy (about 30%) is reflected back to space by clouds, snow, etc. You can’t just blissfully ignore those two factors. How do you account for them?

      • richardcfromnz

        Jim D

        >”richardc, is your confusion that you don’t know the difference between the imbalance and radiative forcing?”

        No, no confusion whatsoever. I quoted the IPCC definitions in the first instance but here they are again:

        FAQ 2.1, Box 1: What is Radiative Forcing?

        [A] – “The word radiative arises because these factors change the balance between incoming solar radiation and outgoing infrared radiation within the Earth’s atmosphere. This radiative balance [‘measured at the top of the atmosphere’] controls the Earth’s surface temperature”

        And,

        [B] – “When radiative forcing [‘measured at the top of the atmosphere’] from a factor or group of factors is evaluated as positive, the energy of the Earth-atmosphere system will ultimately increase, leading to a warming of the system. In contrast, for a negative radiative forcing, the energy will ultimately decrease, leading to a cooling of the system”

        https://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html

        Here’s the important part you disagree with Jim:

        “Radiative forcing is a measure of how the energy balance of the Earth-atmosphere system is influenced when factors that affect climate are altered. The word radiative arises because these factors change the balance between incoming solar radiation and outgoing infrared radiation within the Earth’s atmosphere. This radiative balance controls the Earth’s surface temperature.

        Get that? The “radiative balance controls the Earth’s surface temperature”, the surface temperature does NOT control the radiative balance.

        Real effective solar change (and forcing) on a millennial time scale is the primary altered “factor that affect[s ] climate”. This millennial solar signal is evident in GMST proxies over the last 2500 years (Ludecke, Weiss, and Hempelmann 2015). CO2 isn’t.

        Theoretical CO2 forcing, supposedly, only kicks in after 1955. Except the CO2-forced models overshoot GMST. They are wildly off the mark in 2015, let alone 2050 or 2100. Worse, these models are MDV-neutral so when the MDV signal is added in the resulting profile is laughable.

      • richardcfromnz

        Jim D

        >”richardc, that 1 W/m2 solar change is twice anything anyone else has suggested,”

        No, the 1 W.m-2 is NOT the “change” Jim. The “change” is 6 W.m-2. READ what Shapiro et al say:

        “The difference between the current and reconstructed TSI during the Maunder minimum is about 6 ± 3 W/m2

        “Difference” = change.

        >”but still only 25% the forcing effect of doubling CO2.”

        Ok, now you are referring to the 6 W.m-2 change converted to forcing of 1 W.m-2 as Shapiro et al state:

        “6 ± 3 W/m2 (equivalent to a solar forcing of FP−M∼ 1.0±0.5W/m2) which is substantially larger than recent estimates”

        This forcing, if valid, is effective RIGHT NOW. The TOA imbalance is 0.6 W.m-2 compared to Shapiro et al’s 1 W.m-2 which indicates that Shapiro et al are out by 0.4 W.m-2 – not a bad estimate then.

        A comparison to theoretical CO2 forcing is as of RIGHT NOW, not the 2xCO2 scenario. So we have:

        0.6 W.m-2 – TOA imbalance 2000 – 2010, trendless
        1.0 W.m-2 – Shapiro et al solar forcing TOA, trendless
        1.9 W.m-2 – Theoretical CO2 forcing TOA, increasing.

        I say solar forcing is “trendless” in respect to the 50 year Grand Maximum 1958ish – 2008ish.

        CO2 is now increasingly out of contention as the TOA energy balance forcing agent.

      • Richardcfromnz, the FAQ definition is simplified for the layman reader. Section 2.2 in AR4, WG1 is more precise: “The definition of RF [Radiative Forcing] from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’. ”

        It is defined thus because an instantaneous variation in radiative forcing causes an imbalance that eventually yields a compensating variation in surface and tropospheric temperature that tends to cancel this imbalance (the so-called Planck response). But this subsequent reduction of the imbalance isn’t itself a variation in the forcing. The forcing change governs the ultimate adjustment in surface and tropospheric temperature that will cancel the initial imbalance that it causes. If the imbalance itself were to govern the surface temperature, as you seem to understand the concept of forcing to imply, then you would get the absurd result that the climate system could accumulate or lose energy forever (as a result of some fixed imbalance) without this producing any change in surface temperature.

      • richardcfromnz

        Pierre-Normand Houle

        >”In order to convert a change in TSI into a forcing change,”

        Which is EXACTLY what Shapiro et al do. Quoting from their paper:

        “The difference between the current and reconstructed TSI during the Maunder minimum is about 6 ± 3 W/m2 (equivalent to a solar forcing of FP−M∼ 1.0±0.5W/m2)

        They convert 6 W.m-2 change to 1 W.m-2 forcing.

      • Richardcfromnz, The point is that the “worst case scenario” of a Maunder like minimum, that you quoted from Shapiro et al, is a *TSI* change of 6W/m2, which is equivalent to a *forcing* change of 1W/m2. This is about half the CO2 forcing change that we have had so far, only part of which has been offset by surface warming (the Planck response), leaving a residual TOA imbalance of about 0.6W/m2. Comparing this residual imbalance with the TSI change associated with a Maunder like minimum is very much an apple to orange comparison.

      • richardcfromnz

        Pierre-Normand Houle | August 30, 2015 at 12:53 am |

        >Richardcfromnz, the FAQ definition is simplified for the layman reader. Section 2.2. in WG4, WG1 is more precise: “The definition of RF [Radiative Forcing] from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’. ”

        No argument with that Pierre-Normand. I agree, a top of troposphere phenomenon. But according to the IPCC, forcing and energy balance/imbalance is, quote:

        ‘measured at the top of the atmosphere’

        Note that both Stephens et al and Loeb et al, as cited by AR5 Chapter 2, state the energy balance/imbalance/budget in terms of TOA.

        >”If the imbalance itself were to govern the surface temperature, as you seem to understand the concept of forcing to imply,”

        Well, that is EXACTLY what the IPCC state:

        “Radiative forcing is a measure of how the energy balance of the Earth-atmosphere system is influenced when factors that affect climate are altered. The word radiative arises because these factors change the balance between incoming solar radiation and outgoing infrared radiation within the Earth’s atmosphere. This radiative balance controls the Earth’s surface temperature.”

        Get that? The “radiative balance controls the Earth’s surface temperature”, the surface temperature does NOT control the radiative balance.

        Clearly, I understand the concept of forcing EXACTLY as per IPCC definition. As for Jim D, if you disagree with the IPCC Pierre-Normand, take your disagreement to the IPCC – not to me.

      • “Get that? The “radiative balance controls the Earth’s surface temperature”, the surface temperature does NOT control the radiative balance.

        Clearly, I understand the concept of forcing EXACTLY as per IPCC definition. As for Jim D, if you disagree with the IPCC Pierre-Normand, take your disagreement to the IPCC – not to me.”

        I am questioning your interpretation of the IPCC FAQ statement, which is inconsistent with the more precise technical definition from section 2.2.

        In the technical definition, RF is defined as an instantaneous *change* in TOA balance (more precisely, at the tropopause), that the stratosphere has adjusted to, but the surface and troposphere haven’t yet. This imbalance eventually yields a change in surface temperature (which may be very slow, due to the ocean’s thermal inertia) that offsets the initial TOA imbalance that has been caused by the forcing. This adjustment doesn’t change the forcing, but it does reduce the imbalance. Hence, you can’t equate the evolving imbalance with the (unchanging) initial forcing change that caused the surface and troposphere temperature to slowly change as a result. When a forcing is applied (an initial change in TOA balance), it doesn’t change as a result of the surface and troposphere adjustment. The forcing change is definitionally pegged to the initial TOA imbalance that it causes, and not to the subsequent adjustment as the surface temperature adjusts.

      • richardcfromnz

        Pierre-Normand Houle

        >”Richardcfromnz, The point is that the “worst case scenario” of a Maunder like minimum, that you quoted from Shapiro et al, is a *TSI* change of 6W/m2, which is equivalent to a *forcing* change of 1W/m2. This is about half the CO2 forcing change that we have had so far, only part of which has been offset by surface warming (the Planck response), leaving a residual TOA imbalance of about 0.6W/m2. Comparing this residual imbalance with the TSI change associated with a Maunder like minimum is very much an apple to orange comparison.”

        Rubbish. 0.6 W.m-2 is the primary criterion; everything else is in respect to this as per the IPCC definition (“This radiative balance controls the Earth’s surface temperature”). And the comparison is all on exactly the same terms – IPCC TOA forcing criteria this century:

        0.6 W.m-2 – TOA imbalance 2000 – 2010, trendless
        1.0 W.m-2 – Shapiro et al solar forcing TOA, trendless
        1.9 W.m-2 – Theoretical CO2 forcing TOA, increasing.

        I say solar forcing is “trendless” in respect to the 50 year Grand Maximum 1958ish – 2008ish. OK, Shapiro et al are 0.4 W.m-2 out but that is a very close estimate.

        CO2 is now increasingly out of contention as the TOA energy balance forcing agent. Theoretical CO2 “forcing” is instantaneous speed-of-light i.e. there is no “warming in the pipeline”. If the theoretical 1.9 W.m-2 were to have any effect on the TOA energy balance it would be instantaneous right now – it isn’t.

        As already stated up in the thread header, the TOA imbalance has already occurred at the surface as a result of solar change and oceanic thermal lag (“a few decades” – Hansen et al 1981):

        “# 0.6 imbalance TOA = 0.6 imbalance Sfc

        Sfc imbalance is global average ocean heat accumulation (around 24 W.m-2 tropics, -11 W.m-2 southern ocean). Therefore, TOA imbalance is simply solar SW going straight into the oceanic heat sink and lagged in energy out at Sfc and LW out at TOA.

        No need to invoke CO2 “forcing” and it is impossible to invoke anyway – it doesn’t fit between Sfc and TOA. IPCC AR5 Chapter 10 Detection and Attribution fails to address this.”

        Game over.

      • richardcfromnz

        Pierre-Normand Houle

        >”In the technical definition, RF is defined as an instantaneous *change* in TOA balance (more precisely, at the tropopause), that the stratosphere has adjusted to, but the surface and troposphere haven’t yet. This imbalance eventually yields a change in surface temperature (which may be very slow, due to the ocean’s thermal inertia) that offsets the initial TOA imbalance that has been caused by the forcing.”

        You are missing a very important distinction Pierre-Normand. The atmospheric response to solar change is via the ocean in the sun => ocean => atmosphere system. So although the solar radiative change/forcing is instantaneous speed-of-light at TOA, the atmospheric response is delayed “6” and “10 – 100 years” (Dr Kevin Trenberth essay, ‘The Role of the Oceans in Climate’). We are talking “relaxation time constant” now in respect to the ocean.

        Obviously there are “modes” of response e.g. Very fast (less than 5 yrs), Fast (5 – 10 years), Slow (10 – 100 years), Very Slow (100+ years). Abdussamatov calculates planetary land+ocean 8 – 20 years centred on 14 with the ocean centred on about 20. In the SH Zhao and Feng (2014) found solar-temperature lag of 30 – 40 years over millennia in Antarctica, and Tasmanian proxies are about the same. This makes sense because there is more ocean in the SH.

        Consequently the peak of the solar Grand Maximum in 1986 say (non-contentious I think) cannot be compared to 1986 GMST as some IPCC solar specialists are apt to do (e.g. Joanna Haigh), and as are some bloggers (e.g. John Cook’s Skeptical Science). If you add 35 years (mid “30 – 40” years) to 1986 you get 2021.

        Theoretical CO2 “forcing”, on the other hand, is instantaneous speed-of-light between surface and TOA (actually top of tropopause, TOT). There is no oceanic lag as there is for solar forcing.

        Except the 0.6 W.m-2 TOA/TOT imbalance has already occurred at the surface. The surface imbalance is also 0.6 W.m-2, i.e. solar energy is accumulating in the ocean. CO2 cannot and does not play any further part between surface and TOA. The forcing has already occurred.

      • Richardcfromnz wrote: “Theoretical CO2 “forcing” is instantaneous speed-of-light i.e. there is no “warming in the pipeline”. If the theoretical 1.9 W.m-2 were to have any effect on the TOA energy balance it would be instantaneous right now – it isn’t.”

        Richard, the radiative forcing, RF, is *defined* as the amount of TOA imbalance that results from the initial change (e.g. increase in solar TSI, CO2, volcanic aerosols, etc.) before there has been any surface and troposphere warming or cooling. When the forcing causes such an imbalance, the surface temperature will later change as a result of the progressive accumulation, or loss, of energy in the climate system. This surface and troposphere temperature change will itself cause a change in the amount of upwelling longwave radiation to space. This change is called the Planck response, and it doesn’t count as a change in forcing, since the forcing is *defined* as the initial TOA imbalance caused by the primary change in TSI, or CO2, etc. The Planck response, though, causes the TOA imbalance to diminish. This is a response to the forcing change and doesn’t itself constitute a forcing change. The normal response to a step change in forcing is for the TOA imbalance to rapidly increase to an amount that matches the forcing change (by definition of RF) and then slowly tend to diminish towards zero as a result of the compensating Planck response.

      • So, to conclude, the 1.9W/m2 forcing attributed to the increase in atmospheric CO2 that occurred over the last couple centuries is *defined* as the TOA imbalance that would have resulted if this CO2 concentration increase had occurred all at once and before there had been any change in surface and tropospheric temperature as a result of this initial imbalance. (And likewise for the 1W/m2 forcing change that would result from a 6W/m2 TSI variation.) It certainly doesn’t follow from the definition of the radiative forcing that the TOA imbalance will not diminish when the temperature progressively increases as a result of the initial forcing change (and feedbacks). It most certainly will, because of the law of conservation of energy and the fact that the ocean’s thermal capacity isn’t infinite.
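        The step-change behavior described in these two comments (the imbalance jumps to the forcing value, then decays toward zero as the surface warms) can be sketched with a one-box energy-balance model. The heat capacity and feedback parameter below are illustrative round numbers, not fitted values:

```python
C = 8.0    # effective heat capacity, W yr m^-2 K^-1 (illustrative)
lam = 2.0  # feedback (Planck response) parameter, W m^-2 K^-1 (illustrative)
F = 3.7    # step change in forcing, W/m2 (roughly a CO2 doubling)

dt = 0.1   # time step, years
T = 0.0    # surface temperature anomaly, K
for _ in range(int(50 / dt)):  # integrate C*dT/dt = F - lam*T for 50 years
    imbalance = F - lam * T    # TOA imbalance starts equal to F...
    T += dt * imbalance / C    # ...and shrinks as the surface warms

# T approaches equilibrium F/lam = 1.85 K while the imbalance decays to 0
print(round(T, 2), round(F - lam * T, 3))
```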

      • I see this is still going on, and richardc still does not understand the relation of forcing to imbalance to surface temperature rise. I presented an equation at the beginning for these three and I will leave it at that until richardc allows for or denies this equation in his argument.
        dF = dH/dt + lambda*dT
        Forcing change = Imbalance + Surface temperature change factor
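        Plugging the thread’s own round numbers into that identity shows how the three terms hang together; the feedback value here is an assumed illustration, not a measurement:

```python
LAMBDA = 1.9          # assumed feedback parameter, W m^-2 K^-1 (illustrative)
forcing_change = 1.9  # W/m2, CO2 forcing change quoted in the thread
delta_t = 0.7         # K, surface warming quoted in the thread

# dF = dH/dt + lambda*dT  rearranged for the imbalance dH/dt:
imbalance = forcing_change - LAMBDA * delta_t
print(round(imbalance, 2))  # 0.57 W/m2, close to the ~0.6 W/m2 observed
```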

      • Regarding forcing changes since 1950. The CO2 change is about 1.3-1.4 W/m2, the solar change if any is downwards as we are now in a lull compared to a mid-century high. So we have had 0.7 C of warming and one major forcing term since 1950. You be the judge.

      • JimD, I believe richard understands fairly well, “You are missing a very important distinction Pierre-Normand. The atmospheric response to solar change is via the ocean in the sun => ocean => atmosphere system. So although the solar radiative change/forcing is instantaneous speed-of-light at TOA, the atmospheric response is delayed “6” and “10 – 100 years” (Dr Kevin Trenberth essay, ‘The Role of the Oceans in Climate’). We are talking “relaxation time constant” now in respect to the ocean.”

        The real issue with the paper he referenced is the TSI estimated change. Svalgaard et al have determined that the solar constant is likely a bit more constant than previously believed. The issue with the impact lag of forcing generally boils down to a “real” surface with heat capacity versus an ideal surface with zero heat capacity. That “surface” is a semi-transparent liquid with a very large, relative to the atmosphere, heat capacity. Imagine that? Some would consider using the average energy likely to be absorbed by that surface based on sound engineering practice instead of using a$$trophysical assumptions.

      • Forcing change = Imbalance + Surface Temperature change

        Except the sign is wrong.

        Positive surface temperature change imparts negative forcing.

      • oops, your error, my sign is correct. Forcing is offset by warming, not amplified by it.

      • PA, you may be referring to the water vapor feedback which doesn’t count as part of the forcing but does affect the value of lambda. With fast feedbacks lambda is about 2, possibly decreasing to 1 with slower feedbacks, while without, it would be nearer 3 W/m2/K.
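        Those lambda values map directly onto an equilibrium warming per CO2 doubling via dT = F_2x / lambda; a quick illustration of the arithmetic, using the canonical ~3.7 W/m2 doubling forcing:

```python
F_2X = 3.7  # W/m2, canonical forcing for a CO2 doubling

# Each feedback parameter lambda implies an equilibrium warming F_2X / lambda:
for lam in (3.0, 2.0, 1.0):  # roughly no-feedback, fast, and slower feedbacks
    print(f"lambda = {lam} W/m2/K -> dT = {F_2X / lam:.2f} K")
```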

      • Jim D | August 30, 2015 at 12:31 pm |
        PA, you may be referring to the water vapor feedback which doesn’t count as part of the forcing but does affect the value of lambda. With fast feedbacks lambda is about 2, possibly decreasing to 1 with slower feedbacks, while without, it would be nearer 3 W/m2/K.

        http://www.nature.com/nature/journal/v519/n7543/full/nature14240.html?WT.ec_id=NATURE-20150319

        Well… this is fine and good, but we have measured the IR forcing at the top and bottom of the temperate range and it is a pretty uniform 22 PPM = 0.2 W/m2. It is lower in the summertime and higher in the winter – because plants reduce the CO2 level in the summer.

        That is 1.05 W/m2 since 1900. And we know the effect is limited to the surface because the TLT trend from satellites (1k+ layer of the atmosphere) is flat.

        You really have to refute the empirical evidence. Models and theory are irrelevant in the face of measured data.

        The 50+ W/m2 difference between asphalt and grass/trees artificially amplifies the effect near urban areas because there is more emitted IR, less latent heat loss, and higher CO2 (no vegetation). The ALW (anthropogenic local warming) signal is as large or larger than AGW and this seems to confuse scientists. It is known (because as mentioned above, it was measured) that 1/3 or less of the forcing is GHG caused AGW. That means that the majority of the forcing increase is due to other influences.

        That there are still scientists that think the majority of warming is due to CO2 is beyond embarrassing when the magnitude of CO2 forcing has been empirically measured.

      • PA, you seem unaware that those point surface measurements actually backed up the radiative models used in AGW for the outgoing longwave effect of GHGs. There were no surprises. Anyway, good that there is some consensus stuff you do trust. It’s a kind of progress. Read those papers. Their message is confirmatory rather than anything unusual.

      • Jim D | August 30, 2015 at 4:28 pm |
        … Anyway, good that there is some consensus stuff you do trust. It’s a kind of progress. Read those papers. Their message is confirmatory rather than anything unusual.

        Well, more CO2 increases EM latency in the atmosphere; the longer EM radiation is buffered in the atmosphere, the warmer it gets.

        Where we disagree is the magnitude of the effect and the response.

        Unless AGW manages to increase temperatures 2°C above today’s, any warming will still be in the “beneficial” range.

      • richardcfromnz

        Jim D

        >”Regarding forcing changes since 1950. The CO2 change is about 1.3-1.4 W/m2, the solar change if any is downwards as we are now in a lull compared to a mid-century high. So we have had 0.7 C of warming and one major forcing term since 1950. You be the judge.”

        Solar change from peak only began after 2005 i.e. 1986 – 2005 was at the same high level which is an 11,000 year high according to Usoskin. PMOD shows this:

        PMOD
        http://www.woodfortrees.org/plot/pmod

        0.3 – 0.4 W.m-2 change only AFTER 2005.

        Then you neglect planetary thermal lag i.e. the atmospheric response to solar change via the ocean (predominantly). I’ve already gone over this with Pierre-Normand but here it is again for your benefit Jim.

        The atmospheric response to solar change is via the ocean in the sun => ocean => atmosphere system. So although the solar radiative change/forcing is instantaneous speed-of-light at TOA, the atmospheric response is delayed “6” and “10 – 100 years” (Dr Kevin Trenberth essay, ‘The Role of the Oceans in Climate’). We are talking “relaxation time constant” now in respect to the ocean.

        Obviously there are “modes” of response e.g. Very fast (less than 5 yrs), Fast (5 – 10 years), Slow (10 – 100 years), Very Slow (100+ years). Abdussamatov calculates planetary land+ocean 8 – 20 years centred on 14 with the ocean centred on about 20. In the SH Zhao and Feng (2014) found solar-temperature lag of 30 – 40 years over millennia in Antarctica, and Tasmanian proxies are about the same. This makes sense because there is more ocean in the SH.

        Consequently the peak of the solar Grand Maximum in 1986 say (non-contentious I think) cannot be compared to 1986 GMST as some IPCC solar specialists are apt to do (e.g. Joanna Haigh), and as are some bloggers (e.g. John Cook’s Skeptical Science). If you add 35 years (mid “30 – 40” years) to 1986 you get 2021.

        I can now add, “and as Jim D erroneously compares”.

        Fact remains, inescapably, undeniably, unequivocally, the IPCC’s criterion for climate change is the TOA energy imbalance. Therefore, a valid agent of climate change is one which moves the balance to its observed imbalance:

        0.6 W.m-2 – TOA imbalance 2000 – 2010, trendless
        1.0 W.m-2 – Shapiro et al solar forcing TOA, trendless
        1.9 W.m-2 – Theoretical CO2 forcing TOA, increasing.

        You be the judge.

      • richardc, the majority of the response to forcing changes occurs within the first decade. How much has the sun changed since 1950 and in what direction? CO2 has added 1.3-1.4 W/m2 forcing since 1950, other GHGs maybe another few tenths. The temperature has changed by 0.7 C. The temperature responds to forcing changes which may be the part you still don’t follow. Which forcing change is most likely responsible? GHGs, of course.
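
A back-of-envelope check of Jim D’s numbers, sketched in Python. The assumptions below are mine, not the commenter’s: all 0.7 C of warming since 1950 is treated as a forced response to roughly 1.5 W/m2 of GHG forcing, ignoring aerosols, ocean lag, and natural variability, and 3.7 W/m2 is taken as the canonical forcing for a CO2 doubling.

```python
# Implied climate response from the numbers in the comment above.
delta_T = 0.7          # K, observed warming since 1950
delta_F = 1.4 + 0.1    # W/m2, CO2 plus "a few tenths" of other GHGs (low end)
F_2xCO2 = 3.7          # W/m2, canonical forcing for doubled CO2

lam = delta_T / delta_F        # K per W/m2, assuming all warming is forced
tcr_like = lam * F_2xCO2       # implied warming per CO2 doubling

print(f"implied response: {lam:.2f} K per W/m2")          # ~0.47
print(f"implied warming per doubling: {tcr_like:.1f} K")  # ~1.7
```

Whether that ~1.7 K per doubling is worrying or “beneficial” is, of course, exactly where the commenters in this thread part ways.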

  23. This post has elicited a fascinating comment thread on the origins of the pause meme. A couple of belated observations specifically on that above.
    1. The original significance was not whether models produce pauses via ‘internal variability’. They do. It was for how long such a ‘real’ pause would have to continue to say climate models were flawed (falsified). The original estimate (15 years) was provided by NOAA in its ‘State of the Climate 2008’, in BAMS 90, August 2009, starting page 23. Later changed to 17 years by Santer’s paper in J. Geophys. Res. 116: D22105 in 2011. We also know Trenberth was worried about it from his ‘travesty’ email on Oct 12 2009 from Climategate 1.
    2. Steve McIntyre’s early attempt to publish on this subject was shot down by the consensus gatekeepers, also just as promised in Climategate 1. What more graphic evidence of peer review bias could there be? You have a temperature record; you statistically analyze it. How could there possibly be controversy, except in a politicized climate science world.
    Rereading the upthread commentary vividly illustrates much that is ‘wrong’ with the way ‘climate science’ has been practiced. Worth a much longer and more humorous dissection than this brief comment.

    • Perhaps a brief answer to pete’s question at the Auditor’s:

      So the error bars on the model estimates represent confidence intervals, not prediction intervals? Isn’t this the same mistake Douglass et al made?

      http://climateaudit.org/2010/08/09/mckitrick-et-al-2010-accepted-by-atmos-sci-lett/#comment-238526

      might be in order.

      It’s been five years now and nobody responded to pete.

      • Many thanks for that link, Willard. I went and read it. Nobody replied to “Pete” because ‘Pete’s’ comment was (a) stupid, and (b) already answered in the main part of the McIntyre guest post. ‘Pete’ only revealed his (pseudonymous) statistical ignorance via his comment. Not worthy of any response from any stats literate person, (except impolitely, which SM does not allow).

        You also reveal stats illiteracy, by dredging it up 5 years later. My golly, you must have quite a life inventorying every last unresponded to ignorant stupid support comment to your ‘religion’. Five years later, another ‘killer’ expert blog. Wow!
        Learning at least basic statistics and applied probability theory would have been a life much better spent. In my opinion. Have a nice day.

      • > Nobody replied to “Pete” because ‘Pete’s’ comment was (a) stupid, and (b) already answered in the main part of the McIntyre guest post.

        Show Denizens how a guy who went to Harvard would prove your (a) and (b), Sir Rud. So far, all your comment conveys is that (a) you’re a blowhard and that (b) you haven’t “read the blog” (as bender would say) enough. Those who read the blog learned to respect pete’s comments.

        The fact that noone responded to pete’s comments on this thread at the Auditor’s provides quite a big tell.

    • the pause happened after 2005

      It is all but ended at 2015.58, so call it 10 years and dead.

      • Nope. McKitrick did the ‘red noise’ autocorrelation version of no statistical significance (very fancy stuff), and came up with 16, 19, and 25 years depending on the data set he analyzed. OJS 4: 527-535 (2014). Read it.
        Learn basic statistics first. Then do them. Then opine.

      • Not that OJS thing again, Sir Rud:

        I’ve simulated data that has the same trend, autocorrelation (an AR(2) model) and residual variance as the HadCRUT4 data since 1970 and applied McKitrick’s method to them. I did this 100 times. Ninety-five percent of these trials show an apparent hiatus lasting at least five years even though the trend is constant. In over 70% of trials, the hiatus lasts over 10 years. In 10% of trials the apparent hiatus started in or before 1995 – the year McKitrick reports. With this method, a hiatus lasting since 1995 is not exceptional even if the true trend in the data is constant. McKitrick’s method is not a tool for measuring the length of a hiatus, it is a recipe for making one.

        https://quantpalaeo.wordpress.com/2014/09/03/recipe-for-a-hiatus/

        Econometrics’ the new sophistry and you, Sir Rud, are quite welcome to replicate Richard Telford’s results. And then opine.
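
Telford’s experiment is easy to sketch. The script below simulates a constant warming trend plus AR(2) noise and measures the longest trailing window whose OLS trend is not significantly positive, as a stand-in for an “apparent hiatus.” The trend, AR(2) coefficients, and innovation variance below are placeholders of mine, not Telford’s values fitted to HadCRUT4, and plain OLS standard errors substitute for McKitrick’s HAC-robust ones, so only the qualitative point carries over: constant-trend data routinely produce multi-year “pauses.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (not Telford's fitted HadCRUT4 values).
n_years = 45                 # roughly 1970 onwards
trend = 0.017                # K/yr, held constant throughout
phi1, phi2 = 0.45, 0.15      # assumed AR(2) coefficients
sigma = 0.09                 # innovation standard deviation, K

def simulate():
    """Constant trend plus AR(2) noise, with a burn-in discarded."""
    eps = rng.normal(0.0, sigma, n_years + 50)
    noise = np.zeros_like(eps)
    for t in range(2, len(eps)):
        noise[t] = phi1 * noise[t - 1] + phi2 * noise[t - 2] + eps[t]
    return trend * np.arange(n_years) + noise[50:]

def apparent_hiatus(y, min_len=5):
    """Longest trailing window whose OLS trend is not significantly
    positive (plain OLS errors; a simplification of the HAC-robust
    method in McKitrick 2014)."""
    longest = 0
    for length in range(min_len, len(y) + 1):
        x = np.arange(length, dtype=float)
        seg = y[-length:]
        slope, intercept = np.polyfit(x, seg, 1)
        resid = seg - (intercept + slope * x)
        se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
        if slope - 2.0 * se < 0.0:   # ~95% CI for the trend reaches zero
            longest = length
    return longest

runs = [apparent_hiatus(simulate()) for _ in range(100)]
share = np.mean([r >= 10 for r in runs])
print(f"runs with an apparent hiatus of 10+ years: {share:.0%}")
```

With parameters anywhere near these, a large fraction of the constant-trend runs show a decade-plus “hiatus,” which is Telford’s point: the measuring instrument manufactures the thing it claims to measure.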

      • Willard(@nevaudit): McKitrick’s method is not a tool for measuring the length of a hiatus, it is a recipe for making one.

        That quote misses the point. McKitrick’s method generates pauses randomly in red noise in order to estimate the frequencies of pauses of various lengths. That’s in order to assess the “statistical significance” of an observed pause if the null hypothesis includes (a) a true positive trend and (b) red noise.

      • matthewrmarler:

        That quote misses the point. McKitrick’s method generates pauses randomly in red noise in order to estimate the frequencies of pauses of various lengths. That’s in order to assess the “statistical significance” of an observed pause if the null hypothesis includes (a) a true positive trend and (b) red noise.

        Largely independent of that quote, Ross McKitrick’s definition of a “pause” is a terrible one. Using it, you can easily find a “pause” in periods when temperatures are visibly rising, and you can find that a pause which has been going on for the last 15 years can be completely erased, as though it never existed, next year.

        I wrote about it here, but a key problem is McKitrick’s definition assumes a pause exists, and that it is currently going on. That means he requires you always use the most recent date as one of your endpoints, a form of cherry-picking, as it means it is impossible to account for the possibility a pause has ended.

        (Except in that he arbitrarily requires any “pause” last at least five years, so if that requirement isn’t met, then there is no pause. Even if there was a pause just one month before. Because his methodology can actually say, today, there was a pause from ~1998 to today. But in two years, his methodology could turn around and say there was no pause during the 21st century. Which seems silly to me.)

      • Here’s Telford putting an end to all this silliness:

        [T]he definition in McKitrick (2014) was not generated prior to seeing the data. I have little doubt that if there had been an early warm period followed by cooling and then significant warming, that the definition would have been changed. See for example the discussion of Arctic temperatures in

        http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/letter.to.policymaker.pdf.

        I want to reiterate some of the points made by dikranmarsupial above. It is doubtful that null hypothesis significance tests are a useful tool here (indeed anywhere). The null hypothesis in McKitrick (2014) is that the trend is exactly zero. However we know a priori that it is impossible for the trend to be exactly zero and therefore we know that the null hypothesis must be false and failure to reject it is a Type II error. If we cannot reject the null hypothesis, it is because we don’t have enough power. That short noisy temperature records don’t have much power does not seem surprising to me.

        Failure to reject the null hypothesis is exactly that. It is not an indication that the null hypothesis should be accepted.

        Trying to explain short term variability in global temperature trends is a useful task, but it requires physics. Reliance on statistics alone will generate futile arguments.

        https://quantpalaeo.wordpress.com/2014/09/03/recipe-for-a-hiatus/comment-page-1/#comment-1538

        RichardT’s concept of futility might not be optimal, since it discounts what can be useful to the contrarian network and other reactionary outlets.

        Go team!

      • Brandon S? , and you can find that a pause which has been going on for the last 15 years can be completely erased, as though it never existed, next year.

        I don’t agree that those are sillier definitions than anyone else’s definitions. One of the points of simulations (as with probstats in general) is to show that random variation produces outputs that are different from what you would naively expect. If there is a true rising trend, you can get what appears to be a pause with a non-negligible probability — I interpret Jim D as saying that is exactly what we have in fact observed. It is a complement to the showing by M&M that you can get an apparently “statistically significant” hockey stick with red noise at a higher frequency than the nominal significance level.

        As to that short bit that I quoted, it is essentially what Jim D predicted, and what others have claimed about the current el Nino. If they are correct, the event will disconfirm some of the explanations for the pause, as well as some of the cyclic models fit to the long-term data. The 1998 peak temperature year was followed by a steep decline in the mean temperature. If that happens from 2015 through 2017 (peak, trough, recovery) the discussion will take yet another turn.

        Despite multiple authoritative warnings, everyone over-reacts to short-term fluctuations. A while ago the start of the pause was highlighted by a few, and dismissed as short-term fluctuations by others; now the pause is mostly acknowledged, but two warm years are taken as showing its end by some, and dismissed as short-term fluctuations by others. Meanwhile, all of the mathematical models are post-hoc; those that fit the 2000-2015 data best forecast continuation of the pause. I now expect that 3 decades more of high quality data will be necessary for us to “adopt” a model as a candidate for accurate prediction from 2050 – 2100. I used to think 2 decades would suffice, but now I think more will be needed.

      • matthewrmarler:

        I don’t agree that those are sillier definitions than anyone else’s definitions.

        I didn’t say Ross McKitrick’s definition is sillier, just that it is silly. I don’t really care to rank definitions as the silliest. I just find McKitrick’s definition useless for any discussion I’m going to have. Whether or not global warming “paused” somewhere around 2000 should not depend on whether this year is 2015 or 2016. Yet with McKitrick’s methodology, it could.

        I’m sure there’s a joke about skeptics not liking how people rewrite history in there somewhere. I mean, McKitrick’s methodology can rewrite the beginning point of “pause” with every passing month. I feel like there has to be a joke to be made.

      • The 1998 peak temperature year was followed by a steep decline in the mean temperature. If that happens from 2015 through 2017 (peak, trough, recovery) the discussion will take yet another turn.

        It’s also possible that the 2015 El Nino will be followed with yet another El Nino in 2016, and even another one after that. There were three El Nino episodes in a row in the first half of the first decade of the 21st century: 2002-2003, 2005-2006, 2006-2007.

      • So, what causes the post-El Niño dip, and how does that jibe with the hypothesis that anthro aerosols have been offsetting nearly a majority of anthro greenhouse gas forcing?

      • JCH: It’s also possible that the 2015 El Nino will be followed with yet another El Nino in 2016, and even another one after that. There were three El Nino episode in a row in the first half of the first decade of the 21st century: 2002-2003, 2005-2006, 2006-2007.

        Yes it is possible.

      • Brandon S?: Whether or not global warming “paused” somewhere around 2000 should not depend on whether this year is 2015 or 2016. Yet with McKitrick’s methodology, it could.

        Every specific definition will face some problems like that. It accords with the general observation that every statistical method will sometimes produce a wrong answer. The best you can hope for is that improved methods, used with discipline, produce reduced error rates.

        McKitrick’s definition is not silly. It merely produces, as you noted, counter-intuitive results.

      • > [McK’s method] merely produces […] counter-intuitive results.

        It’s possible.

        It’s also possible that this “very fancy stuff” (v. Sir Rud’s gasconnade) is only meant for contrarian consumption and ought not to be taken seriously.

      • matthewrmarler:

        McKitrick’s definition is not silly. It merely produces, as you noted, counter-intuitive results.

        Sure, if by that you mean Ross McKitrick’s methodology for determining the length of the “pause” produces results that seem to have no connection to how people actually use the word “pause.”

        It’s not “silly.” It just fails to do anything one would actually expect it to do, instead choosing to completely redefine the “pause” to mean something nobody had previously thought it meant.

      • Brandon S? It’s not “silly.” It just fails to do anything one would actually expect it to do, instead choosing to completely redefine the “pause” to mean something nobody had previously thought it meant.

        OK, first it was silly, now it isn’t silly.

        How had the majority or anyone else defined “pause” before it was underway? Part of the point of this discussion, my point anyway, is that all of the people who “would” have expected a pause neglected to tell us in advance what a pause was and that it might be expected.

      • Willard (@nevaudit): It’s also possible that this “very fancy stuff” (v. Sir Rud’s gasconnade) is only meant for contrarian consumption and ought not to be taken seriously.

        That’s possible.

        But the results of probability calculations, and especially conditional probability calculations, are always counterintuitive. Or perhaps only “almost always”, with a probability above 95%.

        Anything that is not “very fancy stuff” is most likely wrong.

    • “It was for how long such a ‘real’ pause would have to continue to say climate models were flawed”
      About 50 years

  24. richardcfromnz

    JCH

    >”the pause happened after 2005. It is all but ended at 2015.58, so call it 10 years and dead.”

    You’ve just thrown Foster and Rahmstorf (2011) and Rahmstorf, Foster and Cazenave (2012) under a bus JCH. And the IPCC cite F&R11 in Chapter 10 too.

    The F&R approach is to remove short-term fluctuations leaving a residual that passes through 2010 from below (it should pass through 2015 but minor quibble). They removed the 1998 El Nino so given all the hype re the 2015 El Nino, 2015.58+ El Nino will have to be removed too.

    Problem is: that leaves Foster & Rahmstorf with a hiatus in their residual after 2010 – embarrassing for them and the IPCC.

    MAJOR quibble: the IPCC CMIP5 CO2-forced model mean does NOT pass through either 2010 or 2015 from below. It is diverging above GMST from 2000 and is now WELL ABOVE 2015.

  25. This was interesting. Rasmus Benestad, of Realclimate, et al. devoted a paper to rebutting 38 high-profile contrarian papers and found it hard to publish, despite posting all their numbers and methods used for the rebuttals. Rebuttals should be part of the scientific process, so this should not be so difficult.
    http://www.realclimate.org/index.php/archives/2015/08/lets-learn-from-mistakes/

    • It is rare to publish papers directly attacking other people’s work.

      • If not being able to replicate is considered an attack, then it should not be. This is how science progresses. You don’t just leave wrong stuff unanswered. See cold fusion and certain cloning and vaccine studies. Attempted replication proved their undoing.

      • David Wojick

        Failure to replicate is often due to the brevity of the method description in the original work, and this is well known. In fact it is a research topic in itself. So few journals will publish mere failures to replicate. More broadly, journals seldom publish negative results of the form “I tried this and it did not work.”

        Also, failure to replicate is not a rebuttal. Which did Benestad do?

    • I agree Jim D that rebuttals are a good thing. But this paper dresses it up in some pseudo meme about agnotology. The paper was rejected many times for these reasons. If a paper is questionable, publish a rebuttal to that SINGLE paper in the same journal.
      One problem with Benestad arises if one of the 38 papers’ authors wants to respond: it’s much harder to do so given the amorphous and broad-ranging nature of Benestad et al. McKitrick did some responding in his reviewer report, but it’s unlikely he will devote a journal article to the issues raised by Benestad, which are not new in any case.
      Also, the paper’s author list includes a large number of non-climate scientists and some non-scientists. These authors have a long track record of partisanship in the climate wars. Clearly, this paper is not aimed at making a contribution that will convince anyone, and it’s hard to disagree with the editors who rejected it so many times.

  26. Since when is August “so soon in the year?” Especially with all the measuring equipment deployed now. From the article:

    Climate scientists are predicting that 2015 will be the hottest year on record “by a mile”, with the increase in worldwide average temperatures dramatically undermining the idea that global warming has stopped – as some climate-change sceptics claim.

    Even though there are still several months left in the year to gather temperature readings from around the world, climate researchers believe nothing short of a Krakatoa-sized volcanic eruption that cuts out sunlight for months on end can now stop last year’s record being beaten.

    It is rare for climate experts to make such a bold prediction so soon in the year, but they believe that a surge in ocean temperatures in particular now makes it almost inevitable that 2015 will turn out to be the hottest year globally since instruments were first used to gather readings more than 130 years ago.

    http://www.independent.co.uk/environment/climate-change/climate-change-2015-will-be-the-hottest-year-on-record-by-a-mile-experts-say-10477138.html

    • El Nino is not climate change. Also, I did not know that the thermometer was invented 130 years ago. Oh wait, it was 1724.
      http://inventors.about.com/od/tstartinventions/a/History-Of-The-Thermometer.htm
      It is hard to see the mile as a temperature measure. Perhaps the Karl-mile.
      What do the satellites say?

      • 5,280 C?

      • ENSO is natural. The satellite scientist said his satellites were not as accurate as the thermometer series. As El Nino ramps up this fall, the satellites will start throwing up wildly high data. That’s how they achieve a semblance of accuracy for the surface air temperature: they’re completely dependent on a balance of La Nina and El Nino episodes. They’re currently low, inaccurate, because they have not had a huge uptick from an El Nino since 2010. For measuring the surface air temperature, they’re a complete waste of time.

      • JCH says … The satellite scientist said his satellites were not as accurate as the thermometer series.
        *****
        Show me the link, JCH.

      • JCH, that’s very dishonest. Satellites simply measure different parts of the atmosphere. The comparison of the various types of data is what will eventually lead to better modeling of the system.

      • jim2,
        In the section labelled Measurement Errors in this blog post.

      • I’ve already showed you the link. It’s on the RSS website.

        Look at the satellite record during La Nina and El Nino. It’s a joke.

      • David Wojick

        ATTP: he is not a satellite scientist. He thinks the surface statistical models are accurate, which is just silly.

      • You’re claiming Carl Mears is not a satellite scientist.

        He didn’t say this:

        “He thinks the surface statistical models are accurate.”

        This is just silly!

      • aTTP, All Bore was also a vice-president… just sayin’!

      • For measuring the surface air temperature, they’re a complete waste of time.

        Right.

        But only because they don’t measure surface air temperature, or at least the level of peak emissions for each channel is significantly higher than the surface:

        However, the MSU measurements which are co-located with RAOB sounding data are supposedly well correlated and the MSU data has superior global coverage.

      • David Wojick

        RSS is a data adjustment shop, not a satellite shop. This is why he describes himself as a data scientist, not a satellite scientist. I consider RSS to be a green joke.

      • ATTP, from your link RE measurement errors …
        Measurement Errors:
        As a data scientist, I am among the first to acknowledge that all climate datasets likely contain some errors. However, I have a hard time believing that both the satellite and the surface temperature datasets have errors large enough to account for the model/observation differences. For example, the global trend uncertainty (2-sigma) for the global TLT trend is around 0.03 K/decade (Mears et al. 2011). Even if 0.03 K/decade were added to the best-estimate trend value of 0.123 K/decade, it would still be at the extreme low end of the model trends. A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!). So I don’t think the problem can be explained fully by measurement errors.
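
Mears’s arithmetic in the quoted passage is simple to reproduce; the figures below are the ones he cites (a 0.123 K/decade best-estimate TLT trend and a 0.03 K/decade 2-sigma trend uncertainty).

```python
# Reproducing the arithmetic in Mears's quoted passage: shift the
# best-estimate satellite trend up by its full 2-sigma uncertainty.
best_trend = 0.123   # K/decade, global TLT trend (Mears et al. 2011)
two_sigma = 0.03     # K/decade, 2-sigma trend uncertainty

upper = best_trend + two_sigma
print(f"upper bound on the observed trend: {upper:.3f} K/decade")  # 0.153
```

His point is that even this generous 0.153 K/decade still sits at the extreme low end of the modeled trends, so measurement error alone is unlikely to close the model/observation gap.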

      • David Wojick

        Perhaps a data scientist is one who adjusts the data to fit the (preconceived) science, reversing the scientific method, as it were. My understanding is that RSS uses a climate model to adjust the satellite data. Sounds like reverse science to me.

      • RSS was used here repeatedly because it showed the least warming. It’s still being used that way on a regular basis.

        Spencer recently went to a new version of UAH to bring it more closely in line with RSS.

        This place just cannot get worse, or funnier.

      • David,

        I consider RSS to be a green joke.

        Okay, this really is a joke. Bizarre!

    • Really strange media narrative and conceptualization of skeptics. Personally, I’m as excited to see how the next few years play out as I would be if massive heat transport to the arctic and sea ice formation were happening. A giant El Niño is exciting, what happens at we is what is really interesting, what kind of step change does or doesn’t seem to have happened in 3/4 years is what is really interesting.

      • what happens at we –> what happens after

        Darn word predict/autocorrect/iPhone. At least my bad grammar and misspellings sometimes make some sense.

    • So here’s a comparison of GISS and HC4 global …

      http://www.woodfortrees.org/graph/gistemp/plot/hadcrut4gl

      Here’s a comparison of the 2 main sat series.

      http://www.woodfortrees.org/plot/rss/plot/uah

      I see differences all around.

      • Well, OF COURSE, I know sats differ a good bit from surface measurements. After all, higher altitude and all that.

        Back when the Discovery app was working, the one that plotted UAH sat data, one could examine visually the effect of altitude on the measurement.

        Basically, the higher the altitude, the less the slope of any given change in an annual plot. At a certain altitude, the annual plot is flat. Above that, it is inverted relative to the lower altitudes.

        I can’t see how a simple linear lapse rate, pick one, can account for this, because the slope of a change in temperature would be the same, just offset due to the lapse rate.

        It’s complicated, it appears. But, if one could understand the details of the mechanism outlined above, one should be able to use the lower trop sat temperature to predict what the temperature should be on the surface.

      • David Wojick

        The surface global estimates are not measurements. They are the output of complex (and questionable) statistical models. Nothing measures global average surface temperature.

      • > Nothing measures global average surface temperature.

        Nothing measures averages.

        DavidW’s disputation has nothing to do with being the output of complex (and questionable) statistical models, and everything to do with C-l-i-m-a-t-e-B-a-l-l.

    • Spencer and company were improving the way the calcs were done and cleaning up the code base.

      The fact that it is closer to RSS is just the way it went, he wasn’t TRYING, as you imply, to get closer to RSS. That’s a numbskull remark.

  27. I guess most of you have heard Obumbles is going to Alaska to save them from the feared “climate change.”

    https://www.whitehouse.gov/webform/president-obama-going-alaska-heres-why

    From the article:

    Summer Snow to Greet Obama on Alaska Climate Trip

    Extreme Weather
    Alaska weather matches its geography – extreme.

    Glaciers were Already Retreating Before 1900

    Alaska’s Recent Warmth is Mostly Due to the Pacific Decadal Oscillation

    http://www.drroyspencer.com/2015/08/summer-snow-to-greet-obama-on-alaska-climate-trip/

  28. The piece on the corals of the Great Barrier Reef (GBR):

    “Great Barrier Reef species more likely to face extinction from climate change than other marine life ( http://www.theguardian.com/environment/2015/aug/25/great-barrier-reef-climate-change-is-biggest-threat-to-native-species-study ) …”

    is just another shot in the GBR Wars being fought, mostly in ANZO.

    The GBR War is another of the many Science Wars being fought in journals and universities around the world, like the Climate War, the Salt War, the Obesity War, the Sugary-Drinks War…the list is quite long and amusing.

  29. From the article:

    An energy storage study claims that prices for certain battery technologies will plunge by as much as 60% over the next five years. The report was prepared by Australian consultancy AECOM and published by the Australian Renewable Energy Agency (ARENA).

    The 130-page study, originally published last month, expects all battery technologies to drop in price. However, the largest reductions are forecast for Li-ion and flow-battery technologies, which are expected to plummet by 60% and 40%, respectively by 2020.

    Lithium-ion (Li-ion) batteries will drop from $550 per kilowatt hour (kWh) in 2014 to $200 per kWh by 2020; and flow battery prices will drop from $680 per kWh to $350 per kWh during the same time.

    http://www.computerworld.com/article/2977655/sustainable-it/as-energy-push-accelerates-battery-costs-set-to-plunge-60.html
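
The quoted dollar figures can be checked against the headline percentages; by my arithmetic they imply slightly steeper drops (about 64% for Li-ion and 49% for flow batteries) than the 60% and 40% the report leads with, presumably because the headline numbers are rounded or use different baselines.

```python
# Percentage drops implied by the article's 2014 -> 2020 price figures.
prices = {
    "Li-ion": (550, 200),   # $/kWh in 2014, $/kWh forecast for 2020
    "flow": (680, 350),
}
for tech, (p2014, p2020) in prices.items():
    drop = 1.0 - p2020 / p2014
    print(f"{tech}: {drop:.0%} drop")   # Li-ion: 64%, flow: 49%
```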