How long is the pause?

by Judith Curry

UPDATE:  comments on McKitrick’s paper

With 39 explanations and counting, and some climate scientists now arguing that it might last yet another decade, the IPCC has sidelined itself in irrelevance until it has something serious to say about the pause and has reflected on whether its alarmism is justified, given its reliance on computer models that predicted temperature rises that have not occurred. – Rupert Darwall

The statement by Rupert Darwall concisely captures what is at stake with regard to the ‘pause.’ This seriously needs to be sorted out. Here are two recent papers that help set us on a path to understanding the pause.

HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series

Ross McKitrick

Abstract. The IPCC has drawn attention to an apparent leveling-off of globally-averaged temperatures over the past 15 years or so. Measuring the duration of the hiatus has implications for determining if the underlying trend has changed, and for evaluating climate models. Here, I propose a method for estimating the duration of the hiatus that is robust to unknown forms of heteroskedasticity and autocorrelation (HAC) in the temperature series and to cherry-picking of endpoints. For the specific case of global average temperatures I also add the requirement of spatial consistency between hemispheres. The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator which is valid as long as the underlying series is trend stationary, which is the case for the data used herein. Application of the method shows that there is now a trendless interval of 19 years duration at the end of the HadCRUT4 surface temperature series, and of 16 – 26 years in the lower troposphere. Use of a simple AR1 trend model suggests a shorter hiatus of 14 – 20 years but is likely unreliable.

McKitrick, R. (2014) HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series. Open Journal of Statistics, 4, 527-535. doi: 10.4236/ojs.2014.47050. [link] to full manuscript.
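The core of the procedure can be sketched in a few lines of Python. This is an illustrative toy, not McKitrick's actual code: it substitutes a simple Newey-West (Bartlett kernel) standard error for the Vogelsang-Franses estimator the paper uses, and it omits the hemispheric-consistency requirement, so the numbers it produces should not be read as reproducing the paper's results. The idea is the same: fit an OLS trend to each terminal subsample, compute a HAC-robust confidence interval for the slope, and report the longest subsample whose interval covers zero.

```python
import math

def ols_trend(y):
    """OLS slope of y against t = 0..n-1; returns slope, residuals, sxx."""
    n = len(y)
    tbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((t - tbar) ** 2 for t in range(n))
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(range(n), y)) / sxx
    resid = [yi - ybar - slope * (t - tbar) for t, yi in zip(range(n), y)]
    return slope, resid, sxx

def hac_slope_se(y):
    """Newey-West (Bartlett kernel) standard error of the OLS slope."""
    n = len(y)
    slope, resid, sxx = ols_trend(y)
    tbar = (n - 1) / 2
    u = [(t - tbar) * r for t, r in zip(range(n), resid)]  # slope "scores"
    lags = int(4 * (n / 100) ** (2 / 9))                   # common NW lag rule
    s = sum(ui * ui for ui in u)
    for k in range(1, lags + 1):
        w = 1 - k / (lags + 1)
        s += 2 * w * sum(u[i] * u[i - k] for i in range(k, n))
    return slope, math.sqrt(s) / sxx

def longest_trendless_span(y, z=1.96, min_len=11):
    """Length of the longest terminal subsample whose HAC-robust 95%
    trend confidence interval includes zero."""
    best = 0
    for start in range(len(y) - min_len + 1):
        slope, se = hac_slope_se(y[start:])
        if abs(slope) <= z * se:
            best = max(best, len(y) - start)
    return best

# a flat series registers as one long trendless span
print(longest_trendless_span([0.0] * 30))  # -> 30
```

Fed an annual temperature series, the function returns the "duration of the hiatus" under this simplified criterion; a steadily trending series returns 0.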

JC comment: I find this paper to be very interesting. I can’t personally evaluate the methods, although I understand the importance of the heteroskedasticity and autocorrelation issues. The big issue with the length of the pause is comparison with climate model predictions; I would like to see the climate model simulations analyzed in the same way. I would also like to see the HadCRUT4 results compared with Cowtan and Way and Berkeley Earth. I also seem to recall reading something about UAH and RSS coming closer together; from the perspective of the pause, it seems important to sort this out.

 UPDATE:  The blog Musings on Paleoecology has a post on McKitrick’s paper Recipe for a hiatus, that critiques McKitrick’s method.  McKitrick posted a comment:

Hello Richard
Thank you for your interest in my paper. Let me make a couple of observations.

McKitrick uses a regression technique that is supposed to be robust to heteroscedasticity (unequal variance) and autocorrelation to find the trend in the temperature time series.

I use OLS to find the trend. The HAC method is used to compute the robust confidence intervals. I can’t tell if by your phrase “supposed to be” you are dubious about the robustness of the VF method but if you look at the article cited (V&F 2005), it contains all the power curves, null rejection rates and size estimates you are seeking.
What you are referring to in this post is a null distribution around Jmax. In 100 simulations assuming AR(2) around a positive trend you show that a 1995 or earlier start date occurs 10% of the time. It would be helpful if you also verified in each of those simulations that all the conditions of the definition were met (that the trend CI includes zero across the entire subsample, and that this holds in both the NH and SH). Assuming that those things are the case, and you were to get roughly the same answer in 1000 or 10,000 simulations, what you are saying is that under the assumptions of your null, a pause of 19 years is now in the lower 10% tail of the null distribution. And by the looks of it in your Figure, in another 3 years it will be in the lower 5% tail. That’s an interesting additional bit of information on the topic and I encourage you to publish it, especially if you also add in the UAH and RSS computations as well.
However the problem with this kind of estimation–and what I expect a stats journal would point out– is that if what we really want to know is whether Jmax is significantly different from zero, you need a null that assumes it is zero and works out the corresponding distribution. And the difficulty with that is the well-known ‘Davies problem’ in which the parameter to be estimated is not identified under the null. There are simulation methods for handling this problem, which Tim Vogelsang and I briefly review in our new paper comparing models and observations in the tropical troposphere, again using HAC-robust methods (http://onlinelibrary.wiley.com/doi/10.1002/env.2294/abstract). We also outline a simple bootstrap method that gets around the simulation problem, but you’d need to verify whether you need to use a block bootstrap since you have assumed an AR2 error structure. You might get a wider or narrower CI around Jmax than the one you drew above, it’s hard to tell, especially since it will likely be a non-standard distribution.
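The simulation exercise being debated can be sketched as follows. This is a toy version under assumptions that are mine, not the blog post's or McKitrick's: AR(1) rather than AR(2) noise, a plain OLS t-test rather than the HAC machinery, and invented trend and noise parameters. It only illustrates the shape of the argument: under a null of continued warming, some fraction of runs still produce a flat-looking final stretch.

```python
import math
import random

def simulate_ar1(n, trend, phi, sigma, rng):
    """AR(1) noise of persistence phi around a linear trend."""
    e, y = 0.0, []
    for t in range(n):
        e = phi * e + rng.gauss(0, sigma)
        y.append(trend * t + e)
    return y

def ols_t_stat(y):
    """t-statistic of the OLS trend slope (simple iid-error formula)."""
    n = len(y)
    tbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((t - tbar) ** 2 for t in range(n))
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(range(n), y)) / sxx
    rss = sum((yi - ybar - slope * (t - tbar)) ** 2 for t, yi in zip(range(n), y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return slope / se if se > 0 else float("inf")

# Null: a steady warming trend (~0.017 K/yr, an assumed figure) plus AR(1)
# noise.  Count how often the final 19 "years" of a 60-year record still
# look trendless at the 95% level.
rng = random.Random(42)
runs = 500
flat_final_19 = sum(
    1 for _ in range(runs)
    if abs(ols_t_stat(simulate_ar1(60, 0.017, 0.5, 0.1, rng)[-19:])) < 1.96
)
print(flat_final_19 / runs)  # fraction of null runs with a flat-looking tail
```

McKitrick's point about the 'Davies problem' is that this kind of null (trend assumed nonzero) answers a different question than testing whether Jmax itself differs from zero, which is why he suggests bootstrap methods instead.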

 

Return periods of global climate fluctuations and the pause

Sean Lovejoy

Abstract.  An approach complementary to General Circulation Models (GCMs), using the anthropogenic CO2 radiative forcing as a linear surrogate for all anthropogenic forcings [Lovejoy, 2014], was recently developed for quantifying human impacts. Using preindustrial multiproxy series and scaling arguments, the probabilities of natural fluctuations at time lags up to 125 years were determined. The hypothesis that the industrial epoch warming was a giant natural fluctuation was rejected with 99.9% confidence. In this paper, this method is extended to the determination of event return times. Over the period 1880–2013, the largest 32 year event is expected to be 0.47 K, effectively explaining the postwar cooling (amplitude 0.42–0.47 K). Similarly, the “pause” since 1998 (0.28–0.37 K) has a return period of 20–50 years (not so unusual). It is nearly cancelled by the pre-pause warming event (1992–1998, return period 30–40 years); the pause is no more than natural variability.

Published in Geophysical Research Letters [link] to full manuscript.

The conclusion states:

“Unless other approaches are explored, the AR6 may simply reiterate the AR5’s “extremely likely” assessment (and possibly even the range 1.5–4.5 K). We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin. To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation.”

JC comment: I like Lovejoy’s general approach, but convincingly rejecting a giant centennial scale fluctuation requires more robust paleo proxy reconstructions. Lovejoy identifies a magnitude of the natural fluctuations of ~0.4 C, which is the largest such estimate I’ve seen.

 

JC reflections

The climate community is in a big rut when it comes to climate change attribution – as I’ve argued in previous threads, climate models are not fit for the purpose of climate change attribution on decadal to century timescales.  Alternative methods are needed, and the two papers discussed here are steps in the right direction.

We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data. The paleo proxy community also seems to be in a rut, with continued reliance on tree rings and other proxies that have serious calibration issues.

The key challenge is this:  convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting natural variability as a predominant explanation for the overall century scale warming and also the warming in the latter half of the 20th century.  Global climate models and tree ring based proxy reconstructions are not fit for this purpose.

 


 

696 responses to “How long is the pause?”

  1. It is not the arguments of the skeptics they are fighting; rather, mother nature is rejecting their pathetic dogmatism. Thus far skeptics have been proved right just by being properly skeptical of the massive, entrenched yet unjustified hubris of the climate cabal.

    • And let’s not forget the public trough at which the alarmists feed.

    • Waiting for Webby to proclaim Mother Nature performed an own goal.

    • Actually things are really pretty simple. The return periods can be estimated from the multiproxies, and the multiproxies are quite reliable up to 125 years or so (the time scale of the corresponding warming period since about 1880). Indeed, using the same methodology as described in my comment below (about the accuracy of the instrumental series), the three I used agreed to about ±0.07 °C at 125 year time scales. (The issues about medieval warming are indeed where there is disagreement, but below about 125 year scales all the multiproxies agree about the amplitudes of change to about the level indicated. Indeed, one of the three multiproxies that I used was based on boreholes, so it has no paleo calibration issues, yet for 125 year statistics it is very close to the others.)

      You can then estimate the return periods of natural fluctuations from the graph [figure not shown].

      (Other material can be seen at: http://www.physics.mcgill.ca/~gang/Lovejoy.htm or:
      http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthropause.GRL.final.13.6.14bbis.pdf)

      A cursory glance at the figure shows that it was only in 2012 that the globe recovered from the enormous pre-pause warming (1992–1998, return period 20–30 years) and the temperature finally went below the long term trend line at the far right of the figure.

      Indeed, if there hadn’t been a pause, the temperature increase would have been so large as to invalidate the anthropogenic warming hypothesis!
      According to a new stochastic analysis (under review), assuming that emissions and other anthropogenic forcings continue to grow at their current rates, the pause will have to continue until 2019-2020 before it can be used to reject the anthropogenic warming hypothesis with 95% certainty. Until now, the pause is exactly as expected and will not be very unusual for another couple of years- assuming that it hasn’t already ended….

      • Shaun

        Is 125 years really long enough?

        The globe has been warming for over 300 years. In 2006 Phil Jones examined the extraordinary warming from 1700 to 1740, which came to an abrupt end in 1740 with one of the most savage winters in the entire record. The 1730s were the warmest decade until the 1990s fractionally eclipsed them.

        He came to the conclusion that natural variability was much greater than he had previously realised. That natural variability is not reflected in the period since 1880, so I am not sure it is an adequate period to ascertain the potential levels of natural variability.

        Tonyb

      • Matthew R Marler

        Shaun Lovejoy: Until now, the pause is exactly as expected and will not be very unusual for another couple of years- assuming that it hasn’t already ended….

        It is a shame that it had not been “expected” until after it had been well-underway for a while, and had been disputed by famous scientists. Of course it is merely the “expected value” of a model developed post hoc and not yet tested against out-of-sample data. Had the IPCC expected it 15 years ago, or Hansen in 1988, their prospective global mean temperature graphs for the early 21st century would have been much different.

        How long is the pause now “expected” to last? That was Prof Curry’s question, rephrased in terms of expectancy.

      • As I indicated in a comment below, the pause will have to last until 2019 or 2020 before its probability gets down to the 5% level, which would be needed to reject the anthropogenic hypothesis at the 95% level.

      • Matthew R Marler

        Tonyb: Is 125 years really long enough?

        While we await Prof Lovejoy’s response to this and my comment below, my answer is “No”. It is too short to estimate the recurrence times between large excursions such as the Roman Warm Period and Medieval Warm Period.

      • To Tonyb:
        There are two issues for the time scales and one issue for the spatial scales. For the spatial scale, it is important to consider global scale (at least hemispheric) temperature changes – the changes over small regions (such as central England) are always much larger (not much averaging). For the time scales: first, if you want to know the statistics of 125 year changes, then you only need probabilities of 125 year changes. But, second, to get this information, you’ll need long pre-industrial records; that’s where the multiproxies come in. However – a commonly misunderstood point – they do not need to accurately measure changes at periods longer than 125 years. They can totally disagree with each other about the medieval warming, for example (800 year changes): taking differences is a high pass filter.

        That being said, direct use of the existing multiproxy statistics is insufficient to measure changes of the order of 0.8–1 °C; these are much, much larger than any of those observed over 125 years (again, for global spatial scales). Assumptions have to be made about the nature of the extremes. This is done all the time in statistics – usually the bell curve is chosen. With this choice, the change since 1880 is between 4 and 5 standard deviations – probabilities of about one in a million. That’s why it needs a more refined analysis. In the end, one finds that extremes occur about 10–100 times more often than the bell curve would allow; that’s why the probability of the change since 1880 being natural comes out so much higher – about one in a thousand!
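The bell-curve baseline in this argument is easy to check numerically. A quick sketch of the one-sided Gaussian tail probabilities at 4 to 5 standard deviations (this only verifies the Gaussian side of the comparison; the fat-tailed correction is Lovejoy's result, not reproduced here):

```python
import math

def gaussian_tail(z):
    """One-sided Gaussian exceedance probability P(Z > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

for z in (4.0, 4.5, 5.0):
    # roughly 3e-5, 3e-6 and 3e-7 respectively
    print(f"P(Z > {z}) = {gaussian_tail(z):.1e}")
```

So a 4.5–5 sigma event is indeed in the "about one in a million" range under a Gaussian, which is why allowing extremes 10–100 times more frequent changes the conclusion so much.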

      • Dr. Shaun Lovejoy: Dr. Tony Brown made a rather convincing case on this blog a few years ago- https://judithcurry.com/2011/12/01/the-long-slow-thaw/ -that the multiproxies are probably not as reliable as the CET data. It seems that the only argument (from Mann) against using the CET data is that it’s not global enough, but the counter is that it matches the NH instrument data and is close to global.

        H/T Tony

      • All of England is 0.04% of the globe, and the Central England Temperatures (CET) represent an even smaller part of the earth. It may be more reliable than multiproxies, but it isn’t very useful for our purpose, which requires global scale values. The CET variability is very much larger than the global scale variability. For example, the “Friends of the Earth” tried to debunk my paper by referring to a 0.9 °C change in CET from 1663–1763 (this is roughly the global change from 1880–2004). However, over the same period, the global scale temperature change was only 0.21 °C (a typical century scale natural variation), which proved my point rather well. This is a much smaller variation because while the CET might have increased by 0.9 °C, other regions decreased their temperature, partially offsetting the increase. The globe contains more than 2500 regions the size of the CET, and many did not change temperature by the same amount – or even in the same direction – during this period!

      • So your second plot allows one to estimate the ‘equilibrium climate sensitivity’ for 2×CO2, as we know 280 to 390 ppm gives a change in the ‘equilibrium’ global temperature of 1 degree.
        Thus, your analysis generates an ECS of 2.1.
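The arithmetic behind the commenter's 2.1 figure, assuming (as the comment does) a logarithmic temperature response to CO2 concentration:

```python
import math

# Effective sensitivity implied by the comment's numbers: a 1 degC
# 'equilibrium' response to the 280 -> 390 ppm CO2 rise, assuming the
# response is logarithmic in concentration.
delta_t = 1.0                        # degC attributed to the CO2 change
doublings = math.log2(390 / 280)     # fraction of a doubling realised (~0.48)
ecs = delta_t / doublings
print(round(ecs, 1))  # -> 2.1
```

That is: 280 to 390 ppm is about 0.48 of a doubling, so 1 °C over that interval scales to about 2.1 °C per full doubling.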

      • The equilibrium climate sensitivity is a theoretical/model concept; I do not pretend to estimate it. The fact that the effective climate sensitivity is not so different from estimates of the climate sensitivity to CO2 is presumably because contributions from methane and other GHGs roughly cancel out the cooling due to aerosols.
        But the nice thing is that all this is irrelevant to the conclusions (it just makes them more plausible, since they are compatible with a totally different approach).

      • DocM, he is careful to call it ‘effective climate sensitivity’ because it uses CO2 as a proxy and because it takes no account of ocean delay.

      • The ocean delay was taken into account – the delay was bounded between zero and twenty years using cross-correlation analysis and then the zero and twenty year delayed sensitivities were used. The uncertainty in the delay is the biggest cause for uncertainty with this method.

      • Furthermore, in the paper he shows that the fit works just as well for up to a 20-year lag where the ECS becomes over 3 C per doubling because adding the lag means more warming for a given earlier CO2 increase.

      • “Is 125 years really long enough? . . . ”

        Not really, though it is more rational than many of the shorter proposed periods. However, it sturdily ignores much longer period natural variation like Dansgaard-Oeschger events, which can span a millennium or more. And it ignores the overall cooling trend of the Holocene, spanning the last 8,000 years or so. Not to mention ice ages and even longer patterns that are all known contributors to what we laughingly call global climate over geological spans. Still, it does head in the right direction, and it does pose the very appropriate point that until natural variation of climate is understood, anthropic effects are going to be lost in the noise.

        Dr. Lovejoy’s 95% confidence is what bothers me. One chance in 20 of being wrong is gambling odds and I have even thrown dice series that were far, far less likely. So, ideally, rather than two-sigma significance, which is the QC level that the less effective industrial quality control systems shoot for, how about a four or five sigma certainty? We might run our lives on the precautionary principle, but we really can’t do science that way.

      • It is 4–5 standard deviations, but a Gaussian is not appropriate here: rather than one chance in about a million, the frequent extremes translate this into a one in a thousand chance.

      • I get different results using a longer air temperature series than giss.
        Kinda concerned about borehole data.
        And its similarity to other paleo data indicts the latter.

      • Steven Mosher, “I get different results using a longer air temperature series than giss. Kinda concerned about borehole data, and its similarity to other paleo data, indicts the latter.” ( lightly edited :)

        How about posting a comparison?

      • Matthew R Marler

        Shaun Lovejoy: As I indicated in a comment below, the pause will have to last until 2019 or 2020 before it’s probability gets down to the 5% level which would be needed to reject the anthropogenic hypothesis at the 95% level.

        Let me try again: How much longer do you expect the pause to last?

      • I did partially answer; I’ll give a more detailed one. If we assume that the anthropogenic effects continue as at present, there is a 67% chance that the pause will end before the end of 2016 and a 95% chance before 2019–2020. Is that good enough? This is the result of stochastic forecasting (work in progress).

      • Captain

        Lovejoy says he used the 2010 GISS global and NH. It’s unclear whether he used land–ocean or land only.

        If he used land–ocean we have a problem.

        The proxies he used, Moberg and Wahl, are also problematic.

        I will have to dig more, but I show amplitudes higher than his.

      • Since he used the NH, the Ljungqvist, F.C. 2009 northern extratropics reconstruction might be interesting.

        You know, with that land amplification and all.

      • If I’m extrapolating correctly from this chart:
        – Trend: +0.2 C per +0.1 log-CO2.
        -> So if 2000 is at 0.4 on the X-axis (the canonical 2050/2×CO2 is 1.0), we’d expect 0.6 × (0.2/0.1) = +1.2 C of further warming.

        This is closer to Lindzen’s estimates than the AR4/5 estimates; isn’t warming *very unlikely* to be less than +1.5 C? In other words, why so skeptical? Maybe I’m reading it wrong?

      • Your calculation is about right (you used an effective sensitivity of 2.0 °C/doubling, which is close to the slope of the graph, which is 2.33). However, the main uncertainty in this approach is the lag. The graph and slope indicated are for a relation between CO2 and temperature at an annual scale. If you lag the CO2 and response by up to 20 years then the relationship is just as linear, but the slope (sensitivity) increases to 3.73 °C/doubling, which is a fair bit more. That explains the uncertainty range in my papers.
        The argument for the lag is that most of the heating goes first into the ocean; however, the feedbacks work both ways so things may not be so simple.
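The reader's extrapolation, taking the chart values in the comment above as given (they are read off a plot, so treat them as assumptions), works out as:

```python
# Chart readings assumed by the commenter: a trend of +0.2 degC per +0.1
# on a log2-CO2 axis, the year 2000 sitting at 0.4 on that axis, and a
# doubling of CO2 at 1.0.
slope_per_doubling = 0.2 / 0.1       # 2.0 degC per doubling
remaining_doublings = 1.0 - 0.4      # distance left to the 2x CO2 mark
future_warming = remaining_doublings * slope_per_doubling
print(future_warming)  # -> 1.2
```

With Lovejoy's lagged slope of 3.73 °C/doubling instead, the same 0.6 remaining doublings would give roughly 2.2 C, which is where the uncertainty range in his reply comes from.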

      • David Springer

        Speaking of borehole temperature…

        “For central Greenland (Cuffey et al. 1995, Cuffey and Clow 1997, Dahl-Jensen et al. 1998), results show a warming over the last 150 years of approximately 1°C ± 0.2°C preceded by a few centuries of cool conditions. Preceding this was a warm period centered around A.D. 1000, which was warmer than the late 20th century by approximately 1°C. ”

        Surface Temperature Reconstructions for the Last 2,000 Years ( 2006 )
        http://www.nap.edu/openbook.php?record_id=11676&page=81

      • Shaun: Thank you for your reply. Tony has a paper that compares CET to BEST and I don’t see the 0.9 C between 1663 and 1763. Here is an excerpt from the paper:
        “According to studies made by a number of climate scientists, CET is a reasonable proxy for Northern Hemisphere -and to some extent global temperatures- as documented in ‘The Long Slow Thaw’. However, as Hubert Lamb observed, it can ‘show us the tendency but not the precision’.”
        http://wattsupwiththat.com/2012/08/14/little-ice-age-thermometers-historic-variations-in-temperatures-part-3-best-confirms-extended-period-of-warming/

      • For the values of temperature change from the CET, from 1663–1763, I’m just quoting from the Friends of Science site.

        It seems that you are arguing that the centennial scale trends of the CET may be indicative of global scale trends. While that may be the case, there is clearly a large reduction in the amplitude of global fluctuations with respect to the CET fluctuations, precisely because of the huge amount of spatial averaging that goes into the global average compared to the CET values. The numbers I quoted (0.9 °C for CET compared to 0.2 °C for the northern hemisphere) illustrate this well. Using classical statistics – which assume independence – if the global area is 2500 times larger than the CET area, we would expect 50 times smaller fluctuations (the square root of 2500), whereas in reality the factor is only 4–5. This difference is because of the spatial correlations that act over long distances (scaling, nonclassical statistics).
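The square-root-of-N argument in the reply above is easy to verify numerically, using the figures quoted in the comment:

```python
import math

# Under independence, averaging N equal-sized regions shrinks fluctuation
# amplitude by sqrt(N); compare with the CET-vs-global ratio quoted above
# (0.9 degC for CET vs 0.21 degC globally over the same century).
n_regions = 2500
independent_ratio = math.sqrt(n_regions)   # 50x reduction expected
observed_ratio = 0.9 / 0.21                # only ~4-5x in reality
print(independent_ratio, round(observed_ratio, 1))
```

The gap between the expected factor of 50 and the observed factor of about 4.3 is the quantitative content of the "long-range spatial correlations" claim.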

      • Rls

        Thanks for your references to two of my papers. I think Mosh also appears to be a bit startled by the low temperature variability built into Shaun’s work. Having looked at borehole and tree ring data in great detail, they are not data that give me confidence that they should be used as highly accurate proxies. They cannot show the annual and decadal variability of the real world.

        I wrote about the relative unrepresentative stability of such proxies compared to observational data in this article

        https://judithcurry.com/2013/06/26/noticeable-climate-change/

        In looking at a global average supplied by proxy data, or instrumental data covering only the last 125 years, we are not looking at the real world over the last many hundreds and thousands of years, where variability is at times much greater than the proxies or the modern record appear to illustrate.

        I admire Shaun’s work and his vigour in supporting his highly theoretical findings covering such a short time scale, but my question remains: how can you put forward a case for, in effect, only human-made warming, when in the past it has been both warmer than today and cooler than today, illustrating a range of possible natural variability that surprised even Phil Jones back in 2006?

        tonyb

      • Tony: Thank you. Judith says “We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data.”. And you know how to do this. Are any of the consensia listening to you?

      • We should work with raw data during periods when they are available, and not proxies. NASA GISS mean global temperature vs. Mauna Loa CO2 data, cross-plotted, have zero correlation during two thirds of the reporting period from 1959 to present, including 1959 to the mid/late 1970s and 1998 to date – during these two periods the X–Y data are a double-aught shotgun pattern, no correlation. The focus should be to understand and explain. Then, how does one conclude that extension of the no-increase/hiatus to 2019–2020 is required to refute the hypothesis? And what is required to prove the hypothesis is true?

      • Matthew R Marler

        Shaun Lovejoy: Is that good enough?

        Yes, that’s what I was after. Thank you.

      • The argument for the lag is that most of the heating goes first into the ocean, however, the feedbacks work both ways so things may not be so simple.

        The argument does not hold water in the SH, as the growth rate of CO2 has decreased in the SH; i.e., the lag in matching MLO has increased from 18 months to around 4 years at SH midlatitude stations, a difference of around 7 ppm in the 21st century.

      • Shaun, is there any difference in the rate at which the ‘average’ temperature recovers from a cooling event vs. a warming event?

      • There is a long term memory in the process so that it’s the entire past temperature history that determines the answer (work in progress). But there appears to be a positive/negative symmetry (at least as far as I can tell).

      • David Springer

        @Tony

        Paleo-temperatures from ice boreholes are supposedly very accurate and need no calibration, because it’s temperature readings all the way down.

        The only caveat is that they become increasingly smeared (averages over longer and longer time periods) as depth increases. So a temperature anomaly 1 C greater than the late 20th century occurring in the year 1000 means that high temperature must have persisted for many decades.

        Nothing unusual is happening today in the NH, if Greenland is as good a proxy for the entire hemisphere then as it is now.

      • Shaun: The graphs in the link in my last reply to you do indeed show greater fluctuations for CET; however, starting about 1780 the moving averages of both sets closely follow each other, with CET about 0.5 C higher. Can’t this correlation be used to give CET greater confidence and greater usefulness?

        Also, you stated “the multiproxies are quite reliable up to 125 years or so”. How is this known?

        Thank you for your time and patience.

      • From the original Lovejoy post and various replies, it is practically impossible that warming since 1880 could occur without human contributions but a pause of current duration from entirely natural causes is possible.

        Am I the only one to see the asymmetry to this argument?

        Additionally, if the pause is natural, that means that nature could be responsible for at least half of the 20th century warming (having countered recent warming from man). What does that imply for models and co2 sensitivity?

    • The problem is that for 50 years, atmospheric science has been based on incorrect physics originating from Carl Sagan. This is to confuse radiative emittance with a real energy flux, the vector sum of emittances. There is net zero surface IR in the self-absorbed atmospheric GHG bands.

      There are two other major errors. The first is to misinterpret the Tyndall Experiment. Even if there were surface IR to absorb**, there can be no thermalisation of GHG-absorbed IR in a gas because that would mean absorptivity would exceed emissivity, a breach of Kirchhoff’s law of Radiation.

      The second error, from 1981_Hansen_etal.pdf, is the claim of a single -18 deg C OLR emitter radiating up and down; there is no such zone, -18 deg C being a virtual temperature for emission from 0 to ~20 km. with zero downward IR flux except for the stratospheric CO2 and O3 emission, very minor.

      The IPCC climate modellers imagine that the mean heating rate of the atmosphere is 238.5 + 333 – 238.5 = 333 W/m^2. The 40% increase is imaginary. Processes in the atmosphere ensure there is mean zero warming by well-mixed GHGs apart from the stratospheric emission.

      So, the pause is the norm: the previous heating in the 1980s and 1990s was about half AGW but not from CO2.

      **The 23 W/m^2 absorbed in the atmosphere is over a few km, so local thermalisation (at aerosols) is minimal on a unit volume basis.

      • AlecM commented
        I’ve been measuring the zenith with an IR thermometer (8–14 µm), and on anything but hot and humid days it’s quite cold, 70–100 F colder than surface air temps. On hot humid days the difference is as low as 60–65 F.
        Cloud bottoms however are much closer to surface temps, only 10 to 40-50 degrees colder.
        During this winter I measured -80F zenith temps.
        Even if you then add the Co2 band DWIR, the sky is still very cold, even in the middle of the summer (-40F).

      • AlecM – when a CO2 molecule absorbs a photon, it can collide with other air molecules and transfer the energy to them as kinetic energy. So, the point is you can’t just look at radiated energy in and out. You have to consider all pathways the energy can come and go. Meaning, you are wrong about the Kirchhoff’s law of Radiation part.

      • Reply to Jim2: your physics is wrong. Statistical thermodynamics shows that although there is thermalisation of the GHG-absorbed IR by the mechanism you quote, the ‘excess energy’ in the IR absorbed density of states, a discrete function of temperature, must be ejected from the local gas volume.

        This is basic physics and Climate Alchemists, specifically Ramanathan, failed to do the science correctly.

      • AlecM – the heated air molecules, they aren’t ejected, they just bump into their neighbors more frequently, thereby raising the pressure. The additional pressure expands the heated parcel of air thus reducing its density, then it rises as a thermal. This isn’t the only scenario, but it is a common one.

      • jim2 commented

        the heated air molecules, they aren’t ejected, they just bump into their neighbors more frequently, thereby raising the pressure. The additional pressure expands the heated parcel of air thus reducing its density, then it rises as a thermal.

        This warming should be present in any DWIR that a “regular” IR Thermometer would detect.

      • So, if the temp of the air is that cold, then why don’t you freeze to death?

      • Not surface air temps, zenith ir temp, the radiative surface temp that would be what the surface radiates to in an S – B equation.

      • So, if the temp of the air is that cold, then why don’t you freeze to death?

        You can.
        http://www.kilty.com/freeze.htm
        But I did want to add that the very cold temps are straight up, and as you move toward the horizon the temps go up; I would presume that by the time the thermometer is pointed horizontally it will read ground temps. So this would go into the S–B equation as well.

  2. daveandrews723

    Dr. Curry
    Here is my question as a layman (with a brief set up).
    Back in the 70’s the prevailing scientific wisdom was that the earth was cooling.
    Then in the 80’s a couple of scientists came up with a hypothesis that CO2 is a major driver of global temperatures. The scientific community seemed to say…”yeah, that sounds right.”
    Then a few scientists came up with models to predict future global temperatures based on a direct relationship to CO2 levels.
    Then came all the dire predictions of “global warming.” The concept really caught on with the public. Yes, they said, man must be driving up the earth’s temperatures with the increasing release of CO2 into the atmosphere.
    Then, 17 years ago, the earth’s temperatures stopped rising even though CO2 levels continued to increase.
    My question is this…. How can the scientific community be so certain that the trace gas CO2 is so influential in affecting the temperatures of the earth??? It seems to me (and several scientists whose opinions I have seen) that the proof is just not there.
    I believe this is a very dark period for science, in which political and social agendas, and not science, are actually driving the debate.
    Thanks for your time.
    Dave Andrews

    • Dave,

      The notion that CO2 affects global temperature goes back a lot further than the 1980s. Tyndall discovered the radiative properties of CO2 in 1859 and Arrhenius predicted that burning fossil fuels would cause global warming and made the first calculations of the amount of warming back in 1896.

      There is a good summary of the history of climate science here.

      http://www.aip.org/history/climate/index.htm

      There is a good technical introduction to how CO2 and other GHGs affect the temperature of the earth here

      http://scienceofdoom.com/2009/11/28/co2-an-insignificant-trace-gas-part-one/

      I particularly like the figure from Goody & Yung which compares calculations of outgoing IR radiation with observations – that seems to me to be pretty good “proof” that our understanding of the way GHGs behave in the atmosphere is correct, along with the fact that the average surface temperature of the earth is about 15C and not -18C.
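The 15 °C versus −18 °C comparison in the comment above is the standard energy-balance calculation: with no greenhouse effect, absorbed sunlight must balance emitted IR. A minimal sketch, using conventional textbook values for the solar constant and Bond albedo:

```python
# Effective (no-greenhouse) temperature of a planet from energy balance:
# absorbed solar = emitted IR  =>  (1 - A) * S / 4 = sigma * T_eff^4.
# S0 and ALBEDO are conventional textbook values.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W/m^2
ALBEDO = 0.30            # Earth's Bond albedo

t_eff = (((1 - ALBEDO) * S0 / 4) / SIGMA) ** 0.25
print(round(t_eff - 273.15, 1))  # → -18.6 (observed mean surface is ~ +15 C)
```

The ~33 K gap between this effective temperature and the observed mean surface temperature is the quantity usually attributed to the greenhouse effect.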

      As for the “global cooling consensus” of the 1970’s, that is actually a myth. There were a small number of papers suggesting this and the idea got some traction in the media but the large majority of papers supported the notion of global warming caused by human GHG emissions. See

      http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1

      • Andrew adams

        Regarding your last link and comment on global cooling. We have had this discussion before with one of the authors, William Connolley. The global cooling scare began in the 1960s and was supported by many papers and by the leading climatologists of the era, including Hubert Lamb and Budyko, who wrote of their concerns in books as well as papers. By the time the 1970s came around the tide was already starting to turn against the notion of cooling, as temperatures started to deviate from the previous concerns.

        The CIA wrote a major paper on the subject in the 1970’s. There was great concern at the time and it is a myth to claim it was a small number of papers.
        Tonyb

      • If the seventies global cooling thing is a myth, then it is a myth that at the time was supported in writing by current White House science advisor Holdren. Global Ecology pp. 64-78 (1971). Some media attention includes three cover stories plus two feature articles (both titled ‘Another Ice Age?’ 11/13/72 and 6/24/74) in Time. And a myth with sufficient traction that on 8/1/1974 Nixon ordered Commerce to set up a subcommittee on climate change (cooling, not warming) chaired by NOAA with an initial annual budget of $39.8 million.
        I think your myth assertion is itself mythical. Merited an essay of its own in the next book.

      • Even if you think it isn’t a myth, it’s irrelevant. Unless you live in some cave somewhere, cooking and heating with fire, then you rely on science despite scientists having been in error in the past.

        Logical fallacies are fallacious. I’m writing a chapter about that in my next book.

      • Tonyb,
        “The CIA wrote a major paper on the subject in the 1970’s.”
        It didn’t. It assigned someone to investigate, and he went and got an earful from Kutzbach at Wisconsin. His report was tagged

        “This document is a working paper prepared by the Office of Research and Development of the Central Intelligence Agency for its internal planning purposes. Therefore the views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official position, either expressed or implied, of the Central Intelligence Agency.”

        My own story of the times is this. In 1976 I took up a job with CSIRO in Perth. The WA government was under pressure to extend their wheat transport facilities into more marginal areas, where cropping was now more feasible via mechanisation. CSIRO was asked for a climate outlook. I came from a Division with close contacts with Atmospheric Physics in Aspendale, so they asked me to ask them. The answer was unequivocal. The GHE was the future, and it would mean an expanded Hadley cell. The westerlies that sustain wheat in WA would move further south – lower rainfall. Bad investment.

        The WA Gov’t took our advice, which proved right.

        The “global cooling scare” is largely a myth, based on a few news mag articles.

      • Tonyb is correct, as far as I can see. The CIA wrote that paper, and it concerned cooling, but also cooling related disruption, including warm spells.

        CIA, August 1974. And it might well be deemed a major paper. Simple as that. It was indeed tagged for internal use, but it was prepared by the CIA. Maybe they were impressed by the loss of much of the Soviet wheat crop in 1972. Some agricultural follies in China might have got them thinking also.

        Obviously the agency got someone to do it. It was either that or wait for a million monkeys to type it by accident. The paper was theirs, under their letterhead.

        Why fudge this?

      • Mosomoso

        Yes, it was an official document. You often get disclaimers such as the one Nick cited. It doesn’t get away from the fact that it was an official document drawing together a lot of research and mentioning some of the leading climatologists of the day, whose academic work I then cited in my post.

        People are trying to rewrite history, which, bearing in mind it’s only ever ‘ANECDOTAL’, seems a lot of work for nothing.
        tonyb

      • Mr. Stokes, you are being a tad disingenuous to label the global cooling fad a myth. At least without mentioning Stephen Schneider’s paper on it.

        An early numerical computation of climate effects was published in the journal Science in July 1971 as a paper by S. Ichtiaque Rasool and Stephen H. Schneider, titled “Atmospheric Carbon Dioxide and Aerosols: Effects of Large Increases on Global Climate”. The paper used rudimentary data and equations to compute the possible future effects of large increases in the atmospheric densities of two types of human environmental emissions: greenhouse gases such as carbon dioxide; and particulate pollution such as smog, some of which remains suspended in the atmosphere in aerosol form for years.

        The paper suggested that the global warming due to greenhouse gases would tend to have less effect with greater densities, and while aerosol pollution could cause warming, it was likely that it would tend to have a cooling effect which increased with density. They concluded that “An increase by only a factor of 4 in global aerosol background concentration may be sufficient to reduce the surface temperature by as much as 3.5 °K. If sustained over a period of several years, such a temperature decrease over the whole globe is believed to be sufficient to trigger an ice age.”

        For a compilation of media articles you can wander over to your favorite site WUWT: http://wattsupwiththat.com/2013/03/01/global-cooling-compilation/.

        Or the National Science Board’s statement, “Judging from the record of the past interglacial ages, the present time of high temperatures should be drawing to an end . . . leading into the next glacial age.”

        Some quotes: The continued rapid cooling of the earth since WWII is in accord with the increase in global air pollution associated with industrialization, mechanization, urbanization and exploding population. — Reid Bryson, “Global Ecology; Readings towards a rational strategy for Man” (1971)

        ……or this:

        This [cooling] trend will reduce agricultural productivity for the rest of the century — Peter Gwynne, Newsweek 1976

        …..or this:

        This cooling has already killed hundreds of thousands of people. If it continues and no strong action is taken, it will cause world famine, world chaos and world war, and this could all come about before the year 2000. — Lowell Ponte “The Cooling”, 1976

        or this:

        ….If present trends continue, the world will be about four degrees colder for the global mean temperature in 1990, but eleven degrees colder by the year 2000…This is about twice what it would take to put us in an ice age. — Kenneth E.F. Watt on air pollution and global cooling, Earth Day (1970)

      • Tom,

        You lecturing Nick Stokes is hilarious.

        Just to remind you, this was the first comment on this little sub-thread;
        “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.

        That is a myth.

      • What a very instructive sub-thread. Often I read comments from tony, tom, mosomoso, and rud and think they sound like intelligent and well-informed people.

        Then I read their comments in this thread and the paper Adam linked.

        It is apparent that they are all, despite their intelligence and being informed, avoiding what Michael pointed out – the statement that spurred this sub-thread was the following:

        “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.

        None of them actually addresses the inaccuracy of that statement or the accuracy of Adam’s response:

        “As for the “global cooling consensus” of the 1970’s, that is actually a myth.”

        Talk about disingenuity! Why is it so important to them to offer individual examples that don’t speak to the original inaccuracy?

        From William’s paper:

        Fig. 1. The number of papers classified as predicting, implying, or providing supporting evidence for future global cooling, warming, and neutral categories as defined in the text and listed in Table 1. During the period from 1965 through 1979, our literature survey found 7 cooling, 20 neutral, and 44 warming papers.

        Let’s look further. In his initial response, tony says this:

        “The global cooling scare began in the 1960’s …”

        And further down he cites, in support of that claim, scientists speaking of a cooling phase – not evidence of scientists supporting a “global cooling scare.”

        Rud speaks of funding of research into climate change. Not on point.

        Mosomoso points out that, well, the CIA wrote a paper. Not on point.

        And Tom tells us that Steven wrote a paper. Not on point.

        What an instructive sub-thread. Why is this question so important that our much beloved “skeptics” would offer responses that don’t speak to the inaccuracy of the original statement?

        Even if it weren’t inaccurate, it would be fallacious to argue that because there was a consensus about cooling in the ’70s, we should assume anything in particular about scientists who say today that there is reason to be concerned about the risk of significant climate change as a result of continued ACO2 emissions.

        So this compulsion to avoid that actual inaccuracy is doubly interesting.

        Sometimes being intelligent and well-informed just ain’t enough.

      • Joshua

        I have taken the trouble to post 3 or 4 replies, all with actual links or information that directly relate to the leading climatologists of the time, and have quoted their words. I have their books in my bookcase here and much more of their work stored on my computer. Who is it you disbelieve? Budyko? Lamb? The CIA? The 20 leading climatologists?

        The scientists of the time were quite right to point to fears about cooling as after the warm decades preceding the downturn it looked for a time as if it might be severe and ongoing. To claim cooling was a myth is to deny the papers written, the conferences held, the books written and the general consensus of the time which gradually gave way in the 1970’s to a realisation that the cooling was turning to a warming.

        Which papers and climatologists do you claim were lying?

        It would be surprising if present day scientists weren’t warning about warming. Why do you think I would have a problem with that?

        The problem is that I don’t believe they provide a sufficient historic context to demonstrate this, nor access to data that demonstrates it. Shaun quoting 125 years as proof of natural variability is not good enough, nor is the belief that we have for instance a good knowledge of global sea surface temperatures to 1850 enough to demonstrate this is so.

        tonyb.


      • Shaun quoting 125 years as proof of natural variability is not good enough

        It’s plenty good enough, LOL. See Stadium Wave.

      • tony –

        ==> “Which papers and climatologists do you claim were lying?”

        I am not claiming that anyone was “lying.”

        I am pointing to the fact that none of what you have spoken about speaks to the evidence provided in William’s paper – evidence that supports a conclusion that David’s statement was inaccurate and Adam’s was accurate (the statements that spurred this thread). Can you show me that William’s evidence is inaccurate – that by a matter of some 7 to 1, published papers supported a conclusion of warming rather than cooling?

        I am pointing out that you are equating cooling, or a cooling phase, with evidence of a “scare.”

        And I am pointing out that this whole argument takes place in the name of fallacious rhetoric – that even if there were a consensus towards cooling (which it appears there wasn’t), it wouldn’t tell us much of anything useful w/r/t evaluating today’s science. The whole argument is embedded in fallacious reasoning.

        I actually find it unfortunate to see you stain your solid efforts at collecting evidence by signing on with this kind of partisan “skepticism.” I pretty much expect it of Rud, Tom, and mosomoso to some extent, but I like to think that you’re better than that.

      • Heh. Some 6 to 1. And of course, the neutral papers also speak to the “cooling scare” myth (as it refers to the scientific literature).

      • Joshua: I remember it and, not being part of the climate science community, learned of it through the media and word of mouth. And I can say this with confidence; the message was widespread and scary. However, like today, it was at the bottom of concerns for most people; it was at the time of watergate, recessions every three years, and high inflation. If you want to understand, perhaps the newspaper articles of the day would be the best source.

      • Which papers and climatologists do you claim were beating their wives, Joshua?

      • It takes a physicist to model an unknown fear today.

        “Physicist Alessandro Vespignani of Northeastern University in Boston is one of several researchers trying to figure out how far Ebola may spread and how many people around the world could be affected. Based on his findings, there will be 10,000 cases by September of this year and it only gets worse from there.

        (A model created by Alessandro Vespignani and his colleagues suggests that, at its current spread, Ebola may infect up to 10,000 people by September 24. Other models suggest up to 100,000 infected globally by December of this year. The shaded area is the variability range.)”

        I guess.

      • Joshua

        Despite being quoted chapter and verse of the cooling fears articulated by such as Budyko and Lamb you still harp on about no evidence.

        Put your logical sceptical hat on for a moment. Here is what I said.

        ‘Regarding your last link and comment on global cooling. We have had this discussion before with one of the authors. William Connelly . The global cooling scare began in the 1960’s and was supported by many papers and the leading climatologists of the era including Hubert lamb and Budyko who wrote of their concerns in books as well as papers. By the time the 1970’s came around the tide was already starting to turn against the notion of cooling as temperatures started to deviate from the previous concerns.’

        Imagine yourself to be a climate scientist. There had been a 30 year warming from 1940 then this suddenly reversed, with some notably cold winters. We then have 25 or so years of cooling. It’s there in the graphs.

        Do you look at the evidence and say its cooling, or do you ignore the evidence and claim warming that isn’t there?

        William C took papers up to 1979. By that time the cooling concerns had long passed. Why would scientists still be writing papers about it by then? They would look foolish in light of the evidence showing a warming trend was well established.

        Please note what I wrote: that this was a 1960s scare based on the evidence. By the time the 70s came round the evidence was showing something else to be happening, and by the end of that decade warming was well established.

        Of course William would find more papers if he went up to 1979. If he looked objectively at the literature of the 1950s/1960s era – with an overlap, as it always takes time for an idea to lose momentum – he would find the general thinking was that there was a problem with cooling. By the end of the 1970s there were very few making that claim.

        tonyb

      • Joshua

        I said here

        https://judithcurry.com/2014/09/01/how-long-is-the-pause/#comment-623985

        ‘….Imagine yourself to be a climate scientist. There had been a 30 year warming from 1940 …’.

        I did of course mean UNTIL 1940.

        tonyb

      • tonyb,

        It’s a long way from a few papers and some concerns to “prevailing scientific wisdom”.

      • Only takes one valid observation…

        http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0105948

        to end all the speculation.

      • tony –

        You raise an interesting point w/r/t the timing. First, and less importantly, let’s look at what was said.

        ==> “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling.”

        So technically speaking, in that the ’70s would mean from 1970-1980, even with your explanation the original statement would be inaccurate and Adam’s statement would be accurate.

        But more importantly, (if anything can ever really be considered to be important in the silly squabbles), the underlying debate is whether (1) there was a prevailing ‘scare” promoted by scientists that there was catastrophic cooling on the horizon and, (2) whether that has relevance (in a meaningful sense as opposed to a partisan rhetorical sense) to today’s debate about climate change.

        As for the first point… even if we were to break down the range of the 70s into the early 70s and the later 70s, or even if we were to extend that to the mid-60s or early 60s – there is a large gap between saying that some scientists tentatively spoke of a “phase” of cooling and saying that it was a “scare” being promoted by scientists. You seem (to me) to be deliberately jumping over a lower bar. That’s kind of OK, IMO, if you make it clear what you’re doing. The information that you add w/r/t the different time periods involved and how that relates to William’s data is relevant – but your approach to enlarging the discussion to include that point seems to me to be more in service of advancing a fallacious agenda. It would seem easy enough to break down William’s data by publishing date. Have you done so? If so, then it would seem that you could easily show me that for a different period of time than what David referenced, there was a “prevailing scientific wisdom” that the Earth was cooling – for at least a time-limited period. But even that wouldn’t justify your “scare” language, as to do so you’d have to cross yet another, higher evidence bar – to show (1) that the prevailing wisdom was that there would be a long-term cooling, and (2) that this long-term cooling trend was seen as worthy of being “scared” of.

        As for the 2nd point, I’ve said it multiple times. This entire discussion is rooted in a fallacious attempt by “skeptics” to argue that (if it were true that) scientists’ believing something 50 years ago has direct relevance to what they believe now – with the state of the science being completely different.

        You know, tony, when I read the technical arguments of “skeptics,” sometimes they make sense to me. But I know that my ability to understand the technical arguments is extremely limited, so I have to use other tools to try to evaluate the scientific debate. One of those tools is to look for arguments made by participants in the climate wars that I can tell are fallacious – that I don’t need technical scientific skills to evaluate. This is one of those cases. I also add into that consideration of the general approach of certain individuals to debate. You’re actually one of those “skeptics” that I think is more reliable in your general approach.

        When Rud or Tom or mosomoso employ fallacious rhetoric to advance their views, it is instructive as to the probabilities of the technical arguments they make. Not dispositive, of course. They could simply advance fallacious arguments in one area and make completely valid arguments in a more technical area. But it is evidence that helps me to evaluate probabilities. Overt politicization is more information about probabilities. Refusal to acknowledge the symmetry of motivated reasoning is more information about probabilities. Prevalence of view among highly educated and specialized “experts” is also information about probabilities. Consistency in application of criteria (say, about uncertainty) is also information about probabilities.

        So when someone who I generally consider to be more reliable within that matrix of probabilities makes what seem to me to be weak arguments in the service of fallacious rhetoric – I have to ask myself why he does that, and whether or not it is more information about the probabilities. If a more reasonable “skeptic” (IMO) is willing to engage in weak and fallacious debate, then does it say anything about the larger spectrum of “skeptical” arguments? I think it does.

        Of course, the same processes apply on the other side of the debate with “realists.” Yes, I do tell them that. But I can guess that, predictably, some of my much beloved fans like mosher and Don will sign on to dismiss the probabilities to which I speak by fallaciously attacking me and personalizing the argument. Such identity-aggressive and identity-defensive behaviors are well-predicted by the empirical evidence that supports the role of motivated reasoning in highly polarized debates like the debate about climate change. It’s funny that smart and knowledgeable people fall into the trap of thinking that whether or not I apply standards evenly speaks to whether or not other people do.

        But as always, I do thank you for the respectful exchange. It is one of the reasons that I hold your opinions in higher regard than the opinions of many of my much beloved “skeptics.”

      • And tony –

        One more point (because the previous comment wasn’t long enough and my fans just can’t get enough of my long comments)…

        It’s also rather interesting that even if it were true that there was a “prevailing scientific wisdom” in, say, the mid-60s that there was a “scary” imminent long-term cooling, if anything the fact that such a view was relatively quickly corrected because of increased information and a longer data period is all that much more reason to reject much of what I see in “skeptical” arguments about the state of the science of climate change. But then again, trying to extrapolate from those events is more fallacious than meaningful, and something only likely to be done by someone engaged in motivated reasoning… so I won’t make that argument here.

        Heh.

      • I like ‘relatively quickly corrected’. Now? Not so much.
        =============

      • Joshua

        I think you are paying me a back handed compliment. Here is the paper in question;

        http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2370.1
        entitled ‘the myth of the 1970’s global cooling consensus.’ It is a well written paper but from the title we can see what view point the authors are putting over.

        See table 1.

        As I have said at least three times on the thread, it wasn’t a 1970s thing. The concerns started in the 1950s and built up in the 1960s when the trend had become obvious. As Budyko said:

        ‘in the 1940’s the warming trend was overcome by a cooling trend which intensified in the 1960’s and in the mid 60’s the mean air temperature of the Northern Hemisphere (once again) approached the level of the cold seasons of the late 1910’s’

        However, this strong trend had tailed off by the 1970s, so a search of that last decade for evidence of cooling is rather pointless: by that time the concerns were about warming, and as the decade wore on more papers explored the emerging warming trend.

        It is a shame that a detailed search of the earlier papers and books was not made for the article, some of which have been quoted here including Lamb and Budyko, who changed their minds concerning the prevailing climate state in the early 1970’s as new evidence came along.

        tonyb

      • tony –

        No backhanded compliment. Unlike most of the “skeptics” I encounter here, you are willing to exchange views in good faith, and you tend to avoid conflating personal attacks with logical arguments. That really does increase the validity of your perspective, IMO – particularly if it is w/r/t arguments being made where I can’t evaluate the technical merits of different viewpoints.

        On the topic of the sub-thread – AFAIAC, you still have not addressed the points that I’ve made (some of them a couple of times). Not sure that there’s any further for us to go on this one unless you do.

      • Joshua, it’s funny to watch you employ the tactics you claim to abhor in others.

        Tony, myself and others did not claim that global cooling was prevailing scientific opinion in the 70s. But that’s a convenient strawman for you to pin your obsessions to.

        What we claim is that there was widespread media coverage of a potential cooling with potential damaging impacts. This was in part fed by peer reviewed papers including one by Stephen Schneider and in part fed by grave pronouncements from national associations.

        As others have pointed out to you, I too was there and I remember the coverage quite well.

        We don’t obsess on it–that seems to be your schtick, although it’s a blessed relief for you to be whining about us instead of Judith–so please continue.

        We do note that some of the current whining by alarmists has a familiar ring to it. Perhaps that grates on you most of all.

        But I guess when fallacious reasoning leads to felicitous conclusions it’s acceptable.

      • In Search Of… The Coming Ice Age

        Watched it in 1977. Also watched Tomorrow’s World, spreading soot on Arctic ice to melt it. Dick Emery doing his sketch on surviving the ice age in his Inuit costume.

      • Above I show why Tyndall’s experiment has been misinterpreted, and also why net surface IR emission has been exaggerated 5.1x.

      • Correction: the exaggeration of net GHG-absorbed surface IR is (396-40)/23 = 15.5x!

      • Hey Joshua, check out the Stephen Schneider comment
        on a coming ice age @ 21.19 on the video that Doc Martyn
        posted. Guess those geologists, meteorologists et al, (not Al)
        and people stuck in snow on the roads were quite worried at
        the time.

        We’d be worried too if the cold hits now what with greater
        populations to be fed, more people crowded in high rise
        living and dependent on heating, and reduced energy
        efficiency as intermittent solar and wind energy replace
        fossil fuels thanks ter the back ter the, er, golden age,
        campaigning by Greens.

    • Dave Andrews,

      Actually the predictions of global cooling were communicated mostly in the media and in some policy circles. The scientific community as a whole did not articulate a predominant view on the future direction of climate. The behaviour of CO2 as a “greenhouse” gas was well known by the 1970s, but it was not clear when and if it would become a dominant driver of climate. In 1965 a report to the Johnson Administration anticipated warming due to CO2: Environmental Pollution Panel (1965), Restoring the Quality of Our Environment, 133 pp, President’s Science Advisory Committee, Washington D.C.
      see
      http://climatechangenationalforum.org/fears-of-freezing-the-1970s-are-calling-they-want-their-climate-policies-back/

      • wrhoward

        It is not correct that the concerns over cooling were a media creation.

        Lamb, Mitchell, Budyko, Ladurie etc etc sparked the concerns.

        As Budyko himself says (he seems to have subsequently changed his mind about cooling, as did Lamb – as scientists should do when new evidence comes to light) in his book ‘The Earth’s Climate: Past and Future’, page 148:

        ‘it was generally accepted that a tendency towards climatic cooling appeared during the last few decades; since the sign of temperature fluctuations changes relatively rarely, the scientists concerned with climatic change almost UNANIMOUSLY (my capitalization) believed that the temperature would continue to decrease in the near future…Lamb 1973 mentioned that more than 20 forecasts of the early 70’s concerning climatic change predicted a cooling trend in the next few decades, but (then) indicated a lack of sufficient scientific grounds for these forecasts and two years later obtained the FIRST (my capitalization) evidence of a possible climatic change towards warming.”

        (The temperature cooling can be seen in the Willett/Mitchell curves of the time)

        Budyko continues;
        ‘in the 1940’s the warming trend was overcome by a cooling trend which intensified in the 1960’s and in the mid 60’s the mean air temperature of the Northern Hemisphere (once again) approached the level of the cold seasons of the late 1910’s .”

        To summarise, here is what seems to have happened. As you know there was a very substantial warming from the 1920s to the 1940s. This reversed itself. By 1962/3 the dropping temperature made Callendar himself doubt his greenhouse theory. Budyko, Lamb and an almost ‘unanimous’ agreement of climate scientists believed we were heading into a significant cooling phase. Lamb eventually pointed out in 1973 that the cooling was not sufficiently long-lived to be a scientifically meaningful climatic trend of at least 30 years. The widespread scare of cooling changed into a scare of warming as temperatures started to recover.

        Here are a couple of additional links and a quote:

        “The second important group analyzing global temperatures was the British government’s Climatic Research Unit (CRU) at the University of East Anglia, founded by Lamb in 1971 and now led by Tom Wigley. Help in assembling data and funding came from American scientists and agencies. The British results agreed overall with the NASA group’s findings — the world was getting warmer. In 1982, East Anglia confirmed that the Northern Hemisphere cooling that began in the 1940s had turned around by the early 1970s.”

        http://www.aip.org/history/climate/20ctrend.htm

        Also see;

        http://earthobservatory.nasa.gov/Features/GISSTemperature/giss_temperature2.php

        So the 25 year long (very real) cooling scare was most rife during the 1960’s and came to an end in the early 70’s.

        tonyb

      • wrhoward, it should be fairly easy to find the letters to the editors of the media where the bulk of the scientists were trying to set the record straight. I’d love to see the collection from that time period. Are you aware of one? I wouldn’t think it would be very flattering to the climate science community if they were perfectly willing to let a few alarmists manipulate the public in such a way and not try to stop it. It would probably be better to just claim to have been wrong then but right now.

      • Steven ” it should be fairly easy to find the letters to the editors of the media where the bulk of the scientists were trying to set the record straight. I’d love to see the collection from that time period. Are you aware of one?”

        My point was that *as a community* climate scientists did not communicate a prevailing view. Today, the prevailing paradigm is that global warming due to accumulating greenhouse gases is the main climate risk we face. This view is communicated widely through IPCC, national academies, scientific societies, government science agencies, and the media.

        In the 1970s climate scientists as a community were more divided, and had not come to a predominant view on which way the climate’s trajectory would proceed from the 1970s. Some studies had already suggested GHG-driven warming would dominate climate, others anticipated cooling as some of the posts here show.

        http://climatechangenationalforum.org/fears-of-freezing-the-1970s-are-calling-they-want-their-climate-policies-back/

        see also:

        Peterson, T. C., W. M. Connolley, and J. Fleck (2008), The Myth of the 1970s Global Cooling Scientific Consensus, Bull. Am. Meteorol. Soc., 89(9), 1325-1337, doi:10.1175/2008BAMS2370.1.

        Weart, S. R. (2010), The idea of anthropogenic global climate change in the 20th century, Wiley Interdisciplinary Reviews: Climate Change, 1(1), 67-81, doi:10.1002/wcc.6.

        http://scienceblogs.com/gregladen/2013/06/04/the-1970s-ice-age-myth-and-time-magazine-covers-by-david-kirtley/

      • climatereason

        “It is not correct that the concerns over cooling were a media creation.”

        Indeed they were not a media *creation* but there were media stories communicating the concerns. Nothing like the current coverage of climate change, but stories were there.

        The notion that in the 1960s and 1970s climate scientists were *unanimous* about anything is incorrect.

        http://scienceblogs.com/gregladen/2013/06/04/the-1970s-ice-age-myth-and-time-magazine-covers-by-david-kirtley/

      • The little ice age of the mid 20th century was widely discussed in the astrophysics community, e.g. by Sagan:

        Although the radiative transfer time scale of photons from the core to the photosphere of the Sun is long, all the models identify epochs of anomalously low neutrino flux with epochs of anomalously low surface luminosity; and the temptation to connect the present “ice age” on Earth with low solar luminosity, L, has been unsuccessfully resisted.

        http://www.nature.com/nature/journal/v243/n5408/abs/243459a0.html

      • wrhoward, I’m aware of the argument you were making. I’ve seen it before. I don’t have a problem with your argument being that the majority of the climate science community sat in silence as they watched the general public being told things they felt were in error.

    • Tom,

      You lecturing Nick Stokes is hilarious.

      Just to remind you, this was the first comment on this little sub-thread:
      “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.

      That is a myth.

      • wrong spot.

      • Michael

        Various of us have quoted the leading climatologists of the time and relevant contemporary papers to demonstrate that what we remember is exactly what happened. Global cooling was a hot topic advanced by (97.5%) of the climate establishment of the day. By the early 70’s it had turned round to generally agreed warming.
        tonyb

      • Michael, perhaps you have a different definition of lecture. Reminding Mr. Stokes of what was written by one of the leading lights of the Consensus is not a lecture.

        I would show you what a lecture is, but you’re not worth the words.

        Run along now–Joshua is downthread and up. You’ll find him.

      • Tom, it was one paper.

        That’s not “the prevailing scientific wisdom”.

      • Mikey, the point is it wasn’t just Schneider–if you can read you’ll see evidence here on this thread. There were at least seven papers in a two-year span that were peer reviewed predictions of global cooling. Not to mention a lot of media play and concerned talk from respected institutions.

      • Yes Michael, there’s evidence.

        Of course, the evidence doesn’t refute that it is a myth that there was a “prevailing wisdom” of cooling.

        But there’s evidence. Look at the evidence. There were 7 papers.

        They reflected a minority opinion of a cooling phase, and weren’t scientific reports supporting a “scare,” and yes, the question of “prevailing wisdom” and “consensus” were what the sub-thread started with, but…..er….uh…..there were 7 papers.

      • Tom, are you disingenuous, or just a foo1?

      • But Michael, there were 7 papers.

      • OK then, I concede……ignoring the 50 or so that said cooling was not likely in the future.

      • So we’ve gone from it being a myth to not being prevailing wisdom.

        Even f**ls can make progress. Mikey–you found your Joshmate! See? All is well…

      • ==> “So we’ve gone from it being a myth to not being prevailing wisdom.”

        It is a myth that it was the “prevailing wisdom.”

        Once again, we go back to what spurred the subthread:

        “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”.

        That is a myth. As Adam said.

        As Michael said.

        And that’s without even getting into the myth of a “scare” among scientists.

        What an instructive sub-thread!!!

      • Glad to help Tom.

      • LMAO. This is the progress. Your position remains a despicable torturing of the truth. Fully supported by a bunch of puffed-up crap. Used-car level stuff.

      • The common theme to both scares was to blame industrial society.

        Hey, AnthroCO2 is like an injection of sugar into a hypoglycemic patient.
        ==================

      • JCH, this is how they do everything. Quotes from the media? Doesn’t matter. Expressions from august institutions? Piffle. Academic papers, including one by Saint Schneider? Ignored.

        This is how they do everything. Which may explain why they’re getting their keisters handed to them.

        When was the last time you heard a skeptic complaining about the state of climate communications?

      • kim doubles down.

        Popcorn please!

      • Plants have certainly revived, Michael. Had we an easy metric it could be shown in the Animal Kingdom, too.

        Gaia stirs restlessly on the gurney. “Air, air”, she mumbles. Of course she means Carbon Dioxide. That Oxygen mask doesn’t really do much for her.
        ==========================

      • Tom Fuller | September 2, 2014 at 9:31 am |
        ” Piffle. Academic papers, including one by Saint Schneider? Ignored.”

        Poor Tom.

        What are ignored, are the many caveats in the paper, instead just quoting the abstract.

      • JCH –

        ==> “LMAO. This is the progress. Your position remains a despicable torturing of the truth.”

        It might help if you were a bit more specific.

      • I’ve seen a video of Stephen Schneider from that time, bell bottoms and all, in which he solemnly tells us that we simply don’t know enough to tell which way climate is going. He has fooled a lot of people since, but first he had to fool himself.
        ===================

        “What are ignored, are the many caveats in the paper, instead just quoting the abstract.”

        The realist’s dictionary:
        Caveat – see also “exit strategy.” In climate science, a “caveat” is all-important from roughly 1960-1979 and applies only to papers and text books that describe cooling. Caveats are plentiful in modern climate papers and texts, however to notice them is to risk being labeled a “denier.” Following general trends in politicized science, expect a phase shift in the next decade or so where today’s caveats become all important and only “deniers” note the caveats in papers produced in the 1960s and 1970s. The basic rule is that climate science is always 100% correct, especially when it is 100% wrong.

      • Joshua writes- “It might help if you were a bit more specific.”

        Imo, when it comes to purely political discussions, being specific seems to occur infrequently as it can lead to people pointing out where a person was wrong.

      • “Back in the 70’s the prevailing scientific wisdom was that the earth was cooling”

        From the mid-1940s to ~ the mid 70s the data said the earth *was* cooling. So if what you mean by “prevailing scientific wisdom” is that scientists recognised what the data indicated, then in that sense your statement is correct.
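        A claim like this one is easy to check on any annual anomaly series with an ordinary least-squares trend over the subsample. The sketch below is purely illustrative: it uses a synthetic series (the cooling segment and all rates are assumed numbers, not real data) and a simple AR1 adjustment of the trend’s standard error, the kind of adjustment McKitrick’s paper in the head post contrasts with the HAC-robust estimator.

```python
import numpy as np

def trend_with_ar1_se(y):
    """OLS trend of an annual series y, with the standard error inflated
    for lag-1 autocorrelation of the residuals via an effective sample
    size (the simple AR1 adjustment; McKitrick argues a more robust
    HAC estimator should be used instead)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
    sxx = ((t - t.mean()) ** 2).sum()
    se = resid.std(ddof=2) / np.sqrt(sxx)           # classical OLS slope SE
    se_adj = se * np.sqrt(n / max(n_eff, 2.0))      # AR1-inflated SE
    return slope, se_adj

# Purely illustrative synthetic series: gentle warming, with a cooling
# segment imposed from 1945 to 1975 (assumed rates, not real data).
rng = np.random.default_rng(1)
years = np.arange(1900, 2014)
steps = np.where((years >= 1945) & (years <= 1975), -0.005, 0.007)
anom = np.cumsum(steps) + rng.normal(0.0, 0.08, len(years))

mask = (years >= 1945) & (years <= 1975)
slope, se = trend_with_ar1_se(anom[mask])
print(f"1945-1975 trend: {slope * 10:+.3f} °C/decade (±{2 * se * 10:.3f}, 2-sigma)")
```

        On a series built this way the subsample trend comes out negative, which is all the quoted statement requires; whether such an interval is “trendless” in a statistically robust sense is exactly what the HAC-robust method in the head post is designed to decide.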

  3. Maybe now is the time to divorce policy from climate modeling, and consider both the other risks of CO2 and what can be done about it while minimizing the risks to the economy. Climate is hardly the only non-linear system involved.

    • What is needed is a discussion of the risks and benefits of CO2.

      I believe we need a CO2 level of about 600 PPM.

      Reducing CO2 will require more water and land to grow food, which will lead to draconian population controls to avoid starvation by 2100, since we are currently wasting land to grow fuel.

      The problem with Lovejoy and similar studies is that the Argo system and satellites were not deployed in the Roman Period or even the Middle Ages.

      We do not have good data from previous warm periods or even previous natural cycles.

      His claimed accuracy for mean global temperature anomaly is ±0.03 K.

      I’m not sure how you get that from historic data. Climate scientists can’t even agree on whether there was a significant medieval warming.

      • What is needed is a discussion of the risks and benefits of CO2.

        I believe we need a CO2 level of about 600 PPM.

        Totally unwarranted, considering how little we know about bio-feedbacks and risks to crops from emergent weeds and animal (primarily insect) pests. (Also, IMO, fungal/lichen).

        Depending on how accurate Salby’s work is, if the atmospheric pCO2 has really been constrained through the Pleistocene the way current “consensus” science says it has, raising it significantly above that level carries a real risk of non-linear responses from the ecosystem(s). The assumption that “more CO2 is better” for unprotected crops, just because it works in greenhouses is totally unwarranted. Greenhouse crops are much better protected…

      • Paleo helps you there, AK. It’s safe.
        ==========

      • Paleo helps you there, AK. It’s safe.

        No it doesn’t. AFAIK current “consensus science” says it’s been since before the beginning of the Pleistocene that the global pCO2 was last higher than 400ppm.

        Don’t you think most of the biome has had plenty of time to adjust, and evolve new features, without the risk from a higher global pCO2? Granted, there are localized places/times where it’s much higher, naturally. But those are exceptional situations, where opportunistic populations (species, sub-species, or strains) can take advantage.

        So when the entire globe crosses some tipping point or other, what is the risk that one or more of the opportunistic populations will experience a population explosion?

        Such opportunistic explosions can result in much more rapid evolution of the strain(s) involved.

        The risk isn’t from failure to grow, it’s from too much growth, and evolution, of the wrong things.

        And parallels from earlier epochs don’t matter, because they weren’t the same populations.

      • AK – I see no reason to assume that evolution would produce the “wrong things.” Why would a change in CO2 produce proportionally more “wrong things” than any other change?

        Splain that one.

      • Meh, tipping points, all over paleo.

        We can’t have evolution of the wrong things. Nope, nope, nope.

        Those ‘unsame’ populations still have the genetic machinery to adapt.

        Relax, everything is for the best in this best of all possible worlds.
        =============

      • Jim2 beat me to ‘wrong’. Some people think humans are ‘wrong’.
        =========

      • AK – I am unaware of any benefit from limiting CO2 that justifies starving billions of people.

      • AK – I am unaware of any benefit from limiting CO2 that justifies starving billions of people.

        Well, I could say

        PA = Alarmist Klaptrap

        Certainly there’s no reason to suppose that letting the pCO2 rise to 500-550 then dragging it back down to, say, 400 would be “starving billions of people.” Or even 350. Any more than similar mitigation would. We’ve been doing fine at 350-400.

      • AK = Alarmist Klaptrap

      • I’m not sure how you get that from historic data. Climate scientists can’t even agree on whether there was a significant medieval warming.

        Real Scientists agree there was a Medieval Warming. Real historians also agree that the Vikings moved to Greenland and grew crops there.

        CO2 Alarmists need the warm periods and cold periods of the past to not be true. That is why the hockey stick was promoted. The IPCC cannot still use the hockey stick because too many people really do know that temperature does vary naturally, in a cycle that goes up and down and repeats. That the IPCC used the hockey stick even once demonstrates they are not really about real science.

        Science is always skeptic. Look for real scientists outside the Consensus Clique. There are many really good scientists.

      • “Wrong things” for our crops.

      • I listen to both jim2 and AK. We’re all ignorant fools.
        ===============

      • Crop plant genetics are managed by humans; I would say those are the least of our worries WRT any warming that might occur. Cooling would be very much harder to deal with, though.

        That being said, over time we will see cheaper energy via some mix of nuclear fusion and fission. We just have to put the enviro-not-sees aside, put government back in its rightful place, and continue with the technological progress that afforded us the great standard of living we now enjoy.

        I might point out that the modern era in the US was ushered in by independent, monopolistic capitalists. They brought us the coal, petroleum products, steel, and electricity upon which the modern era was built. No matter how you slice it, billions of people have benefited from their efforts.

      • We’ve been doing fine at 350-400.

        Actually, we have done much better at 350-400 than at the 280 we had before we started using a lot of carbon fuel.

        Most every real study on how green things grow indicate that more CO2 will be even better.

      • Only comparisons of quite different estimates based on different methodologies are capable of quantifying the accuracy of global temperatures.

        My accuracy estimate comes from four instrument-based annual global mean data sets since 1880. One of them (the 20th Century Reanalysis) uses only monthly sea surface temperatures and pressure data from stations. It uses no land based temperature series, so there are no issues about heat islands, changing the locales of stations, etc.

        One simply calculates the mean of the four series and then the differences of each from the mean. One then analyses the amplitude of the fluctuations in these differences as a function of time scale. One finds that the fluctuations in the differences are roughly constant as a function of scale, corresponding to the cited figure (±0.03 °C; see http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/ESDD.comment.14.1.13.pdf). Notice that this is actually a surprising result, since one would expect that the fluctuations at longer and longer scales would instead diminish (due to the increased amount of averaging over the longer time periods).

        This lack of convergence is an indication of the nontrivial measurement issues (they don’t follow classical statistics), yet the actual level of uncertainty is still much lower than that needed to check the anthropogenic warming issue, which is closer to a 1 °C fluctuation.
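        The procedure described above (mean of the four series, per-dataset deviations, fluctuation amplitude versus time scale) can be sketched as follows. This is a minimal illustration only: the four series are synthetic stand-ins, and Haar fluctuations are assumed as the amplitude measure; the analysis of the real datasets is in the linked preprint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for four annual global-mean anomaly series
# (a shared slow "climate" signal plus independent per-dataset noise).
n_years = 134                                   # e.g. 1880-2013
signal = np.cumsum(rng.normal(0.0, 0.02, n_years))
series = np.stack([signal + rng.normal(0.0, 0.05, n_years)
                   for _ in range(4)])

# Step 1: mean of the four series, and each dataset's deviation from it.
mean_series = series.mean(axis=0)
deviations = series - mean_series               # shape (4, n_years)

def haar_rms(x, dt):
    """RMS Haar fluctuation of x at time scale dt (dt even): the mean of
    the second half of each dt-year window minus the mean of the first."""
    half = dt // 2
    fl = [x[i + half:i + dt].mean() - x[i:i + half].mean()
          for i in range(len(x) - dt + 1)]
    return float(np.sqrt(np.mean(np.square(fl))))

# Step 2: amplitude of the deviation fluctuations versus time scale.
for dt in (2, 4, 8, 16, 32, 64):
    amp = np.mean([haar_rms(d, dt) for d in deviations])
    print(f"dt = {dt:2d} yr   RMS inter-dataset fluctuation = {amp:.3f}")
```

        With synthetic white-noise deviations, as here, the amplitudes shrink as the averaging scale grows, which is the classical behaviour; Lovejoy’s point is that the real inter-dataset deviations stay roughly flat near ±0.03 °C instead.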

      • Matthew R Marler

        Shaun Lovejoy: yet the actual level of uncertainty is still much lower than that needed to check the anthropogenic warming issue which is closer to a 10 C fluctuation.

        Ten C?

      • Kim says, “Some people think humans are ‘wrong’.”

        jim2 thinks American humans are ‘wrong’

      • AK “Don’t you think most of the biome has had plenty of time to adjust, and evolve new features without the risk from a higher global pCO2?”

        For a good part of the late Pleistocene pCO2 has fluctuated between ~ 180 ppm (glacial stages) and ~280 (interglacial). Most extant species (obviously not individual organisms), including us, have adapted to this range. We have now taken pCO2 well beyond that Pleistocene range, more into Pliocene levels.

        A good thing? A bad thing? I don’t know, but I think it will be interesting.

        Lüthi et al. (2008), High-resolution carbon dioxide concentration record 650,000-800,000 years before present, Nature, 453(7193), 379-382, doi:10.1038/nature06949.
        (Fig 2 shows the whole 800,000-year CO2 reconstruction)

        Seki, O., G. L. Foster, D. N. Schmidt, A. Mackensen, K. Kawamura, and R. D. Pancost (2010), Alkenone and boron-based Pliocene pCO2 records, Earth Planet. Sci. Lett., 292(1-2), 201-211, doi:10.1016/j.epsl.2010.01.037.

        “during the warm Pliocene pCO2 was between 330 and 400 ppm, i.e. similar to today. The decrease to values similar to pre-industrial times (275–285 ppm) occurred between 3.2 Ma and 2.8 Ma”

      • O we are all ignorant fools, sometimes even
        radiant fools … the glare that obscures.
        Good enuff fer Socrates,
        good enuff fer Thurber,
        good enuff fer serfs.

      • Beth: We are like 5 year olds playing soccer; just playing the ball.

    • AK | September 1, 2014 at 8:13 am | Reply
      “…the other risks of CO2…”

      Please articulate the other risks you are referring to.
      thanks

    • The biggest risk associated with CO2 is that if we ever reduce it, green things will not grow as well, it will require more water, and life on earth will get more difficult.

      • The action of the sun on the biome inevitably, virtually permanently, sequesters CO2. Gaia is very grateful that she has evolved a species (Human Carbon Volcano) which is supplementing natural vulcanism, which has historically been failing to keep CO2 in an optimal range.
        ==============

      • What’s the optimal range? I dunno, but we are pretty clearly below it.
        =============

      • PCT perhaps this is helpful:

        The chart shows that photosynthesis shuts down at 200 PPM (C4 plants can function with less CO2). There was a terrestrial photosynthesis crisis before the current interglacial. At 280 PPM – a near starvation level – plants need larger amounts of water than they do at more normal CO2 level. When plants first evolved the CO2 level was 17.5 times (1750%) higher than the current 400 PPM and 25 times (2500%) higher than 280 PPM.

        http://www.csiro.au/Portals/Media/Deserts-greening-from-rising-CO2.aspx

        The deserts are greening because of the increased CO2 and the reduction in water consumption.

        If we don’t increase CO2 before the interglacial ends the result will be catastrophic.

        The only effective source of CO2 is mankind. Nature tends to bind CO2 and remove it from the atmosphere. It is fortunate we came along when we did because by increasing CO2 we can save life on this planet from extinction.

      • Human Carbon Cornucopia.
        =====================

      • “I’m sorry Mr. President, there’s simply too much carbon pollution in this salad for it to be politically safe for consumption……would you like me to taste the lobster instead?”

    • “…what can be done about it….”
      “A strange game.The only winning move is not to play.”

    • If I understand your
      “AK | September 1, 2014 at 9:42 am |’Paleo helps you there, AK. It’s safe.’ No it doesn’t.”
      post, the risk you cite from increased levels of CO2 is greening of the earth. I know of no science supporting a “tipping point.” Presumably the risk is that unwanted flora would increase as well as beneficial flora. Unless there are credible studies that identify specific and significant risks of unwanted flora, I personally would call that an unknown, not a risk. Particularly since starvation, which is a known and immediate problem and not a risk, would be mitigated by “greening” of crops. Unlike Pangloss I do not believe “we live in the best of all possible worlds” or that there is anything special or sacrosanct about the current levels of CO2 or temperature.
      While not wishing to endorse a specific level of CO2, I agree with the sentiments of
      “PA | September 1, 2014 at 8:44 am | Reply
      Reducing CO2 will require more water and land to grow food, which will lead draconian population controls to avoid starvation”

      I do concur with your comment
      ” AK | September 1, 2014 at 8:13 am | Reply
      Maybe now is the time to divorce policy from climate modeling”

      Other than “greening” of the earth, did I miss any other risk you are concerned about?

      • Presumably the risk is unwanted flora would increase as well as beneficial flora. Unless there are creditable studies that identify specific and significant risks of unwanted flora, I personally would call that an unknown not a risk.

        The risk of any specific “unwanted flora” is an “unknown unknown”. The probability that some of the new, opportunistic flora would be unwanted competitors to our crops is likely beyond specific evaluation, but certainly not zero.

        As to how much competition? Probably not much, given 5-10 years to develop treatments. What if we start getting 1-2 new competitors every year?

        A bigger risk, however, is the emergence of new, fast-breeding insect pests among the new wild flora, with abilities to compete with us (and our domestic animals) to eat our crops.

        AFAIK there’s no real consensus among students of eco-evolution, but there’s good evidence that many evolutionary innovations occur via sudden, wild population explosions. Moving the pCO2 far outside the bounds it’s maintained since before the start of the Pleistocene increases the chance of that. By how much? Nobody knows. Could be negligible, could be a major risk.

        Note that I said the Pleistocene because of its glacial oscillations, with repeated evolutionary stress at every phase switch. AFAIK best estimates of the last time the pCO2 was much over 400 ppm go much farther back, but IMO it’s the Pleistocene glaciation that has produced the highest evolutionary stress on all land populations, resulting in evolution of new characteristics without the need to allow for higher global pCO2.

        And more CO2 in the atmosphere isn’t the only issue. That fossil carbon is going somewhere, and we don’t know how much, if any, damage it’s doing along the way. Increased deposition of organic carbon in the deep ocean, and on its floor, when it oxidizes, may well be contributing to progressive anoxia of the deeps. Whether that could have any evolutionary effects that could return to surface and bite us (in the near future, longer term probably isn’t an issue) we just don’t know. Not probable perhaps, but not impossible.

    • Steinar Midtskogen

      My position has long been that the climate debate can be made redundant by looking at the bigger picture. What the world really needs in the long run, in order to develop the technology to support a 10+ billion poverty-free population, is practically free energy available for everybody. Obviously, fossil fuel cannot do this. If we can’t agree on climate change, why not try to agree on developing nearly free energy, whose side effects include stopping CO2 emissions as well as possibly solving poverty problems? It’s something we need to tackle eventually anyway.

      Fossil fuel is a primitive technology, but we mustn’t forget that it’s been a necessary step on the ladder towards superior technology. The trouble is that we should have been well along the road towards the next step by now, namely nuclear power, but I think fear has been holding us back. If the world had made sense, environmentalists would be fighting for funds to develop better nuclear power, possibly even fighting for extending the life of old plants despite the waste issues, if they really fear the effects of CO2.

      • Steinar

        I have said many times that we need a CERN or Apollo type programme of concerted effort over a short time scale of a decade or so using the best brains and facilities to create alternative forms of cheap energy sources which would likely include storage technology

        Tonyb

      • Steinar, it is doubtful that total future arable land can support more than about 9.1 billion. And that is IF meat calories do not increase, best practices spread where they are not used (requiring wealth and machinery and GMO crops everywhere), yields continue to improve on top of best practices through hybridization and such, and pests do not evolve (weeds, root worms, and wheat rust already have). Read the detailed calculations on water, virtual water, and food production in Gaia’s Limits.
        And by then, petroleum cost for transportation fuel will have risen enormously, independent of CAGW and the ‘war on coal’. Two previous guest posts about that here: one on the IEA projections, and one on Maugeri from Harvard.

      • @Steinar Midtskogen…

        If we can’t agree on climate change, why not try to agree on developing nearly free energy, whose side effects include stopping CO2 emissions as well as possibly solving poverty problems. It’s something we need to tackle eventually anyway.

        Exactly my point. The risks from dumping all that fossil carbon into the system certainly don’t justify killing the Industrial Revolution. Only socialists (and perhaps not even all of those) would say otherwise. But there are approaches that would enable “rolling back” the pCO2 from a peak of 500-600 ppm to around the 400 it is today, later in the century. If, in the light of experience and better science, it’s deemed necessary.

        Fossil fuel is a primitive technology, but we mustn’t forget that it’s been a necessary step on the ladder towards superior technology.

        And, IMO, we shouldn’t stop now. “Full steam ahead” with gas. Once “carbon-neutral” sources become available, all the investment in infrastructure to store, transport, and use it will continue to pay off. Using it for generating electricity, as well as for direct heat.

        The trouble is that we should have been well along the road towards the next step by now, namely nuclear power, but I think fear has been holding us back.

        Actually, we are! Using that big nuclear reactor in the sky.

      • @Tonyb…

        I have said many times that we need a CERN or Apollo type programme of concerted effort over a short time scale of a decade or so using the best brains and facilities to create alternative forms of cheap energy sources which would likely include storage technology

        I doubt it. I’ve posted many links here (CE, not this particular post) to pages demonstrating the very rapid course we’re making towards solving those problems. It might speed it up a little, but the current situation seems to be driving a great deal of innovation. Via the “free enterprise” path, although with various incentives, financial and otherwise, from the political situation.

      • @Rud Istvan…

        Steinar, it is doubtful that total future arable land can support more than about 9.1 billion.

        It’s past time that we stopped using natural land for agriculture. “[A]rable land” should be created in factories, used in enclosed structures (greenhouses, etc.), and occupy sunlit areas not suitable for natural growth. Existing “arable land” should be allowed to revert to wild ecosystems.

        […] yields continue to improve on top of best practices through hybridization and such, and pests do not evolve (weeds, root worms, and wheat rust already have). Read the detailed calculations on water, virtual water, and food production in Gaias Limits.

        All these problems are much more easily addressed with enclosed agriculture. Water can be recycled. Pests and competition can be excluded, without constantly needing new pesticides, with their attendant costs and health risks.

        And by then, petroleum cost for transportation fuel will have risen enormously independent of CAGW or not and the ‘war on coal’.

        Maybe. Or maybe Joule Unlimited’s new process, and others like it, will have crashed the price so much it’s no longer worth digging it out of the ground. Or, most likely IMO, something in-between.

      • @ Steinar Midtskogen and Rud Istvan

        Rud, this quote from Steinar nails it: “What the world really needs in the long run in order to develop the technology to support a 10+ billion poverty free population is practically free energy available for everybody. ”

        Your reply to Steinar: “Steinar, it is doubtful that total future arable land can support more than about 9.1 billion. ” misses the fact that as Jerry Pournelle is fond of saying ‘Cheap, plentiful energy is the key to freedom and prosperity.’

        Given cheap, plentiful energy, arable land is a non-issue. With a large enough supply of cheap energy, ALL of our food can be grown indoors, under ideal conditions for the crop of interest, using desalinated water from the oceans.

        Where do we get this cheap energy, postulating the exhaustion or outlawing of fossil fuels? Nuclear, ignoring political pressure and using whatever technology was most effective from an engineering standpoint, including breeder reactors, thorium reactors, or whatever else makes engineering sense, could come close. Other things like Rossi’s Ecat (should it actually turn out to be real) and similar technologies, whatever. Fusion? So far it is always 40 years in the future (According to jim2, a company called Helion claims that it is here now. Is it?) It won’t be solar and windmills.

        The point is that as Steinar points out: Given cheap, abundant energy, the entire surface of the Earth is ‘arable’.

        The tall pole in the tent right now is that we don’t have it and every attempt to develop it is blocked by the ‘environmentalists’ who are so concerned for our welfare that they are willing to kill off 90+% of the human race for our own good.

      • AK: That idea of enclosed agriculture is one I’ve long had, but not in the context of reducing land use. My idea is to use vacant multi-storied buildings in US inner-city poverty areas, to help the poor find work. I read that there is such a project ongoing.

      • @rls…

        Perhaps, near term. Longer term, looking farther out, there are several things I foresee that will, IMO, lead to a switch to enclosed agriculture.

        First, there’s a “sleeper technology” based on using inflated structures. Today, such structures have to maintain their mechanical stability through simple negative feedback: push on it enough to distort its shape, and it will push back, in proportion. But if you supply such inflated structures with “muscles” in the form of internal tensile members with some sort of active tension systems, and you supply “brains” in the form of active control systems that react to stress (especially wind stress) in proportion to the stress rather than needing to distort the structure, you can build it using much less material while still making it more robust. The key to making such structures very cheap is the control systems. IT. And IT follows “Moore’s Law”.

        Another potentially very useful technology is “light pipes”. These can be hollow tubes of inflated plastic, aluminized on the inside. They can be made very light, cheap, and moderately robust, while easily and cheaply replaceable when the occasional storm takes them out.

        Combining multi-story inflated structures with light pipes, and collection technology using similar cheap, replaceable inflated structures, agriculture could become very much like a factory. No tractors, just overhead cranes. Hundreds of square kilometers of “arable land” squashed into a few square kilometers of building, with the light being gathered by cheap, replaceable structures.

        Of course, as Solar energy continues its exponential cost reductions, it might make more sense to use internal lights powered by solar cells. This is a use, one of very many, for solar energy that isn’t impacted by its intermittent nature.

        And the whole shebang can be put on/under the oceans, in areas that currently have almost no life. No impact to the environment/ecosystem(s), no impact to land wanted for other purposes, no significant price for surface area, near term, except what politics places in the way of it. Thus making the solution of the agricultural problem(s) political rather than technical.

        And notice how robust such a system would be to “climate change”.

      • Mr. Istvan, much of what I have read on food production is in sharp contrast to what you write. The FAO states that cereals production is increasing 1.5% annually without bringing any new net land under the plough. As population increase is 1.1% (and dropping), I find it difficult to see the rationale for your pessimism.

        Most agriculture outside the West is practiced at something close to medieval levels of technology. There’s lots of room to improve productivity before looking for new land to till.

    • Annual Energy Outlook Projections and the Future of Solar Photovoltaic Electricity by Noah Kaufman

      The Economist recently declared that due to technological advancements in solar photovoltaic (PV) energy, soon “alternative energy will no longer be alternative.” But this transition to a solar energy future in the United States is rife with uncertainty. Will costs continue to fall at their recent pace? Can solar compete everywhere, or only when and where the sun shines brightest?

      The lack of answers to these questions about the future of solar energy creates a problem for energy modelers. By their nature, models are built to reflect how a system currently works. Revolutionary changes are difficult to incorporate into models. However, proper policymaking requires accurate “baseline” energy forecasts. For example, the projected costs of retiring fossil fuel generating plants depend heavily on the ability of renewable sources like solar energy to take their places in the near future without causing large hikes in electricity prices.

      The topic of this paper is the assumed growth of solar PV in current energy models, with a focus on information from the Annual Energy Outlook (AEO) reports of the U.S. Energy Information Administration (EIA), which report results from the most widely used and influential energy model in the country. EIA resolves the difficulty of modeling solar energy into the future by assuming its current growth will not continue. However, EIA’s assumptions on the future costs of solar PV are highly pessimistic, and its methodology would appear to bias its “Reference Case” projections toward lower growth of solar energy. Sure enough, past AEOs have systematically underestimated the future growth of solar PV. Energy modelers therefore may need to adjust the AEO forecast in order to reflect a most likely baseline trajectory for solar PV.

      Viewed in isolation, solar PV has experienced remarkable growth in recent years. Figure 1 shows that over the past decade, growth in solar PV electricity capacity and generation has been over 60 percent per year.

      If “Swanson’s Law” continues to lead to cost reductions, solar PV is likely to become even more competitive with natural gas in the near future. This leads the main questions of this paper: are energy models forecasting that this “grid parity” will be achieved? If so, will this lead to a “tipping point” whereby solar becomes a significant contributor to the electricity grid?
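      For readers unfamiliar with “Swanson’s Law”: it is the observation that solar PV module prices fall roughly 20% with every doubling of cumulative shipped volume. A minimal sketch of that learning-curve arithmetic (the starting price and number of doublings below are illustrative choices of mine, not figures from the paper):

```python
def swanson_price(p0, doublings, learning_rate=0.20):
    """Module price after `doublings` doublings of cumulative volume,
    assuming a constant learning rate (Swanson's law is roughly 20%
    per doubling)."""
    return p0 * (1.0 - learning_rate) ** doublings

# Illustrative: starting from $1.00/W, five doublings of cumulative
# volume bring the price to about $0.33/W.
price = swanson_price(1.00, 5)
```

      Whether the historical learning rate persists is exactly the uncertainty the paper is flagging.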

      As shown in Figure 5, AEO 2014 (from the “Early Release” published in December) projects that solar PV growth will slow considerably in the near future. Through 2025, solar PV is projected to remain less than one percent of total U.S. generation.

      However, a closer look at the assumptions underlying the AEO Reference Case modeling shows that there are various reasons to believe this forecast should not be considered a most likely projection:

      •       The cost of solar PV is projected to increase in the short-term, reversing the recent trend;

      •       Policies to promote solar are projected to weaken, reversing the recent trend; and

      •       Over the past decade, the AEOs have systematically under-estimated the growth of solar PV.

      Each point is discussed below. Taken together, they suggest the EIA Reference Case should be viewed as a pessimistic scenario for solar PV growth in the near future, and that energy modelers should make adjustments to the AEO Reference Case to produce a most likely scenario for the growth of solar PV.

      The relatively pessimistic assumptions EIA makes related to solar PV costs and public policy supporting solar PV suggest that the methodology would tend to underestimate the growth of solar PV. Sure enough, over the past decade, the AEOs have systematically needed to revise their solar PV growth projections upward to account for actual growth. Figure 8 shows the Reference Case projections of the growth of solar PV generation and capacity for AEO 2008, AEO 2010, AEO 2012 and the AEO 2014 Early Release.

      • Of course, the key driver of uncertainty here is the extent to which the enormous subsidies granted solar PV will continue. The rest is just foam on the wave.

      • AFAIK those subsidies aren’t any greater in proportion than those granted to oil. And, in its time, coal. And certainly nuclear.

        And when it comes to subsidies, don’t forget the existence of “nation-states”, and their hybrid descendants. Such political entities have strong strategic incentives to support the development of “appropriate” energy resources.

        And don’t forget, also, that subsidies can take many forms. (As the highly unsuccessful feed-in tariffs have demonstrated.) Even intellectual property (e.g. patents) is really a form of subsidy, when you come down to it. As were the 19th-20th century military adventures in support of oil resources: resources with national strategic implications for nations and businesses.

  4. Before the nifty new methods take over, it would be good to understand the uncertainty principle again. You can’t, cannot, distinguish a cycle from a trend with data short compared to the cycle. The eigenvalues of the discriminating matrix explode.

    An out exists in the case of strong signal to noise, called supergaining in the beam forming field, but it amounts to overwhelming the exponential explosion with signal.
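    The point above can be illustrated numerically. In the sketch below (synthetic data, with a cycle period, noise level, and record lengths that are my own illustrative choices), a pure 60-year cycle observed for only 15 years is fit almost as well by a straight-line trend as by the correct cyclic model, whereas with two full cycles of data the trend model fails badly:

```python
import numpy as np

rng = np.random.default_rng(0)
PERIOD = 60.0  # assumed cycle length, years

def resid_std(y, X):
    """Residual standard deviation of a least-squares fit of X to y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.std(y - X @ beta)

def trend_vs_cycle(n_years):
    """Ratio of trend-model to cycle-model residuals on a pure cycle."""
    t = np.arange(n_years, dtype=float)
    # Data generated by a pure cycle plus noise -- no trend at all.
    y = np.sin(2 * np.pi * t / PERIOD) + 0.1 * rng.standard_normal(n_years)
    trend = np.column_stack([np.ones_like(t), t])
    cycle = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * t / PERIOD),
                             np.cos(2 * np.pi * t / PERIOD)])
    return resid_std(y, trend) / resid_std(y, cycle)

short_ratio = trend_vs_cycle(15)   # record << period: models near-indistinguishable
long_ratio = trend_vs_cycle(120)   # two full cycles: trend model fails badly
```

    With 15 years of a 60-year cycle, the trend model fits nearly as well as the truth; only with a record longer than the period do the two hypotheses separate.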

    • David L. Hagen

      rhhardin
      Re: “uncertainty principle . . .You can’t, cannot, distinguish a cycle from a trend with data short [compared] to the cycle.
      How then are we to reliably predict and validate models, to identify whether a “pause” in global temperature is the consequence of a 22-year Hale cycle, a 60-year ocean oscillation, a 1,500-year cycle, or the descent into the next glaciation?
      How can we ensure we are achieving enough global warming to prevent the next glaciation? e.g.,

      These two models are here used to predict the effect
      of an anthropogenic CO2 pulse on the evolution of atmospheric CO2, global ice volume and Antarctic ice cover during the next 300 kyr. The initial atmospheric CO2 condition is obtained after a critical data analysis that sets 1300 Gt as the most realistic carbon Ultimate Recoverable Resources (URR), with the help of a global compartmental model to determine the carbon transfer function to the atmosphere. The next 20 kyr will have an abnormally high greenhouse effect which, according to the CO2 values, will lengthen the present interglacial by some 25 to 33 kyr.

      “Impact of anthropogenic CO2 on the next glacial cycle”, Carmen Herrero, Antonio García-Olivares, Josep L. Pelegrí, Climatic Change (2014) 122:283–298, DOI 10.1007/s10584-013-1012-0
      What if they are wrong? A one mile high glacier would have rather a major impact when it ploughs through a city.

      Statistical downscaling of a climate simulation of the last glacial
      cycle: temperature and precipitation over Northern Europe

      Clim. Past, 10, 1489–1500, 2014 http://www.clim-past.net/10/1489/2014/
      doi:10.5194/cp-10-1489-2014

      Earth system models of intermediate complexity (EMICs) have proven to be able to simulate the large-scale features of glacial–interglacial climate evolution.

      If the previous model is now disrupted with anthropogenic CO2, how can we reliably modify it without experiencing the changes? – which could be catastrophic cooling!

      • How then are we to reliably predict and validate models

        It won’t be by distinguishing cycles from trends.

        I’m not in charge of the constraints. I just report them.

      • “The next 20 kyr will have an abnormally high greenhouse effect which, according to the CO2 values, will lengthen the present interglacial by some 25 to 33 kyr.” Whatever discount rate you use, that would have to be beneficial. What possible dangers from further warming could possibly justify not extending the interglacial by 25,000+ years? Who would support reducing GHG emissions if this is well validated and disseminated?

    • jim2 – heteroskedasticity

      supercalifragilisticexpialidocious still leads, though hetero… is right up there.

      I’m worried about the future of humor in climate science. If it’s like the past, we’re all gonna die.

      Justin

  5. Being stationary is a very strong requirement, which may not apply. I’d bet, but do not know, that he also needs Gaussianity.

    • Um, no. Go read the definition of “heteroskedasticity” again.

      I am encouraged to see people taking the base CO2 climate sensitivity as the null hypothesis. Gavin Schmidt (among many others) seems to claim that high climate sensitivity should be the null hypothesis and that claims of lower sensitivity have the burden of proof. That claim is central to his criticism of Dr. Curry’s 50-50 default position.

      But using a model as the null hypothesis is exactly backwards from how competent science is done. In the recent discussions, my respect for Gavin Schmidt as a scientist has certainly gone down.

  6. When does “a pause” become a trend? Considering that the claims of “Climate Change/CAGW” are based primarily on a record of 30 years or so?

  7. Pause, pause, pause, pause; goblins bugger cause.
    ================

  8. How long is the pause?

    As shown in the following data, the two previous hiatus periods each lasted about 30 years (from 1880 to 1910 and from 1945 to 1975).

    http://www.woodfortrees.org/plot/hadcrut4gl/mean:60/detrend:0.9/from:1880

    From this pattern, the current hiatus period should last until 2030.
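    The pattern extrapolation above can be made explicit. The sketch below fits a linear trend plus a 60-year harmonic to a synthetic series (the coefficients are illustrative stand-ins, not the actual HadCRUT4 fit) and projects it forward; by construction such a fit puts the next oscillation trough around 2030, but it embodies no causal claim at all:

```python
import numpy as np

# Synthetic stand-in for the detrended pattern: linear trend plus a
# 60-year oscillation (amplitudes chosen for illustration only).
years = np.arange(1880, 2015, dtype=float)
t = years - 1880.0
series = 0.005 * t + 0.12 * np.sin(2 * np.pi * (t - 25.0) / 60.0)

# Least-squares fit of trend + 60-year harmonic.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 60.0),
                     np.cos(2 * np.pi * t / 60.0)])
beta, *_ = np.linalg.lstsq(X, series, rcond=None)

def project(year):
    """Extrapolate the fitted trend-plus-cycle model to a future year."""
    s = year - 1880.0
    return (beta[0] + beta[1] * s
            + beta[2] * np.sin(2 * np.pi * s / 60.0)
            + beta[3] * np.cos(2 * np.pi * s / 60.0))
```

    Whether such a projection means anything depends entirely on whether the fitted oscillation reflects a real physical cycle rather than a coincidence of the record.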

    • Girma, until we understand causation there is no pattern. It could just as easily go up,down or continue on as is with no one the wiser.

      • Statistical forecasting does not require any causation. It just assumes the previous pattern continues. There is a clear pattern of an oscillation of 60 years period in the GMST data as shown above.

      • Matthew R Marler

        angech: Girma, until we understand causation there is no pattern.

        Until there is a pattern, we have no understanding of causation.

        Simple pattern analysis predicted the pause, and predicts that it should last until “about” 2030. So far so good for simple pattern-matching. Should the pause last until 2030, all the better for evaluating claims about causation, where the hypothesized causes will be judged by how well they match the pattern.

    • Statistical forecasting does not require any causation. It just assumes the previous pattern continues. There is a clear pattern of an oscillation of 60 years period in the GMST data as shown above.

      Statistical forecasting does not require any causation.
      The last ten thousand years are a really good pattern for the years ahead.

    • Agent Provocateur

      There is nothing magical about any given averaging period.
      Still this one is fun:

    • David L. Hagen

      Contrast the Golden Rule of Forecasting: Be Conservative, Armstrong, Green & Graefe, 2014

      when the situation is complex and uncertain,forecasts by experts using their unaided judgment are no more accurate than those of non-experts (Armstrong 1980).

  9. WRT Lovejoy.

    1. The multiproxies are only a little over 100 years. Wouldn’t he need at least a 200 year period to reliably detect a 100 year cycle? And what of cycles longer than 100 years?

    2. The assumption that all anthropogenic warming is due to CO2 is wrong and will exaggerate the residual warming that he attributes to natural variability. But this problem is probably overshadowed by the short proxy periods.

    • He can use stochastic estimates of solar and volcanic forcing which are the main components of natural variability to extend the effective record length before anthropogenic effects began. He uses 1500 as a starting point. Its standard deviation is about 0.2 C.

  10. I claim the longest pause at 100 years from 1820 to 1920.

    However, a pause is an average that disguises the many ups and downs between the start and end point.

    Also, the pause will be different according to your location. We would be better off looking at Köppen classifications rather than some non-existent ‘global’ average, whether of temperature or sea level.
    tonyb

    • Hi Tony, as long as the procedure to obtain a “global temperature” is well defined, I see no problem with it. Mathematicians have been conjuring abstract concepts for hundreds of years to good effect.

      • Jim2

        How useful is a global weight average or a global height average? They wouldn’t tell us anything about other trends and the fact that many people are starving whilst others are overeating for example would be lost.

        Similarly with a ‘global’ average we are missing out on the nuances of what places are warming, which are cooling and which are static and over what period, all of which would tell us much more interesting things than a bald global average.

        Paging Mosh. Come in Mosh.

        tonyb

      • The global average is more uncertain when one changes the past and the trends, and invents the numbers over the unmeasured oceans, Southern Hemisphere, Arctic and Antarctic. What would be the error bars of the observations when added together, and how can the absolute delta be estimated within 0.01 degrees?

        Too made up to initiate massive actions.
        Scott

      • Tony,

        Please don’t convolute issues. Having a global average in no way precludes the use of regional climate metrics. I mean really!

        And the state of various societies is related, but certainly they have other problematic metrics other than climate. Vicious governments, cultural elements that prevent modernization, … the list is long and the interactions complex.

      • Jim2

        You said

        ‘Having a global average in no way precludes the use of regional climate metrics. I mean really!’

        Correct, but we don’t really look at these regional climate metrics do we as we are too fixated on the global average. We could learn much about the climate by examining the interrelationship between the various climate states we can observe.

        tonyb

      • tony –

        ==> “How useful is a global weight average or a global height average?”

        Please explain.

        Is there some parallel w/r/t weight and/or height to the concept of a global climate? Do you not think that there is anything that affects the climate on a global scale? If so, then how to you explain winter/summer alterations, or ice ages, or global temperature patterns, etc.? There are mechanistic links to climate on a global scale. There are no such links to height and weight on a global scale, (except, perhaps, in that global climate might affect nutrition on a more or less global scale).

        How is your analogy useful?

      • Joshua

        It was useful because it was an easily understood analogy.

        Tonyb

      • tony –

        ==> “It was useful because it was an easily understood analogy.”

        Seems to me that analogies are only useful when they are: (1) instructive and, (2) analogous.

        You compared a phenomenon where there is no global-scale mechanism to a phenomenon where there is a global-scale mechanism to suggest that there’s no value in a global-scale metric.

        So I am asking whether your analogy is either instructive and/or analogous. Are you conceding that neither is the case, but that it is “useful” anyway simply because it is easily understood?

        But hey, if it’s useful for you….

      • Joshua

        If you backtrack you will see I mentioned global temperature and sea level as not being useful averages. At which point I introduced analogies to point out that global averages miss the nuances.

        Let’s assume you are hydrological engineer looking after two projects to build sea flood defences.

        In one location the sea level is falling and has been for years. In your other project it has been strongly rising at, let’s say, 10mm a year.

        Now, do you take the ‘global’ average of around 3mm a year in order to plan, or do you look at the data relating to the dynamics of each project?

        If you don’t want to get fired and either under engineer or over engineer your defences with the resultant cost or safety implications you would ignore the global average in both cases as not being relevant to your individual cases.

        Global averages have their limitations if you want to look at nuances. We are all concerned with what is happening in our locality whether that relates to temperatures that may be falling not rising or sea levels which don’t conform to the norm.

        Tonyb

      • climatereason | September 1, 2014 at 11:31 am |

        Similarly with a ‘global’ average we are missing out on the nuances of what places are warming, which are cooling and which are static and over what period, all of which would tell us much more interesting things than a bald global average.

        tonyb

        Tony, follow the url in my name, I have broken surface measurements into various sized blocks.

      • Mi Cro

        Thanks for the info. Will look at it tomorrow.

        tonyb

    • Was that a pause in cooling before warming started?

    • Koppen DEPENDS on the accuracy of the data. It’s not an explanatory variable; it’s an effect.

  11. It’s the PDO and then the AMO. This is not only going to last, but there is likely to be a return to the satellite-measured temps at the start of the era, when the PDO suddenly flipped. I have written extensively about this idea since 2007. The fact is that the “warming” is a distortion of temps brought about by major natural cycles, not the least of which is the PDO/AMO cycle. However, the bulk of the warming has been in the Arctic winters, which has very little effect on the number one GHG, WV. The cooling PDO leads to less frequent and less strong ENSO events, and an overall reduction in water vapor over the tropics (hence the global downturn in ACE; I spoke about this and showed the mixing-ratio flip at Heartland). This in turn drives yet another dagger into the heart of the trapping hot spot; quite the contrary, the reduction of WV over the tropics, in an area where there is much bang for the buck, will lead to the opposite. This is clearly seen in the NCEP temps since the PDO flip: the spiking for ENSO events, but the overall downturn that has begun.

    The pause will end when the AMO, which will flip to cold in the coming years, has matured and the PDO warms again. The test of the theory is already on. The basic equation is that the effect of the sun/oceans/stochastic events is far, far greater than that of CO2, which is rendered immeasurable in the noise, except to those who think they have found the needle-in-the-haystack holy grail of climate, at a cost of 165 billion to the American taxpayer (and God knows how much other treasure and effort, given all we have lost).

    It’s simple, like any forecast: watch and verify.

    • Hi Joe,

      What do you make of the record-breaking (in the satellite era) of global ocean surface temperature?

      • If you have a rising trend away from the end of a mini ice age, every year will be a record year. The point is we have no satellite record of the Medieval Warm Period or the Roman warm period to compare it against. Therefore your obsession with record years is meaningless. All we can say is that the satellite record registered a rise in temperature that has now plateaued, nothing more or less. The satellite record simply isn’t long enough to say otherwise.

      • Down, Kevin. It is a satellite era record and I made that very clear. I’m well aware satellites didn’t exist during the LIA or MWP.

      • What bothers me is that the sat record DOES include 1998.

    • The concatenation of cooling phases of the oceanic oscillations. I’ll bet I got that from you around seven years ago.

      Also, Cheshire Cat Sunspots. He’s grinning for a reason.
      ====================

    • The effect of the PDO and AMO can be eliminated by looking at the last time these were in the same phase and taking the temperature difference since then, e.g. 1950 if they have a 60-year cycle. Or you could go back to 1910 when the sun was also less active like now. Someone needs to try these methods out and see what they get. It is surprising that the natural variability proponents have not tried this yet, as it seems kind of obvious to do.
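      The suggestion above has a clean algebraic basis: differencing a series over exactly one full cycle length cancels any oscillation of that period, leaving only the secular change. A minimal synthetic sketch (the trend, amplitude, and clean fixed 60-year period are my own illustrative assumptions):

```python
import numpy as np

t = np.arange(0, 120, dtype=float)                # 120 years, annual steps
trend = 0.008                                      # deg C per year, illustrative
y = trend * t + 0.15 * np.sin(2 * np.pi * t / 60.0)

# T(t) - T(t-60) removes the 60-year oscillation term exactly, since
# sin(2*pi*(t-60)/60) == sin(2*pi*t/60); what remains is period * trend.
period = 60
per_year = (y[period:] - y[:-period]) / period     # recovered trend, deg C / yr
```

      In practice the cancellation is only as good as the assumption that the cycle really has a fixed 60-year period and constant amplitude.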

      • Three times in the last century and a half temperature rose at the same rate, and only in the last of these was CO2 rising. Phil Jones heself told me so.
        ==========

      • The previous times it rose then fell, consistent with perturbations around the mean, while this time it rose, went flat then rose again. Quite different when you think about it, almost like something else is happening too.

  12. daveandrews723

    I thought “the science is settled.” That’s all you hear from the warmists.

  13. What about the pause (actually a decrease) from 1941 to 1976?

  14. Saying the pause is no more than natural variability is pointless unless one knows what natural variability is.
    But natural variability is purely an expression of no knowledge, of uncertainty.
    Adding the uncertainty factor (upsilon, or Y, or the uncertainty monster, as Judy prefers) says we do not know what is going on.
    Hence natural variability is as long as a piece of string, indeterminable.
    All the heteroskedasticity and attribution in the world is pointless without understanding the causes.

    • There are forced and unforced components of natural variability. Lovejoy accounts for the forced components from solar and volcanic effects assuming that the unforced components get smaller with the averaging time. This is most obvious with the ENSO cycle that has an amplitude of 0.5 C but is effectively removed by a 10-year average. Other longer cycles can be removed by longer averaging times, but are smaller anyway. As seen by the rapid restoring time after an El Nino the background radiative (Planck) response is a strong constraint to bring surface thermal perturbations back to the mean (like the dog on the elastic leash analogy), but that mean is changing with time too.
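      The claim above about averaging is easy to check numerically: a 10-year running mean attenuates a short ENSO-like cycle by nearly an order of magnitude. A hedged sketch with a stand-in sinusoid (the 0.5 C amplitude is from the comment; the 4-year period is my illustrative choice):

```python
import numpy as np

t = np.arange(200, dtype=float)                    # 200 years, annual steps
enso = 0.5 * np.sin(2 * np.pi * t / 4.0)           # ENSO-like 0.5 C oscillation

window = 10                                         # 10-year running mean
smooth = np.convolve(enso, np.ones(window) / window, mode="valid")

# The boxcar gain at a 4-year period is |sin(pi*10/4)| / (10*sin(pi/4)),
# about 0.14, so the 0.5 C swing is reduced to well under 0.1 C.
attenuated_amp = np.max(np.abs(smooth))
```

      Longer cycles need proportionally longer averaging windows, which is why the slower oscillations are harder to remove this way.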

      • So you think the ‘lag’ is <<10 years for 'forcings' like ENSO but much bigger for CO2; as in TCS and ECS being very different due to a 'lag'?

      • ENSO is not a forcing, it is a state that starts with ocean temperatures. CO2 takes a lot longer to affect ocean temperatures because it affects them from outside.

      • JimD that was completely authentic climatic thermodynamic/kinetic meaningless babble; you should have a Chair for that word salad.

      • OK, I will just say ENSO is not a forcing.

      • A cloud is a forcing, the wind is a forcing, but a combination of wind and cloud isn’t.

      • DocM, then your definition of forcing has no value in climate. What isn’t forcing?

  15. daveandrews723

    In 1976 all you heard from the scientific world was that global cooling was happening. Now, 38 years later, it’s just the opposite from the “97 percent.” When did logic, common sense, and the scientific method escape the field of climatology? When did dogma and hard-headedness take over? Follow the money. It is truly amazing… and sad… to watch.

    • Logic, common sense and the scientific method seem to have escaped you, daveandrews723. I would suggest finding them by learning about that which you write. A good place to start would be the IPCC reports, which are available online.

  16. “We may still be battling the climate skeptic arguments that the models are
    untrustworthy and that the variability is mostly natural in origin. To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation. ”

    What happened to “let’s look, in a dispassionate, scientific way, at all the evidence we have, try to assess what’s missing, and then see what hypothesis that leads us to”?

    Is it a bird, is it a plane…?

    • We don’t want to share code and data with skeptics – they’ll just try to find something wrong with it.

      • The skeptics seem incapable of developing their own code, and their foray into the data (BEST) didn’t work out so well for them. Watts had a separate data effort that McIntyre didn’t want his name on, which has been delayed as a result.

      • But jimmy, the pause is still killing the cause.

      • Jim D. Show me where McIntyre said that.

      • Watts gave him a manuscript and he wanted to check for himself, and nothing more was heard because they disagree on TOBS.
        http://climateaudit.org/2012/07/31/surface-stations/

      • In the link you posted, SM says he will continue to work on Tob. So, I’m not sure what your issue is. I don’t know what has transpired since, but again, I don’t see anything SM said that indicates he won’t help Anthony.

      • It’s been two years now…

      • Mosher knows a lot more about this, but until Watts uses TOBS McIntyre will not likely be on board.

      • Jim D said “The skeptics seem incapable of developing their own code, and their foray into the data (BEST) didn’t work out so well for them.”

        Why should sceptics pay again to develop code? It’s already our tax pounds/dollars that went into the current stuff. Frankly, what is most upsetting is how bad the forecasting has been given the amount of money that has been spent. If my company had subcontracted this task out to the IPCC, I’d be looking for a new contractor by now!!

        How can they not have reduced the number of models by now? Which is the most skilful, and why? Which is the least skilful, and why?

        Value for money…not.

      • The kiss of death for the idea that climate science is in fact science is the high degree of paranoia regarding the notion that others should independently examine the models on which all the alarmist claims depend.

      • The skeptics seem incapable of developing their own code, […]

        The skeptics want to see if they can replicate the results claimed with the code actually used.

        Anyway, if skeptics did develop code, and found something different from what the radical warmists like, the latter would just dismiss it as “oil-industry funded FUD”. Probably without bothering to try to replicate the results.

      • AK commented on How long is the pause?.
        in response to jim2:

        The skeptics seem incapable of developing their own code, […]
        The skeptics want to see if they can replicate the results claimed with the code actually used.
        Anyway, if skeptics did develop code, and found something different from what the radical warmists like, the latter would just dismiss it as “oil-industry funded FUD”. Probably without bothering to try to replicate the results.

        I developed my own code to look at temp data: no pre-processing other than some minor cleaning of incomplete records, no homogenization, just straight averaging of various-sized areas, plus I look at the rate of change over a 24-hour period (yesterday’s warming, last night’s cooling).

        And you know what? It shows no global warming trend; there are some areas that have warmed, some that have cooled. It also shows that nightly cooling is equal to the previous day’s warming, and that seasonal warming follows the changing length of day.

        It also shows how poorly the past and some remote areas were sampled; one can make a good case that surface records prior to the ’70s at worst, the ’50s at best, do not adequately sample surface temps, and that a global average is mostly made up.

        The reaction to this is Mosh tells me it’s just wrong, and because it does not agree with the published series the people who matter, IMO don’t know what to make of it, and more or less ignore it.

        I’ve made a dozen different views of surface data that represent hundreds of areas that have been analyzed; all of it is available (follow my name), including the code. It shows a very different picture of surface temps: basically, global surface temps do not show a warming trend.

      • @ Mi Cro…

        AK commented on How long is the pause?.
        in response to jim2:

        That was actually Jim D.

        ‘Course, I have a hard time telling them apart. Both future-blind in many ways.

      • “Mosher knows a lot more about this, but until Watts uses TOBS McIntyre will not likely be on board.”

        Anthony won’t use TOBS. He has selected stations where the metadata, if trusted, indicates that TOBS was not applied.

        Results are pending, but I don’t know if Steve will be in the paper;
        other guys have been brought in.

      • “The reaction to this: Mosh tells me it’s just wrong, and because it does not agree with the published series, the people who matter, IMO, don’t know what to make of it and more or less ignore it.”

        it’s not wrong because it disagrees.

        Here is what you need to do.

        1. describe your method in words and math.
        2. supply the code for your method.
        3. demonstrate that your method WORKS to quantify a trend in synthetic
        data.
        4. Publish.

        It’s easy, pick any open journal.
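Step 3 above, demonstrating that a method recovers a known trend from synthetic data, is straightforward to do. A generic sketch (the AR(1) noise parameters and the trend value are illustrative choices, not anyone's actual test):

```python
import random

def ols_slope(y):
    """Ordinary least-squares slope of y against the time index 0..n-1."""
    n = len(y)
    xm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

random.seed(0)
true_trend = 0.02            # degrees per step, known by construction
noise, series = 0.0, []
for t in range(600):
    noise = 0.6 * noise + random.gauss(0, 0.1)   # AR(1) red noise
    series.append(true_trend * t + noise)

est = ols_slope(series)
print(abs(est - true_trend) < 0.005)   # the estimator recovers the known trend
```

If the estimator cannot recover a trend you built in on purpose, there is no point arguing about what it says on real data.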

  17. The basic problem with the models is their inbuilt super-sensitivity to a doubling of CO2.
    If the TCR is significantly less than one, which is what the pause implies, then the theories of Hansen and Mann go out the window.
    Now the response of the atmosphere to CO2 doubling is theoretically robust but practically limp.
    Many reasons for negative feedbacks can be posited, with one overwhelmingly good reason:
    Life has existed for several billion years on earth. Most of the CO2 extant that we have to burn has been made by life forms.
    The carbonate in rocks and bones and the carbon-based molecules in fossil fuels all owe their origin and deposition to life that has been sustained for billions of years in a non-hostile environment, due to feedbacks that keep the atmospheric and sea temperatures in the Goldilocks range of life.
    A few years of minute CO2 rise will be forgotten by the planet as mechanisms like extra cloud and moisture production negate any possible major temperature rise.
    We call this natural variation meaning we do not understand it, yet.

    • Sea levels rise and fall like tides with the atmospheric CO2 content too, and wash away those coastal dwellings that are insignificant to nature.

      • Let’s hope Al Gore is in his Tennessee palace then instead of his San Francisco shoreline manse.
        ==================

    • Facts never matter on Climate Etc., but his beach house in Montecito is on East Mountain Drive. Why? Because it is on the side of a mountain.

      Do you know of a beach house?

  18. “…the IPCC has sidelined itself in irrelevance until it has something serious to say about the pause and has reflected on whether its alarmism is justified…”

    Thank god this isn’t just more pointless hyperbole from some free-market ideologue……oh, wait….

    • As soon as I see “alarmism” I know it’s a polemic. You’d think that Judith, with her consistent focus on the use of “denier,” would eschew similar polemics.

      But I have to admit, I’m beginning to doubt whether that might happen.

      What’s not OK for the goose is just fine for the gander, eh?

      • Well, considering that over 90% of the models were wrong to the point of alarmism (not to mention that anyone who pointed this out was called “anti-science”), if the shoe fits…….

      • Whether the term alarmist is accurate is one thing. Then there’s the question of whether it is polemic (and whether or not people who decry the use of “denier” should be so comfortable with polemics). And then there’s the question of whether you or someone else should be anti-science.

        Conflating the three is childish, unskeptical, and not productive.

      • sorry….whether you or someone else should be called anti-science.

      • Joshua imagines that pragmatic realists should take space cadets seriously. It is not so. Having got it all so spectacularly wrong – we should play their game all the harder.

        It is the model problem all over again – it is not that the models are wrong but that the solution – one of many feasible – pulled out of the arse end is determined by specious expectations.

        The argument is not that the models are wrong but that they inherently – by fundamental laws of nature – are incapable of predicting the future.

      • The thing about alarmism is that it doesn’t intentionally compare alarmists to skinheads who pretend the Holocaust never occurred.

        In fact, by comparison, it’s a little wimpy.

    • One member of a vast chorus. Can’t you hear the music?
      =============

    • ‘Alarmist’ is just fine, considering that what most skeptics deny is the alarm.
      =================

    • If the shoe fits. ‘Alarmist’ will stand far longer than ‘Denier’.
      ===========

      • ‘Alarmist’ is apt, ‘Denier’ is not.
        =======================

      • We know kim is an alarmist

        Beware the coming cold apocalypse.

      • Yup. Nature is capable of far more alarming cooling than man is capable of with warming.

        Seeking balance, yessirree.
        ==========================

      • If we are false-footed into mitigating a warming that isn’t happening instead of adapting to a cooling that is happening, there’ll be Hell to pay. Aren’t you alarmed at that bill?

        We’ll know a lot more in a few years which way temperature is going (ducks incoming from the chief hydrologist).

        So patience, my dear. Meanwhile, what do we do with all these white elephant models? They’re eating into house and home.
        ======================

      • An ice age is as imminent as a perpetual motion machine.

        kim the kold catastrophist.

      • We’re at half precession cycle and AnthroCO2 just might get us over the hump for another 11,000 years of Holocene. I fear the effect will not be strong enough. We’ll see. Well, not me.
        ===========

      • Well…

        The acolytes of catastrophic anthropogenic global warming treat historic data with an approach reminiscent of creationists.

        The acolytes of catastrophic anthropogenic global warming appear to be members of a religious or quasi-religious group.

      • Andy West wonders whether this meme will ultimately be constructive or destructive. My take? Catastrophic Anthropogenic Global warming is a destructive meme, and Anthropogenic Global Warming is a constructive meme.

        A warmer world sustains more total life and more diversity of life.

        Simples, ain’t it? When will they ever learn? When will they ever learn?
        ==========

      • Yes indeed – we’re on a long-term cooling trend and a short-term dimming of the sun, and still global avg surface temps stubbornly refuse to go down.

        Curious.

      • kim | September 1, 2014 at 10:48 am |
        “A warmer world sustains more total life and more diversity of life”

        Of course…..temperate rainforests have nothing on the Sahara.

      • Heh, Michael, I thought the sun didn’t have much influence.

        Re: regions. Sure there will be regional costs. In total, there will be net benefit. Observe the bottom line.
        ===========

      • Kim – there is a better term for it since “anthropogenic” global warming (the pattern of the last 100 years) is really about increasing low temperatures and lengthening growing seasons… beneficial anthropogenic global warming (BAGW).

      • Yes, PA, anthropogenic warming will eventually be seen to be a good thing.
        ==============

      • Easily the safest way to geo-engineer a better world for the biome.
        ==============

      • kim | September 1, 2014 at 10:58 am |
        “Sure there will be regional costs. In total, there will be net benefit. Observe the bottom line.”

        Based on what?…..besides wishful thinking.

      • Paleo. Warmer is always better than cooler.
        ====

      • Paleo what??

        120m higher sea-levels???

        Nutty kim.

      • Hyperbole doth not become thee. We’re not melting those two icecaps with man’s feeble contribution.
        =========

      • What’s the magnitude of the orbital forcing that you’re so alarmed about??

        Nutty kim.

      • Half precession is a risk. That was the end of recent interglacials.
        ==========

      • kernal kim.
        ========

      • Yikes! Always getting my ‘e’s’ and ‘a’s’ mixed up.
        ======

      • Way better: Not minding my e’s and a’s.
        ==================

      • kim thinks it prident to geo-enginenr today for events likely thousands of years into the future, but cries ‘alarmist’ for people concerned about the coming century.

        That’s ‘skepticism’ for you.

      • ‘prudent’…..but prident is interesting too.

      • Well…

        1. “Of course…..temperate rainforests have nothing on the Sahara.”

        Rainforests are “tropical” rainforests and much of the Sahara is tropical (below 23° latitude). Temperate rainforests are on the west side of coastal mountains (they are forests where it rains a lot – not self-sustaining rainforests in continental interiors). Per another post the Sahara is greening (CO2 reduced water consumption to the rescue). Low CO2 helped make the Sahara a desert. 1/3 of the land area is desert – shrinking the deserts will be a boon for mankind (and wildlife, and plants, etc.)

        2. “120m higher sea-levels???”.

        Well, yeah – but you are measuring from the pre-interglacial period. The sea level was higher in the last interglacial. The GMSL is computed with +0.3 mm added to account for the 0.4 mm the sea floor is sinking (look up GIA; yup, 0.3 mm of that GMSL is fudged). If the sea floor is sinking 0.4 mm per year, we have pretty much tapped out the sea level rise.

      • BAGW could be interpreted as bhagawat – a path to achieving salvation through loving devotion to a particular deity, open to all persons irrespective of sex or caste – which in this case one might take as the positive impact of those humans who embrace life in all its diversity and understand the great benefits it brings. “We have seen the gods, and they is us!”

      • It is astonishing that Michael can write ‘120 meters of sea level rise’ and think that anyone should pay attention to anything he writes.

      • Don’t blame me for your ignor@nce Tom.

  19. I love RSS for measuring the pause; it drives the warmists crazy. Instead of discussing it rationally (as in: here is a proven data set, adjusted as best we can, showing no warming over a prolonged period), it suddenly becomes an object of derision and hatred.
    This more than anything else shows when rational debate has been sacrificed on the altar of expediency.
    Unfortunately the satellites cannot stay up forever.

  20. Bob Tisdale has posted the latest SST charts. These show that we have a new SST high, largely due to the rapid (sunlight fuelled) warming of the N.Pacific. So tin hats and perhaps a stick to chew on as the extremists make the most of this. So is the pause over? Discuss.

    • From Bob’s blog – too funny!

      Preliminary Note: An “alarmism warning” indicates alarmism is imminent. On the other hand, an “alarmism watch” indicates alarmism might occur, but that’s all the time.

    • OMG – it’s a HOCKEY STICK!!!

    • So, when the North Pacific was colder, was the sun not shining as much?
      We do not understand how the sea develops hot and cold spots, and blaming the sun is not helpful.
      Cloud formation, or the lack of it, is more relevant to a new area of heat in the ocean. The sun was not suddenly hotter or colder and was not the cause of the change in SST.

      • angech,
        I believe that Bob Tisdale’s perspective is that changes in SSTs are sunlight fuelled rather than LWIR fuelled. I don’t believe he is suggesting that changes in insolation are responsible.

  21. Pingback: How Long Is The Pause? | The Global Warming Policy Forum (GWPF)

  22. Judith, I would not bother comparing Cowtan and Way to anything in the way of respectable science. Two co-contributors to Skeptical Science, the most warmist cabal of warmists, put up a self-admitted (Cowtan) method of divining temperatures that treats temperatures from places further away as more accurate than those from places next to each other; that is not science.
    Their method is “failure proof” in that it has never dropped a temperature anywhere.
    The only people around who could give a scientific view seem to be outer-circle supporters like Mosher, Zeke and Nick, who lend blind support.
    Where are the McIntyres, Currys, Chiefios, Carricks and Jean S’s when you need them?
    Someone needs to diss their paper soon and thoroughly.

    • “Someone needs to diss their paper soon and thoroughly.”

      Here’s a list of links to WUWT rebuttals:
      http://wattsupwiththat.com/?s=cowtan

    • Chiefio doesn’t have the skills. McIntyre has used a similar approach.
      There is nothing wrong with their approach. I’ve tested it using independent data. It’s valid.

      But you didn’t test it. You don’t like the answer, therefore it must be wrong.

      • You do know how to cover yourself.
        A method being valid does not mean it is right, only that you put in a formula correctly and got an output.
        Years from now you will be able to hands on heart say “I didn’t say it was right, only valid”.
        Like your Eli-like “a sum of all models is better than one”: a valid statement, yes, but true? Hardly.
        You know better than anyone else, because you work with the data, and the people pushing warming [cringe, cringe] desire methods that are valid [warming in gives warming out] but rubbish.
        The answer is obvious and you have never denied it.
        In all their Arctic work they have never shown a “station” get colder. Why?
        They have also shown near 100% correlation [dare I use the 97% meme, heck why not] with any other data sets they have tested.
        Inconceivable!
        Inigo Montoya: You keep using that word. I do not think it means what you think it means.
        Why, Eli might say because they front loaded those results in, but he and I wouldn’t.

      • Mosher, this time you really stepped in it. On 2/25/14 you guest posted the newest BEST results. You plotted HadCRU, NASA GISS, NOAA NCDC, Cowtan and Way, and BEST land plus ocean. And the pause that CW sought to erase was still there in all your graphed data sets, including CW. Because you charted anomalies conventionally, unlike CW.
        How to lie with statistics 101, from a book written, if I recall correctly, in 1939. Clever CW, thinking most people are not well read.
        Arguing against your own figure, previously posted here in 2014, takes selective memory to a new level. Darned that Internet, and archiving.
        BTW, makes a lovely little example from an essay in the forthcoming book. So many thanks, even if you and CW did not intend the help.

  23. John Smith (it's my real name)

    the “pause”…
    hearing about this is what got me interested in climate science –
    the other side of this issue is that CO2 ppm has gone way up steadily during the “pause” (by about 80% unless I’m mistaken)
    still don’t get how the “greenhouse” effect isn’t called into question as a result –
    can heat, with complete stealth, enter any system and then hide… become “missing”?
    (read the SkS incoherent explanation … ugh!)

    • daveandrews723

      The way I understand it is that the latest theory (excuse) from the warmists is that all the added heat because of rising CO2 levels is now being “stored” in the oceans. It must be, the warmists claim, because their models are so “reliable.” I imagine they will now start predicting that the oceans will release this heat with a vengeance (time to be determined apparently) and the wrath of CAGW will be even worse in the decades to come. They have an answer for everything.

      • Alarmists are already touting this but it is thermodynamically highly unlikely.

        That won’t matter. Narrative above all.
        ===================

      • Matthew R Marler

        kim: Alarmists are already touting this but it is thermodynamically highly unlikely.

        The current of warm, dense (because salty) water carries the heat energy into the deep water. There is no thermodynamic argument that makes this unlikely, except by ignoring the density of the salty water.

      • At WUWT, Tisdale is pretty excited that August was the warmest month in SST history.

      • Matt, I meant the return of heat to the surface.
        ============

  24. John Smith (it's my real name)

    “missing” heat … alternative universe perhaps?

  25. Does it matter if the pause ends? In previous warmings the temps may well have gone higher than now. Chinese researchers reckon their MWP was a bit up on this current lot. (But cooling is not at all nice in China, very dry as well as cold, so maybe they’re biased.)

    Really, when it’s not cooling it’s warming…and what do we expect to happen in a warming? Warming! (Along with very short flat spots and mini-coolings). The Hockeystick…it’s not real. They totally made that up.

    For this we are trashing the energy supply? And handing billions over to villains we neglected to lock up after 2008?

    • China has their own Tibetan tree ring temperature reconstruction untouched by huMann hands. I’ve long suspected that they’ve snapped to the fact that mild global warming benefits the Middle Kingdom.
      ===================

  26. Judith –

    Given that you’ve highlighted the Lovejoy article, I thought you’d find this interesting – also from a paper he authored:

    Although current global warming may have a large anthropogenic component, its quantification relies primarily on complex General Circulation Models (GCM’s) assumptions and codes; it is desirable to complement this with empirically based methodologies. Previous attempts to use the recent climate record have concentrated on “fingerprinting” or otherwise comparing the record with GCM outputs. By using CO2 radiative forcings as a linear surrogate for all anthropogenic effects we estimate the total anthropogenic warming and (effective) climate sensitivity finding: ΔT anth = 0.87 ± 0.11 K, λ2xCO2,eff = 3.08 ± 0.58 K. These are close to the IPCC AR5 values ΔT anth = 0.85 ± 0.20 K and λ2xCO2 = 1.5 – 4.5 K (equilibrium) climate sensitivity and are independent of GCM models, radiative transfer calculations and emission histories. We statistically formulate the hypothesis of warming through natural variability by using centennial scale probabilities of natural fluctuations estimated using scaling, fluctuation analysis on multiproxy data. We take into account two nonclassical statistical features—long range statistical dependencies and “fat tailed” probability distributions (both of which greatly amplify the probability of extremes). Even in the most unfavourable cases, we may reject the natural variability hypothesis at confidence levels >99 %.

    And also this:

    However, the probability of a centennial scale giant fluctuation was estimated as ≤0.1%, a new result that allows a confident rejection of the natural variability hypothesis.

    Apparently from the paper connected to the abstract that you highlighted in your post.

    Did you only read the abstract? Or is there something I’m missing here?*

    *(Don, don’t answer that question.)

    • Oh…wait…you quoted from the conclusion. So you did read the paper…So I wonder what you make of that whole “confident rejection of the natural variability hypothesis” thingy.

      • She already said we need better paleo reconstructions. So did Lovejoy.
        ===========

      • I was hoping for some more elaboration, kim. You know, so that I can ratchet back my alarmism. Why does she seem to accept some of their results but then reject their conclusion? It’s one thing to say that she doesn’t think that their certainty about rejecting the “natural variability hypothesis” is justified, but I would think that then she would give some alternative range other than levels greater than 90% certainty, and explain the thinking behind her range of certainty.

        So does she reject their results as being inadequate when examining centennial scale fluctuation but accept their results when attributing the “pause?” How does she decide when to agree with their methodology and results and when to find them inadequate? Is it merely on the basis of whether their conclusions align with hers, or is there some technical reason why their methods produce valid results when discussing the pause and inadequate results when discussing natural variability on a longer scale? What is her cut-off point, and what is the technical reason for picking that particular time frame?

      • 1. See how you assumed she didn’t read the paper. Apologize and explain why you jumped to that conclusion.
        2. You don’t get how one can like a method AND point out a limitation?
        For reference, there is a reason why Judith likes my method and Lovejoy’s. What would that be?

      • 1. Curry is encouraging people to look at non-GCM-dependent approaches to estimating natural variability patterns. Lovejoy and McKitrick are both doing that in very different ways but in that same spirit of not allowing GCM hubris to circularly support GCM assumptions and GCM runs. Whether the specific conclusions from these methods agree with her individual scientific judgment about what is probably going on with the climate is distinctly secondary to the main point–these papers join the battle rather than dodging it. Judith likes that. She is more interested in steering the science back on track than making sure she wins the argument about where climate has gone/is going/is affected by forcings. Joshua can’t see that because he is a partisan and a non-scientist and so can’t imagine that someone might be more passionate about getting the scientific process right than achieving a policy win.

        2. Lovejoy’s statement rejects at 99% or whatever that ALL the warming during the anthropogenic CO2 era is natural. Not sure what it says about 33–66% of the warming being caused by natural fluctuations–but as argued ad nauseam on the earlier thread, that middle tercile is Curry’s favored zone of belief.

        3. Apparently Curry also has the usual sort of scientific back-and-forth technical issues with whether Lovejoy’s method can do what he says it does with the confidence he claims. That technical discussion is one for research participants to hash out with interested observers (our peanut gallery) interpreting the various puffs of smoke and pieces of ejected debris over time to see which view (if either) is more plausible.

      • If a dove descended on a ray of light from the sun bearing an olive twig in its beak and spake to you that there was no reason for alarm, you would not ratchet back your alarmism.

        If James Hansen knocked on your door and said there was no reason for alarm, you would not ratchet back your alarmism.

        There are no circumstances under which you would be anything but alarmed.

        That’s the true meaning of unfalsifiable.

    • I think the get-out clause is that Judith thinks half the warming is CO2, while Lovejoy dismissed the 0% hypothesis at 99.9%.

      • Ah. Interesting. Kind of an apples and oranges situation. Thanks.

      • No. You guys need to learn to read.

        1. Judith prefers observational approaches over GCM approaches.
        2. Lovejoy has an observational approach.
        3. She likes the approach.
        4. She notes an issue with the data. For this approach to work, you need
        better proxies: good temporal resolution, small error.

        Remember: your first job is to understand the strongest version of your opponent’s argument.

      • Steven Mosher, you have to compare the hypothesis dismissed by Lovejoy with Judith’s 50% hypothesis. They are not the same. Judith also dismisses 0%, but then goes on to dismiss 100%, the IPCC central estimate.

      • Err, Jim, no you don’t.

    • Josh, I know this is going to go over your head, but one can attempt to look at this simply.
      Take the temperature over the last 34 years; the recent flat line is CO2 and natural cooling, and the 17 years before that is CO2 and natural warming.
      Plot it and get something like this:

      http://www.woodfortrees.org/plot/hadcrut3vgl/from:1989/to:2104/plot/hadcrut3vgl/from:1989/to:2014/trend

      This is the basic position: oceanic oscillations cause periodic warming and cooling. Over a whole cycle the total warming/cooling is nil, but if one examines a period of warming, one observes warming, whereas if one examines a period of cooling, one observes cooling. This is not neurochemistry; it is very, very simple.
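The arithmetic of that point is easy to demonstrate with a toy series: a pure cycle with no underlying trend shows a positive trend when sampled on its upswing and a negative one on its downswing (a 60-"year" cosine for illustration, not real data):

```python
import math

def slope(y):
    """Ordinary least-squares slope of y against the time index 0..n-1."""
    n = len(y)
    xm, ym = (n - 1) / 2, sum(y) / n
    return sum((i - xm) * (v - ym) for i, v in enumerate(y)) / \
           sum((i - xm) ** 2 for i in range(n))

# A pure 60-step oscillation with no underlying trend, two full cycles:
series = [math.cos(2 * math.pi * t / 60) for t in range(120)]

full = slope(series)             # whole cycles: essentially zero
rising = slope(series[30:60])    # sampled on the upswing: positive trend
falling = slope(series[0:30])    # sampled on the downswing: negative trend
print(abs(full) < 0.001, rising > 0, falling < 0)   # True True True
```

The same estimator applied to the same process gives "warming", "cooling", or "nothing" depending purely on which window you choose, which is why the choice of start date matters so much in the pause debate.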

      The vast majority of people you sneer at as ‘deniers’ believe that increasing atmospheric CO2 will cause an increase in steady-state temperature. However, this is not being tested by Lovejoy; he is testing the hypothesis that all changes in global temperature are due to natural variation. He is doing this because, like you, he treats anyone who is not afraid of increases in atmospheric CO2 with contempt.

      He, like you, asks the wrong questions.

      • Doc –

        ==> “:The vast majority of people you sneer at as ‘deniers’”

        I was reading along happily, willing to look past the first slight to assume the potential of good faith…until I ran across that comment.

        You see, you disqualified yourself not only on the basis of bad faith, but also on the basis of terrible reasoning.

        You have never, ever, seen me “sneer” at people as “deniers.” And it’s not like it’s the first time that you’ve made that mistake (in one form or another – completely mischaracterizing what I do and don’t say), or read the same interaction take place with other “skeptics.”

        It’s not clear to me why you would take the time to respond as you did, but irrespective of the reason, you once again can’t gain enough credibility for me to even follow the rest of your argument.

        Try rethinking your position. Try recalibrating how you approach analyzing situations where your motivated reasoning is likely to be activated, and then you have a hope of presenting a valid argument – one worthy of a serious effort to understand and respond to.

        I promise, I won’t give up. Even though you have made the same mistake countless times in the past, I will extend you the good faith of possibly rising above that analytical flaw in the future. I have every reason to believe that you have those skills when you are engaged in analysis where you are less invested from the perspective of identity. Even though the evidence suggests that those with those skills are that much more likely to engage in these kinds of polarized situations with a greater degree of “motivated reasoning,” the causality isn’t lockstep.

        I have faith in you, Doc. Don’t give up on yourself.

      • It was way over joshie’s head, so he picked out a word to yammer on about, thus avoiding the substance of the comment. Nice work, joshie.

      • Josh, you are just a first class jerk.

  27. The problem with Lovejoy’s approach is the same as the problem with Mosher’s from the previous thread (well, they’re basically the same. Maybe Mosher got the idea from this paper?): It doesn’t just throw out GCMs, it throws out useful information about the timing and relative magnitude of major volcanic eruptions. These would provide an ability to estimate expected natural variability over a particular period, rather than assuming all periods are the same, which we already know is wrong.

    The period Lovejoy uses to determine natural variability amplitude (1500-1900) is, on average, much more active in terms of forced natural variability than the period 1880-2014 or 1951-2010.

    An MPI-ESM-P model historical run exhibits a trend over the last 64 years (1942-2005) of 0.67ºC. If I look at 64-year trends over 1500-1900 in the model past1000 experiment, I find 5% exceed 0.66ºC. Essentially, based on this method I wouldn’t be able to rule out at the 95% level that the model could have produced all the warming over 1942-2005 without any anthropogenic influence. Does anyone believe that could be true?
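The exercise described above can be sketched generically: compute every 64-year trend in a long unforced control series and ask what fraction exceed the modern value. The red-noise "control run" below is synthetic stand-in data, not MPI-ESM-P output, and the AR(1) parameters are illustrative:

```python
import random

def trend_total_change(y):
    """OLS slope times window length: total change over the window."""
    n = len(y)
    xm, ym = (n - 1) / 2, sum(y) / n
    slope = sum((i - xm) * (v - ym) for i, v in enumerate(y)) / \
            sum((i - xm) ** 2 for i in range(n))
    return slope * (n - 1)

random.seed(1)
# Synthetic 400-"year" unforced control run: AR(1) red noise.
x, control = 0.0, []
for _ in range(400):
    x = 0.9 * x + random.gauss(0, 0.1)
    control.append(x)

window = 64
trends = [trend_total_change(control[i:i + window])
          for i in range(len(control) - window + 1)]

observed = 0.67   # total change over the modern 64-year window, in C
frac = sum(1 for t in trends if t >= observed) / len(trends)
print(f"{frac:.1%} of control-run 64-year trends reach {observed} C")
```

If that fraction is non-negligible, the "all natural" hypothesis cannot be rejected by this test alone, which is exactly the concern raised above about periods with different levels of forced natural variability.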

    • Not so unlikely as you might think. Though I don’t think so.

      Marvelous critique and thanks. I’d never have come up with that.
      ===========

    • “The problem with Lovejoy’s approach is the same as the problem with Mosher’s from the previous thread (well, they’re basically the same. Maybe Mosher got the idea from this paper?): It doesn’t just throw out GCMs, it throws out useful information about the timing and relative magnitude of major volcanic eruptions.”

      No, I did not get the idea from Lovejoy.
      I just flipped Gavin’s approach on its head.
      I don’t throw out GCMs; I just don’t run them with anthro forcing.

      Yes, there are problems with the approach. But my position is: you calculate the fricking answer and make the caveat.

    • Specifically looking at the period since 1950, the evidence suggests solar forcing went down, if anything, and it is not clear that volcanic events have had a net effect on the trend. So when we see a rise of 0.7 C, it is hard to say that Lovejoy’s definition of natural variability contributed since 1950, even though it could have in past 60-year periods.

  28. Statisticians McShane and Wyner were not the first to find that there was no ‘signal’ in the proxy data that Michael Mann used to fabricate the ‘hockey stick’ hoax-graph (MBH98/99/08) that the UN-IPCC showcased and Al Gore pointed to as proof that humans were the cause of global warming. All of the junk science published after that was nothing more than the findings of Mann’s sycophants, based on the same phony data.

  29. “The key challenge is this: convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting natural variability as a predominant explanation for the overall century scale warming and also the warming in the latter half of the 20th century. Global climate models and tree ring based proxy reconstructions are not fit for this purpose.”

    The fact that Dr. Curry can make this statement and expect most readers to understand it shows a huge advance in our understanding of attribution over the last two years. The key change is that “natural variability” is once again in its proper place as the focus of scientific endeavor. However, much work remains. To do justice to natural variability, experimental and theoretical scientists must search for regularities that are found in nature and that have a physical existence apart from modelers’ assumptions about “internal variation” in their models. (For those who think that natural regularities must be perfect cycles, please ask some mature woman.)

  30. “How long is the pause”?

    How broad is the peak?

  31. +1 to lovejoy as well.

  32. How about this: let me propose a heretical hypothesis.

    Starting in 2020, solar flux flatlines into a minimum lasting 22 to 33 years. The oceans cool, providing an enhanced CO2 sink and dragging down CO2, which lags temperature. Temperature actually decreases. After that, warming resumes, albeit slowly at first.

    Such a scenario, being the opposite of the mainstream one, could be described as doomsday cooling. Certainly not recommended for a career in climate science, unless you live in Montana. Still, not considering all of the possible projections would be like reaching a point in market swings where you completely discount a crash. It can go up, it can go down, and it can go sideways; there are always those three possibilities.

    • The largest solar variations are a few tenths of a W/m2 and we see what that can do (the LIA and MM), so when CO2 forcing can go up 5 W/m2 in a couple of centuries, you can put that in perspective.

      • How many W/m2 has CO2 forcing gone up since 1950, jimmy dee? And since the pause started?

      • Strictly speaking there would be a 33 1/3 % chance of any of the three. However, based on what you know about the P/E ratio, the bulls-to-bears index (a contrarian indicator), Fed policy, fiscal policy, etc. etc. … one could increase the possibility of one or the other(s) being of greater chance.

      • CO2 additions have provided 1.35 W/m2 since 1950 on their own, so the rise of 0.7 C is not surprising in that perspective either, and that includes all of the pause.
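Forcing figures like the ones traded in this exchange can be sanity-checked with the standard simplified expression for CO2 radiative forcing, ΔF = 5.35·ln(C/C0) W/m², from Myhre et al. (1998). A minimal sketch, using approximate round-number concentrations (~310 ppm in 1950, ~400 ppm at the time of writing) as illustrative inputs rather than any dataset cited in the thread:

```python
import math

def co2_forcing(c_now, c_ref, alpha=5.35):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998):
    delta-F = alpha * ln(C / C0)."""
    return alpha * math.log(c_now / c_ref)

# Approximate concentrations in ppm: ~310 in 1950, ~400 "today".
print(round(co2_forcing(400.0, 310.0), 2))   # prints 1.36

# For reference, a full doubling gives the canonical ~3.7 W/m^2:
print(round(co2_forcing(560.0, 280.0), 2))   # prints 3.71
```

The ~1.36 W/m² result is consistent with the 1.35 W/m² figure quoted above; the ppm values are my assumptions for illustration.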

      • ordvic, the table is rather tilted by the CO2 effect which is the elephant in the room when it comes to forcing.

      • Jim,

        If one believed in the 50/50 proposition, you’d have 50 go to up and 50 go to sideways and down. I would preface that by saying natural variability can cause up as well, but for the sake of simplifying my premise I’ll give all of the natural variability to the other two. If it were 50 to sideways and 0 to down, the choice would still be hard. If it were 25/25 there would still be three choices, but up would be the safest bet. Because the market has historically trended up, many get sucked in, oblivious to the occasional crash. Keeping money in is conventional wisdom and the Warren Buffett way. However, inexperienced investors are always hurt the worst by a crash. If you had invested across the board in 1929, it took until 1952 to get your money back. That is unless a lot of the picks went bankrupt, and they did.

        It seems to me that climate would be harder than the market, given that it hasn’t trended up in the long, long term (temperatures were much higher in most of the Ordovician period and have trended down since). If one looks at temperatures since the LIA there is indeed a nice upward trend, but that still doesn’t discount a crash entirely.

      • 50/50 means your choices are up, sideways, or faster up.

      • How much since the pause started, jimmy dee?

      • You better put a stop loss on that just in case :-)

      • There is no pause in the 30-year climate. The last climate pause was in the 1960’s.
        http://www.woodfortrees.org/plot/hadcrut4gl/from:1900/mean:360
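The `mean:360` filter in these woodfortrees links is a 360-sample (360-month = 30-year) moving average. A minimal sketch of that smoothing in pure Python, using a synthetic toy series as a stand-in for the HadCRUT4 anomalies:

```python
def running_mean(series, window):
    """Sliding-window moving average over `window` samples, the same
    operation as the woodfortrees 'mean:360' filter (360 months = 30 years).
    Returns len(series) - window + 1 smoothed values."""
    total = sum(series[:window])
    out = [total / window]
    for i in range(window, len(series)):
        total += series[i] - series[i - window]   # slide the window one step
        out.append(total / window)
    return out

# Toy monthly series (a gentle linear trend), NOT actual HadCRUT4 data.
series = [0.001 * m for m in range(1200)]    # 100 years of monthly values
smoothed = running_mean(series, 360)
print(len(smoothed))   # prints 841: 1200 - 360 + 1 points survive the filter
```

This also illustrates the point being argued about endpoints: the 30-year filter discards the first and last 15 years of data, so the smoothed curve cannot say anything about the most recent decade and a half.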

      • Since you are a pause denier, I will rephrase the question: How much in the last 15 years?

      • Jim,
        That graph only goes to 2000. You better try again.

      • ordvic, the 30-year average at 1999 covers 1984-2014.

      • If you had done the same thing back in 1965, you would have also reported no pause:
        http://www.woodfortrees.org/plot/hadcrut4gl/from:1870/to:1965/mean:360

      • The forcing in the last 15 years has changed about 0.4 W/m2 and the temperature of this 15-year period is 0.25 C warmer than the previous 15-year period.

      • …meant to say actually 30 years into the pause

      • The forcing in the last 15 years has changed about 0.4 W/m2 and the temperature of this 15-year period is 0.25 C warmer than the previous 15-year period.

        That does not refute the pause.

      • phatboy, by 1965 the decade average was cooler than the previous decade average, while now the difference is 0.1 C between the last decade and the previous one. If you think 30 years is too strict, you can use that 20-year measure.

      • jimmy dee says: “The largest solar variations are a few tenths of a W/m2 and we see what that can do (the LIA and MM)”

        Theoretically, adding 0.4 W/m2 of CO2 forcing in the last 15 years hasn’t produced any noticeable warming. Everybody but a few deniers is calling that the “pause”, jimmy. It’s a problem for Chicken Little, when people notice that the sky isn’t falling. The pause is killing the cause.

      • Don M, yes, you can take an El Nino which has a 0.5 C change in a year and show it drowns out CO2 variations too. Proves nothing, but you can do it. Natural variation is even more impressive if you take individual months.

      • Jim D, you’re arguing that there’s been no 15-year pause because the last decade was warmer than the previous decade – during which temperatures were still increasing for the first half.
        You do the math!
        My point is that 1965 was already 30 years into a pause, meaning that the decade 1955-1965 was no warmer than the decades 1935-1955, and even despite that, your 30-year mean would have shown no pause whatsoever in 1965.
        Why are you trying so hard to argue against the pause anyway, particularly when it’s been all but acknowledged by a great many climate scientists?
        If the pause continues for another few years then you’re going to find it increasingly difficult to wipe the egg off your face.
        Otherwise, if the pause comes to an end in a year or two then you’ll probably be gloating – but you’ll then be right for all the wrong reasons – which won’t do your credibility much good.

      • This is another way to look at the pause in the longer term context. We see that negative perturbations of 0.1 C are common, and this is just another one that seems to have ended. Also note how the La Ninas are getting warmer.
        http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/mean:12/plot/hadcrut4gl/from:1970/trend/plot/hadcrut4gl/from:1970/trend/offset:0.1/plot/hadcrut4gl/from:1970/trend/offset:-0.1

      • We are obviously not talking about one year, or individual months, jimmy. It’s at least 15 years and counting. They call it the pause. They are grasping at any excuse. You can’t average it away. Try to catch up, jimmy. How much longer can you maintain your denial?

      • phatboy, the difference is that in mine we are now headed back for the mid-point (the monthly data has already been there for 5 months) and the trend is double yours.

      • Jim D, in 1965 we were already 30 years into a pause – big difference.
        You queried earlier whether it’s statistically significant – well, perhaps, or perhaps not, but not for the reasons you cite.
        There may or may not be a pause, just as there may or may not have been the warming you think came before it – they’re both as statistically significant or insignificant as each other. You either look at the data one way, or you look at the data another way. If you contend that the warming was statistically significant, or not, then you have to treat the pause the same. You can’t have it both ways.

      • Advancing my heretical hypothesis, I’ll go with 50% down (solar minimum), 33% sideways (oceans) and 17% up (CO2 about where it should be).

  33. Ironically, the UHI effect — by operation of the very thing that contaminates the sampling — serves to reduce the heteroskedasticity of the data. E.g., the continual removal of snow from the tarmac of French airports where official thermometers are located reduces winter albedo compared to the French countryside, making the weather-station sites behave more as if it were summer all the time.

  34. From what we now know about Alpine glacial advance and retreat patterns (e.g., Christian Schlüchter pointed out that the retreats 4000 and 2000 years ago were greater than the retreat today), the only good argument against the finding that centennial-scale giant fluctuations — unrelated to human activity — are all we need to explain all late-20th-century warming is the presence of millennial-scale giant fluctuations that are also unrelated to human activity.

  35. “We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin.

    To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation. ”

    What the #$%^ is this unscientific political strategizing of a predetermined conclusion doing in a scientific journal?

    And why is the fact that it is present not the focus of vocal outrage by scientists, including the one that hosts this blog?

    • Can you clarify why it is wrong to acknowledge skepticism, especially as that motivated this study?

      • This $#!^ does not “acknowledge skepticism” as anything other than a political obstacle on the path to the predetermined conclusion.

        What motivated this “study” was the intention to rationalize the party line, and to blatantly advocate for more of the same. That is not science. It is the sort of anti-scientific $#!^ that ought to be restricted to political manifestos and religious tracts.

      • No, he took one of the skeptic arguments at its word and wrote a paper to demonstrate to those people another way of looking at it without models.

      • Matthew R Marler

        Jim D: No, he took one of the skeptic arguments at its word and wrote a paper to demonstrate to those people another way of looking at it without models.

        Without GCMs, not “without models”. His model is one of the family of statistical autoregressive models, with a maximum lag of 125 years.

      • Actually, it isn’t an autoregressive model with 125 years; it basically uses the multiproxy data and then a scaling assumption to scale the distributions from 64 to 125 years. Scaling could be considered an infinite-order autoregressive model (it’s based on power-law, scale-invariant decorrelations/dependencies, not exponential, scale-dependent ones). Actually, the scaling assumption is pretty minor, since it is only used over a factor of about 2 in scale (64 to 125 years) and is verified by the second-order moment. The only other assumption is for the extreme tails of the distribution of temperature changes: it is bounded between two “black swan” type extremes (power laws with exponents 4 and 6). These are so extreme as to be outside the usual realm of classical statistics. In other words, the test I used is much more demanding than the bell curve or other standard statistics (these all fall off more quickly at the extremes, in an exponential way). If you find it too extreme for your taste, just use it as an upper bound. Even with this, the probabilities are of the order of one in a thousand that the change since 1880 is natural (nearly one in a million with the bell curve, if you prefer).

      • Shaun

        What were the two extreme black swan events?

        Surely the one-in-a-thousand chance of it being natural since 1880 can only be true if, in effect, it has never happened before?
        Tonyb

      • My reference to black swans may cause more confusion than clarity. Let me try to explain.

        Mandelbrot pointed out in the 1960s that Lévy distributions – which may arise naturally due to their “stability under addition” property – could explain very extreme fluctuations (especially but not only in economics), calling them examples of the “Noah effect” (from the Biblical Flood). Lévy distributions generally have power-law extremes with exponents less than 2, so that their second moments (variances) don’t exist. In the 1980s, based on scale invariance and cascade processes (multifractals), the result was generalized to power laws with any exponents. In 1986, I showed empirically (with Schertzer: http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Annales.Geophys.all.pdf) that at least some climate data has extremes with exponent 5. By the 1990s power-law extremes were a major theme in nonlinear geophysics. In the 2000s Nassim Taleb wrote his book popularizing such extremes using the “Black Swan” metaphor and applying it mostly to economic series. In 2013, in my book, I show that monthly changes in global temperatures also have exponent 5 (Lovejoy, S., D. Schertzer, 2013: The Weather and Climate: Emergent Laws and Multifractal Cascades, 496pp, Cambridge U. Press).

        The statistics needed to bound the empirical 125-year global temperature changes have tails with exponents 4 and 6 (the exponent 5 works best…); see the probability distributions in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf. There is therefore a theory (scale invariance) justifying such distributions, as well as much empirical data. But even if the theory is rejected, they can still be used as extreme bounds – all that is needed to reject the natural variability hypothesis with high levels of certainty.
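The difference between power-law ("black swan") tails and the bell curve is easy to see numerically. A toy comparison in pure Python; note that pinning the power law to the Gaussian at 1 sigma is my own illustrative choice, not Lovejoy's actual calibration:

```python
import math

def gauss_tail(s):
    """One-sided Gaussian tail probability P(X > s), with s in standard deviations."""
    return 0.5 * math.erfc(s / math.sqrt(2))

def powerlaw_tail(s, q=5.0, match_at=1.0):
    """Power-law tail P(X > s) ~ s**-q, pinned to the Gaussian at s = match_at.
    The pinning point and calibration are illustrative assumptions only."""
    return gauss_tail(match_at) * (s / match_at) ** (-q)

for s in (2.0, 3.0, 4.0, 5.0):
    print(f"{s:.0f} sigma: Gaussian {gauss_tail(s):.2e}, power law q=5 {powerlaw_tail(s):.2e}")
# Beyond ~3 sigma the power-law tail stays orders of magnitude fatter, which is
# why a bound built from exponent-4-to-6 tails is far more demanding than the
# bell curve: the same observed change is judged far less improbable under it.
```

This is the sense in which "one in a thousand" under the fat-tailed bound corresponds to "nearly one in a million" under the Gaussian.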

      • thisisnotgoodtogo

        Shaun Lovejoy,
        What would your method say about the Medieval Period and the likelihood of its warmth, say, as per Moberg 2005, and it being “all natural”?

      • My position has been that the nature of the fluctuations changes fundamentally at scales longer than about 125 years: the macroweather-to-climate transition. Fortunately, macroweather statistics are (just) enough for settling the anthropogenic warming issue. Unfortunately, the answer to your question – the statistics of longer-term climate variability – is really not at all clear, although it seems that the GCMs underestimate it (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/esd-2012-45-typeset.final.pdf).

        For a rather biting critique of conventional approaches to these low-frequency (and some other!) issues, see Lovejoy, S., 2014: A voyage through scales, a missing quadrillion and why the climate is not what you expect, Climate Dynamics, http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/climate.not.revised.28.7.14.pdf
        which was just accepted for publication this morning!

        It notably shows that the NOAA paleoclimate site is in error by a factor of 10 quadrillion (10**16)!

      • Hi Shaun, thanks for spending time here, I’m following your papers with great interest.

      • thisisnotgoodtogo

        Shaun Lovejoy,
        Thank you. How about if you take the same length of time as you did with the modern period, with the window taken from the low point at about 900 in Moberg to the high point 200 years later?

      • Matthew R Marler

        Shaun Lovejoy: Actually, it isn’t an autoregressive model with 125 years; it basically uses the multiproxy data and then a scaling assumption to scale the distributions from 64 to 125 years. Scaling could be considered an infinite-order autoregressive model (it’s based on power-law, scale-invariant decorrelations/dependencies, not exponential, scale-dependent ones).

        Sorry, my mistake. That’s still models (my main claim) but not what I thought.

      • David L. Hagen

        Shaun
        Compliments on clarifying quadrillions. We look forward to reading your paper when available.
        If you have not yet seen their work, for “long” term climate statistics Koutsoyiannis et al. track Hurst-Kolmogorov dynamics over 9 orders of magnitude as different from conventional assumptions of randomness in GCMs! e.g.
        Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.

        Koutsoyiannis, D., Hurst-Kolmogorov dynamics as a result of extremal entropy production, Physica A: Statistical Mechanics and its Applications, 390 (8), 1424–1432, 2011.

      • The quadrillions paper was accepted yesterday and is on my site. I am well aware of Koutsoyiannis and have criticized the strong limitations of his climactogram (the requirement that the fluctuation exponent H must be in the range -1&lt;H&lt;0), which make it not useful over most of the range to which he applies it – that's why the quadrillions paper uses a much superior fluctuation, called the Haar fluctuation, which is valid for -1&lt;H&lt;1 and covers the entire 20 orders of magnitude needed.

        See for example:
        Lovejoy, S., D. Schertzer, I. Tchiguirinskaia, 2013: Further (monofractal) limitations of climactograms, Hydrol. Earth Syst. Sci. Discuss., 10, C3086–C3090, 2013, http://www.hydrol-earth-syst-sci-discuss.net/10/C3181/2013/. Supplement.
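The Haar fluctuation discussed here has a simple definition: for a window of length Δt, it is the mean over the second half of the window minus the mean over the first half. A minimal pure-Python sketch (my own implementation for illustration, not code from the papers cited):

```python
def haar_fluctuation(x, lag):
    """Haar fluctuation at scale `lag` samples (lag even): the mean of the
    second half of each length-`lag` window minus the mean of the first half."""
    half = lag // 2
    means = []                    # means[i] = average of x[i : i + half]
    total = sum(x[:half])
    means.append(total / half)
    for i in range(half, len(x)):
        total += x[i] - x[i - half]
        means.append(total / half)
    return [means[i + half] - means[i] for i in range(len(x) - lag + 1)]

# For a pure linear trend x(t) = c*t the Haar fluctuation is c*lag/2 at every
# position, so fluctuations grow linearly with scale. That corresponds to
# H = 1, the top of the -1 < H < 1 range the Haar fluctuation can handle,
# which is the advantage claimed over the climactogram's -1 < H < 0 range.
x = [0.01 * t for t in range(1000)]
print(haar_fluctuation(x, 100)[0])   # ≈ 0.5 (c * lag / 2 = 0.01 * 50)
```

Estimating H then amounts to regressing log |Haar fluctuation| against log Δt over many scales.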

      • David L. Hagen

        Shaun
        Thanks for posting your preprint that I just found, where you so well extend the discussion of Koutsoyiannis et al.
        How does Murry Salby’s point about diffusion in ice cores reducing higher-frequency variations affect such long-term climate analysis?

      • I won’t comment on Salby’s theory except to say that it is contrary to much evidence. In any case, I don’t need ice cores to estimate the centennial scale variability, and that’s the basis for the probability distributions, return times etc.

      • David L. Hagen

        Shaun
        Thanks for your work on the “Haar fluctuation which is valid for -1<H<1"
        Re: "macroweather statistics are (just) enough for settling the anthropogenic warming issue"
        In light of “unknown unknowns”, that suggests that “macroweather statistics [may NOT be] (just) enough for settling the anthropogenic warming issue”. i.e. magnitude of climate sensitivity, the magnitude of anthropogenic warming, and the confidence we can have in those!
        Why does the climate science community ignore the world standard on quantifying uncertainty, especially Type B uncertainty? See:
        Evaluation of measurement data — Guide to the expression
        of uncertainty in measurement
        BIPM JCGM 100:2008

      • David L. Hagen commented

        Shaun
        Thanks for your work on the “Haar fluctuation which is valid for -1<H<1"
        Re: "macroweather statistics are (just) enough for settling the anthropogenic warming issue"
        In light of “unknown unknowns”, that suggests that “macroweather statistics [may NOT be] (just) enough for settling the anthropogenic warming issue”. i.e. magnitude of climate sensitivity, the magnitude of anthropogenic warming, and the confidence we can have in those!

        David, I think the data is there; most of it is, however, filtered away as noise. But I think it is clear proof there isn’t a warming problem; what we have is large amounts of energy moving around on its journey from when it is collected on Earth to finally being radiated away to space.

        The people making the temp series have (apparently) made one large mistake: they’ve never compared aggregated unmodified data to their fields. Obviously their out-of-band testing isn’t catching the problem, although they have to either process out-of-band stations to be comparable to their fields, or decode the fields to make them comparable to the station in question.
        They are all doing something in common, because they all have the flawed final results.

      • David L. Hagen

        Thanks Mi Cro
        Re “Filtered away as noise”
        WM Briggs politely emphasized: Do not smooth times series, you hockey puck!

        if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses!

        Michael E. Mann “used the filter length equal to the length of the signal to be filtered!” etc. etc.

      • David L. Hagen commented

        WM Briggs politely emphasized: Do not smooth times series, you hockey puck!
        “if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses!”

        Every day we get a chance to examine the surface’s response to a large input of solar energy, and there are a lot of places to sample different conditions.
        I do get some live places where the weather’s boring and doesn’t change much, maybe that’s why they get it so wrong?

    • “Most institutions demand unqualified faith; but the institute of science makes skepticism a virtue “. Robert K Merton

    • Judy sometimes likes to present all perspectives; it doesn’t mean she endorses them. To not consider or look at the article would be rather closed-minded, wouldn’t it?

    • There are some great-looking graphs and tables one can look at in the pdf, if for no other reason.

    • thisisnotgoodtogo

      What I’m getting at, Shaun, is that if similar results could be obtained from any pre-industrial period at all, then that would cast a very different light, no?

    • I strongly approve of this statement in the paper. It says nothing about “politics” or “policy,” contrary to JJ. Something like it would have applied to any pure-science dispute – say, continental drift, the existence of atoms, Chomsky’s linguistic theory, radiation hormesis, you name it – where there were competing hypotheses, or a bold hypothesis and a skeptical community doubting it. This statement clarifies the stakes in the debate as seen by the investigator, rather than obfuscating in that passive-aggressive voice that makes it so much harder for non-specialists to figure out contextually what is going on in a paper.

      Furthermore, Lovejoy’s behavior on the site has been impeccable so far in the face of a less-than-usual but still high level of provocation. Hard to complain about someone accepting the Curry methodological argument, rattling the china in the cupboard of conventional climate science with his pursuit of scaling theory, and engaging constructively on a skeptic-accommodating website. Some people need to take a chill pill.

      • wow. Is this really Climate Etc.?

      • Josh, sometimes it’s better not to post. Guilt by association, ya know :-)

      • Steve

        I think the sceptics have been like pussycats with Shaun. The questioning has been robust but no more than that. Whilst we may disagree with Shaun about his methods and levels of certainty we can still respect him.
        tonyb

      • Tony: “Less than usual but still high level of provocation” is consistent with your pussycat description using the Celsius-to-zoology conversion scale. You may have to make a TOBS adjustment, though.

  37. There is no reason to believe that “robust paleo proxy data” that is sufficient to establish accurate global average temperatures is even possible. We cannot really do it with thousands of thermometers so how can we possibly do it with proxies?

    • thisisnotgoodtogo

      “There is no reason to believe that “robust paleo proxy data” that is sufficient to establish accurate global average temperatures is even possible. We cannot really do it with thousands of thermometers so how can we possibly do it with proxies?”

      Creatively, with the right motivation.

  38. On this Labor Day, the Denizens prefer little labor on the length of the pause.

    What did the stadium wave predict?

  39. John Smith (it's my real name)

    one more thing on “missing” heat…
    (day off…idle hands)
    Argo pods go down 2000 meters … shouldn’t they be picking up this “deep ocean” heat, at least on its way (via magic) to greater depth?

    • Yes, and they don’t. This is why there is so much doubt about ‘missing heat’ hiding in the abyss.
      ===============

    • Johnsmith etc etc

      I asked that very question of Thomas Stocker of the IPCC at a Met Office-sponsored climate conference a few months ago. According to Thomas Stocker, we don’t have the technology to measure heat at depth, and the calculation of ocean heat in general becomes more problematic once you get too far below the surface.

      Tonyb

  40. “We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data.  The paleo proxy community also seems to be in a rut, with continued reliance on tree rings and other proxies having serious calibration issues.”

    From my branch in the labyrinth, where only unlearned observers perch, this makes sense. What say you who know? What path to take?

  41. Lauri Heimonen

    Judith Curry:

    ”The key challenge is this: convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting natural variability as a predominant explanation for the overall century scale warming and also the warming in the latter half of the 20th century”

    In my opinion, the key problem is how to get rid of the threat of false AGW.

    I interpret JC’s topic https://judithcurry.com/2014/08/28/atlantic-vs-pacific-vs-agw to mean that the impact of anthropogenic CO2 emissions on warming during the industrialized era has been minor – maybe even minimal:

    ”I do regard the emerging realization of the importance of natural variability to be an existential threat to the mainstream theory of climate variations on decadal to century time scales. The mainstream theory views climate change as externally forced, e.g. the CO2 control knob theory. My take is that external forcing explains general variations on very long time scales, and equilibrium differences in planetary climates of relevance to comparative planetology. But it does not explain the dominant variations of climate on decadal to century timescales, which are the time scales of relevance to policy makers and governments that are paying all this money for climate research.”

    In my comment https://judithcurry.com/2014/08/16/open-thread-20/#comment-619105 , as conclusion, I have stated:

    ”As anthropogenic CO2 emissions cannot be accused of the recent global warming, there is no reason to curtail CO2 emissions. Further research must be focused on actions how to produce energy clean and competive enough, and how to adapt life of mankind to natural events of weather and climate.”

    Firstly, according to natural laws, all CO2 emissions to the atmosphere, and all absorptions of CO2 from the atmosphere into CO2 sinks, together control the CO2 content of the atmosphere. As the recent anthropogenic CO2 emissions have been only about 4 % of the total CO2 emissions to the atmosphere, even the share of anthropogenic CO2 in the recent total increase of CO2 content in the atmosphere has been about 4 % at the most. Thus the anthropogenic share of the recent annual increase of about 2 ppm CO2 in the atmosphere is about 4 % too, i.e. about 0.08 ppm. This should convince anyone that anthropogenic CO2 emissions do not dominate the increase of CO2 content in the atmosphere.

    Secondly, the recent increasing trend of CO2 content in the atmosphere has been dominated by natural warming of the sea surface in the areas where the sea-surface CO2 sinks are, which has lessened absorption from the atmosphere into the sea surface. This is what has made even nearly all of the annual anthropogenic increase of 0.08 ppm CO2 possible; otherwise the anthropogenic increase of CO2 content in the atmosphere would have been minimal, and possibly indistinguishable from zero.

    Thirdly, during the last century the trends of CO2 changes in the atmosphere are consequences of temperature changes, and not vice versa. The above-mentioned warming of the sea surface in the areas where the sea-surface CO2 sinks are is a consequence of warming during periods dominated by El Niño events; periods dominated by La Niña cool the sea-surface areas where the sinks are.

  42. John Smith (it's my real name)

    “missing” heat was created to rebut the “pause,” was it not?
    (relevance to topic)

  43. Well, I put a PDF copy of AR5 into my electronic sausage-maker, adding eye of newt, and toe of frog, wool of bat, and tongue of dog, adder’s fork, and blind-worm’s sting, lizard’s leg, and howlet’s wing.
    After much grinding and digesting, it spat out 35 years, -20 years, + 35 years. It had an outlier of +1500 years.

  44. It would be nice if the most vocal CE skeptics would define exactly what they conclude that the “Pause” means. A pause in what?

    (1) A pause in an increasing step-wise gradient in temperatures?

    (2) A pause before a statistically significant natural cycle cooling swing must occur (excluding things like volcanoes)?

    (3) A “Pause” between something else? Or maybe the term “Pause” shouldn’t be used at all?

    In responding to this question, let’s put it in a time-frame context (and statistically significant modern record anomalies) of the past ~250 years (using a Dr. Curry quote in a UK interview that during the past ~250 years, temperatures have increased).

    • Maybe we should define “climate” as well. And “skeptic.” And “natural.” And just in case Mosher engages, we should probably define “define.”

    • “Or maybe the term “Pause” shouldn’t be used at all?”
      The term I like, because it’s neutral, is “plateau.”

    • Well…

      The climate has a number of cycles that are basically sinusoidal oscillations.

      The global warming enthusiasts and climate models basically drew a straight line through the curve at its zero crossing (where the slope is steepest) and said the warming would continue into infinity.

      We are near the π/2 (90-degree) point of the curve, where the slope is 0 (zero), and they don’t look quite as smart.

      I’m not actually sure what is happening but things haven’t changed much for 17 years (the infamous pause). The “pause” means things aren’t changing very much.

      Since intelligent scientists’ temperature predictions range from “it is going to soar moon-ward” to “the temperature will drop 1°C”, I would not bet real money on the future trend at this time.

    • McKitrick defines pause.
      definitions are not a problem here.
      he defines it operationally and tests.
      not a problem.

      if you don’t like the definition, define your own and test

  45. “The hypothesis that the industrial epoch warming was a giant natural fluctuation was rejected with 99.9% confidence.”

    99.9% confidence?

    Really? This is a joke, right?

    • Alas, no joke. That scientists can believe such absurd numbers, and get them through peer review no less, speaks volumes about the present state of climate science. The very idea of applying confidence intervals to proxy data is nuts. Confidence intervals measure pure statistical error, which means they assume perfectly accurate measurements. Moreover, the shaky proxy data does not measure global temperatures. That is a whole separate manipulation of the poor data, one that is itself full of error.

      • JimD, “captd, 2 C per doubling works whether you start in 1950 or 1750, as Lovejoy showed.”

        Right, and both dates follow some cooling event, and if you assume colder isn’t “normal” you get a lower sensitivity. Since he is using CO2 as a proxy for everything, a recovery from a cooler period would have a similar ln shape, making it hard to distinguish “forced” from recovery and which “forcing” contributed what. Since we have no clue what ECS might be, or even if it exists, we cannot “exclude” anthropogenic forcing, but we cannot improve its estimate much either. Just another verse in the psalms of climate change.

        HOWEVER, 2C from 1750 including ALL potential forcings is no greater than 2C, which would have been considered low just a few short years ago.

      • captd, well it becomes more than 3 C if you include a 10-20 year delay which correlates just as well, and I think we know that the response isn’t instantaneous anyway. Also 2/3 of the forcing has occurred since 1950, so most of the fit is in the later period and hardly any from the 1800s and before.

    • It says that from what we know about solar and volcanic fluctuations, 0.8 C of warming is unlikely to have been completely caused by them (0.01% probability). Perhaps you disagree?

      • No, I think he is using a statistical approach, at least I hope so, as there is no way to quantify “what we know” along the lines you suggest.

      • In fact I disagree strongly. First I am pretty certain that we do not know that it has warmed the 0.8 degrees that you claim, because the statistical methods being used to make this estimate of warming are very crude. Second I am even more certain that we do not understand the sun-climate link, enough to think it quite possible that if this warming has in fact happened then it may well be due entirely to solar influence. Quantify that.

      • He did quantify that as a 0.2 C standard deviation with various choices of fat tails. It was still very unlikely.

      • Jim D, There is no way to get a standard deviation out of area averaging, which is how global temperatures are estimated. The 0.2 degrees must be a guess of some sort. Either that or he is taking the grid cell averages to be measurements, which of course they are not.

        But when you say “It was still very unlikely” what does “It” refer to? Our knowledge of solar and volcanic influences or statistical variance?

      • He uses global temperature like everyone. It is a standard deviation in a time series which is a well defined concept.

      • It may be well defined but it is statistically erroneous. The time series of global temperatures is not a time series of sample measurements; it is a time series of statistics. Thus the confidence intervals of the global temperature series are statistically meaningless. Confidence intervals are defined probabilistically, based on sampling theory, but the time series values are not samples; rather, they are themselves averages (of averages). When you average averages there is no basis in statistical theory upon which to calculate a confidence interval.

      • Would you characterize the pause as statistically meaningless on that basis?

      • Area averaging is one way, not the only one.
        It’s easy to get a standard deviation,
        and easy to test.

      • Matthew R Marler

        Jim D: It says that from what we know about solar and volcanic fluctuations, 0.8 C of warming is unlikely to have been completely caused by them (0.01% probability). Perhaps you disagree?

        that “unlikely” is dependent on the outcome from estimating a statistical model that could not estimate a natural cycle with a long period if it is there.

      • Matthew Marler, that is what the skeptics have to resort to: a natural cycle somewhat proportional to the CO2 forcing, but decidedly not just feedback, only coincidental. What are the odds?

      • Matthew R Marler

        Jim D: What are the odds?

        I keep explaining that the odds are not knowable on present evidence. If the models with ca 1000 year periods are reflective of real processes (how else could they exist except by equally unlikely chances?) then the odds are high. Conditional on those models, the current warming is occurring at about the right time (i.e., the right “recurrence” time.) Lovejoy’s result depends in part on the fact that he has omitted any possibility of estimating processes with such long recurrence times.

        If there are no such processes, he’s golden; but his analysis does not tell us whether there are any.

      • Matthew Marler, the question was rhetorical, since the odds against such a coincidence are enormous. How can natural variations mimic the forcing function of an exponentially growing CO2 component without being in some way related to it?

      • JimD, “How can natural variations mimic the forcing function of an exponentially growing CO2 component without being in some way related to it?”

        How come the sensitivity required to make the exponentially growing CO2 component mimic climate is constantly being reduced?

      • captd, 2 C per doubling works whether you start in 1950 or 1750, as Lovejoy showed. There is a fundamental upward curve that this captures.

      • Matthew R Marler

        Jim D: Matthew Marler, the question was rhetorical, since the odds against such a coincidence are enormous. How can natural variations mimic the forcing function of an exponentially growing CO2 component without being in some way related to it?

        The conditional odds are quite reasonable. Given that there is a process that produces the observed oscillation, and given that a low point was the last Little Ice Age, and given that the human industrial revolution has coincided with the recovery from the LIA, the probability of the warming phase coinciding with the exponential rise in CO2 concentration is close to 1. Now, how probable is it that the industrial revolution has taken off dramatically since the end of the Little Ice Age? Who knows? It would require postulates that allowed calculation of the probabilities. But whatever the probability, it did in fact happen that way. Therefore, the conditional probability of the rise in CO2 coinciding with the natural warming is 1.

        For some reason, people usually fail to take into account that all probability calculations are conditional probability calculations, and they fail to state the conditions under which the calculations apply. Lovejoy’s calculated p-value (which he inverted to get a confidence statement) is conditional on there being no process that has produced the observed oscillation with a period close to 1,000 years. How likely is it that the observed oscillations occurred without a process driving them? If there is such an oscillatory process, how likely is it to have ended right when CO2 increased above a particular threshold (say, its value in 1880, 1950, or some other)?

  46. There is a name for the “giant natural fluctuation” that Lovejoy rejects with absurdly high confidence. It is called the Little Ice Age. Not calling it by name seems strange. I cannot imagine the statistical magic required to make it disappear, with virtual certainty no less.

  47. John Smith (it's my real name)

    Stephen Segrest
    It seems to me that “pause” is a marketing term, much like “side effect”
    there are no “side effects,” only effects
    I would assume the burden of definition might fall on the side which believes it to be a temporary hiatus on the way to the resumption of predicted warming

    I’m an agnostic on the issue, just trying to learn
    I must say, with all due respect, that the semantic gymnastics that to my ear come mostly from the CAGW end of the spectrum tilt me to the skeptic side

    I respect your comment even though I had little trouble understanding it

    my humble thanks to our host and all who comment for tolerating me and letting me play

    I don’t even bother to comment on SkS even though I read it often – they will just call me a “troll”

    I’m a human being dam* it, not a “troll” (joke) :)

  48. John Smith (it's my real name)

    actually. where one to lay eyes on me, most would say that I very much resemble a troll

  49. John Smith (it's my real name)

    were one to lay eyes – dam* spell check

  50. Throughout Earth’s history most of the many sudden climatic shifts have been credited to volcanic eruptions. And all such shifts have been downward.

    We may not have perfect explanations for all climate shifts but we do know that such shifts are the reason we are here: the existence of life is explained by abrupt climate change.

    For the last 100,000 years Earth has mostly been locked in an ice age, punctuated only briefly by periods of warming such as the interglacial that gave birth to our species. Earth has been locked in ice age conditions for more than 80% of the time over the last one million years. Those are all of the facts. Will the hiatus last forever?

    That the Sun might now fall asleep in a deep minimum was suggested by solar scientists at a meeting in Kiruna in Sweden two years ago. So when Nigel Calder and I updated our book The Chilling Stars, we wrote, a little provocatively, that “we are advising our friends to enjoy global warming while it lasts.” ~While the sun sleeps, translation approved by Henrik Svensmark

  51. I respect Ross McKitrick’s thinking about a method applicable to assessing whether the “hiatus” represents a departure from some temperature trend.

    “The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator which is valid as long as the underlying series is trend stationary, which is the case for the data used herein.”

    I am reminded of a discussion on Bart Verheggen’s blog in 2008 with “VS”, who at that time validated the lack of departure from a trend in surface temperatures from 1880 to 2008, thus implicating normal variation in temperatures unrelated to purported CO2 causation.

    I am further reminded of Tomas Milanovic making a statement several years ago that has since stuck with me: “Never twice the same phase space.” That is, in predicting the future, one cannot assume the future will have the same conditions as one has experienced in the past. Therefore, being able to model the past with your model does not mean the model has applicability to the future.

    Simpler observations, like watching the weather patterns during the PDO, ENSO and AMO, may provide a thumbnail sketch of what one might expect for the future. Just don’t count on it.

  52. daveandrews723

    Where was the scientific community from the ’80s on that has allowed this “man-made global warming” nonsense to become the prevailing scientific wisdom that it now is? Why were the hypothesis and those ridiculous models accepted as “settled science”?
    Mann and Hansen are heroes now to most people. CAGW has now been taught as “fact” to two generations of students. The politicians and mainstream media dismiss any skepticism as “denial.”
    It is nice to see so much skepticism now, but it seems like the horse is already out of the barn.
    It is a real shame that the scientific method and the scientific community failed 30 years ago. The world is wasting a lot of money and threatening economies to solve a problem that does not exist.

    • David Springer

      daveandrews723 | September 1, 2014 at 3:24 pm | Reply

      “Mann and Hansen are heroes now to most people.”

      Don’t be silly. The vast majority don’t know Mann and Hansen from Adam and Eve.

    • “Mann and Hansen are heroes now to most people. CAGW has now been taught as “fact” to two generations of students.”

      Are you serious – “most people”. I would wager that Mann is disliked/distrusted even within his own cadre of “climate scientists”. It is pure irony that CAGW has been taught (indoctrinated) to students for two generations and yet none of them actually experienced any warming at all.

  53. I think the problem is that no one is applying the Stadium Wave component to any of the model fits. If this is added, then the natural variability is largely accounted for and thus the pause is explained.

    • Matthew R Marler

      oops, supposed to be nested here:

      WebHubTelescope: If this is added, then the natural variability is largely accounted for and thus the pause is explained.

      How long will the pause last? Your statement might be true, and your model might be reasonably accurate over the next three decades (as you update it taking ENSO into account, or become able to predict ENSO). How long a pause does your model express?

      • Marler, the pause actually never existed. All you are seeing is variability that is masking the underlying trend, just as noise masks an electrical signal. The key is to recover the underlying trend, which I, Prof. Shaun Lovejoy, and a few others know how to do.

        The rest of you seem not to understand how to do this, which is unfortunate for you guys.

      • To WebHubTelescope
        Yes, it is actually surprisingly simple to estimate the anthopogenic contribution: using the CO2 forcing as a surrogate for all the anthropogenic turns out to be just as accurate as GCM forecasts one year ahead! In other words if you give me the CO2 level for any year between 1880 and 2013, I can tell you (with one parameter, the “effective climate sensitivity”) the global mean average temperature to within ±0.109o C. This is almost exactly the error in GCM forecasts of the temperature one year ahead! (see e.g. Laepple, T., S. Jewson, and K. Coughlin (2008), Interannual temperature predictions using the CMIP3 multi-model ensemble mean, Geophys. Res. Lett., 35 doi: L10701, doi:10.1029/2008GL033576, 2008 or Smith, D. M., S. Cusack, A. W. Colman, C. K. Folland, G. R. Harris, and J. M. Murphy (2007), Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model Science, 317, 796-799).

        The problem with the pause was that it was somewhat overpredicted by GCMs; however, using a more empirically based approach, it is nearly perfectly predicted (see: http://www.physics.mcgill.ca/~gang/Anthro.simple.jpeg/Simplified.Anthro.small.forcing.jpg).
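For concreteness, here is a minimal sketch of the kind of one-parameter fit described above, run on synthetic data rather than the actual CO2 and HadCRUT4 series (the CO2 curve, the assumed sensitivity, the noise level, and the seed are all illustrative assumptions; nothing here reproduces the ±0.109 °C figure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real series (illustrative only):
# CO2 rising roughly exponentially over 1880-2013.
years = np.arange(1880, 2014)
co2 = 290.0 * np.exp(0.0024 * (years - 1880))

# Generate "observed" anomalies from an assumed effective climate
# sensitivity of 2.3 C per CO2 doubling, plus noise standing in for
# natural variability.
lambda_true = 2.3
x = np.log2(co2 / co2[0])
temp = lambda_true * x + rng.normal(0.0, 0.1, years.size)

# One-parameter least-squares fit through the origin:
#   T = lambda_eff * log2(CO2 / CO2_0)
lambda_eff = float(np.sum(x * temp) / np.sum(x * x))
residual_sd = float(np.std(temp - lambda_eff * x))

print(f"fitted effective sensitivity: {lambda_eff:.2f} C/doubling")
print(f"residual standard deviation:  {residual_sd:.3f} C")
```

The fit recovers the sensitivity used to generate the data, which is the whole content of a one-parameter model of this kind: given CO2, predict temperature to within the residual scatter.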

      • Shaun Lovejoy —
        And Tamino and Lean and Cowtan and others have done similar models, so that what we are doing is not exactly ground-breaking.

        And I agree it is simpler to wrap all the GHG factors into one effective aCO2 factor, as the aCO2 seems to be the leading indicator — others call it the “control knob”.

      • Yes, others have done similar things, but for other purposes, notably attempting to estimate “equilibrium climate sensitivity” and sometimes “transient climate sensitivity”. My approach was even simpler (“effective climate sensitivity”), but nonetheless necessary in order to estimate natural fluctuations directly.
        Sometimes in science the simplest things are done last!
        But you are right that the warming since 1880 has been so large that it may still be statistically tested and rejected with confidence even without a very precise separation of natural from anthropogenic variability: my separation of anthropogenic from natural is not the most important point – more important was the estimation of preindustrial probability distributions from multiproxies.

      • Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

        Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.”

        You’re kidding, right? Assume that all warming is CO2 and natural variation has no global energy dynamics effect, and find – surprise – that all warmth is anthropogenic. The webbly certainly does – and he is the epitome of superficial and inconsequential analysis. So what do we make of this?

        Let’s assume it is all anthropogenic. The rate of warming is some 0.07 degrees C/decade. The latter-century warming rate is irrelevant, as natural variation added to the warming, and the current state seems more likely than not to persist for 20 to 40 years – as seen in the record of Pacific states in the proxy records. In the absence of chaotic instability in the climate system, one might be excused some complacency.
        In the presence of chaotic climate instability – it changes not a whit the practical and pragmatic responses.

        Still I think it curious that the warming coincides both with the modern solar grand maxima and a 1000 year peak in El Nino frequency and intensity. These are connected in top down modulation of the Pacific state and provide a mechanism for solar amplification.

        This from Vance et al 2013 – A Millennial Proxy Record of ENSO and Eastern Australian Rainfall from the Law Dome Ice Core, East Antarctica – http://connection.ebscohost.com/c/articles/85340584/millennial-proxy-record-enso-eastern-australian-rainfall-from-law-dome-ice-core-east-Antarctica

        I’d suggest that a millennial warming in the 20th century seems pretty likely – and that the next mutli-decadal climate shift is not guaranteed to be to a warmer state.

      • The data on ENSO as a steady, yet bounded, contributor to natural variability is gleaned from proxy results.
        http://imageshack.com/i/ezCEAViPg

        We are trying to figure out underlying patterns to ENSO at the Azimuth Project. This is a discussion on using the proxy records:
        http://azimuth.mathforge.org/discussion/1451/enso-proxy-records

      • What on Earth could he mean by steady?

        Moy et al (2002) present the record of sedimentation shown below which is strongly influenced by ENSO variability. It is based on the presence of greater and less red sediment in a lake core. More sedimentation is associated with El Niño. It has continuous high resolution coverage over 12,000 years. It shows periods of high and low ENSO activity alternating with a period of about 2,000 years. There was a shift from La Niña dominance to El Niño dominance some 5,000 years ago that was identified by Tsonis (2009) as a chaotic bifurcation – and is associated with the drying of the Sahel. There is a period around 3,500 years ago of high ENSO activity associated with the demise of the Minoan civilisation (Tsonis et al, 2010). The period had ‘red intensity’ (El Nino) in excess of 200. The red intensity for the 97/98 event was 98. It shows ENSO variability considerably in excess of that seen in the modern period.

        And if by bounded he means by limits not remotely approached in the 20th century – he might be onto something.

      • BTW – poorly fitting a curve to data using parameter scaling in an unphysical equation is not remotely the same as understanding the phenomenon.

      • Why go back thousands of years when the operable time period is hundreds of years?

        This is from the Unified El Nino Proxy records

        [1]S. McGregor, A. Timmermann, and O. Timm, “A unified proxy for ENSO and PDO variability since 1650,” Clim. Past, vol. 6, no. 1, pp. 1–17, Jan. 2010.

      • Matthew R Marler

        WebHubTelescope: Why go back thousands of years when the operable time period is hundreds of years?

        In order to examine whether the operable time period is hundreds of years. This is basically why, if you expect a period of 60 years, you want at least 180 years worth of data. As Prof Lovejoy explained, this part is signal processing 101. As Rice and Rosenblatt showed in 1988 (I think it was in Biometrics), if amplitude, phase, and period are all unknown and must be estimated from the data, you have a real hard problem, and 3 periods of observation are not likely enough. Autoregressive and heteroskedastic background variation do not make the problem easier.
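The point about needing several full cycles can be illustrated with a toy periodogram experiment; the 60-year period, noise level, record lengths, and seed below are illustrative assumptions, not anyone's actual analysis:

```python
import numpy as np

def peak_period(n_years, true_period=60.0, noise_sd=0.5, seed=1):
    """Estimate the dominant period of a noisy sinusoid from n_years of
    annual samples via the periodogram peak (an illustrative toy, not
    any published method)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    x = np.sin(2 * np.pi * t / true_period) + rng.normal(0, noise_sd, n_years)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(n_years, d=1.0)
    k = 1 + int(np.argmax(spec[1:]))   # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Three full cycles (180 yr of a 60-yr cycle): the peak lands on the
# right bin, though the neighbouring candidate periods are already
# 90 and 45 years -- the resolution is coarse.
print(peak_period(180))

# Under one full cycle (40 yr): the longest nonzero-frequency period
# the record can even represent is 40 years, so a 60-yr cycle is
# structurally invisible, whatever the noise level.
print(peak_period(40))
```

With three periods the estimate is recoverable but coarse; with less than one period it is not recoverable at all, which is the substance of the "signal processing 101" point.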

      • Webster, “Why go back 2000 years?”

        ’cause we can.

      • captd, another way to show that data. Spot the difference.

      • JimD, You should discuss that with the author, that was his/her splice :)

      • It was NH extratropics, so the HADCRUT4 line is too. What was your complaint?

      • About the dueling graphs from captdallas and Joshua, this looks like the original: http://agbjarn.blog.is/users/fa/agbjarn/files/ljungquist-temp-reconstruction-2000-years.pdf
        on page 7, that stops at plus 0.4. That additional roughly 0.4 from the SkS graph, how did that get there?

      • Why go back over the Holocene with a high resolution ENSO proxy? Why go back 1000 years? Because we can – and it is informative.

      • Ragnaar, they seem to have debated the datasets at SkS. Anyway, I would have used the CRUTEM4 NH land plot that comfortably has the same range for the period after 1900, since the proxy data was NH land.

      • David Springer

        Shaun Lovejoy | September 1, 2014 at 6:00 pm |

        In other words if you give me the CO2 level for any year between 1880 and 2013, I can tell you (with one parameter, the “effective climate sensitivity”) the global mean average temperature to within ±0.109 °C.

        It then follows that if I give you a temperature for any year between 1880 and 2013 you can produce CO2 level.

        Chicken:egg

        Try harder, Shaun.

      • First – independently of the cause and effect – the magnitude of the temperature change is so large that the probability that the trend suddenly started in about 1880 is very low (about one in a thousand). You could always argue that just such a massive natural fluctuation happened to drive the temperature up, liberating CO2 from the oceans as a consequence.
        If your causal chain is correct, the probability is still extremely low, but of course the policy implications would be totally different.

        However, we know pretty accurately how much CO2 has been emitted by humans and we even know pretty accurately how much has been taken up by the oceans (about half). It can’t be the other way around (the way it apparently was during the ice ages).

        If you accept the strong CO2 forcing/temperature correlation, then the egg has to be the anthropogenic forcing and the chicken has to be the temperature rise. However, the probability of the warming event being natural is the same low value.

      • David Springer

        Please, Shaun. You have no bloody idea how fast twentieth century warming was compared to the past, because the temporal resolution in paleo temperature series decreases going back in time. Failing to acknowledge the limitations of paleo temperature proxies gives away warmist cabal members every time. Your published paper using the word “denier” doesn’t speak well to your objectivity either. Just sayin’.

      • JimD, ” Ragnaar, they seem to have debated the datasets at SkS. Anyway, I would have used the CRUTEM4 NH land plot that comfortably has the same range for the period after 1900, since the proxy data was NH land.”

        Since Way is a regular at SKS, why not use the new C&W version of Hadcrut4? That version cools the past and the 1850 to 1999 mean should be used to make the splice giving the SKS gang a little boost, but 0.4C? And the new version should include the error range I would think for both data sets.

      • JimD, “It was NH extratropics, so the HADCRUT4 line is too. What was your complaint?”

        You mean other than the proxy and Hadcrut4 diverging? None of the reconstruction proxies provide reliable information on arctic air temperatures, especially the winter warming. The Hadcrut4 comparison would need to be location-matched with the proxies AND the time of year the proxies represent.

      • captd, that is why I would prefer CRUTEM4 of those I know about. If they have a 30-90 N CRUTEM that would be even better. CRUTEM4 easily covers the range of longer red line I showed.

        JimD, “captd, that is why I would prefer CRUTEM4 of those I know about. If they have a 30-90 N CRUTEM that would be even better. CRUTEM4 easily covers the range of longer red line I showed.”

        Nope, doesn’t “easily cover” the range. Since data points are very sparse it requires a lot of interpolation to get back to 1880. Nothing wrong with interpolation, but it has to match what it is being compared to.

        In the NH the “growing” season and “dormant” season have different trends. The “dormant”, or lower-light, half of the year has less influence on the less variable, liquid-water-dominated “growing” season. So trying to splice any surface temperature reconstruction to any proxy reconstruction has plenty of issues to be considered. That is why kriging or interpolating over phase changes, both thermodynamic and temporal, is very tricky.

        With Hadcrut4 there is the addition of very high arctic stations that can be out of phase, i.e. Arctic winter warming, where temperature may change by 10 degrees, from −30 C to −20 C, but with much less energy associated with that anomaly change. 50N-60N could drop by 3 C and 60N-90N increase by 6 C with effectively no change in internal energy. So now you have apples, oranges and pears. Did “average” global mean surface temperature increase? Yes. Does it mean anything? Likely not. If you get different results comparing the 60S-60N globe to the 90S-90N globe, you might need to understand why a little bit better before sounding the alarm.
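One rough way to see why equal temperature anomalies at different base temperatures are not energetically equivalent is to compare blackbody emission changes; treating the surface as a blackbody at the quoted temperatures is of course a crude illustrative assumption, not a regional energy budget:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_change(t_kelvin, dt):
    """Change in blackbody emission for a warming of dt above t_kelvin."""
    return SIGMA * ((t_kelvin + dt) ** 4 - t_kelvin ** 4)

# The same +10 K anomaly at an Arctic winter temperature (-30 C)
# versus a mild mid-latitude temperature (15 C):
arctic = flux_change(243.15, 10.0)
midlat = flux_change(288.15, 10.0)
print(f"Arctic (-30 C -> -20 C): {arctic:.1f} W/m^2")
print(f"Midlat ( 15 C ->  25 C): {midlat:.1f} W/m^2")
# The mid-latitude change corresponds to substantially more radiated
# power, so an area-average of the two anomalies is not an energy average.
```

The T^4 dependence means a degree of anomaly "costs" more energy where it is warm than where it is cold, which is the gist of the apples-and-oranges complaint above.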

      • Matthew R Marler

        Shaun Lovejoy: the magnitude of the temperature change is so large that the probability that the trend suddenly started in about 1880 is very low (about one in a thousand).

        As I wrote, that probability is conditional on the outcomes of other events whose probabilities are not known. What is the probability of a Little Ice Age? What is the probability of the recovery from a Little Ice Age? What is the probability that the recovery from the Little Ice Age overlaps the human industrial revolution? If there is a process with a period of about 900-1000 years, and if it is reasonably well-estimated by Scafetta, Page and others, then the probability of a Little Ice Age and its recovery occurring about when they did is quite high.

        Your probability statement is entirely conditional on your estimated background variability, and if there is a process with such a long period, your method totally misses it. In other words, you had nearly zero power to detect the statistical signal of a process of great importance, for which other evidence exists; and your probability is conditional on your not finding something because you assumed that it didn’t exist.

        In secret emails subsequently made public, Mann, Jones and associates discussed the necessity to somehow correct the appearance of a Medieval Warm Period. If that and the other warm periods existed, at about the times and magnitudes that have been estimated, then the current warm period is occurring approximately “right on time”.

        “Maybes” and “Ifs” dominate these probability assessments.

      • WebHubTelescope

        “in secret emails subsequently made public, Mann, Jones and associates discussed the necessity to somehow correct the appearance of a Medieval Warm Period. If that and the other warm periods existed, about the times and magnitudes as have been estimated, then the current warm period is occurring approximately “right on time”

        What a laughable assertion. This commenter bases science on “secret emails” and what appears to be a hope for a lucky guess.

      • Matthew R Marler

        WebHubTelescope: This commenter bases science on “secret emails” and what appears to be a hope for a lucky guess.

        Shaun Lovejoy based his 99% confidence on a different guess. Is the long period oscillation there in reality, as estimated by Scafetta and others? Is it continuing? Repeating the question of Jim D: what are the odds?

      • Matthew R Marler

        WebHubTelescope: This commenter bases science on “secret emails” and what appears to be a hope for a lucky guess.

        No science was based on the secret emails.

    • Matthew R Marler

      WebHubTelescope: and thus the pause is explained.

      WebHubTelescope: The pause actually never existed.

      For how long will the natural process continue to produce what looks like a “pause” in the surface temperature and troposphere temperature increase (even though energy continues to accumulate in the briny deeps)?

      • Someone: “Gee whiz, I am looking at data coming in from the Voyager spacecraft, and all I see is noise.”

        Me: “Get a model for the noise, and apply it to isolate the signal”

        Someone: “Thanks”

        Me: “You’re welcome”

      • Matthew R Marler

        WebHubTelescope: Someone: “Gee whiz, I am looking at data coming in from the Voyager spacecraft, and all I see is noise.”

        What? The clear contradiction in your writing caused you some sort of fit?

      • The pause is explained, as it never existed. The underlying warming continued, both by sophisticated time-series analysis (see CSALT model) and also as evidenced by the fact that a significant fraction of the heat sequestered in the ocean continued unabated.

      • Matthew R Marler

        Matthew R Marler: a significant fraction of the heat sequestered in the ocean continued unabated.

        Do you have any expectation about how long the “apparent pause” will continue (i.e. how long the explanatory mechanism will continue to prevent any rise in surface temperature)? Just curious.

      • Matthew R Marler

        oops, I don’t know how that happened. The quote was from WebHubTelescope.

    • Shaun Lovejoy | September 1, 2014 at 6:00 pm

      Yes, it is actually surprisingly simple to estimate the anthropogenic contribution: using the CO2 forcing as a surrogate for all the anthropogenic turns out to be just as accurate as GCM forecasts one year ahead! In other words if you give me the CO2 level for any year between 1880 and 2013, I can tell you (with one parameter, the “effective climate sensitivity”) the global mean average temperature to within ±0.109 °C.

      I’m curious: does this also allow you to explain what the regional temps were, since warming is not equal globally?

      • That’s the million dollar question and I’m pretty convinced the answer is yes – at least to the theoretical limits to which regional temperatures can be predicted! I currently have a student working on this and there should be an answer shortly. Stochastic modelling has many advantages over GCMs.

      • Shaun Lovejoy commented on

        That’s the million dollar question and I’m pretty convinced the answer is yes – at least to the theoretical limits to which regional temperatures can be predicted! I currently have a student working on this and there should be an answer shortly. Stochastic modelling has many advantages over GCMs.

        Well if you want to see what the actual surface stations measured, I have a lot of data available if you follow the URL in my name.

    • WebHubTelescope (@WHUT) | September 1, 2014 at 5:06 pm

      Marler, The pause actually never existed.

      I’ll accept the pause doesn’t exist, if you accept that there is no warming!

  55. Matthew R Marler

    Lovejoy: Using preindustrial multiproxy series and scaling arguments, the
    probabilities of natural fluctuations at time lags up to 125 years were determined

    You can see the problem right away. He begins with a model that is incapable of estimating recurrence times of large natural excursions of the global mean temp of the Holocene Climate Optimum, Minoan Warm Period, Roman Warm Period and Medieval Warm Period. With this model, he would have to reject the null hypotheses that those excursions are independent of anthropogenic CO2 (that is, “natural”). His model is less complete and less informative than Nicola Scafetta’s model, or Dr. Norman Page’s model with its 960-year period. It is also less complete and informative than the model of Beenstock et al. N.B., those are rankings; there is no demonstrably adequate, dependable model.

    His model is totally inadequate for the problem: How “natural” is the MWP-like warming since the end of the Little Ice Age?

    • Ignore the millennial at your perennial.
      ==============

      • The beauty of differences is that they are high pass filters, so that 125 year changes are indeed unaffected by millennial scale variability. This is signal processing 101.
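The high-pass claim is easy to check numerically. For a sinusoid of period T, a lag-Δ difference scales the amplitude by 2·sin(πΔ/T), which goes to zero for T much longer than Δ but is about 0.77 for T = 1000 and Δ = 125 – attenuated rather than removed. A minimal check (the periods chosen are illustrative):

```python
import numpy as np

def difference_gain(period, lag):
    """Empirical amplitude of lag differences of a unit-amplitude sinusoid."""
    t = np.arange(0, 20 * period)          # long record, annual sampling
    x = np.sin(2 * np.pi * t / period)
    d = x[lag:] - x[:-lag]                 # the lag-difference filter
    return float(d.max())

# Theoretical gain is 2*sin(pi*lag/period): ~2 when the lag is half the
# period, shrinking toward zero as the period grows relative to the lag.
for period in (250, 1000, 4000):
    empirical = difference_gain(period, 125)
    theory = 2 * np.sin(np.pi * 125 / period)
    print(period, round(empirical, 3), round(theory, 3))
```

So whether 125-year differences are "unaffected" by millennial-scale variability depends on how much attenuation one demands: a 1000-year component survives the filter at roughly three-quarters of its original amplitude.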

      • Matthew R Marler

        Shaun Lovejoy: The beauty of differences is that they are high pass filters, so that 125 year changes are indeed unaffected by millennial scale variability. This is signal processing 101.

        That’s what I wrote. The conclusion depends on the assumption that something does not exist, based on a model that could not detect it even if it is there.

        If instead there is a natural process generating the apparent oscillations producing the descent into the LIA and the recovery since then, then there is no support for the conclusion.

        If you are sure you know the signal, you can estimate a noise process to fit it. If you think you know the noise process, you can estimate a signal that might be in the data. If you know neither the signal nor the noise with certainty, then you can’t conclude much of anything from the statistical outputs, other than the fact that they are all model dependent.

        This is signal processing 101. So what? This is at least graduate level statistical time series analysis.


      • So what? This is at least graduate level statistical time series analysis.

        Mustn’t forget the physics, LOL. See the Stadium Wave.
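The filter claim traded back and forth above can be made quantitative. A lag difference x(t) − x(t−Δ) has gain 2·|sin(πΔ/T)| at a sinusoid of period T, so one can compute exactly how much of a millennial cycle survives a 125-year difference. A minimal sketch (an editor's illustration, not either commenter's calculation):

```python
import math

def difference_gain(lag_years, period_years):
    """Magnitude of the frequency response of the lag-difference filter
    x(t) - x(t - lag) at a sinusoid of the given period: 2*|sin(pi*lag/T)|."""
    return 2.0 * abs(math.sin(math.pi * lag_years / period_years))

# A 125-year difference amplifies variability at its own half-period scale...
print(round(difference_gain(125, 250), 3))    # 2.0
# ...while a 1000-year (millennial) cycle still passes at ~77% amplitude,
print(round(difference_gain(125, 1000), 3))   # ~0.765
# and only much longer cycles are strongly suppressed.
print(round(difference_gain(125, 10000), 3))  # ~0.079
```

Whether a roughly 77% pass-through at the 1,000-year scale counts as "unaffected by millennial scale variability" is precisely the point in dispute above.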

  56. Pingback: Hiding the Real Data-Sets Behind the Headlines | Religio-Political Talk (RPT)

  57. ‘Climate is ultimately complex. Complexity begs for reductionism. With reductionism, a puzzle is studied by way of its pieces. While this approach illuminates the climate system’s components, climate’s full picture remains elusive. Understanding the pieces does not ensure understanding the collection of pieces. This conundrum motivates our study.

    Our research strategy focuses on the collective behavior of a network of climate indices. Networks are everywhere – underpinning diverse systems from the world-wide-web to biological systems, social interactions, and commerce. Networks can transform vast expanses into “small worlds”; a few long-distance links make all the difference between isolated clusters of localized activity and a globally interconnected system with synchronized [1] collective behavior; communication of a signal is tied to the blueprint of connectivity. By viewing climate as a network, one sees the architecture of interaction – a striking simplicity that belies the complexity of its component detail.’ Marcia Wyatt

    The relevance of the Stadium Wave is this idea of a collective system. It is not a single data series – even one that is influenced by atmospheric angular momentum. The LOD is not the stadium wave. LOD doesn’t provide a mechanism for decadal changes – but is related to atmospheric circulation changes in the global system.

    And yes – the pause does seem to be a perfectly natural shift in Earth ocean and atmospheric circulation on multi-decadal scales, seen many times in the paleo records. The fundamental mechanism for this is deterministically chaotic: a bifurcation in a system that has many interacting components. The persistence of past regimes suggests this one may persist for 20 to 40 years. It is a mistake as well to think that the next shift will be to a warmer state.

    • Rob: Your comments in the past about the climate system being a complex interaction of many subsystems have always made sense to me and seemed compatible with the stadium wave theory, but I never saw you mention it before. I’m new here and may have missed it. Is the stadium wave an idea you’ve always held?

    • The relevance of the Stadium Wave is this idea of a collective system. It is not a single data series – even one that is influenced by atmospheric angular momentum – and is therefore integrating in a sense.

      As shown in Figure 7.

      But it is a manifestation of change in the system and not the underlying cause. Contrary to webby’s usual simplistic spin, the LOD is not the stadium wave but one of a collection of pieces by which the whole is apprehended. Webby reduces the whole to a single data series and fails to see the incongruity.

  58. Curry,

    Why do you keep taking for granted that there IS an anthropogenic contribution at all to the general ‘global warming’ observed since the 50s (which was basically confined to the 25-year period between 1976/77 and 2001/02)? On what empirical evidence do you base this belief? Where and how do you see the CO2 ‘radiative forcing’ signal on global temps in the latter half of the 20th century?

    The null hypothesis says that 100% of the global rise in mean temps from the 70s to the 00s was caused by natural variation (basically, the ‘ocean cycle’). Have the AGW’ers somehow managed to falsify this hypothesis in any way? If so, I’d be very interested to see the observational evidence from the real Earth system responsible for it.

    • I believe it is because “Most” of the CO2 rise is “Very Likely” (95% confidence) due to activities of mankind :)

    • David L. Hagen

      Kristian
      Re: “The null hypothesis says that 100% of the global rise in mean temps . . .”
      The thesis is distinguished from the null hypothesis. Thus the corresponding “null hypothesis” is that:
      “the majority of the global warming since 1950 is due to natural causes.”

      • Er, no. The null hypothesis that the AGW hypothesis is supposed to falsify is “100% of ‘global warming’ since 1950 is due to natural causes”. “Climate change is ALL natural. Like it’s been for the last 4-4.5 billion years.” That is the hypothesis that the AGW hypothesis claims is not the case since about 1950.

      • The AGW promoters haven’t even shown that 1% of recent global warming is anthropogenic in origin. So this nonsense about 100% or 50% or more or less than 50% being anthropogenic is simply completely unscientific. Show empirically that there IS a contribution to be observed in the real Earth system AT ALL first. THEN we can talk. Where’s the ‘unnatural’ signal? Outside the models, all of which are already based circularly on the assumption that there IS a contribution and it’s large.

  59. Darwall’s lead-in comment is particularly important concerning the new McKitrick paper, which seems about as robust as statistically possible. 16, 19, and 26 years without statistically significant warming at the 95% confidence level. The importance is CMIP5 model falsification.
    In 2009, NOAA said 15 years pause would constitute falsification. (BAMS 90: Special Supplement August 2009 starting at page 23)
    In 2011, Santer said 17 years. (J. Geophys. Res. 116: D22105 2011)
    Following these ‘official’ declarations from keepers of the warming faith themselves, we must now declare the CMIP5 models falsified. Which means tossing out all of AR5 WG1. What an inconvenient truth.

    See also Fyfe, Gillet and Zwiers, Overestimated Warming…, Nature Climate Change 3: 767-769 (2013) for another 95% confidence level analysis falsification of CMIP5. They did two comparisons of CMIP5 to HadCRUT4, 1993-2012 and 1998-2012. Figure 1 is the money image. Zwiers was lead author on AR5 attribution, so this was an ‘inside’ paper appearing AFTER the July 2012 cutoff for AR5 consideration.
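For readers who want to see mechanically what "no statistically significant warming at the 95% confidence level" involves, here is a minimal sketch of an OLS trend test with an AR(1) effective-sample-size correction, applied to synthetic data. This corresponds to the "simple AR1 trend model" that McKitrick's abstract flags as likely unreliable, not to his HAC-robust Vogelsang-Franses estimator:

```python
import numpy as np

def trend_ar1_significance(y):
    """OLS trend with an AR(1)-adjusted standard error via an effective
    sample size. Returns (slope, t_statistic); |t| < ~2 means the trend
    is not significant at roughly the 95% level."""
    n = len(y)
    t = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]    # lag-1 autocorrelation
    r1 = min(max(r1, -0.99), 0.99)
    n_eff = n * (1 - r1) / (1 + r1)                  # effective sample size
    s2 = np.sum(resid**2) / max(n_eff - 2.0, 1.0)    # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))
    return beta[1], beta[1] / se

# Synthetic demo: a clear trend in noisy monthly data.
rng = np.random.default_rng(0)
slope, tstat = trend_ar1_significance(0.02 * np.arange(240) + rng.normal(0, 1, 240))
```

Autocorrelated residuals shrink the effective sample size and widen the error bars, which is why the choice of error model drives the different year counts (14-20 vs 16-26) quoted in the abstract.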

    • Or they could just recalculate and state that the period is 19, 23 or 29 years.
      They can also play the C&W ‘sea water is water and sea ice is land’ game, but that is going to hurt if the arctic continues to recover.

      • Revising the falsification period will be difficult for Santer, given the arguments he made.
        But Pachauri is already giving speeches arguing for 30 years. That could be even funnier, if the stadium wave idea and Joe on PDO/AMO are even directionally correct.

    • The NOAA 15 years does have to be adjusted for ENSO though.

  60. Is there any way to put the Kim-bot on a mute button? It posts more comments that make no sense, literally or figuratively, than all other posters put together and makes the whole blog virtually unreadable. Can’t you give it a time out to give the rest of us a break?

    PS Mosh – I’m using a phone and still able to spell check; it’s not that hard.

  61. Lovejoy’s ideas might be more convincing formulated in terms of standard probability and statistics. The claims need much better testing against data, even when it’s low resolution, high variance proxy data. The power law he uses probably does not fit into standard random processes, but the basic ideas certainly do. Otherwise, the odd plot with some hand waving does not offer that much empirical support. Rather a contrast to McKitrick.

  62. ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.’ http://www.pnas.org/content/104/21/8709.full

    This is how models evolve from slightly different – within the bounds of feasible inputs – initial and boundary conditions.

    There are no unique solutions.

    ‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

    The comparison of model solutions to observations seems to miss a fundamental point. The solution is one of many possible and is chosen on the basis of expectations about plausible outcomes. It would seem to be the expectations that are incorrect rather than the models as such.
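The Lorenz result in the quoted passage is easy to reproduce directly. A short sketch (standard Lorenz-63 parameters σ=10, ρ=28, β=8/3; purely illustrative) integrating two trajectories whose initial conditions differ by one part in 10^8 and tracking their separation:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One RK4 step of the Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def separation_history(n_steps=4000, eps=1e-8):
    """Distance between two trajectories started eps apart in x."""
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([1.0 + eps, 1.0, 1.0])
    sep = []
    for _ in range(n_steps):
        a, b = lorenz_step(a), lorenz_step(b)
        sep.append(np.linalg.norm(a - b))
    return sep

# The gap stays tiny at first, then grows until it is as large as the
# attractor itself: fully deterministic, yet effectively unpredictable.
sep = separation_history()
```

Regime-like structure subject to abrupt, seemingly random change, as the quote says, is exactly what such a system produces without any change in forcing.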

    • David L. Hagen

      Rob
      Re: ” expectations that are incorrect rather than the models”
      How do you exclude model error? If you cannot, then the models are not scientific – by not being falsifiable. The IPCC lists three sources of error – as distinct from expectations.
      cf IPCC’s pause ‘logic’

      However, an analysis of the full suite of CMIP5 historical simulations (augmented for the period 2006–2012 by RCP4.5 simulations, Section 9.3.2) reveals that 111 out of 114 realisations show a GMST trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble (Box 9.2 Figure 1a; CMIP5 ensemble-mean trend is 0.21 ºC per decade). This difference between simulated and observed trends could be caused by some combination of (a) internal climate variability, (b) missing or incorrect radiative forcing, and (c) model response error.

      They omit “non-radiative forcing” and fail to recognize massive Type B systematic error.

    • Assuming the models are plausibly formulated – the ‘bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms…’

  63. On the issue of Arctic warming (and implicitly Cowtan and Way, 2014) there is this new paper which also discusses Curry (2014).

    http://onlinelibrary.wiley.com/doi/10.1002/qj.2422/abstract

    As mentioned beforehand (here and elsewhere) we have also placed a series of updates (4) on the analysis in CW2014 here:

    http://www-users.york.ac.uk/~kdc3/papers/coverage2013/updates.html

    Including seven different reconstructions of global temperature
    http://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html

    • Judith, I would not bother comparing Cowtan and Way to anything in the way of respectable science. Two co-contributors to Skeptical Science, the most warmist cabal of warmists, put up a method of divining temperatures – a self-admitted (Cowtan) method that treats temperatures from places that are further away as more accurate than places next to each other. That is not science.
      Their method is “failure proof” in that it has never dropped a temperature anywhere.
      In all their Arctic work they have never shown a “station” get colder. Why?
      They have also shown near 100% correlation [dare I use the 97% meme, heck why not] with any other data sets they have tested.
      Inconceivable!
      Inigo Montoya: You keep using that word. I do not think it means what you think it means.
      Judith, it may be time for a review of Cowtan and Way, in particular with their Arctic warming continuing over the last 2 years [betcha] while the Arctic has refrozen. Time for their alter egos Way and Hansen to come out with Kriging in the North Pacific. Don’t you all know the heat has been hiding there all the time just waiting for our new method to show it.

      • Don’t be too hard on Cowtan and Way. Remember, as the AMO goes negative and the Arctic ice recovers, theirs is the methodology that will show the most cooling.

      • No, their methodology forbids cooling.

      • read it

        http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf

        No matter what data set we throw at the problem, the answer comes out the same.

        Hadcrut and Giss get the arctic wrong.

        For the above analysis, I passed on AIRS data to C&W.

        For the longest time skeptics criticized the methods of hadcrut and GISS.

        So now you have two new looks at the data.

        Different methods, different data, validation of the methods by looking at previously unused data, validation of the method by looking at alternative instruments (satellites). Multiple satellite datasets, reanalysis.

        There is not one bit of data, not one shred of evidence, that Hadcrut and GISS are right. None. Zero. Nada.

      • thisisnotgoodtogo

        Wasn’t “disaster” when it wasn’t warming?

        Don’t be too hard on Cowtan and Way. Remember, as the AMO goes negative and the Arctic ice recovers, theirs is the methodology that will show the most cooling. – steven

        I suspect that when they publish C&W for July it will be level, perhaps a bit of a jump, because warmth leaked from Siberia into the arctic and fell off the other series.

        As for the AMO, it’s spiking up after the new warming regime shift in 2013, and my hunch is the PDO will rejoin it when July is published in the next weeks.

      • Will do that today, wife permitting.
        Will put up a cogent argument to back up my points.
        Can you tell me where Cowtan and Way have ever diverged from their (cough, cough) hindcasting checks with other models, and where they have ever shown a negative anomaly in the Arctic?
        Note: such cases must exist, and their nonexistence strongly suggests they are not dealing with reality.

      • However the CW14 data show a trend of approximately 0.03 °C/decade greater than GISTEMP over a period of 16 years.
        Is this all they can say, that they detected no real difference?!
        Shaking in my boots, Steven, and wondering what in the heck I was worried about if they show no difference.

      • However the same regional trends also present serious challenges for station homogenization algorithms, which depend on the assumption that climate trends are spatially correlated over moderate distances.
        Really.
        The Steven Mosher I know stated twice, and loudly, that this is a fundamental basis of physics. Now his mates deny it and he rolls over for them.
        “Well, the Arctic is an exception and Robert is such a nice trustful guy.”

      • Our initial speculation that the principal difference between GISTEMP and CW14 arose from the choice of sea surface temperature dataset was incorrect [note: not C and W’s incorrect algorithms, heavens, no]: the majority of the difference is in the Arctic, and arises from differences in the input land temperature data from meteorological stations.
        Really?
        Let’s summarize.
        Most of the Arctic is sea, not land.
        There are very few meteorological stations in the area, with very limited data available.
        But if we take a left turn at Albuquerque we can add the temperature from some non-adjacent stations, like Death Valley and the Sahara, which are better proxies for the Arctic than some Greenland-based station which was still too cold when we adjusted it?
        And hey presto, CW 2014.

      • There is some evidence that the homogenization adjustment algorithm used in the GHCN station data is attempting to eliminate some of the rapid Arctic warming over the study period.
        One can be sick without putting the fingers down the throat.
        How can one make such a blatantly wrong comment?
        Zeke explicitly explains why most GHCN homogenisation results in a warming of the data: it’s called the TOBS adjustment, newfangled thermometers being introduced, etc.

      • A few rogue cells are also visible arising from CRUTEM4 stations which require homogenization adjustments due to changes in station equipment, location or operating practices.
        Damn those rogue cells, heh.

      • Warming on the Chukchi side of the central Arctic, slower warming off the coast of Greenland. Neither of these features, if real, would be captured by the thermometer records since there are no land-based weather stations in these regions.
        So where they claim the warming, there are no land-based stations anywhere to back these claims.

      • “The close proximity of regions of warming and cooling on both the Eurasian
        and Alaskan Arctic coasts mean that it is possible for neighboring stations
        to show a very different temperature trends. Automated homogenization could
        potentially introduce unnecessary adjustments to reconcile these trends.”
        Not allowed to have weather now, are we?
        How dare neighboring stations show different trends. We need a Dalek to take care of these rogue cells? C-3PO? No, CW.
        Two stations in the Kara Sea and one in the Beaufort Sea have homogenization adjustments which appear to be inconsistent. They are GMO IM. E. T., OSTROV VIZE, and BARTER ISLAND; labeled 3, 4 and 8.
        Exterminate.

      • The 3 problem stations account for nearly 40% of the difference between GISTEMP and CW14. Other unidentified differences in the station data at locations present in both datasets also contribute nearly 40%. The remaining difference arises from stations present in CRU but absent in GHCN. All of the difference between GISTEMP and CW14 in the Arctic can be accounted for by differences in the input station data.
        So we got rid of the low outliers and hey presto, the world is warmer – in the Arctic at least.
        Why not remove the 3 highest readings, Steven? Counting that as 40%, remove another one and a half times that and see how cold you could make the Arctic. We could call it the Mosher and Angech counter-Krig method and be famous as well.

      • We would like to thank Matthew Menne and Claude Williams for helpful
        discussions relating to this problem. We are also grateful to Zeke Hausfather
        for comments and Steve Mosher for help with the AIRS data.

        Both of whom would surely disagree with the comment at the end of the paper, since Zeke has explained here how adjustments were necessary over the satellite period [not to the satellite data] to the land-based station records for TOBS and thermometer adjustments, resulting in at least a 1.4 °F rise in global temps over this time.

        “Nonetheless, claims that GHCN adjustments contribute to
        the warming trend over the satellite era are unfounded.”

    • Thanks for posting that information here; it’s good to have input :-)

    • Matthew R Marler

      Robert Way, thank you for the links.

  64. I could not download Rupert Darwall, and Ross McKitrick gets into really incomprehensible statistics which throw up a 16 – 26 year hiatus length as one possibility. Now that the duration of the hiatus/pause is at the center of attention, let us admit that we do not know the future. There are of course various possibilities, and those who wish to advance them must present scientific reasoning to justify their view. Since the hiatus has lasted a goodly number of years by now, any attempt at predicting its future must include a satisfactory explanation of what has happened before. If you are going to say that there is greenhouse warming yet to come, you must first explain why there is none today and why there has been no sign of it in the entire twenty-first century.

Greenhouse theory, after all, is part and parcel of the global warming theory that tells us to pay for our carbon emissions to save the world. The Arrhenius greenhouse theory has been predicting warming, based on the belief that increasing carbon dioxide in the atmosphere will cause warming. That was supposed to be guaranteed by Hansen, who told the Senate in 1988 that he had discovered the greenhouse effect himself. But it is not working like that, which means that the current greenhouse theory must be discarded. The problem, no doubt, is that Hansen did not actually discover the greenhouse effect in 1988 like he said. What he said is that there was a 100-year period of greenhouse warming which proved that the greenhouse effect existed. When the years were counted, at least thirty of them were not greenhouse years, and this nullifies his claim.

It turns out that there is another greenhouse theory that correctly predicts what happens where Arrhenius fails. The alternative is the Miskolczi greenhouse theory (MGT). Their difference is that while Arrhenius can handle only one greenhouse gas – carbon dioxide – MGT can handle several that absorb simultaneously in the IR.
Carbon dioxide is not even the most important greenhouse gas in the air – water vapor is. There is on average 2 to 3 percent of it in the atmosphere, several hundred times more than carbon dioxide. Carbon dioxide and water vapor are the most important greenhouse gases in the atmosphere and, according to MGT, they form a joint optimal absorption window which they control. The optical thickness of this window is fixed at 1.87, determined by Miskolczi himself from first principles. If you now add carbon dioxide to the atmosphere it will start to absorb, just as the Arrhenius theory says. But this will increase the optical thickness. And as soon as this happens, water vapor will begin to diminish, rain out, and the original optical thickness is restored. The introduced carbon dioxide will keep absorbing of course, but thanks to the simultaneous reduction of water vapor the total absorption remains constant and no greenhouse warming is possible. This is why constant addition of carbon dioxide to the atmosphere is unable to cause any warming today.

This is not the first time it has happened. In 2010 Miskolczi was studying the absorption of IR by atmospheric carbon dioxide over time. He used a NOAA radiosonde database going back to 1948 and discovered that absorption stayed constant for 61 years while carbon dioxide at the same time went up by 21.6 percent. Constant absorption means no warming, and here we have a perfect parallel to the current hiatus from an entirely different time period – no warming while carbon dioxide keeps increasing. There is one more example of no warming that is closer to us – the eighties and the nineties. This particular instance is not known to most people because it is covered up by fake warming in temperature curves emanating from HadCRUT, GISTEMP, and NCDC. I have known about this, and talked about it, since I wrote my book “What Warming?” two years ago, but nobody has taken notice, much less action, about this outrageous forgery.
I have written comments about it, most recently to Anthony a few days ago. You could look it up and read it by clicking on the URL below:
    http://wattsupwiththat.com/2014/08/21/cause-for-the-pause-32-cause-of-global-warming-hiatus-found-deep-in-the-atlantic-ocean/#comment-1715234

    • daveandrews723

      So, Hansen, Mann, and a few others had a “Eureka” moment back in the 80’s in terms of the controlling force of CO2. They pitched it. The majority of the scientific community and public bought it hook, line, and sinker. It was quite a marketing campaign. Now it is established “fact” among a clear majority of the news media, politicians, and educational systems.
      Where were the responsible scientists back then to challenge the faulty hypothesis? As a layman I appreciate all of the efforts being put forth now (as in this thread) to bring some sanity to the debate, but I fear it may be too little, too late. Just watch and see how the climate march in NYC is received by the media and the politicians later this month.

  65. Shaun Lovejoy – Thank you for the explanations.

  66. The statistics needed to bound the empirical 125 year global temperature changes have tails with exponents 4 and 6 (the exponent 5 works best…); see the probability distributions in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf. There is therefore a theory (scale invariance) justifying such distributions, as well as much empirical data. But even if the theory is rejected, they can still be used as extreme bounds – all that is needed to reject the natural variability hypothesis with high levels of certainty.

    Should we distinguish between ‘ black swans’ and ‘dragon-kings’? e.g.

    It is the difference between the inherently unpredictable (black swan) and noisy bifurcation (dragon-kings) that is potentially predictable in both climate and economics. There is perhaps some overlap in Mandelbrot’s ‘grey swans’, which are weakly predictable. The system is chaotic and chaos occurs on all scales in time and space.

    What this has to do with rejecting the natural variability hypothesis is perplexing, and how the past 125 years can be used to bound variability is incredible nonsense, it seems. Quantified variability over the Holocene documents variability in hydrology and – presumably – temperature that is considerably outside the limits seen in the 20th century.

    Moy et al (2002) present the record of sedimentation shown below, which is strongly influenced by ENSO variability. It is based on the presence of greater and less red sediment in a lake core. More sedimentation is associated with El Niño. It has continuous high resolution coverage over 12,000 years. It shows periods of high and low ENSO activity alternating with a period of about 2,000 years. There was a shift from La Niña dominance to El Niño dominance some 5,000 years ago that was identified by Tsonis (2009) as a chaotic bifurcation – and is associated with the drying of the Sahel. There is a period around 3,500 years ago of high ENSO activity associated with the demise of the Minoan civilisation (Tsonis et al, 2010). The period had ‘red intensity’ (El Niño) in excess of 200. The red intensity for the 97/98 event was 98. It shows ENSO variability considerably in excess of that seen in the modern period.

    It is difficult to imagine what could possibly be meant by Lovejoy – only that it appears to be utterly misguided. Sounding more like the webbly all the time in fact.
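One way to make the tail-exponent dispute concrete: the decay rate of a distribution's tail controls how improbable a large natural excursion looks, which is the entire leverage of Lovejoy's argument. A small sketch comparing a Gaussian tail with a power-law tail of exponent 5 (the value Lovejoy says "works best"); an editor's illustration, not Lovejoy's actual computation:

```python
import math

def gaussian_two_sided_tail(x_sigma):
    """P(|Z| > x) for a standard normal, via the complementary error function."""
    return math.erfc(x_sigma / math.sqrt(2.0))

def power_law_tail_ratio(x1, x2, q=5.0):
    """For a tail P(|X| > x) ~ x**-q, the ratio P(> x2) / P(> x1)."""
    return (x2 / x1) ** (-q)

# Going from a 2-sigma to a 4-sigma excursion:
gauss_ratio = gaussian_two_sided_tail(4.0) / gaussian_two_sided_tail(2.0)
power_ratio = power_law_tail_ratio(2.0, 4.0)   # 2**-5 = 1/32
# The power-law tail retains far more probability in the far tail, so the
# same excursion is judged much less 'unnatural' under heavy tails.
```

Under the exponent-5 tail, doubling an excursion only cuts its probability by a factor of 32, whereas the Gaussian probability collapses by over two orders of magnitude; how the exponent is chosen and validated is exactly what is being contested in this subthread.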

    • ‘The statistics needed to bound the empirical 125 year global temperature changes have tails with exponents 4 and 6 (the exponent 5 works best…); see the probability distributions in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf. There is therefore a theory (scale invariance) justifying such distributions, as well as much empirical data. But even if the theory is rejected, they can still be used as extreme bounds – all that is needed to reject the natural variability hypothesis with high levels of certainty.’

      Is a quote from Shaun Lovejoy above.

      • There seems to be an error in this paper. Lovejoy presents a temperature model:

        T_globe(t) = T_anth(t) + T_nat(t) + ε(t)

        that should be

        T_globe(t) = T_climate(t).

        It’s a nonlinear system. Assumptions of separability lead to circular reasoning.

  67. It looks like Shaun gave us a special link that uses the term “deniers.” Here’s another link.

    http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Anthro.climate.dynamics.13.3.14.pdf

    • From that link:
      While no amount of statistics will ever prove that the warming is indeed anthropogenic, it is nevertheless difficult to imagine an alternative.

  68. Shaun turns up just about everywhere I search for “scaling fluctuation analysis.” Is anyone else in climate science using this technique?

  69. Just an opinion. I do not think it is scientifically correct to categorize papers by their authors’ race, color, creed, or blog habits. They should be evaluated on their own merits, given their arguments, prior citations, and analytic methods used. And critiques should be couched in an equivalent manner.
    For example 1, perhaps Lovejoy did not adequately recognize the limitations of the proxies used. That is worthy of debate. A link to a site that used the term “deniers” is not. After all, the link could have been to POTUS.
    For example 2, whether the C&W surface kriging is a legitimate statistical method for that part of the year when the Arctic is not all ice/snow is an interesting question, since the summer surface differences arguably violate the underlying kriging assumptions. That they might also post on SkS perhaps shows a certain lack of social judgement, but is irrelevant to the methods and conclusions of their technical paper.
    Such merit questions require digging into facts and methods. Google et al. enable that. My guest posts here and my next book endeavor to illustrate using specifics.
    All else is just opinion.

    • The link Judy included in the main post used the term “deniers.” I didn’t go looking for it.

      But I agree, the piece should be judged on its technical merits. Of course, the use of that term in a scientific, in this case, letter, is questionable nonetheless.

    • Rud

      “Just an opinion. I do not think it is scientifically correct to categorize papers by their authors’ race, color, creed, or blog habits. They should be evaluated on their own merits, given their arguments, prior citations, and analytic methods used. And critiques should be couched in an equivalent manner.”

      Absolutely. Thanks for stating that.

      “since the summer surface differences arguably violate the underlying kriging assumptions.”

      And [I am very curious] from your understanding, what are those assumptions?* Exactly what is being kriged? And the local estimates: are they from point kriging or block kriging? How are they, or should they be, interpreted?** …a lot of nitty-gritty.

      but we are just talking estimation and more will follow.

      —-
      * people here seem to poke the assumptions a little and walk away…too mathish I guess.
      ** yeah, poking is a ‘physical boundary’ argument, including pondering how sharp the boundaries are in the context of how the estimation is configured. Not wildly excited about it, though…finite shelf-life and all that.
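Since the kriging assumptions keep coming up without being stated, here is the core of ordinary kriging in a few lines: a constant unknown mean plus a covariance that depends only on separation (second-order stationarity), with weights constrained to sum to one. A minimal 1-D sketch with an assumed exponential covariance; an editor's illustration, not C&W's actual implementation:

```python
import numpy as np

def ordinary_krige(xs, ys, x0, range_=1000.0, sill=1.0):
    """Ordinary kriging estimate at x0 from observations (xs, ys).
    Assumes a constant unknown mean and an exponential covariance that
    depends only on separation -- the stationarity assumption that the
    summer land/ice surface contrast is argued to violate."""
    cov = lambda h: sill * np.exp(-np.abs(h) / range_)
    n = len(xs)
    # Kriging system with a Lagrange multiplier mu enforcing sum(w) = 1:
    #   [ C  1 ] [ w  ]   [ c0 ]
    #   [ 1' 0 ] [ mu ] = [ 1  ]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xs[:, None] - xs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xs - x0)
    w = np.linalg.solve(A, b)[:n]
    return float(w @ ys)
```

Kriging is an exact interpolator at the data points; the thread's objection is that a single covariance-versus-distance model spanning open ocean, sea ice, and land misstates the spatial correlation, so interpolated Arctic values inherit that model error.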

    • I agree with you for the most part, Rud. But what if you see some private e-mails stating that there is something they want to hide or get rid of, and then later (often a few months, for privileged folks) the paper, lo and behold, has gotten rid of the pesky event? I would think that it is OK to “diss” their motivated “science” at the same time you criticize the methods they used.

  70. Today is an example of why I really like CE — Through Mosher, I learned about Ross McKitrick and went to his webpage and read some of his papers. McKitrick is an interesting person besides his views on the “Pause” — He is a person with conservative values advocating bottom-up actions rather than “Liberal” top-down/command-control.

    Dr. Ramanathan’s “fast mitigation” (low level ozone, carbon black, methane, HFCs) is totally consistent with McKitrick’s views.

    • McKitrick also had the idea of a carbon tax indexed to the temperature rise, which I think would be good, but he would only use tropical ocean temperatures for some reason, not global. I would index it as $10 per tonne for every 0.1 C above the 2000-2010 decade global average, for example, using only moving decadal averages as they vary slowly, maybe updated every 5 years.
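
The indexing rule proposed in the comment above can be sketched as a function. The numbers are the hypothetical ones from the comment ($10/tonne per 0.1 C above the 2000–2010 decadal mean), and the baseline temperature here is a placeholder, not a measured value:

```python
def carbon_tax_per_tonne(decadal_mean_c, baseline_c=14.5, rate_per_tenth=10.0):
    """$/tonne tax indexed to a moving decadal-mean temperature.

    baseline_c stands in for the 2000-2010 global decadal mean; 14.5 C
    is illustrative only. Negative anomalies give a zero tax here; the
    thread debates whether cooling should instead trigger payments."""
    anomaly = max(0.0, decadal_mean_c - baseline_c)
    return rate_per_tenth * (anomaly / 0.1)

tax = carbon_tax_per_tonne(14.8)  # 0.3 C above the assumed baseline
print(f"${tax:.2f} per tonne")
```

Using slow-moving decadal averages, as suggested, makes the tax rate predictable between the five-yearly updates.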

      • Jim D — That McKitrick supports a carbon tax is surprising. In his presentations he strongly advocates a view to: Think of GW not as a single problem — put as a 1,000 problems that could be addressed at local levels. As you might know, I really don’t like “liberal ideological” policies like carbon taxes or cap & trade.

      • I agree. A carbon tax is a good idea, indexing to actual temperature is a winner. But there’s a need to offset this tax with reductions in VAT, sales tax, and income tax.

        My position is driven by the possibility that global warming does seem to be driven in part by anthropogenic forces, and also by the need to curb oil consumption, because we are running out of oil. The tax should also encourage more efficient use of coal, and that’s also a plus.

        What I can’t figure out is whether we should tax cement plants and other emissions, for example raw methane from the oil industry and from rice growers, cattlemen, and garbage dumps.

      • Fernando, “I agree. A carbon tax is a good idea, indexing to actual temperature is a winner. But there’s a need to offset this tax with reductions in VAT, sales tax, and income tax.”

        Then we can add some tax credits for things that offset AGW, like aerosols, green spaces and genocide.

      • CD – the genocide tax credit should be on a per person basis with no minimum group size so that individuals and not just large organizations could claim the credit.

      • “As you might know, I really don’t like “liberal ideological” policies like carbon taxes or cap & trade.”

        Ultimately, Liberals are the road block to carbon taxes as a mitigation effort. They hate regressive taxes. So it has to be structured with giveaways, loopholes and rebates such that the only folks who get shafted by it are middle class suburban dwellers, which just happens to be a demographic that votes. In short it ends up as a policy with no CO2 benefit (thanks to the efforts to exempt people) and a government revenue “benefit” that lasts one election cycle.

      • As I remember it, McKitrick didn’t propose a carbon tax as a “good idea” in general. What he wrote was that IF we are going to impose such a tax, then we should at least make some attempt to tie it to the alleged reason for the tax: globalclimatewarmingchange. THAT is why he tied it to the “tropospheric hot spot.” He figured that would be the best metric for determining whether CAGW proved to be true, and it would tie the amount of the tax to the amount of evidence of actual AGW.

      • I always wanted a flip side to it. As the world cooled the government would send increasingly massive payments to the carbon dioxide producers.
        ===============

      • But, what are you going to do with the revenues from a carbon tax? They would probably go to the same feel-good but ineffective systems out there, rather than to ones that might actually be effective, like research into nuclear. We would get wasteful trains and more solar. (I just installed solar because energy is so far from market prices here in CA that it actually makes economic sense for me, but for no one else.)

      • One thing to know Jeff;

        “Ultimately, Liberals are the road block to carbon taxes as a mitigation effort. They hate regressive taxes.”

        All taxes are regressive, since they can only be passed on in the cost of goods and services, or they can be avoided outright, which hurts the poor as well.

      • As I have mentioned, the carbon tax serves dual purposes, it reduces carbon emissions AND it reduces oil consumption. The second point is very important because the world is running out of oil.

        Don’t be confused by the USA production increase; a lot of it is condensate, and the oil is mostly from North Dakota. The Bakken oil isn’t limitless; it will peak within 5 to 10 years.

  71. “by Judith Curry

    With 39 explanations and counting, and some climate scientists now arguing that it might last yet another decade, the IPCC has sidelined itself in irrelevance until it has something serious to say about the pause and has reflected on whether its alarmism is justified, given its reliance on computer models that predicted temperature rises that have not occurred. – Rupert Darwall ”

    Exactly.

    But “heteroskedasticity and autocorrelation (HAC)” will not provide an answer, because it depends on stationarity. One has only to look at the history of temperature from 1900 to realise that a stationary system could never produce that shape. A stationary system would require that temperature in 1900 would have the same vital statistics as in 2014. Not even the mean is the same. Forget it.
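
For concreteness, here is a toy version of a HAC-robust trend test on a synthetic, trendless but autocorrelated series. It uses a hand-rolled Newey–West (Bartlett-kernel) covariance as a simpler stand-in for the Vogelsang–Franses estimator McKitrick actually uses, so this is an illustration of the idea, not a reproduction of his method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 228  # e.g. 19 years of monthly anomalies
t = np.arange(n, dtype=float)

# trendless but autocorrelated AR(1) noise standing in for anomalies
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.6 * e[i - 1] + rng.normal(scale=0.1)

# OLS trend fit: e = a + b*t + u
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, e, rcond=None)
u = e - X @ beta

# Newey-West HAC covariance of the OLS estimator (Bartlett kernel)
L = 12  # truncation lag (a tuning choice)
Xu = X * u[:, None]
S = Xu.T @ Xu
for lag in range(1, L + 1):
    w = 1.0 - lag / (L + 1.0)
    G = Xu[lag:].T @ Xu[:-lag]
    S += w * (G + G.T)
XtX_inv = np.linalg.inv(X.T @ X)
V = XtX_inv @ S @ XtX_inv

slope = beta[1]
se = np.sqrt(V[1, 1])
tstat = slope / se
print(f"slope = {slope:.5f} per step, HAC t-stat = {tstat:.2f}")
```

The HAC correction widens the standard error relative to ordinary OLS, so an autocorrelated but trendless series like this one is (correctly) much less likely to show a “significant” trend. Note that the whole exercise presumes trend stationarity, which is precisely the assumption the comment above disputes.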

    Climate is an on/off phenomenon, not a stationary one. James Chadwick’s discovery of the neutron in the 1930s can provide the answer to this on/off behaviour. It is wrong to think of CO2 as a unique molecule in physics. In chemistry, yes, but not in physics, because it can have several isotopic forms, each of which can absorb different amounts of IR radiation.

    The other great driving force is simply the time constant of the oceans compared with the atmosphere. Following the 0.5 C rise in atmospheric temperature between 1910 and 1940, we could expect this to flow through to the oceans about 30 or 40 years later. Unfortunately the IPCC never had a broad enough sweep of history.

  72. ‘Puffs of smoke and bits of ejected debris.’ I really liked that one.

    So, step 1 of the grand retreat from GCMs. Lovejoy blinds himself to millennial-scale changes and uses shonky paleo to come to a remarkably confident conclusion.

    I’m a little amused that it is dead on now, but way off in just a few short years. Buying time.
    ===============


  73. harkin | September 1, 2014 at 11:26 am

    Waiting for Webby to proclaim Mother Nature performed an own goal.

    I don’t deal in anecdotes, but this is an amusing comment at the end of Bob Tisdale’s recent post reporting on current record-warm SST numbers.


    Siberian_husky
    September 2, 2014 at 5:04 am

    So, global sea surface temperatures are at a record level and we’re not even having an El Nino.

    We really are screwed.

    Who let the dogs out ???

  74. Just saw this climate modelling news.
    If anyone in the world could write a decent multiphysics code it’s Sandia. Hopefully not just a rehash of the same old GCMs:

    https://share.sandia.gov/news/resources/news_releases/acme_model/

    • Step 2 away from the current crop of models. An admission of failure.
      ==========

      • Not really, alas. The underlying claim is that model failure is due to limited computing power so we need to move modeling to the biggest available computers. This underlying claim is false. Model failure is due to the lack of understanding of the natural climate mechanisms coupled with a refusal to admit this lack.

        More flops will not cure the flop.

      • “Step 2 away from the current crop of models. An admission of failure.”

        haha, yeah, I thought of that.

        Best case scenario would not be a platform that predicts but one better for exploring sensitivities and laying out the border between known and unknown.

      • Re “laying out the border between known and unknown” it is unclear how a model might do this. What they can and should do is explore alternative hypotheses, not hardwire AGW.

      • “More flops will not cure the flop.”

        Just feeding at the flop trough?

      • With this, what would be the need to fund all the models in the groves of Academe?

        But, I’m not sure having the US government monopolize the field is a good idea.
        ==============

    • Heh, obviously a move to co-opt lucia.
      ============

    • “If anyone in the world could write a decent multiphysics code its Sandia. ”

      Well let’s see. …got a few groups touting this or that code [check]…bundle them (with mods) [check][check] … make an interface with racing stripes [check]… make sexy movies with our high speed graphics [check] …yeah that should dazzle them …

      Hey, someone needs to pull together our lists of follow-on projects …

      what a racket…a sure thing. …more wasted money…

    • Somebody please tell Taylor to remember to include the emissions pathways in their workflows? Please? The front end integrated assessment models need a ton of work.

    • “Re “laying out the border between known and unknown” it is unclear how a model might do this.”

      A good sensitivity engine (e.g. adjoint analysis) could sense underlying instability in the simulation and use adaptivity and/or call a halt to the proceedings.

      Problem is, if such a newfangled climate model with sensitivity analysis told us that everything two weeks out and beyond was unknowable, then I guess it would end up being just a big waste of money. Kind of where we’re at now….

    • @mwgrant

      Having read the report, you may be right that this is nothing more than just ‘more flops’. Same old models, same players. Oh well, too bad.

      http://climatemodeling.science.energy.gov/sites/default/files/publications/acme-project-strategy-plan_0.pdf

      • nickels,

        I presently know diddly about the project and am judging at a distance. I am just cautious by virtue of past experience with the DOE. To be sure there are very good people at the labs but sometimes the visible projects are projected by management as bigger than life and eventually sap resources and attention away to the detriment of other efforts. Also once expenditures get big there is a tendency to impose application where not appropriate.

        Thank you very much for the project strategy document link. I wonder if there is more project level documentation. (Guess I’ll poke around.) Seeing that is a good sign… indeed looking over the site, perhaps my grumpiness was unwarranted. My apologies. Maybe the ante is being upped…that would be great. Good find.

      • The main difference is a focus on uncertainty quantification. It remains to be seen exactly what they are up to in this regard.

      • I really appreciate the fact that this is a planning document…as in laying out the effort, setting metrics, etc. To me this suggests a substantive change in perspective and approach. Of course it should be just the tip of the iceberg. …

      • planning document [nascent] as opposed to an inadequate after-the-fact grab bag of publications. [I think ‘Yucca Mountain style’ reviewed and signed analysis packages….afterall some of these folks have probably been there.] dream, dream…

    • Matthew R Marler

      nickels, from that link: The project initially will examine and try to answer “big science” questions in three areas that drive climate change: the water cycle, biogeochemistry and the cryosphere (areas of the earth where water exists as ice or snow).

      (Water Cycle) How do the hydrological cycle and water resources interact with the climate system on local to global scales? How will more realistic portrayals of features important to the water cycle (resolution, clouds, aerosols, snowpack, river routing, land use) affect river flow and associated freshwater supplies at the watershed scale?
      (Biogeochemistry) How do biogeochemical cycles interact with global climate change? How do carbon, nitrogen and phosphorus cycles regulate climate system feedbacks, and how sensitive are these feedbacks to model structural uncertainty?
      (Cryosphere systems) How do rapid changes in cryospheric systems, or areas of the earth where water exists as ice or snow, interact with the climate system? Could a dynamical instability in the Antarctic Ice Sheet be triggered within the next 40 years?

      Sounds good to me. They are saying 10 years. Then maybe 10 more years for lots of out-of-sample validation/testing of model results?

  75. In the link to the article that has a link to another article (‘Top 20 things politicians need to know about science’), it says, as follows:

    13. Policy and politics are not the same thing

    Policy is mostly about the design and implementation of a particular intervention. Politics is about how the decision was made. Policy is mostly determined in government, where the politics is focused by ministers, the cabinet, and the party leadership. In the House of Commons, there is less policy and more politics.

    That’s pretty insightful. We waste a lot of time on science-abetted policy when — as in global warming — we don’t first take a step back and see the politics-abetted driver of the matter: e.g., do we really want to hand over control of the economy to Eurocommies, no matter what trumped-up cockamamie rationale is being used (i.e., a hoax and scare tactics) to justify a liberal-fascist power grab?

  76. John Smith (it's my real name)

    “It is the difference between the inherently unpredictable (black swan) and noisy bifurcation (dragon-kings) that is potentially predictable in both climate and economics.” Rob Ellison from yesterday

    wow – does this mean that climate science predictions might someday be as reliable as economic predictions?

    • ‘In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. ‘ IPCC TAR 14.2.2.2

      • I wonder about the chaotic behavior of the models. They are either chaotic, and therefore beyond prediction without perfect knowledge of initial conditions, or they are not. They can’t be both. Which is it? And if they are chaotic, where are the mathematicians (not statisticians)? Why aren’t they jumping up and down shouting that the predictions, based on chaotic models, are absurd?

        Perhaps the mathematicians:

        1. Tacitly approve of the models.
        2. Approve of the implied policy recommendations.
        3. Have nothing to gain from challenging the models.
        4. Have everything to lose from speaking out.
        5. Are too busy publishing or perishing.
        6. Get paid to model.
        7… ?

        Here it is in the very words of the IPCC:
        https://www.ipcc.ch/ipccreports/tar/wg1/504.htm

        Why is this not part of subsequent reports? Is it no longer true? Who wrote that section? Where is the writer now?

        Justin

      • @JustinWonder…

        There are people better qualified to answer this than I, even hanging out here. But I’ll take a shot at it.

        A “model” is a very complex non-linear system. Not nearly as complex as the system it’s modeling (simpler by several tens of orders of magnitude, IIRC), but still very complex.

        Now, AFAIK current “chaos” theory suggests that such a system will typically follow an attractor, a closed loop that maps the determined progression from one state to another (at notionally infinitesimal time increments).

        If we consider the state of such a system at any time to be (represented by) a point in n-space (with very large n), an attractor will follow a linear path wandering through this space, eventually forming a closed loop. (Although the time required to make a circuit of such a loop is technically finite, in many cases it will be many orders of magnitude larger than the known/estimated age of the universe.)

        When a factor not considered part of the system causes it to diverge from its attractor, it will typically land on a point in n-space that’s not on the attractor. From there it will progress along a path that eventually merges with the attractor. This is called “perturbation”.

        The collection of all points/paths that end up bringing the system onto the attractor is called its “basin of attraction”. Thus the name “attractor”, because it tends to “attract” paths that can only be reached through external perturbation.

        In principle, or so it’s assumed, the extent of the “basin(s) of attraction” for the system, or at least that part of it that can be reached with only minor perturbation, can be explored by running the model for a long time with constant parameters. AFAIK there is no real justification for this assumption, except of the “it’s the best science we have” sort.

        There are potentially many different attractors, or even different parts of the usual ones, that will only be reached rarely. There’s no real way of knowing how many there are, in the models.

        A more fundamental problem is that we don’t really know whether the models offer any valid information regarding how the real system works. General Circulation Models (GCMs) typically divide the globe up into cells a few kilometers or more on a side, and the atmosphere into fewer than 100 levels. The scale of independent activity within the atmosphere is many orders of magnitude smaller: on the order of a few hundred microns, AFAIK. An assumption that the latter can be accurately modeled by the former is totally unwarranted.

        All of the above according to my best understanding, of course.

        As for why the mathematicians keep quiet… My guess would be most of them are more interested in doing math than criticizing non-math masquerading as math.
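
The sensitive dependence being described above can be demonstrated in a few lines with the classic Lorenz-63 system. This is a crude forward-Euler integration for illustration only, not a climate model: two trajectories started a hair's breadth apart end up macroscopically separated.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one forward-Euler step of the Lorenz-63 equations
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # tiny initial perturbation
for _ in range(4000):  # 40 model time units
    a = lorenz_step(a)
    b = lorenz_step(b)

sep = np.linalg.norm(a - b)
print(f"separation after 40 time units: {sep:.3f}")
```

The initial 1e-8 difference grows roughly exponentially until it saturates at the size of the attractor, which is why forecasts of the individual trajectory lose value beyond a finite horizon even though the attractor itself is perfectly deterministic.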

      • There are tipping points, like the runaway loss of Arctic sea ice that may be under way already due to its own positive feedback. Similarly Greenland’s collapse and parts of Antarctica, loss of rainforests while other areas green up, permafrost loss and methane release, changing ocean currents and circulations, etc. Most of these are chaotic in a sense, but also pushed to be more likely to happen by global warming than they would be in a stable climate. Just because these have been the same for thousands of years is no guarantee they will remain so as the climate warms to something not seen in millions of years. The report said they are unpredictable, which is true. The Arctic ice loss rate was severely underestimated by everyone, for example, and several more Antarctic ice shelves could collapse in upcoming decades. An analogy would be earthquakes or volcanoes, where gradual processes reach a tipping point; these are typical of chaotic systems that jump from one mode to another with a breakdown of a barrier under growing pressure. Ice Ages also are sensitive like this because of the strong positive albedo feedbacks of continental glaciers. Climate change will likely be anything but smooth on the century scale.

      • @Justin
        “Why aren’t they jumping up and down shouting that the predictions, based on chaotic models, are absurd? ”

        I do my part, for what it is worth.
        From what I can tell, and from my experience, mathematicians don’t really seem to be invited to participate. Part of the failing of climate science, IMO. It used to be different: NCAR used to have a thriving math group (circa 2000). Gone.

        Mathematicians and their naysaying aren’t well accepted I think.

      • ‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

        This is the accompanying sketch.

        From small differences in initial and boundary conditions solutions diverge over time.

        We have known that models are chaotic since 1963. Without a doubt. What Tim Palmer and Julia are talking about in the quote is perturbed-physics ensembles: models giving probabilistic rather than deterministic solutions, i.e., many solutions of a single model with slightly different starting points and boundary conditions.

        Model chaos is well known in the modeling world. Utterly accepted and undeniable. It leads to something called irreducible imprecision.

        ‘In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread can be explained by analysis of model differences? How much is irreducible imprecision in an AOS?

        Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’ http://www.pnas.org/content/104/21/8709.full

        We have gone precisely nowhere in realistic perturbed physics ensembles or in estimating irreducible imprecision.
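
A toy analogue of a perturbed-physics ensemble can again be built from Lorenz-63, here varying the parameter rho across members. This is purely illustrative (real perturbed-physics ensembles vary parameterizations inside full AOS models), but the member-to-member spread in long-run statistics is a miniature version of the irreducible imprecision discussed in the quoted passage.

```python
import numpy as np

def long_run_mean_z(rho, steps=20000, dt=0.005):
    # crude forward-Euler Lorenz-63 run; returns the time-mean of z,
    # a stand-in for a "climate statistic" of the model
    x, y, z = 1.0, 1.0, 1.0
    total = 0.0
    for _ in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        total += z
    return total / steps

members = [27.0, 28.0, 29.0]  # perturbed "physics": the rho parameter
means = [long_run_mean_z(r) for r in members]
spread = max(means) - min(means)
print(f"member means of z: {[round(m, 2) for m in means]}; spread = {spread:.2f}")
```

Even though each member is fully deterministic, the ensemble only yields a distribution of climate statistics, and nothing in the ensemble itself tells you whether its spread over- or under-estimates the true structural uncertainty, which is McWilliams’ point in the quoted PNAS passage.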

  77. Pingback: The length of the “pause”? | …and Then There's Physics

  78. Reblogged this on I Didn't Ask To Be a Blog and commented:
    “[T]here is now a trendless interval of 19 years duration at the end of the HadCRUT4 surface temperature series, and of 16 – 26 years in the lower troposphere…”
    Oh, OK … moving along…

  79. David Springer

    Underlying trend is not alarming. 0.06 C/decade. Simples. Loehle and Scafetta identified it in a 2010 paper. Curve fitting is not rocket science. L&S 2010 has exactly predicted the intervening 4 years since they introduced the paper, so it’s successfully tested to some degree, which is more than I can say for any other curve fitters like Lovejoy and the unpublished Pukite (@whut).

    https://judithcurry.com/2011/07/25/loehle-and-scafetta-on-climate-change-attribution/

    Put that in your curve fitting pipes and smoke it!

    • David Springer

      The pause is killing the cause and making monkeys out of SkS contributors and sycophants.

      To quote my dear departed friend emeritus professor of biology John A. Davison:

      “Hard to believe, isn’t it?”

      “I love it so!”

      “Write that down.”

      • But in a couple of months, we could see a jump up. Tisdale says the current high SST will show up in the lower troposphere in a couple of months. If it does, and if the sat-era record SST results in a sat-era record lower-troposphere temp, then why do the stairs lead upward but never downward? I’m a skeptic, but I still gotta ask the question.

      • jim2,

        If you look at longer time periods we are supposed to be warming. The RWP and MWP occurred right around the years zero and 1,000 which is kind of spooky and here we are at 2,000. Recovery from the LIA as well. Even the “cooling” of the 60’s and 70’s plus the one before that looked flat or slightly down and then there would be a more rapid rise like the 30’s and 40’s. So from a simple “have we seen this before” point of view, nothing we are seeing now seems that far out. Maybe we got an extra 0.1 degree rise not seen in the previous 30+ year warming cycle which included the 30’s.

      • And, the Pacific hot zone is pretty big; one result is radiation eventually to space. So it is also a cooling event.

      • And Bill, that supplies the alternative to CO2 that Shaun has difficulty imagining.

    • If the underlying trend was alarming, it would be too late to do anything already. This is why this is a critical point for decision-making.

      • David Springer

        The underlying trend is not accelerating so if it isn’t alarming now it won’t be alarming in the future. Forcing from additional CO2 is non-linear, specifically a case of diminishing returns i.e. less forcing per additional ppm of CO2 in the atmosphere. Peak oil ostensibly means the trend should decelerate because we can no longer sustain an increasing amount of aCO2 emission per year.

  80. David Springer

    The ocean ate my global warming. Second law of thermodynamics then put it into Davy Jones’ Locker where it will take a thousand years to slowly, insignificantly reemerge.

    Boo frickin’ hoo. Get over it, alarmist boys. Start looking for a new line of work.

  81. David Springer

  82. Off topic, but flagged to Judith as a possible head-post:

    The refreshing take of US legal scholar and literary theorist Professor Stanley Fish on academic freedom is highly pertinent to many discussions at CE. Fish defines five schools of thought on academic freedom:

    (1)— The “It’s just a job” school. This school (which may have only one member and you’re reading him now) rests on a deflationary view of higher education. Rather than being a vocation or holy calling, higher education is a service that offers knowledge and skills to students who wish to receive them.

    (2)— The “For the common good” school.

    (3)— The “Academic exceptionalism or uncommon beings” school.

    (4)— The “Academic freedom as critique” school.

    (5)— The “Academic freedom as revolution” school.

    http://resources.news.com.au/files/2014/09/02/1227045/253700-aus-file-stanley-fish.pdf

    • I’ve been collecting material on academic freedom; the Fish article is a good one. Maybe I will get to it next week. Too much is happening as of late; planned posts are getting pushed back, unfortunately.

    • Faustino

      Your school of thought has at least two members. :)

      I have had a lot to do with academics over the years, and many (not all) of them are unbelievably precious petals. They seem to think that, unlike everyone else in the workforce who is being paid by someone else, they are a protected species who can do whatever they want.

      Their notion of an independent “community of scholars” dates back to when the government did not pay for their services – students and private benefactors did, and they did it directly (not via the university bureaucracy).

      If academics want to pursue their own interests without interference, there is nothing stopping them from finding enough students and/or private benefactors who are willing to pay for it.

      • Johanna, that was actually Fish’s remark – the “It’s just a job” para is a direct quote – he sees himself as being in a minority of one. I was fortunate to know some first-rate academics at LSE 1961-64, at Uni of Essex 1966-67 and when I worked in Canberra 1985-91. I’ve come across some top-class academics from elsewhere as well, mainly American. I’ve found few of much quality in Queensland since 1991, however, Tony Makin at Griffith being an honourable exception. I think that one problem has been the huge expansion of the field; it would be impossible to maintain the high level I found at LSE in the sixties. And, yes, there are many “precious petals” in the field. I recall, for example, a pompous fourth-rate English academic from a third-rate university that I met in 1980; they’ve been around for a while.

      • Thanks for the correction. As someone once said, read more closely!

        I have also met some first-rate, and unpretentious, academics. But, as you point out, the talent pool is so diluted nowadays that they are a decreasing minority.

        So, which number best represents your views?

  83. Leave it to academics to expend more effort on determining the “length of the pause,” based solely on statistics, than on comprehending the reason for it in terms of physical system analysis.

  84. I know I shouldn’t ask it but ‘how long is a piece of string?’

  85. Speaking of pauses, is it my imagination or has El Nino taken the longest pause between cycles since 1950?

    • fireman501

      The Christmas Child has taken a siesta, indicated early on by the persistence of strong easterlies at Tahiti and Darwin, which make up part of the Southern Oscillation Index, i.e., the atmospheric component. The hot water in the Western Equatorial Pacific, the oceanic component of El Nino, was all the rage in February. Great expectations for a lollapalooza of an El Nino, comparing favorably with the 1998/2000 El Nino. Another step change in global temperatures was anticipated. How some ever, things don’t always go according to plan, especially for those who know that climate catastrophe is just around the corner. Although the water was warm, the winds from the East persisted; hence, no El Nino.

      What we have from the warmist side of the public relations scientists is a “Just you wait, Henry Higgins” scenario. El Nino will come, although just not as ferocious as predicted. Western Equatorial Pacific hot water is just not enough, so with some breath-holding, crossing one’s fingers, and hoping against all hope, the Christmas Child will put in an appearance, just not necessarily like the one proposed, anticipated, and rooted for in February 2014; just maybe something. Anything! Otherwise, the Christmas Child may be mistaken for the New Year’s babe in diapers ushering in the New Year and kicking out Father Time.

  86. 300 years from now this may be a blip on a graph depicting a pause before an abrupt climate change known as the Little Ice Age 2.

  87. It should be interesting to see what they come up with.

    Biggest hack against climate change ever mobilized – 3,000 developers in 40+ cities
    12-14 Sep 2014 – Global
    Geeklist #hack4good

    Jack Smith

    • From the article:

      BLOWBACK

      They’ve finally done it.

      Chemtrail proponents have pushed too far.

      A dozen years after this reporter first broke the chemtrails story for Environment News Service, corporations and their hired scientists are coming out of the chemtrails closet to urge “geoengineering” Earth’s wildly skidding climate with an all-out assault of toxic, sunlight-reflecting chemicals continuously spread throughout the upper atmosphere.

      Despite warnings from the British government and America’s top military leaders that worldwide demands for oil will far outstrip supplies by 2015, climate change profiteers hope to capitalize on the failed Copenhagen climate change talks by substituting planet-risking techno band-aids for a worldwide shift to renewable energy.

      But this time, the same “quick-fix” mindset that brought us DDT, ozone-eating CFCs, global nuclear fallout, a deluge of DNA-altering chemicals, personal microwave radiation devices, deep ocean oil well blow-outs, chemtrail spraying, biological warfare, depleted uranium weapons and a seemingly endless “catastrocopia” of similarly innovative disasters may be stopped in time.

      http://goldenageofgaia.com/accountability/weather-warfare/calls-for-massive-chemtrail-spraying-spark-worldwide-mobilization/


      • You don’t need chemtrails; you have NASA. A modified jet engine and a good water supply.

        I put it in the same category as sequestering liquid CO2 in the ground.

  88. Anyone else have a look at the ‘quadrillions’ paper? Essentially a narrative of change with a ‘pedagogic’ model based on something like this..

    http://www.ncdc.noaa.gov/paleo/ctl/about1.html

    ‘Using modern data we show that a more realistic picture is the exact opposite: the quasiperiodic processes are small background perturbations to spectrally continuous wide range scaling foreground processes.’

    In reality there are no quasiperiodic processes and the foreground processes are incremental changes in the system that push it past a threshold at which stage the components start to interact chaotically in multiple and changing negative and positive feedbacks – as tremendous energies cascade through powerful subsystems. Some of these changes have a regularity within broad limits and the planet responds with a broad regularity in changes of ice, cloud, Atlantic thermohaline circulation and ocean and atmospheric circulation.

    Outside of the transforming events that change the nature of the system – the opening of Drake Passage or the closing of the Isthmus of Panama, for instance – climate is internally realised in shifts that occur on a timescale of 2 to 4 decades, with greater or lesser changes as climate abruptly shifts to quasi-equilibrium states – attractor basins – between Quaternary glacial and interglacial limits.

    The US National Academy of Sciences (NAS) defined abrupt climate change as a new climate paradigm as long ago as 2002. A paradigm in the scientific sense is a theory that explains observations. A new science paradigm is one that better explains data – in this case climate data – than the old theory. The new theory says that climate change occurs as discrete jumps in the system. Climate is a kaleidoscope – shake it up and a new pattern emerges.

    • Rob

      Like the shifts we can observe in figures 3 and 4 of my article here?

      https://judithcurry.com/2013/06/26/noticeable-climate-change/

      tonyb

      • tonyb

        Thanks for reminding us of that excellent article. Actual long records of historical data, such as CET, provide a perspective. Do you see temperatures over the recent period of actual measurements being adjusted significantly? The explanations from Mosher and ??? were enlightening, but Steven Goddard publishing actual vs adjusted data, and now the critical discussions on JoNova about Australian climate data being heavily modified without clear, traceable adjustments, seem to put all but independent temperature records at risk. What do you think?
        Scott

      • Scott

        I am not a conspiracy theorist, so I do not believe the Met Office ‘adjusts’ temperatures in the sense of deliberately trying to change the weather stats to follow the AGW line.

        However, at the end of last year I had a meeting with David Parker of the Met Office, who was responsible for creating the official Hadley CET 1772 database.

        In recent years they recognised that the stations used to collect CET data were likely running too warm. David wrote a paper on this, which was accepted, and they subsequently substituted slightly cooler stations.

        If you look at CET there is a distinct hump, then a sharp fall, over the last decade or so. I suspect the hump is exaggerated and likely the sudden fall is as well. So temperatures probably rose fairly gently to slightly below the official level stated, and have subsequently dropped slightly less than the data suggests.

        In that respect ‘adjusting’ the data may or may not be the right thing to do, but if they want to change what has already gone, David will likely write a paper first to gauge reactions.

        tonyb

  89. I shouldn’t worry about the pause, as GISS will gradually adjust it out, as they did with the 1945-1975 cooling that became a mere plateau. The only reason they haven’t done so yet is that skeptics control the UAH satellite dataset. The other satellite team, at RSS, has previously demonstrated great willingness to adjust its data to support what the models predict, so all that is needed is to foment another classic climate putsch at UAH, as per the sacking or defunding of dissenting climatologists and professors thus far.

  90. Re Steven Mosher http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf
    Thanks Steven for a good laugh. I remember you commenting on marking freshman papers.
    Apart from the fact that you would cite a conflict of interest to excuse yourself from marking a paper when such cases arose, which struck me as terribly good manners, you also displayed a keen insight into how such papers should be marked.
    I guess you should have stuck to your own advice and disassociated yourself from commenting on this paper.
    If you had to mark it, without nepotism, it would have got a C minus, its only redeeming feature being the very imaginative, feverish minds working on it. That is as an English paper, of course.
    As a science paper it sucks, particularly with its truncating of the outliers on one side only to “prove” a theory, then backing that up with the juvenile non-science of saying we should use more distant stations because they are more reliable than closer ones, a claim you had so eloquently dissed in the past.
    Sic transit gloria mundi.

  91. Blimey…!

    Does anyone know what the score is in the cricket?

  92. Pingback: Recipe for a hiatus | Musings on Quantitative Palaeoecology

  93. How long is the pause?

    Tracking the Atlantic Multidecadal Oscillation through the last 8,000 years

    Understanding the internal ocean variability and its influence on climate is imperative for society. A key aspect concerns the enigmatic Atlantic Multidecadal Oscillation (AMO), a feature defined by a 60- to 90-year variability in North Atlantic sea-surface temperatures. The nature and origin of the AMO is uncertain, and it remains unknown whether it represents a persistent periodic driver in the climate system, or merely a transient feature. Here, we show that distinct, ~55- to 70-year oscillations characterized the North Atlantic ocean-atmosphere variability over the past 8,000 years. We test and reject the hypothesis that this climate oscillation was directly forced by periodic changes in solar activity. We therefore conjecture that a quasi-persistent ~55- to 70-year AMO, linked to internal ocean-atmosphere variability, existed during large parts of the Holocene. Our analyses further suggest that the coupling from the AMO to regional climate conditions was modulated by orbitally induced shifts in large-scale ocean-atmosphere circulation.

    http://www.nature.com/ncomms/journal/v2/n2/full/ncomms1186.html

    How long is the pause?

    Answer=>30 to 45 years

  94. G’day dung beetles
    The ”pause” is just as big a con as the rest! There is no such thing as a ”pause” – the whole GLOBAL warming is a con / non-existent.

    Instead of the Warmists admitting that there is no global warming, they coined / invented the con term ”pause”.
    On the other hand the ”Skeptics”, instead of demanding that the Warmists admit there is NO global warming, embraced the ”pause”; otherwise they would have to realize what kind of fools Ian Plimer and Anthony Watts have made of them the whole time. Truth: THAT KIND OF ”PAUSE” will last for the next 4 billion years, not for another 10 years!

    The earth’s ”Self Adjusting Temperature Mechanism” is perfect:
    THE WHOLE PLANET NEVER GETS WARMER / OR COLDER FOR MORE THAN A FEW MINUTES

  95. Here is a rebuttal to McKitrick’s paper; comments would be appreciated:
    http://quantpalaeo.wordpress.com/2014/09/03/recipe-for-a-hiatus/

  96. Geoff Sherrington

    Comment on rebuttal.
    McKitrick wrote a dry statistical paper whose purpose he stated: “Hence there is a need to address two questions: 1) how should the duration of the hiatus be measured? 2) Is it long enough to indicate a potential inconsistency between observations and models? This paper focuses solely on the first question.” He then gave the mathematics for a calculation he considered appropriate.
    The person doing the rebutting chose another mathematical analysis and applied a model. The value of the model depends on the accuracy and applicability of its parts. For example, it uses data back to 1970, and a number of iterations set subjectively by the author. Any errors in the extra data and methods the model requires reduce its value.
    The rebutting author also introduced motive and references to climate change memes, whereas McKitrick confined himself to a description of the data sources and drew no social lessons. McKitrick explicitly set models (his question 2) outside the paper’s scope, so the rebuttal is not about the same subject material. A straw man tactic.
    I have to believe Ross did fully OK. Again. We could sure use his early T3 proposal for a tax to combat global warming, calculated as a proportion of measured warming. No tax for 9, er 16, er 18 years??
    (Sarc on.) Let’s work it out with models. (Sarc off.)

  97. Scientists in Australia are almost 100% sure humans are causing temperature rises, and that ‘denier’ claims of a pause are unfounded.

    CSIRO almost 100% sure humans are causing temperatures to rise => http://www.smh.com.au/environment/climate-change/csiro-almost-100-sure-humans-causing-temperatures-to-rise-20140904-10c7y4.html

  98. See update from McKitrick in main post

  99. WebHubTelescope

    Another paper that uses a CSALT-like model to verify aCO2 as the culprit for current global warming.

    http://www.sciencedirect.com/science/article/pii/S2212096314000163


    To construct the statistical model we use GHG concentration, solar radiation, volcanic activity and the El Niño Southern Oscillation cycle as these are key drivers of global temperature variance

    No Stadium Wave component like CSALT uses but that is OK.

  100. Further on Richard Telford’s post, as far as I can tell he doesn’t dispute the duration estimates I get, given my definition. He just argues that it’s no big deal because in a linear model with a known trend and AR2 errors you get a pause that long or longer 10% of the time. It’s hardly a “recipe” if it fails 90% of the time. I hope he responds at some point confirming he checked that all the conditions of the pause definition hold, and that he does the same calculation for the UAH and RSS series, and that if he doesn’t agree with my definition or how I implement it, he will provide us with his own.
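[Editor’s note: the Monte Carlo argument being debated above can be sketched in a few lines. The snippet below simulates a linear trend plus AR(2) noise and counts how often the last 15 points show no statistically significant OLS trend. The trend rate, AR coefficients, noise level, and the naive |t| < 2 test are all illustrative assumptions, not the parameters used by either McKitrick or the rebuttal.]

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar2(n, phi1, phi2, sigma, burn=200):
    """Generate a stationary AR(2) noise series of length n."""
    e = np.zeros(n + burn)
    eps = rng.normal(0, sigma, n + burn)
    for t in range(2, n + burn):
        e[t] = phi1 * e[t - 1] + phi2 * e[t - 2] + eps[t]
    return e[burn:]

def pause_in_tail(y, k):
    """True if the OLS trend over the last k points is insignificant (|t| < 2)."""
    t = np.arange(k)
    yk = y[-k:]
    X = np.column_stack([np.ones(k), t])
    beta, *_ = np.linalg.lstsq(X, yk, rcond=None)
    resid = yk - X @ beta
    s2 = resid @ resid / (k - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return abs(beta[1] / se) < 2.0

# Illustrative parameters (assumptions, not either paper's values).
trend = 0.017            # deg C per year
n_years, k, runs = 45, 15, 2000
hits = 0
for _ in range(runs):
    y = trend * np.arange(n_years) + simulate_ar2(n_years, 0.8, -0.1, 0.1)
    hits += pause_in_tail(y, k)
print(f"fraction of runs with a {k}-year 'pause': {hits / runs:.2f}")
```

The interesting quantity is exactly the one in dispute: how often a known warming trend plus autocorrelated noise produces a trendless tail of a given length.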

  101. There seems to be a trend, particularly in climate science, toward using high-powered statistical techniques. The problem arises with the application of a dangerous meme whereby it is asserted that a critic must fully understand your exact statistical approach, otherwise they have no right to critique your findings in any way.

    I am not accusing McKitrick of setting out deliberately to do this here; however, this is something I’ve been pondering for a while, and the nature of McKitrick’s response brought the topic to mind. All sides seem to use this dangerous meme. Mann’s use of PCA is an example. His hockey stick resisted challenge for so long in part because few could fathom Mann’s unusual (indeed novel) statistical methods, and by application of this meme he asserted that those who could not do so were not qualified to challenge his results. Although his hockey stick was not visible to those analysing the data with other techniques, these were shrugged off as invalid criticisms since they did not use Mann’s method.

    I would like to suggest that this meme is dangerous and wrong, and that its application damages the progress of science. There is a vast multitude of different statistical approaches that could be used to analyse a given set of data, some of them fairly esoteric, and busy statisticians, publishing or perishing, continue to generate more. The thing that must be realised about sophisticated statistics is that using a sophisticated statistical method does not invalidate the application of other valid, and possibly simpler, techniques. Different statistical methods coexist and should never be in conflict. A scientific critique based on another statistical approach cannot therefore be rejected on that basis alone.

    Scientists in other disciplines should not have to be experts in every novel statistical method in order to critique one another’s work.
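[Editor’s note: the coexistence point can be made concrete with the very technique at issue in this thread. The numpy-only sketch below (not any author’s code; all parameters are assumptions) fits the same OLS trend through AR(1) noise twice over, once with the naive iid standard error and once with a Bartlett-weighted Newey-West HAC standard error. Both are legitimate answers; they simply rest on different error assumptions.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated "temperature" series: linear trend plus AR(1) noise
# (phi, sigma, and the trend are illustrative assumptions).
n, phi, sigma = 400, 0.9, 0.1
eps = rng.normal(0, sigma, n + 200)
e = np.zeros(n + 200)
for t in range(1, n + 200):
    e[t] = phi * e[t - 1] + eps[t]
y = 0.002 * np.arange(n) + e[200:]

# OLS trend fit.
X = np.column_stack([np.ones(n), np.arange(n)])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
u = y - X @ beta

# Naive (iid) standard error of the slope.
s2 = u @ u / (n - 2)
se_naive = np.sqrt(s2 * XtX_inv[1, 1])

# Newey-West HAC standard error with Bartlett weights.
L = int(4 * (n / 100) ** (2 / 9))      # common rule-of-thumb lag length
scores = X * u[:, None]
S = scores.T @ scores
for j in range(1, L + 1):
    w = 1 - j / (L + 1)
    G = scores[j:].T @ scores[:-j]
    S += w * (G + G.T)
V = XtX_inv @ S @ XtX_inv
se_hac = np.sqrt(V[1, 1])
print(f"naive SE: {se_naive:.5f}, HAC SE: {se_hac:.5f}")
```

With positively autocorrelated errors the HAC standard error comes out larger, so a trend that looks significant under the naive test can be insignificant under the robust one; neither calculation invalidates the other, they answer under different assumptions.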

  102. Pingback: Weekly Climate and Energy News Roundup #148 | Watts Up With That?

  103. Dr Curry says, “The big issue with length of the pause is comparison with climate model predictions;…”

    My understanding is that GCMs do not predict but merely explore scenarios.

    Dr. Lennart Bengtsson and co-authors submitted a paper to Environmental Research Letters that compared the projections of models with the climate as actually observed. More specifically, the paper was concerned with inferences about climate sensitivity from observations and climate sensitivity as estimated in IPCC reports. The reviewer who rejected Bengtsson’s paper did so because the comparison between the IPCC (model) estimates and inferences drawn from observations is not relevant to the discussion about climate change.

    The distinguished scientists who supported the rejection of Dr Bengtsson’s paper know what skill the models have and do not have. They know that the models are not fit for the purpose of predicting anything, but are merely research tools used to explore the ideas of the modelers and their clients.

    This is obvious from the fact that for over a decade the models have not converged but have diverged.

    The estimate of climate sensitivity to CO2 is greater now than it has been for a decade. But four papers by NASA scientists and their colleagues from other institutions illuminate the reasons for uncertainty.

    Stephens et al. stated,

    “The net energy balance is the sum of individual fluxes. The current uncertainty in this net surface energy balance is large, and amounts to approximately 17 Wm–2. This uncertainty is an order of magnitude larger than the changes to the net surface fluxes associated with increasing greenhouse gases in the atmosphere (Fig. 2b). The uncertainty is also approximately an order of magnitude larger than the current estimates of the net surface energy imbalance of 0.6 ±0.4 Wm–2 inferred from the rise in OHC. The uncertainty in the TOA net energy fluxes, although smaller, is also much larger than the imbalance inferred from OHC.”
    [Emphasis added.]

    Observations of radiative flux are the observations that count, not measurements of surface temperature that are influenced by oceanic oscillations. These oceanic oscillations matter because the heat capacity of the oceans is more than two orders of magnitude greater than that of the atmosphere.

    Yet estimates of radiative flux based on the best instruments available have uncertainties (error bars) that include positive, zero and negative net radiative flux.

    The uncertainties in observed radiative flux tell us that the net flux surplus or deficit is so close to zero that interannual variations in total TSI, its spectral components, or internal variability in the climate system could produce either a pause or a change in the direction of the trend toward net warming or cooling.
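[Editor’s note: the order-of-magnitude claim can be checked with trivial arithmetic; the numbers below are exactly those quoted from Stephens et al. (2012).]

```python
# Order-of-magnitude check on the figures quoted from Stephens et al. (2012).
surface_uncertainty = 17.0    # W/m^2, uncertainty in net surface energy balance
ohc_imbalance = 0.6           # W/m^2, net imbalance inferred from ocean heat content
ohc_imbalance_err = 0.4       # W/m^2, its stated uncertainty

ratio = surface_uncertainty / ohc_imbalance
print(f"uncertainty / inferred imbalance = {ratio:.0f}x")  # more than an order of magnitude

# The stated error bars on the OHC-inferred imbalance (0.6 +/- 0.4) keep it
# positive, but a 17 W/m^2 surface-flux uncertainty spans positive, zero,
# and negative net flux, which is the commenter's point.
print(ohc_imbalance - ohc_imbalance_err > 0)
```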

    Stephens, G. L., et al. (2012), An update on Earth’s energy balance in light of the latest global observations. Nature Geoscience, 5, October 2012.

    Loeb, N. G., et al. (2012), Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. Nature Geoscience, 5, February 2012.

    Loeb, N. G., et al. (2009), Toward optimal closure of the Earth’s top-of-atmosphere radiation budget based on satellite observations. J. Climate, 22, 748.

    Mlynczak, P. E., G. L. Smith, and P. W. Stackhouse Jr., Interannual variations of surface radiation budget. 22nd Conference on Climate Variability and Change.

    References here: http://geoscienceenvironment.wordpress.com/2014/09/04/the-emperors-of-climate-alarmism-wear-no-clothes/

  104. Pingback: Hielo marino del Ártico, 2014 | dignidadyresponsabilidad.com