Tamino’s adjusted temperature records and the TCR

by Frank Bosse

Separating out the impacts of internal variability on evaluations of TCR.

In a blogpost from May 2016 I presented some simple investigations of the observed Transient Climate Response (TCR). The starting point was the Cowtan/Way (C/W) temperature record and the forcings due to greenhouse gases (GHG), land use and so on, as described in IPCC AR5. The result was a TCR for this record very close to the TCR that Nicholas Lewis determined from the HadCRUT4 record.

A few days ago Tamino (aka Grant Foster) released a blogpost with all the data (thank you for this, Tamino) in which he introduced a “sophisticated adjustment” to eliminate the influences of ENSO, solar (TSI) changes and volcanic activity on the temperatures in many records from 1951 to the present (8/2016). Several criticisms can be made of this procedure, e.g. ENSO could be part of the signal rather than noise to be eliminated; nevertheless I followed Tamino’s method. I was interested in using the records of global mean surface temperature (GMST) to recalculate the TCR as observed from 1951 to 2015 with annual data.

First, I made a figure of the adjusted temperature time series:

Fig. 1: The adjusted time series of the records GISS, HadCRUT4 (CRU), C/W and Berkeley. The difference between GISS and the average of the other three is shown in black.

GISS shows more warming in the adjusted record. The source of this discrepancy seems to be an additional positive trend from 1970 to 1988; after that year there is a more or less constant offset. The other three series are very close to each other. Only after 2005 does the CRU series show less warming (sea ice: land/water?).

In a second step I performed a linear regression of the forcings versus Tamino’s adjusted surface temperature data. This method does not match the formal definition of the TCR, but it has also been used in peer-reviewed papers (see http://onlinelibrary.wiley.com/doi/10.1029/2008JD010405/full), so it should provide a good approximation. I excluded the solar and volcanic forcings because Tamino also removed their influence on the temperatures. For C/W (as also shown in the earlier blogpost) the plot looks like this:

Fig. 2: Linear regression of the AR5 forcing data versus the temperature anomaly of the C/W series. The trend slope gives the warming per unit forcing: 1 W/m² produces 0.36 K of warming.

For comparison, the “unadjusted” record as used in the May blogpost:

Fig. 3: The regression for the unadjusted C/W series from 1940–2015. The result is almost the same; the difference is only one hundredth of a degree.

The only prominent difference is the larger R²: in the adjusted time series of Fig. 2 the forcings account for 86% of the variance of the GMST, whereas in the unadjusted series it’s only 78%. This should come as no surprise: the volcanic, solar and ENSO effects have no influence on the adjusted series; the adjustments work fine.
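A minimal sketch of this regression (the series below are synthetic placeholders with roughly the right magnitudes; the real inputs are the AR5 annual forcings and Tamino’s adjusted C/W record):

```python
import numpy as np

# Synthetic placeholder series for 1951-2015; the real inputs are the AR5
# annual forcings (W/m^2) and the adjusted C/W temperature anomalies (K).
rng = np.random.default_rng(0)
years = np.arange(1951, 2016)
forcing = 0.03 * (years - 1951) + rng.normal(0.0, 0.05, years.size)  # W/m^2
temp = 0.36 * forcing + rng.normal(0.0, 0.05, years.size)            # K

# Ordinary least squares fit: temp ~ slope * forcing + intercept
slope, intercept = np.polyfit(forcing, temp, 1)

# R^2: the share of GMST variance explained by the forcing
resid = temp - (slope * forcing + intercept)
r2 = 1.0 - resid.var() / temp.var()
print(f"slope = {slope:.2f} K per (W/m^2), R^2 = {r2:.2f}")
```

With the actual series, the slope corresponds to the 0.36 K per W/m² of Fig. 2 and the R² to the 86% quoted above.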

Now let’s also take a look at the residuals of the annual values relative to the linear trend line of Fig. 2:

Fig. 4: The residuals of the adjusted C/W series, i.e. the temperature variability not explained by the evolution of the forcing, with a 15-year Loess smooth.

The same procedure in the May blogpost gave this:


Fig. 5: The residuals for the unadjusted C/W series. The big (but short) spikes from volcanoes and ENSO (see 1992/1993 and 1997/1998) are not visible in Fig. 4; however, the long-term pattern is not much affected.

The smoothed time series shows the same picture: even after removing the volcanic and ENSO events, the internal variability remains almost the same. In the May blogpost I compared this pattern with the AMO (see Fig. 5 there), and the similarity seems to stand. The uptick between 1990 and 2005 is also clearly visible in Tamino’s ENSO-adjusted series. This low-frequency internal variability therefore has nothing to do with ENSO; it seems to be a result of the AMO.
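The smoothing step can be sketched as follows; a centered 15-year running mean stands in for the Loess smoother here, and the residual series is a synthetic stand-in with a multidecadal oscillation of roughly 0.2 K peak-to-trough:

```python
import numpy as np

def running_mean(x, window=15):
    # Centered running mean; the window shrinks near the ends of the series.
    half = window // 2
    out = np.empty(len(x))
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out[i] = x[lo:hi].mean()
    return out

# Synthetic stand-in for the regression residuals of Fig. 4:
# a ~60-year oscillation of 0.2 K peak-to-trough plus year-to-year noise.
years = np.arange(1951, 2016)
rng = np.random.default_rng(1)
residuals = 0.1 * np.sin(2 * np.pi * (years - 1951) / 60.0) \
            + rng.normal(0.0, 0.05, years.size)

smooth = running_mean(residuals)
print(f"raw spread: {residuals.max() - residuals.min():.2f} K")
print(f"smoothed spread: {smooth.max() - smooth.min():.2f} K")
```

The smoothing suppresses the year-to-year noise but leaves the multidecadal pattern, which is why a swing of about 0.2 K survives in the smoothed residuals.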

In a third step I compared the records mentioned in Fig. 1. First, let’s have a look at the smoothed residuals:

Fig. 6: The smoothed (15-year Loess) residuals of the linear regression of the GMST anomalies of the adjusted records versus the forcings.

The pattern is very stable across all records. The exception is GISS: during 1970–1995 the internal variability is damped. A possible reason could be the ERSSTv4 adjustment over this period due to the change of measurement methods (Karl et al. 2015); see also Fig. 1.

Finally I calculated the trend slopes of the four records (as adjusted by Tamino) and the resulting TCR using the basic definition: a doubling of the CO2 content of the air generates a forcing of 3.71 W/m².

The average TCR of the four records is 1.39 K per doubling of CO2. The average is heavily influenced by GISS, which shows some remarkable behavior in the residuals. The median of 1.35 K, which is less influenced by any one series, is probably a better measure. The R² of the linear trends of GMST versus forcings for HadCRUT4, C/W, Berkeley and GISS is 0.86, 0.86, 0.88 and 0.95 respectively. The higher the R² of the trend line, the lower the internal GMST variability of the record.
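The conversion from regression slope to TCR is just a multiplication by the forcing of a CO2 doubling; for the C/W slope of Fig. 2:

```python
# TCR estimate = warming per unit forcing x forcing of a CO2 doubling.
F_2XCO2 = 3.71    # W/m^2 for a doubling of CO2 (the AR5 value used in the text)
slope_cw = 0.36   # K per (W/m^2), the regression slope of the adjusted C/W series

tcr_cw = slope_cw * F_2XCO2
print(f"TCR (C/W) = {tcr_cw:.2f} K per doubling")  # 1.34 K, close to the 1.35 K median
```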

Conclusions:

  • The estimated TCR of ~1.35 (see Nicholas Lewis) is confirmed by the adjusted temperatures of Tamino’s recent blogpost. He himself stresses the physical significance of his statistical operation through the evaluation of his model.
  • In contrast to Tamino’s statement that “there is a steady warming since 1976” with almost no variability, there is a decadal up-and-down in the adjusted time series, very similar to the AMO pattern, with an amplitude of roughly 0.2 K.
  • The TCR estimate from observations of ~1.35 is supported by at least three independent records (CRU, C/W and Berkeley) with a spread of only around 6%. The reason for the upward divergence of the GISS series, associated with suppressed internal variability, can only be guessed at; a closer investigation of this divergence is beyond the scope of this blogpost.

Moderation note:  As with all guest posts, please keep your comments civil and relevant.


215 responses to “Tamino’s adjusted temperature records and the TCR”

  1. Pingback: Tamino’s adjusted temperature records and the TCR – Enjeux énergies et environnement

  2. Frank Bosse,

    The estimated TCR of ~ 1.35 (see Nicholas Lewis) is confirmed by the adjusted temperatures of the recent blogpost by Tamino.

    That’s the take-home message for me.

    • Only if you ignore negative feedbacks and natural variability. TCR may be zero.

      • TCR can’t be 0 without rewriting physics.

      • “TCR can’t be 0 without rewriting physics.”

        AGW = It’s All Gone Wrong!

      • Rewriting physics. Here you go: ysicphs

      • Steven Mosher

        It is so funny, actually hilarious, that the skeptics who bemoan the state of science think, actually think, that their blog comments amount to anything of substance.

        TCR is zero
        No its not
        yes it is.
        lets do an Issue tree, thats some science right there!!!!
        No, lets turn this into a echo chamber like WUWT! go team
        Let the sun nuts play ! that will show everyone we are serious about science
        No lets post some dreck and let the crowd throw tomatoes.. That is SUCH an improvement on peer review and pal review. big improvement..
        Retract a post that’s wrong? NAAAAA.. lets criticize journals for not retracting Gergis, we hold ourselves to lower standards!
        Look a Typo!
        its a fraud I say, an outrage

        I want my damage function and I want it now.

      • Steven Mosher

        Jim.

        You fundamentally misrepresented his paper. Read it. If you cant see your error. Read it again.
        Read what you wrote and you will find this.

        There is ZERO support in his text for what you wrote.

        1. You made the claim his paper contained certain citeable findings.
        2. You never established your claim.
        3. your claim is false.
        4. Since YOU made the first claim, the ball is in your court. Not mine.

        Nice attempt at shifting the burden.

        Its like this Jim

        If I claim: “Jones said X”
        and you say “No he didnt”
        Then it is YOUR JOB to support the claim that started the debate.

        When you apologize to the author, I’m sure they will contact me.
        Until then, miss me with your burden shifting games.

      • “TCR can’t be 0 without rewriting physics.”

        A typical Mosh’ smart reply which totally misses the point.

        It can be 0.1 without rewriting physics. Even if it’s half what the alarmists claim it’s not a problem ( which is quite possible with the data we have outside broken models ).

        BTW who is this “Jim”, I don’t see any comments under that name.

      • Mosh: “TCR can’t be 0 without rewriting physics.”

        Which bits and how ?

      • TCR can’t be 0 without rewriting physics.

        Yes it can.

        Anybody who thinks it can’t doesn’t understand either physics or the limited utility of “averages”.

    • 1.35C is a good take-home message.

      The CO2 level is going to top out around 460 PPM.

      At the current 400 PPM:
      1.35*ln (400/280)/ln(2) = 0.6947°C

      At 460 PPM:
      1.35*ln (460/280)/ln(2) = 0.9669°C

      Since the 1.5°C and 2.0°C targets have been met, we are done here, it is Miller time.
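The arithmetic in this comment follows the standard logarithmic CO2 forcing relation; a quick check (the helper function is ours, not from the post):

```python
import math

def transient_warming(tcr, c_now, c_ref=280.0):
    """Warming implied by a TCR (K per doubling) for a CO2 rise from
    c_ref to c_now ppm, via the logarithmic forcing relation."""
    return tcr * math.log(c_now / c_ref) / math.log(2.0)

print(round(transient_warming(1.35, 400.0), 4))  # 0.6947
print(round(transient_warming(1.35, 460.0), 4))  # 0.9669
```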

      • PA, i’m really glad for your comment, especially the math here… So, we have a TCR at this point of .7C (by that i mean what we should see by now, at almost half way to a doubling of CO2) and we’ve only seen a rise of .8C as of the so called “pause”. That means nearly the entirety of the rise in temps since the LIA is due to CO2. Is that feasible?

      • Steven Mosher

        So quickly skeptics settle their own science.

        No uncertainty monsters.. move along..

        Talk about the debasement of Science.

        go team Skeptic

      • afonzarelli | October 26, 2016 at 2:13 pm |
        PA, i’m really glad for your comment, especially the math here… So, we have a TCR at this point of .7C (by that i mean what we should see by now, at almost half way to a doubling of CO2) and we’ve only seen a rise of .8C as of the so called “pause”. That means nearly the entirety of the rise in temps since the LIA is due to CO2. Is that feasible?

        Well… no.

        I’m just playing the numbers game with the numbers given. The 1.35 is a worst case.

        I’m a believer that much of the warming is ALW (Anthropogenic Local Warming), NGW (Natural Global Warming) and, CGAGW (Computer Generated Anthropogenic Local Warming), with some station aging and other artifacts thrown in for good measure. GHG/AGW might cover 1/3 to 1/2 of the warming.

        The 11 year UCB forcing study from 2000 to 2010 found the forcing was 0.2 W/m2 for a 22 PPM rise in CO2. The TCR is based on a 20 year period so their 0.2 W/m2 is almost the TCR. Computing the warming from the UCB result:
        [0.2 * ln(2)/ln (392/370)]/3.7 = 0.6486 or 0.65°C for 2x CO2.

        The real value is somewhere between 0.65 and 1.35. The 1.35 includes some ALW, NGW, and station aging induced warming and all of CGAGW.

        GISS currently has 0.26°C of “virtual” warming (CGAGW). This explains the difference between GISS and the other temperature indexes.

      • PA, thanx for the answer to my question which was a “rhetorical” question that i wasn’t really expecting an answer to… i think the biggest clue would be the difference between temps at the pause verses (or minus) the temps at the commencement of the pause circa 1940. Data sets vary, so what little agw warming there is isn’t known (even if we assume that is telling us something). BTW, i prefer heineken to miller, but if it has to be miller make it a lite! ☺

      • “The trend repeats”
        The graph repeats, endlessly, and no-one bothers to update it. The data ends in 2009.

      • Come on, Stokes, ain’t this gutter sniping way beneath you? (you’re a classy dude and you don’t need to be doing this) What are you trying to say here? That there is no pause? That the data leading up to the el nino wouldn’t be showing the same thing as this graph? Your complaint is a baseless one. The recent hiatus in warming has confirmed for the sixth time the natural 30 year warming/cooling cycle that dominates the temperature record. Stop yer nitpicking and acknowledge the obvious (cause yer way too good!)…

      • The trend of Hadcrut4 from Jan 1975 to now is 0.18 °C/decade. That’s a period of 41 years, total rise (of trend) 0.75°C. The previous arrows you show rose by 0.32 and 0.45°C respectively. That isn’t repeating.

      • Here is a version with data to Aug 2016:

      • O.K., fine… So you’re saying there never was a pause then. And saying that based on the el nino spike. Basing your entire point (“that isn’t repeating”) on 8 months worth of data. Are you serious (or are you just pulling my leg)?

      • OK, here’s a version with pauses added, in green:

      • Natural variability drives temperatures rise and fall regardless of the CO2 concentration.

        Global averaging prior to the comprehensive satellite records makes no sense!
        The CO2 craze is a bit reminiscent of 17th century alchemy. No, I am not saying that people who chase AGW are stupid; contemporary records show that Isaac Newton spent more time and effort on transmuting lead into gold than on the theory of gravity.

      • It is quite astonishing that Eyjafjallajökull caused the coldest winter in 30 years just before (!!) the eruption.
        More assertions without fact-checking follow: “Global averaging prior to the comprehensive satellite records makes no sense!”.
        What if UAH only were allowed to retrieve MSU readings from 85 specific locations worldwide? Would it make any difference? Check this chart:
        https://drive.google.com/open?id=0B_dL1shkWewaT083ZHdiWHhieTA

        With this in mind one may question the “comprehensive satellite records”:
        https://drive.google.com/open?id=0B_dL1shkWewaNDVmS0t1bjZjQXM

        You see, the problem is not the global coverage, but the stitching together of MSUs and AMSUs by the flip of a coin, or ideological inclination, aka “the Cadillac calibration choice”..

      • I prepared a fairly simple model using what I felt was a reasonable amount of fossil fuel resources and derived a 630 ppm peak. This figure depends on the uptake of nuclear and renewables, and the carbon cycle.

        And every time I go around the problem I see climate change as a secondary issue to a future energy crisis unless we curtail population growth and invest a lot more to build nuclear plants.

        The lower TCR helps make it even clearer that efficient energy supply is more important than global warming.

      • Olof, hi
        It is not that the actual eruption caused all of the change.
        Down-welling of warm currents south of Iceland creates a semi-permanent atmospheric pressure system which moves the Rossby waves of the Arctic jet stream, which in turn controls the climate of the Northern Hemisphere.
        Tectonic movements on the sea floor affect deep-water stratification and consequently the intensity of down-welling. Tectonic movements often precede eruptions by some months; by February 2010 the Meteorological Institute of Iceland was on high alert. The volcano erupted in April; it is located less than 10 miles from the southern coastline.

        The CET fell from 10.14C in 2009 to 8.86C in 2010, by the end of 2011 the CET recovered back to 10.72C
        http://www.metoffice.gov.uk/hadobs/hadcet/cetml1659on.dat

      • So you ARE joking… you know damn well that you can play with those little green trend lines all day long and get what ever you want. Just come out and say it! You don’t think the recent hiatus in warming is long enough and/or confirmation of the thirty year warming/cooling cycles that go back to the commencement of the global temperature record. (so we’ll just have to wait and see) It sure has been doing the expected thing and sure looks like confirmation to me. Do consider this: at this point agw may be interfering with the 30 year cycle. (this is dr spencer’s take on it) It certainly appears that the cooling phase of the cycle has indeed emerged, but it may well be interfered with. (so there’s no real reason to dismiss the existence of the thirty year cycle even though we’re seeing some anomalous behavior) At the very least the temperature record, with the onset of the pause, is behaving as expected given the cyclical nature of that record…

      • What is clear, assuming there is even a cycle at all, is that the cooling phase has come and gone.

      • fernandoleanme | October 27, 2016 at 11:18 am |
        I prepared a fairly simple model using what I felt was a reasonable amount of fossil fuel resources and derived a 630 ppm peak. This figure depends on the uptake of nuclear and renewables, and the carbon cycle.

        http://www.globalcarbonproject.org/carbonbudget/
        This is fine and good. But the five year average absorption in 1962 was 2.61 GT per year of environmental carbon absorption (318.45 PPM CO2 atmospheric). The five year average absorption in 2012 was 5.90 GT/Y (393.82 PPM). This projects to about 9.4 GT/Y at 460 PPM.

        If the emissions remain around 10 GT/Y, as appears likely, we will never hit your 630 PPM.

      • It makes good sense to keep a 30-year period for judging climate change.
        The N. Atlantic, and consequently the N. Hemisphere, has temperature peaks about every 60 years, i.e. starting from one peak cooling is the prevailing trend for about 30 years, while for the next 30 years there is a warming trend. These up & down trends have nothing to do with AGW, but can be easily derived from combined solar and geodynamic oscillations as shown in this link
        not that I expect Mr. Stokes to agree with any of it.

      • Vuk, i don’t think it’s unreasonable for Stokes (and his pom-pom toting cheerleader jch) to take a “wait and see” position. If the pause resumes then he’ll be forced to reexamine his position, and if not then his position obviously has some merit. But, it’s sheer lunacy to come to any conclusion based on less than one years worth of data…

      • JCH, were you born annoying or did it take you a life time of achievement to get that way? The only thing that is “clear” is that the el nino “has come and gone”…

      • As usual, Professor Curry allows her attack dogs to bully unfettered…

        How’s that stadium wave doing? Or better yet, how is it still not doing?

      • As usual, Professor Curry allows her attack dogs to bully unfettered…

        Sheer projection from one of the local Hillbullies.

      • JCH | October 27, 2016 at 8:11 pm |

        What is clear, assuming there is even a cycle at all, is that the cooling phase has come and gone.

        There have been short term upward excursions during the previous cooling periods so your statement is baseless.

        There is a cycle – it is a mutated triangle instead of a sinusoid but there is a triangular waveform. The maunder minimum didn’t (as far as I can tell) have this sort of oscillation so it could be an energy distribution function in times of temperature change.

        If you are right and it goes away then presumably the warming period since the mid 1800s is over.

      • The cooling phase lasted from ~2006 to ~2013. We are now in a ramp-up phase of the PDO. GMST is surging, and it will continue to surge. Current prediction for 2017 is high-end ENSO neutral, which would mean a very hot year.

      • Current prediction for 2017 is high-end ENSO neutral, which would mean a very hot year.

        NOAA is a cheerleader for El Nino so their outlook doesn’t mean much.

        If 2017 is an El Nino year it will probably be warmer. But that is a big if.

        I got into an argument with a warmunist about CO2 annual increase. He thought the average was going to move over three. We barely beat the 1998 increase in an equally large El Nino.

        I predict 2017 will have a CO2 increase less than 2.0 PPM and the ONI index will be on average below 0.

        Let’s see who has game.

      • PA,

        The CO2 level is going to top out around 460 PPM.

        Can you please lay out clearly how you arrived at that figure. Please state the assumptions and data sources (authoritative, please). I acknowledge you have tried to explain this in the past, but it was not clear.

      • Fossil producers’ estimates since the 1970s have had 460 PPM as the CO2 peak.

        I use CDIACs global carbon budget for numbers.

        The current (400 PPM) absorption rate is 7 GT/Y as of 2014. The 2010 to 2014 period averaged 5.6 GT/Y. The 1959/1960 absorption was around 2.2 GT/Y. Presumably the 280 PPM absorption (net) was zero.

        This projects to a 460 PPM absorption of 9.4 GT/Y. The current 10 GT/Y of emissions is basically the peak. This will be the emissions level out to around 2050/2060 when scarcity starts driving fossil fuel prices to uncompetitive levels.

        At constant emissions the only thing that changes is absorption (driven by the differential from 280 PPM). The ocean with 38000 GT of carbon already is an infinite sink and the absorption is proportional to the differential from 280 PPM.

        Both the green area increases (deserts green) and plant growth increases. I’ve posted charts on this before and land absorption is increasing faster than the ocean but I assume a similar dependency on the CO2 level for simplicity.

        At 460 the CO2 level will be rising about 0.6 GT/Y or about 0.3 PPM.

        Given the above, 460 PPM is the reasonable max in the 2050 to 2060 time frame. Once the emissions decrease below 9.4 GT/Y it is all over.

    • The other take home message for me is that there is lots of work being done to improve the estimates of ECS and TCR, but next to no work being done to gather the empirical evidence to calibrate the damage function and to improve the damage function. This is the most important input for justifying climate mitigation policies, yet comparatively little research is being done on it.

      Why?

      • Social cost of carbon should be set at zero

        “Assessing the Social Costs and Benefits of Regulating Carbon Emissions”: http://reason.org/files/social_costs_of_regulating_carbon.pdf

        “The Case Against a U.S. Carbon Tax” explains many, but not all, of the problems with the estimates of the SCC and of the hypothesized costs of climate change: https://object.cato.org/sites/cato.org/files/pubs/pdf/pa801.pdf

        Richard Tol, one of the foremost authorities on estimating the economic cost of climate change, published Figure 3 here: http://www.copenhagenconsensus.com/sites/default/files/climate_change.pdf. I interpret it to show that warming would be net-beneficial to around 4C or more (if we ignore for now the energy cost item; IMO he overestimates the cost of energy, because I think he assumes the cost of energy will increase very significantly in future; I understand Tol tends to believe renewables are the future).

      • but next to no work being done to gather the empirical evidence to calibrate the damage function and to improve the damage function.

        1. This presumes there is damage.
        2. The damage function has to be a “net” damage function since there are benefits which are getting very little attention
        3. If no one is even working on the benefit function it is impossible to compute net damage.

        The 60% growth increase since 1900 would seem to make net damage from CO2 unlikely.

        Further, with Tanzania-style levels of 1% deforestation per year or worse – Indonesia has almost deforested their territory – we need more plant growth just to stay ahead of the slashers and burners.

      • PA,

        Responses to your three points:

        1. No it doesn’t. The damage function will produce different net damages for different inputs, different starting states and different regions. The damages may sometimes be negative (i.e. net benefits) and sometimes positive. The issue is that we do not have a valid damage function, so we cannot estimate damages and we cannot justify expenditure on mitigation, on the climate industry, or on its hangers-on (like Mosher).

        2. Correct.

        3. Correct – see response to 1.

      • All signs I see tell me fossil fuels will be much more expensive. By 2075 oil and gas will be out of reach for poor countries.

      • The time scale I am talking about is long: century to multi-millennia. Fossil fuels are limited and prices will rise eventually, but we will transition to other fuels as we have done in the past. Nuclear fuel is effectively unlimited and the price of nuclear now could have been around 10% of what it is https://judithcurry.com/2016/03/13/nuclear-power-learning-rates-policy-implications/, if not for the impediments that have been put on it. When we remove the impediments, which will inevitably happen, we can return to rapid learning rates and costs can resume their decline.

  3. Let me differentiate four variables:
    Equilibrium Greenhouse Sensitivity (EGS)
    Transient Greenhouse Sensitivity (TGS)
    Equilibrium Climate Sensitivity (ECS)
    Transient Climate Sensitivity (TCS)

    I am able to argue that EGS is zero; the other three parameters are unknown, and they depend on CO2 only indirectly.

    The clear-sky and the all-sky greenhouse factors are sitting very precisely at their equilibrium values of g(clear) = 1/3 and g(all) = 2/5.
    See the tables and the whole flux structure at

    http://globalenergybudget.com/#Table2
    http://globalenergybudget.com/#Poster

    Details also in 89.§, Consequences:
    http://globalenergybudget.com/#Conseq

    Miklos.

  4. stevefitzpatrick

    Frank,
    I can find no description at his blog of the ‘model’ Tamino uses. Am I missing something?

    In his silly (but actually published!) paper from a couple of years back he used bizarre, non-physical, response times for different forcings to ‘optimize’ his linear regression model. This led to a gross overstatement of solar influence and gross understatement of volcanic effects. Maybe his current efforts are better, but without a complete description, it is impossible to tell.

    In any case, if you accept the AR5 forcing estimates, then the transient response has to be pretty low, as Nic Lewis and several others have shown. You should however keep in mind that the value for the transient response is by definition based on a specified rate of increase in forcing: 3.71 W/m² over 70 years, or 0.053 W/m² per year. Since the actual increase has been slower, the number you calculated probably overstates the transient response a bit, since the system has had ‘more time’ to respond to the forcing than it would at a rate of 0.053 W/m² per year. A better estimate of the transient response is probably ~10% to 15% lower than what you calculated, somewhere near 1.2C.

    • This doesn’t help much, but I’ve looked and can’t find it. Checked a few past posts there, too. No luck.

    • In addition to accepting all the AR5 forcing estimates you have to accept that all the processes they ignore are insignificant. If you read carefully you will find that these processes are ignored because they are not well understood, which is very different from being insignificant.

      • Steven Mosher

        Yes. Its unicorns!!!!

        If the known processes explain the data…then the unicorns are farting in the wind.

    • Steve,
      If on a global mean scale the real climate system behaves closely like a linear time-invariant system, at least for surface temperature, as virtually all CMIP5 models do, then reducing the rate of increase in CO2 from the specified 1% p.a. compound makes no difference to the ratio of the transient temperature response after the same period (~70 years) to the CO2 forcing at that point. Accordingly, the TCR estimate will be unaffected.

      However, the period for which forcing has been increasing prior to measuring the response does make a difference to TCR estimation, as does departure from a linear forcing ramp, if significant. In practice, using the evolution of forcing over the historical period seems to provide a pretty accurate estimate of TCR – I’ve done modelling to investigate the point. Although the historical period is much longer than 70 years, most of the increase in forcing has occurred over the last ~60 years.
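Nic’s point, that for a linear time-invariant system the ratio of response to forcing after a fixed ~70-year ramp does not depend on the ramp rate, can be illustrated with a toy one-box energy balance model (all parameter values here are illustrative, not fitted):

```python
import numpy as np

def response(forcing, dt=1.0, S=0.8, C=8.0):
    # One-box model: C dT/dt = F(t) - T/S, integrated with a simple Euler step.
    # S is an equilibrium sensitivity (K per W/m^2), C an effective heat capacity.
    T = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        T[i] = T[i - 1] + dt * (forcing[i - 1] - T[i - 1] / S) / C
    return T

t = np.arange(0, 71)                # years 0..70
ratios = []
for rate in (0.053, 0.02):          # W/m^2 per year: the 1% p.a. ramp vs a slower one
    F = rate * t
    T = response(F)
    ratios.append(T[-1] / F[-1])    # transient response per unit forcing at year 70

print(ratios)  # identical for both rates, since the system is linear
```

Because the model is linear, the temperature scales exactly with the ramp rate, so the response-to-forcing ratio at year 70 is the same for both rates.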

      • Nic, Working on anything interesting right now? I have been looking for a new post on Climate Audit.

      • Nic, The structure specifies the internal flux ratios. Recent annual global surface LW emission, 399 W/m2, with the absorbed solar radiation of 240 W/m2, defines a greenhouse factor of 0.4. And that’s the steady state value in the structure. It might fluctuate around it (though now it seems to be sitting in its place perfectly). With the increased CO2, atmospheric LW absorption, window, and downward emission all occupy very sharply their prescribed ‘periodic table’ value.

        Further, the surface energy budget precisely follows the closed shell geometry, as E(SRF, clear) = 2OLR(clear). Check the NASA data (see the ‘box model’ and below, 4. §):
        http://globalenergybudget.com/#I4

        And look at the diagram or the poster:
        http://globalenergybudget.com/#Poster

      • billw1984
        “Nic, Working on anything interesting right now?”
        Yes, but focussing mainly on working on something intended for peer-reviewed publication rather than on a blog post. But I probably will have a blog post here and/or at CA during November.

      • stevefitzpatrick

        Hi Nic,
        I’m having trouble seeing that. Seems to me that if you reduce the rate of increase in forcing (compared to the specified rate of 0.053 W/M^2 per year for the transient response), then you pretty much have to have a measured response which lies between the transient and equilibrium values. Eg, if we specify an increase of 0.01 W/M^2 per year, then in 371 years the temperature response would be very close to the equilibrium response (slightly less of course due to continuing small ocean heat uptake). Am I missing something?

      • Steve,
        “if we specify an increase of 0.01 W/M^2 per year, then in 371 years the temperature response would be very close to the equilibrium response”

        Yes, but you are varying the TCR definition, in the case of CO2 increasing by under 1% p.a., by keeping the requirement for CO2 to double and extending the normal 70-year timescale to achieve that.

        My argument applies instead to maintaining the 70-year time period and scaling the response at the end of it up to what it would be if the forcing at that point equalled that from a doubling of CO2 concentration. That is what estimating TCR from the evolution of forcing and GMST over the instrumental period corresponds quite closely to, as explained in my first comment.

      • stevefitzpatrick

        Nic Lewis,
        OK, in that case, it is not going to be far wrong. My point was that calculating TCR needs to consider the rate of warming relative to the canonical 70-year-to-doubling rate. If we are examining the whole of the instrumental record, the time makes a difference.

    • steve, this was not the focus of the post, I think. Tamino made some operations (I also don’t know how, because no code is included in his original post) and the results (posted by Tamino) make a TCR of about 1.35 likely. The internal variability stands at 0.2 K, and this is remarkable, isn’t it?

  5. With the very noticeable El Nino-related upticks in 2015 in the adjusted data…

    …Tamino needs to do some work on his (laughable) ENSO adjustment.

    Ciao

  6. Nice continuation of your previous analysis. Deriving the Lewis and Curry TCR by other means is a nice confirmation that ~1.35 is about right. More important is the resulting discrepancy with the CMIP5 TCR given by AR5 WG1 table 9.5, the mean being 1.8C. Yet another way to observationally show the models run excessively hot compared to observations. Added to model ECS ~2x high, tropical troposphere (even after Santer’s new paper’s erroneous efforts to reduce the discrepancy from ~3x to 1.7x high), and the missed pause.

    The models can also be argued ‘wrong’ from first principles. Sufficiently fine grid scales to simulate convection are computationally intractable by 6-7 orders of magnitude. So such essential processes are parameterized, the parameters tuned to best hindcast (for CMIP5, from YE2005 back to 1975). But this introduces the attribution problem, since the period from 1920-1945 is essentially indistinguishable from 1975-2000. AR4 SPM fig 8.2 said the former was mostly natural, not GHE. Natural variation did not cease to exist in 1975. So it is not (yet) possible to know how much of the parameter tuning is misattributed to the GHE, only that some must be. Leading to the observational discrepancies in the first paragraph.

    • István, the equilibrium value of the all-sky greenhouse factor is 2/5, and recently, if we believe the CERES data, surface upward LW = 399 W/m2, OLR = 239.8 W/m2, so we have only g = 0.399, which is below its steady state. Evidently the difference is within the observation error; but it can be stated that there is no increase in the GHE from CO2 at all — there is only a slight natural fluctuation around the equilibrium position. Look at the data in the table:
      http://globalenergybudget.com/#Table2

      • MZ, I know you think you have shown that there is no net CO2 GHE. I looked at your stuff, and am quite certain it is incorrect. Without feedbacks, doubling CO2 produces +~1.2C. Or, as Monckton computed from IPCC inputs, ~1.16. There is much misunderstanding about the nature of feedbacks. The primary climate system is obviously near equilibrium and damped, mainly through albedo and ocean thermal inertia offsetting WVF. Water vapor and cloud feedbacks in the sense used here are like first derivatives, a change in the base equilibrating mechanism given a change in CO2. All the evidence, including in this guest post, is that the overall GHE feedback is positive, Bode net f maybe plus 0.25-0.3. Less than half of the IPCC 3 => f=0.65. Your zero net GHE from CO2 would require a Bode f lower than -100. (At exactly zero, a Bode f of minus infinity.) Quite impossible. Read Lindzen and Choi 2011, figure 11 and accompanying text for context to understand what your sensitivity assertion of zero mathematically implies.

      • ristvan:
        Under “the primary climate system” you might think of 280 ppmv CO2.
        But in the “secondary” system, today, with 400 ppmv, we still have an arithmetically perfect energy flow structure:
        Fo = I x LWCRE;
        g(clear) = 1/3,
        g(all) = 2/5,
        transfer function = planetary emissivity = cloud cover =
        f(all) = ep = beta = 3/5,
        albedo = 1 – sin 45° = 0.293,
        surface energy balance constrained to the TOA unequivocally as
        E(SRF, clear) = 2OLR(clear),
        E(SRF, all) = 2OLR(all) + LWCRE, ‘closed shell geometry’.
        Atmospheric LW absorption, downward, upward LW and window, latent heat, OLR, ULW: each exactly in their Fo position, separately also for the clear-sky part.
        Doesn’t seem to be ‘perturbated’, on the contrary, seems to be arithmetically exact.
        I of course agree with you that an ‘initial’ ‘internal’ perturbation might come from extra CO2, but this always happens as a change in the background conditions, trying to cause a ‘push’ from the equilibrium, but the shift cannot be seen: the system recently very precisely sits (according to the current data) in the required, ‘quantized’ energy flow structure, cloudiness and albedo.

        One might speculate about the feedback: through the amount/distribution of water vapor? Oceans have enough to equilibrate.

  7. As a result of the adjustments, does the warming pause that is now approaching two decades still exist?

    • Alas, the ‘Pause’.
      The question of our age.
      I say it’s still just wine and bread.

    • Never was a pause.

      • Uhh you are becoming an extremist. Trenberth, Mann, Santer, they all believe the pause is real and have published so.

        I see you subscribe to Thomas Karl hocus pocus.

      • Trenberth, Mann, and Santer most likely have no significant disagreements with Karl.

      • Steven: Regarding the large amount of literature from the mainstream side that acknowledges and / or attempts to explain ‘the pause’, do you subscribe to Lewandowsky et al’s (2015) explanation for this? I.e. that ‘seepage’ of contrarian memes is to blame:

        “This, we suggest, is evidence of seepage: that non-epistemological considerations have seeped into—and thereby altered—scientific research and debate.”

        After listing over 60 papers “devoted to ‘the pause’ or ‘hiatus’ “, which list is explicitly claimed not to be exhaustive, Lewandowsky et al say:

        “After all, the IPCC represents the thoroughly vetted consensus view of the scientific community, and its treatment of the “hiatus” as a phenomenon worthy of explanation confirms that its existence has entered the mainstream scientific discourse—a proposition supported by two special collections of articles in Nature journals.”

        …adding that although some of these papers are set in a pause framing while not actually acknowledging a real pause, overall…

        “However, we argue that acceptance of the “pause” constitutes a departure from standard scientific practice and is indicative of seepage.”

        http://www.sciencedirect.com/science/article/pii/S0959378015000515

      • And, wouldn’t the real problem of any supposed “seepage” be that the foundational integrity of AGW alarmist science is seriously eroded– that it has been built as a house of cards that cannot even keep out water vapor?

      • Steven Mosher

        “Steven: Regarding the large amount of literature from the mainstream side that acknowledges and / or attempts to explain ‘the pause’, do you subscribe to Lewandowsky et al’s (2015) explanation for this? I.e. that ‘seepage’ of contrarian memes is to blame:”

        No, I think the language around the pause or hiatus or slowing or whatever word we choose to use is largely confusing the mathematical issues. That is, folks would have a clearer understanding if they avoided subjective words like “pause” or “hiatus” and focused instead on mathematical descriptions of what the wide variety of mathematical approaches show.

        People tend to talk about “trends” in the data as if they were real things, when in fact the data has no “trend”. MODELS used to summarize, explain, condense, compress data have trends. Data is just data. Just points in time. Trends are manufactured when you ASSUME a model to fit the data and assume an underlying data generating process.

        So technically one could say something like this.

        1. Given the data.
        2. Assuming that the following model is a correct representation of the data generating process.
        3. The trend in the model may have changed from X to X′, and we have the following confidence in this change.
        4. Assuming all that, we can choose to use the qualitative word “pause” to describe this possible change.

        but you know I have to deal with faux skeptics who think that things like “pauses” actually exist. There is no pause in the data. Pause is a word people use to describe the particular results of specific assumption-laden calculations performed on data.
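        The schematic in steps 1-4 above can be sketched as code. Everything here is synthetic and illustrative: assume a linear data-generating model, fit it by OLS, and attach a confidence statement to the fitted trend.

```python
# Fit an ASSUMED linear model to synthetic "temperature" data and state
# the trend with a 2-sigma confidence interval. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1951, 2016)
temps = 0.012 * (years - 1951) + rng.normal(0, 0.1, years.size)  # fake anomalies

# OLS fit of the assumed model: T = a + b*(year - 1951) + noise
X = np.column_stack([np.ones(years.size), years - 1951])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ coef

# standard error of the slope under the assumed model
s2 = resid @ resid / (years.size - 2)
se_b = np.sqrt(s2 / np.sum((years - years.mean()) ** 2))
print(f"trend = {coef[1]*10:.3f} +/- {2*se_b*10:.3f} C/decade (2-sigma)")
```

        Only under those modelling assumptions does a statement like “the trend changed from X to X′” carry any confidence level at all.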

      • Steven Mosher

        “Uhh you are becoming an extremist. Trenberth, Mann, Santer, they all believe the pause is real and have published so.

        I see you subscribe to Thomas Karl hocus pocus.”

        ##########################

        Hmm. I see that you have ZERO skepticism about your assessment of Karl.

        There is a simple way to assess the Karl improvements. First understand that they are not Karl’s!

        Second there are several independent datasets that you can look at
        to assess the differences..

        Forthcoming….. you’ll have to wait.

        Now the reason for checking the “Karl” adjustments is simple.
        Skepticism.
        Note that skepticism DIFFERS from mere rejection, which is what you practice. Practicing rejectionism requires no practice. Even an untrained ape can do it. Like you just did.
        A real skeptic has to actually do the work to question “his” adjustments.
        That work is pretty much done. You would never think to do this, because you are not a scientist. You are a rejectionist. Just say No.

      • Steven: Your answer seems ambiguous to me. You start with a ‘no’, which seems clear, yet then much of the text afterwards looks very much like a ‘yes’. Hence my confusion.

        You say: ‘I think the language around the pause or hiatus or slowing or whatever word we choose to use is largely confusing the mathematical issues.’

        Which appears to be very similar indeed to what Lewandowsky et al are saying, i.e. that the framing associated with these words has disrupted the true scientific process. Whatever those whom you label as ‘faux skeptics’ think, Lewandowsky et al’s point is that it is the *mainstream* folks and organizations who are using these words. This is demonstrated by the volume of appearance in their literature, of which the context for most is clearly in the sense that a ‘pause’ exists (for instance the mainstream papers that posit reasons for its existence, and per clips above inclusive of IPCC acknowledgement and Nature special editions etc).

        Your point about a mathematical approach may or may not be correct, but doesn’t address my question (although this may hinge upon *who* exactly you mean by ‘folks’ and ‘people’). If as implied by your ‘4’ above, ‘pause’ is in fact a reasonable word to use in respect of model trends, but you don’t like to use it because you feel it gives ‘faux skeptics’ too much latitude in regard to a physical explanation, the point is that the *mainstream* are using it in much literature, and yes very much in the context of that full latitude. I.e. that it is a real, existent thing in the physical world.

        For better or for worse Lewandowsky thinks he has an explanation for why the existence of a (real, physical) ‘pause’ has been acknowledged by *mainstream* climate science. Can you please clarify whether you agree with his explanation or not. If your answer is unambiguously the ‘no’ with which you began the last text, may I ask what is your own explanation for the acknowledgement of the (real, physical) ‘pause’ by mainstream climate science?

        I don’t believe my question has anything to do with what skeptics think, be they faux or otherwise. Unless you do agree with Lewandowsky et al I guess, in which case they are implicated as his posited source of the ‘pause memes’.

        >’…but you know I have to deal with faux skeptics…’

        Presumably, that’s actually a voluntary occupation ;)

  8. From the perspective of AMO warming being caused by declining solar wind pressure increasing negative North Atlantic Oscillation conditions since the mid 1990’s, it then becomes apparent how little that rising greenhouse gas forcing must be doing as that should be increasing positive NAO conditions.
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-3-5-6.html

  9. Grant Foster, imagining himself as the prince Tamino of the Magic Flute, looks at data through high-magnification shades for ‘driving data’, to the heights the others can’t reach. http://i.ebayimg.com/00/s/ODAwWDgwMA==/z/la4AAOSwMNxXVtdz/$_35.JPG?set_id=8800005007

  10. For comparison, CMIP5 historical-RCP8.5 global near-surface air temperature trend:
    1951-2015 trend median +0.15 C/decade (5-95 % range 0.10-0.21 C/decade, HadCRUT4 obs at 13th percentile).

    Sampled as in HadCRUT4 (Cowtan et al., 2015; Richardson et al., 2016):
    1951-2015 trend median +0.12 C/decade (5-95 % range 0.08-0.19 C/decade, HadCRUT4 obs at 36th percentile).

    Before Tamino’s adjustments, the obs fall at the 36th percentile of the CMIP5 ensemble, whose mean TCR is ~1.8 C (range 1.2-2.4 C)

    See e.g. supplementary information of this, which also contains links to code and data.
    http://dx.doi.org/10.1038/nclimate3066
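    The percentile placement described above can be sketched as follows; the ensemble and the observed trend here are made-up stand-ins, not the actual CMIP5 or HadCRUT4 numbers:

```python
# Where does an observed trend fall within an ensemble of model trends?
# Both the ensemble and the observation below are fabricated for illustration.
import numpy as np

model_trends = np.random.default_rng(1).normal(0.15, 0.035, 100)  # fake ensemble, C/decade
obs_trend = 0.12                                                  # fake observed trend

percentile = 100.0 * np.mean(model_trends < obs_trend)
print(f"obs at the {percentile:.0f}th percentile of the ensemble")
```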

    • Surface temperature increase went parallel to TOA outgoing LW radiation increase; the ratio (planetary emissivity, and hence, the normalized greenhouse effect) remained the same.

    • MarkR,
      Good point, compare apples to apples when comparing observations and models..
      Observations of Global SAT are difficult to achieve due to the elusive ocean SAT. Reanalysis is one possibility. Here is a chart with ERA 20C+interim:
      https://drive.google.com/open?id=0B_dL1shkWewaS1hQSDRoWUp1WlE

      The trend of the ERA-index (1951-now) is 0.143 C/decade and that of CMIP5 rcp8.5 is 0.153 C/dec.

      The only observational index that attempts to estimate the global SAT is Gistemp dTs, doing so by extrapolating temperatures from coastal and island met stations out over the oceans.
      The trend of Gistemp dTs 1951-now is 0.175 C/decade. GISS says it may overestimate the global SAT. On the other hand, Gistemp loti which uses SST may underestimate the global SAT trend due to the thermal inertia of the oceans. The trend 1951-now of Gistemp loti is 0.143, so the real Global SAT trend should according to GISS be between 0.143 and 0.175 C/dec.

      All observational indices suffer from incompleteness, assumptions, and uncertainty. I suggest that we accept them as they are, and just say for instance:
      Nic Lewis has not estimated the true global TCR and ECS, he has just estimated the TCR and ECS of the HADCRUT4.4 index.

      • I don’t regard GISS time series as very reliable measures of global temperature. And the claim that “SST may underestimate the global SAT trend due to the thermal inertia of the oceans” is completely nonsensical.
        In fact, NOAA v4.0.1 and GISStemp loti both use air temperature (nighttime) over the ocean, at all but sub-decadal timescales. NOAAv4 is infilled to complete global coverage, unlike GISS. Its trend in annual global temperature over 1880 (its start) to 2015 is 0.067 K/decade, only marginally higher than for HadCRUT4v4 at 0.065 K/decade, and almost the same as GISStemp loti at 0.068 K/decade.
        Over the satellite period, 1979-2015, during which much of the last 150 years’ warming took place, the HadCRUT4v4 trend is higher: 0.165 K/decade vs 0.155 K/decade for NOAAv4.0.1, and is in line with the ERA interim globally complete 2m air temperature reanalysis trend of 0.166 K/decade (with ECMWF’s correction for SST discontinuities – the trend in the uncorrected ERAinterim time series is lower). GISStemp loti has a lower trend, of 0.162 K/decade (unless they’ve upped their trends again since April).

      • AFAIK, HadNMAT2 is only used to adjust ship readings (due to uncertainty over whether they are bucket or ERI). From 1980 there is an increasing amount of drifting buoy readings; by the end of the nineties they were about 50% of the total, and today they totally dominate the SST datasets. They are not adjusted by MAT.
        However, there are larger differences between the various SST datasets than between land-based datasets. To avoid that SST bias, when looking at differences due to incomplete coverage, compare HADCRUT4 and C&W directly (and corroborate the latter with BEST l/o if necessary). C&W and BEST have higher trends on almost all time scales, compared to HADCRUT4.

        My ERA-blend in the chart above has a trend of 0.202 C/dec in the satellite era, Gistemp dTs is 0.209.
        ERA-interim is not flawless; it does strange things in Antarctica, for instance. The satellite-era trend of the Amundsen-Scott gridcell is -0.42 C/decade whereas that of the actual met station (from SCAR/CRUTEM) is +0.19 C/dec (ERA 20C is close to the latter).

      • Yes, let’s look at reanalysis data vs. the surface temperature data (and the UAHv6 tlt data).

        Here are three of the most advanced (“3rd generation”, “state-of-the-art”) reanalyses used today: NASA MERRA, ERA Interim and JRA 55:

        Combine the three into one “Reanalysis Mean” and compare it with HadCRUt3 (adjusted down 0.064K from Jan’98), HadCRUt4, GISTEMP LOTI, and UAHv6:

        (BTW, H3 with the Jan’98 adjustment included matches JMA very well.)


        Also, CERES EBAF OLR at ToA data strongly supports HadCRUt3 and UAHv6:


  11. MarkR: surface temperatures increased together with OLR, which means that planetary emissivity (and the normalized greenhouse effect) remained the same.

  12. you probably should post your own data and code.

    Take an analogy from code sharing. When we share open code and other folks use it, they are expected ( and often forced to by litigation) to share and share alike.

    That said.

    What Tamino showed is that you can use 1950-1989 as a training period to create a model of temperature that is a function of

    A) Solar
    B) El Nino
    C) Volcanic Forcing.
    D) a Linear component of anthropogenic warming.

    And that using those variables you can explain or predict the 1990 to present values.

    It’s been shown before (thanks for confirming.. many independent eyes is a good thing) that there is a residual that looks like the AMO

    All of which means..

    We seek understanding. We want to know how things work. We are always explaining. If we look at the temperature series we can see that the ups and downs can IN FACT be explained as a function of

    1. Solar forcing (it’s small but what the heck, every digit counts)
    2. El Nino– yup you got some natural bounciness in the data.. quasi-periodic so hard to use in prediction.
    3. Volcanoes.. yup, you block the sun and the temperature does not go UP
    4. AMO.. yup you got some other bounciness in the data.
    5. A linear trend

    Now of course folks can quibble with the details, but in fact the time series can be explained as a function of these variables.

    Could there be other variables.. say X, and Unicorns, and Planet Jupiter and Pirates, and well, too many theories… (HINT: google underdetermination). Of course there could be other explanations. And unicorn fans should actually present them.. they never do..

    So we have an explanation. It works. It fits with what we know about the physics of GHGs… that is, we can assert that the linear trend is basically the human-caused part of the equation.

    None of this is strange, or post normal science, or science in a crisis. It’s basic data analysis.

    Could that linear trend be “something else”? yup, could be unicorns, could be some secret long term natural cycle, hitherto unnamed and unobserved… call it “natural variation”.. which means… “dunno”

    So what do we have.

    1. We have one explanation that works. It uses KNOWN quantities of KNOWN processes, and it fits with or is consistent with known physics.
    There are no unsolved mysteries: the crime is solved:
    “Man dunnit with help from the sun, volcanoes, and some quasi
    periodic wiggling”

    Against that we have what?

    2. the argument that it COULD BE something else
    3. the argument that it is something else, (natural variation) but
    we really have no clue what causes that or how to find that in the historical record..
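    The fitting exercise described above can be roughed out as follows. Since Tamino’s code is not public, the regressors, coefficients and data here are all synthetic: train a multiple regression on a 1950-1989-style period, then check it against the held-out later years.

```python
# Regress "temperature" on solar, ENSO, volcanic and a linear term over a
# training period, then predict the hold-out period. Entirely synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n = 66                                          # stand-in for 1950..2015
t = np.arange(n, dtype=float)
solar = 0.05 * np.sin(2 * np.pi * t / 11)       # fake 11-yr solar cycle
enso = rng.normal(0, 1, n)                      # fake ENSO index
volc = np.zeros(n)
volc[[13, 32, 41]] = -2.0                       # fake eruption spikes
temps = 0.012 * t + 0.06 * enso + 0.05 * volc + 0.02 * solar \
        + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), solar, enso, volc, t])
train = t < 40                                  # first 40 "years" as training
coef, *_ = np.linalg.lstsq(X[train], temps[train], rcond=None)

pred = X[~train] @ coef                         # predict the hold-out years
rmse = np.sqrt(np.mean((pred - temps[~train]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f} K")
```

    Whatever is left over after such a fit is the residual that, per the comment above, resembles the AMO in the real data.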

    • Mosher, I hope your post earlier wasn’t based on my silly joke. I meant nothing by it. It just popped into my head and amused me to respond that way.
      The skeptics (lukewarmers) that I admire the most are people like McIntyre and Nic Lewis and yourself, although lately you seem to get testy a lot. I don’t think that anyone who comes and makes a short comment on a blog has to go and do their own analysis of data just to express an opinion, but I agree that those who go on and on and say things way out of the mainstream even for lukewarmers should “put up or shut up” if they want to be taken seriously.
      My take on this and Tamino’s post is that this gives a number similar to Nic Lewis’s number, so I won’t dismiss it out of hand. And secondly, Tamino took GISS, which out of about 6 or 7 temperature series has the highest values and the largest slope. And once he removed all of his chosen factors, the linear component looks to be ~1.7 C per century. That seems to be much lower than the 2.5 to 3 C that gets so much press. What is your take on this?

    • stevefitzpatrick

      Steve Mosher,
      The details of what Tamino is doing matter. If his model is inconsistent with plausible physics (as was his paper along similar lines from a couple of years back) then the model could look “right” for a 15 year hold-out period but fail to make accurate predictions of the future. Tamino should just explain the basis of his model and save everyone a lot of time by avoiding arguments about his results. If the residuals (as Frank showed) are consistent with a contribution from a longer term oscillation like the AMO, and Tamino’s model makes physical sense, then that would be a good thing…. but we still have to see how the model works.

      • Steven Mosher

        “Steve Mosher,
        The details of what Tamino is doing matter. ”

        Did you see me claim that they don’t matter?

        I am contrasting two approaches to analysis.
        one that seeks understanding
        one that seeks ignorance.

    • Mosher, “3. the argument that it is something else, (natural variation) but
      we really have no clue what causes that or how to find that in the historical record..”

      I believe that depends on how hard you look. Oppo, Rosenthal et al. are looking in areas that are highly relevant from a heat capacity reference and have less noise on long climate time scales. Their “something else” is simply initial conditions, plus the fact that some volcanic and solar forcing is normal; if you don’t estimate that right, less volcanic aerosols than normal would be a positive forcing. Not really Earth-shattering.

    • Mosher, you say:

      Could there be other variables.. say X, and Unicorns, and Planet Jupiter and Pirates, and welll too many theories… ( HINT google underdetermination ) of course there could be other explanations. And unicorn fans should actually present them.. they never do..

      Simples. The Sun+The Ocean. The sun is the great provider of heat, the ocean the great distributor.

      • Steven Mosher

        Ocean is not a cause. Check ur units.

      • “Ocean is not a cause.”

        Why not? It holds back energy. Just like CO2 is supposed to do.

        Also, you seem to have missed the part about the sun …

      • Mosher, “Ocean is not a cause. Check ur units.”

        Perhaps you could try showing your work for a change. Joules are Joules Bubba.

      • Steven Mosher

        “Perhaps you could try showing your work for a change. Joules are Joules Bubba.”

        Unless you can magically create joules ex nihilo, the ocean is a mere conductor or capacitor.

        It doesn’t cause warming.. it is the warming. In other words ITS THE FRICKING JOULES IN THE OCEAN THAT NEED EXPLAINING.

        Get that through your head. It’s the warming manifested in the ocean and in the atmosphere that are the THINGS TO BE EXPLAINED..

        Look, if I want to explain your fever.. I can say.

        The Virus causes a fever. See the structure of explanation?
        Explaining the warming of the ocean by pointing at the ocean is akin to saying “having a warm body causes a fever”

        To repeat. The changes in SST and SAT are the effects to be explained.

        schematically

        Watts from the sun & Watts from GHGs … get you a change in ocean joules.. It’s not that hard

      • Mosher, “It doesnt cause warming.. it is the warming. In other words ITS THE FRICKING JOULES IN THE OCEAN THAT NEED EXPLAINING.”

        Been explained endlessly dipstick, varying surface winds and currents change the rate of mixing, i.e. el frigging nino. Toggweiler, Brierley and others have noted the amazing impact varying westerlies can have on surface temperatures and the rate of ocean heat uptake.

      • Steven Mosher says, October 30, 2016 at 3:43 pm:

        It [the ocean] doesnt cause warming.. it is the warming. In other words ITS THE FRICKING JOULES IN THE OCEAN THAT NEED EXPLAINING.

        get that through your head. its the warminng manifested in the ocean and in the atmosphere that are the THINGS TO BE EXPLAINED..

        We try again: “Ocean is not a cause.” If the ocean is not a “cause”, then CO2 isn’t a “cause” either. The ocean can hold back energy. Just like CO2 is supposed to do.

        Also, you – still – seem to miss the part about the sun. Remember? Sun+Ocean. The “fricking joules in the ocean” derive from the sun, Steven …

    • Steve: Estimates of TCR and ECS based on observed warming are based on an assumption: All warming is forced warming; unforced variability doesn’t contribute to the overall trend.

      In this case, unforced variability associated with El Nino (and forced variability from volcanos) has been removed. The residuals show another signal of unforced variability, which appears to be the AMO. Fortunately, its period is approximately the length of the data, so it makes little contribution to the trend. We can’t be sure another type of unforced variability (that is aperiodic or has a long period) isn’t distorting the data.

      If one adds Otto (2013) with four decades of relatively constant TCR, one might think unforced variability isn’t a problem. However, the influence of the AMO is missed in that work.
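      The point that an oscillation whose period matches the record length contributes little to the fitted trend can be checked numerically, at least for a cosine-like phase (roughly the AMO’s phasing over 1951-2015; the amplitude here is an arbitrary illustration):

```python
# Linear trend contributed by one full cosine cycle spanning the record.
import numpy as np

t = np.arange(65.0)                          # 65 "years"
amo_like = 0.1 * np.cos(2 * np.pi * t / 65)  # one full cycle, cosine phase

slope = np.polyfit(t, amo_like, 1)[0]
print(f"trend from the oscillation: {slope * 10:.4f} K/decade")
```

      The leakage into the trend is phase-dependent: a sine-phased cycle of the same length would contribute noticeably more.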

  13. Steven Mosher, I would like to see you comment substantively and not indulge in what IMO seems like drive-by wiseassery. I enjoy reading your comments when you actually engage the issues.

    I’m sure very few people here think TCR is zero. People like me consider themselves to be skeptical but, like true skeptics, are willing to be proven wrong.

    You sometimes act as though you consider yourself above engaging some of the arguments made here. For example, what’s wrong with the case that TCR is 1.35? Is it that the case is not made in a peer-reviewed article?
    Peer review studies are all over the place on TCR, which is obviously one of the central issues of climate change.

    You obviously think TCR is higher. Why?

    • The observed HadCRUT4 trend from 1951-2015 is at the 36th percentile of the historical-RCP8.5 CMIP5 distribution. With the huge uncertainties in forcing and variability, that’s not particularly strong evidence in favour of a TCR around 1.35 C.

      It’s easy to argue that the obs will underestimate TCR in this approach based on new knowledge of real-world PDO/trade winds, stratospheric water vapour, small-scale volcanism, and solar activity.

      It’s also possible to argue that changes in the aerosol indirect effect or something else should mean that the obs overestimate TCR. I think the balance of evidence favours underestimation so 1.35 C is possible but probably too low.

      Policy aside: TCR of 1.35 C with ECS of ~2 C would make the 2 C Paris target easier. For example, global CO2 emissions growth could peak as late as 2025, then fall around 4%/year with a quick drop to zero near 2100. Assuming net aerosol of zero by 2100, all non-CO2-GHGs drop by about half of methane’s current forcing and no change in airborne fraction. This would give hope to policymakers and others keen to hit the Paris targets. I just think that’s currently too hopeful – all else being equal those emissions cuts would be inadequate.

  14. Steven, this seems to be a good summary:
    “1. Solar forcing (it’s small but what the heck, every digit counts)
    2. El Nino– yup you got some natural bounciness in the data.. quasi-periodic so hard to use in prediction.
    3. Volcanoes.. yup, you block the sun and the temperature does not go UP
    4. AMO.. yup you got some other bounciness in the data.
    5. A linear trend”

    I want to make the addition: the linear trend comes from radiative forcing, and three of four records (incl. Berkeley) show a slope due to carbon dioxide of about 1.35 K/doubling for TCR, when we assume that Tamino’s tool works well for solar, volcano and ENSO. Not 2 K/doubling as the CMIP model mean postulates….

  15. “This method does not equate to the formal definition of the TCR”
    Indeed so. That is for a CO2 doubling in equal multiples over 70 years. The thing is, forcings have a lagging effect on T, so the answer you get depends on timescales. That is why ECS is usually quoted – it is the change after all lags have caught up. Otherwise, the answer depends on the timing. It’s true that Gregory and Forster did a regression like yours; however they say of the result:
    “Again, as in the model, the resistance from this method (2.5 +0.5/−0.4 W m−2 K−1 for 1970–2006) is larger than from OLS regression of ΔT against F_GA, which we prefer, for reasons summarized in section 3.4.”
    Personally, I don’t think restricting to F_GA (GHG and aerosol) really answers the timing issue, but it is different.

  16. Nick, thanks for your comment. You wrote “That is why ECS is usually quoted – it is the change after all lags have caught up.” This says implicitly that the observations can’t catch the ECS, and it was the intention to look at the things we can observe, as was done by Tamino for 1951…2015. This leads IMO to the null hypothesis: TCR=1.35.

    • Frank,
      “This says implicitly that the observations can’t catch the ECS and it was the intention to look at the things we can observe”
      Yes, it means that. ECS is time-invariant, but can’t be observed, only computed (and that takes a long time too). Transient is potentially observable, but is different for each time duration (and assumed even increase to doubling). Because of lag, the temperature response will be faster in the last part of the period. So if you observe for a shorter period you will get a lower TCR.

      I couldn’t find a TCR calc in Tamino’s post linked above.

      • Nick,
        “So if you observe for a shorter period you will get a lower TCR.”

        True. However, the regression slope is influenced most by the change in T and F between points early and late in the analysis period. It is accordingly not self-evident that regressing T on F over the full instrumental period will underestimate TCR.

        In practice, it seems that regressing deltaT on deltaF over the instrumental period probably gives a pretty accurate estimate of TCR, even if years with sizeable volcanic forcing are included (unlike in Gregory and Forster 2008). The regression slope TCR estimate using AR5 forcing and HadCRUT4v4 is closely in line with the deltaT/deltaF estimate using the Lewis & Curry 2015 2-period energy budget basis.

        Moreover, when the Lewis & Curry method is tested on pseudo-GMST data generated by a sensibly parameterised 2-box model with an ECS of ~1.7 K, it produces a TCR estimate closely in line with the actual TCR of the 2-box model. As you will know, 2-box models can accurately emulate the GMST response to CO2 forcing of almost any recent AOGCM.
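        A minimal two-box sketch of the kind of emulation described above; the heat capacities and exchange coefficient are illustrative choices that give an ECS near 1.7 K, not the calibrated parameters from Lewis & Curry.

```python
import numpy as np

# Minimal two-box energy-balance model run under a 1%/yr CO2 ramp
# (forcing grows linearly to F_2x at year 70, when CO2 has doubled).
F_2x = 3.71                  # W/m^2 per doubling
lam = F_2x / 1.7             # feedback parameter chosen so ECS = 1.7 K
C_mix, C_deep = 8.0, 100.0   # mixed-layer and deep-ocean heat capacities, W yr m^-2 K^-1
gamma = 0.7                  # mixed-layer / deep-ocean exchange coefficient, W/m^2/K

dt = 1.0
T_mix = T_deep = 0.0
for year in range(1, 71):
    F = F_2x * year / 70.0   # 1%/yr CO2 gives forcing linear in time
    dT_mix = (F - lam * T_mix - gamma * (T_mix - T_deep)) / C_mix
    dT_deep = gamma * (T_mix - T_deep) / C_deep
    T_mix += dt * dT_mix
    T_deep += dt * dT_deep

tcr = T_mix                  # transient response at the time of doubling
ecs = F_2x / lam             # equilibrium response
print(round(tcr, 2), round(ecs, 2))   # TCR < ECS because of ocean lag
```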

      • Nick and Nic ;-) : For the sake of quantities I made this little figure:

        It confirms your conclusions: the shorter the observed timespan, the smaller the TCR. Anyway, the estimates seem to converge with increasing timespan, towards 1.35 for CRU, Berkeley and C/W and towards 1.56 for GISS (all adjusted). It seems to me that any error due to the selection of timespan with the dT/dF regression method is very small if the timespan is longer than about 60 years.

  17. John von Neumann (at least according to Enrico Fermi) –

    “Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk”.

    Steven Mosher is still looking for the fifth parameter, before he can even fit his unicorn. Only joking, of course.

    Climate is the average of weather. There is no TCS. Change the weather, and climate changes. Not the other way round.

    As to temperatures, given that the reduction of supposed GHGs leads to increased temperatures (at least in places that receive bright sunlight, e.g. the arid tropical deserts), more GHGs result in reduced temperatures, not higher ones.

    As Steven Mosher pointed out, if you block the Sun, temperatures fall. The atmosphere blocks about 30% according to NASA.

    But of course, the GHE enthusiasts have it totally backward. They think that reducing the amount of sunlight results in increased temperatures!

    GHGs block sunlight by absorbing it. NASA agrees. Tyndall’s experiments quantified the amounts for different gases at different pressures.

    All this curve fitting is reminiscent of the epicycle hypothesis of astronomical movements. As new observations revealed facts which didn’t fit the hypothesis, the epicycle enthusiasts added more epicycles. And more and more. Planets had to loop backwards while simultaneously abruptly changing their distance from the Sun!

    Kepler proposed that it was all nonsense. He was shown to be right.

    So keep adding adjustments, excuses, and data torture of various types. It won’t work in the long run.

    Keep at it. You might accidentally come upon something useful along the way!

    Cheers.

  18. People here have to parse reality based on fundamental knowledge ( which is hard to gain). Two simple examples:
    1. My comments to Zaegoni above. Fail on Bode f equivalent model.
    2. Basic principles observational GHE and sequelae. Another predictive fail.

    • ristvan: It seems we are talking about two different perspectives. You say that the system is under pressure and in a transient situation because of the continuously increasing CO2, and you think this requires some reaction (feedback). I understand this stance but what I say is an observation: the atmospheric flux structure does not show this pressure: it is calm and relaxed, and occupies very precisely its predetermined exact arithmetic position. All of the flux parameters sit at their integer ratio and relationship.

      Solve this climate sudoku:

      and you will see that it is impossible to increase atmospheric absorption, DLR and ULW without an increase in solar absorption.

      Here is the solution, based on data for time period 2000-2015:

      Increasing CO2? All right, modified H2O, zero net GHE increase. g = 2/5.

  19. Reading this post and the comments which follow is like listening to mediaeval monks discussing the number of devils that can sit on the head of a pin. All this talk of “trends” and “smoothing” is statistical nonsense. Modern techniques for dealing with time series have been developed in other fields such as signal processing, radio astronomy and econometrics but seem to have passed by the little cottage industry of climate science. What you are all doing is proposing various models to explain the data. What none of you do is to test your proposed models to see if they actually do fit the data.

    There are some well established tests that can be used to do this, e.g. Ljung-Box and Breusch-Godfrey. All regression models rest on the assumption that the residuals are independent, identically distributed random variables. If the residuals turn out not to be independent of one another then the model must be rejected. I have done this. I fitted a linear trend plus multidecadal oscillation to the 166-year-long HadCRUT4 global average surface temperature data set. The tests show that the residuals are indeed self-correlated and that this model must be rejected at a very high level of significance, i.e. we can be very sure that it is wrong.

    Alternatively, the state of the system can be regarded not as a deterministic function of time, but as dependent only on the previous state or states of the system. Instead of regression, an autoregressive moving average (ARMA) model is used. When such a model was fitted to the HadCRUT data, the residuals were indeed uncorrelated and the model could not be rejected. Such a model is stochastic, in contrast to the rejected deterministic regression model. Thus the stochastic nature of global average temperature is not a theory; it is an observed fact. If anyone wishes to challenge this it is up to them to find a regression model which has uncorrelated residuals.
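    For concreteness, the Ljung-Box test described above can be sketched in a few lines. It is applied here to synthetic AR(1) data, not to HadCRUT; the point is only the mechanics of the test.

```python
import numpy as np

# Ljung-Box statistic: Q = n(n+2) * sum_{k=1..m} acf(k)^2 / (n-k),
# compared against a chi-squared critical value. If Q exceeds it, the
# residuals are self-correlated and the model is rejected.
rng = np.random.default_rng(1)

def ljung_box_q(resid, m=10):
    resid = resid - resid.mean()
    n = resid.size
    denom = np.dot(resid, resid)
    q = 0.0
    for k in range(1, m + 1):
        rho_k = np.dot(resid[:-k], resid[k:]) / denom   # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# Strongly autocorrelated "temperature-like" series: AR(1) with phi = 0.8
n, phi = 500, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

t_axis = np.arange(n)
resid = x - np.polyval(np.polyfit(t_axis, x, 1), t_axis)  # linear-fit residuals

chi2_95_df10 = 18.307      # 95% critical value, chi-squared with 10 d.o.f.
Q = ljung_box_q(resid)
print(Q > chi2_95_df10)    # True: residuals are self-correlated, model rejected
```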

    ARMA models have the added advantage of providing a variance spectral density estimate of the data or “power spectrum”. The spectrum of the HadCRUT data exhibited a pole near zero making it a “red” or “random walk” spectrum. It is well known that such spectra give rise to spurious regression when time is used as the independent variable. The time series does not have to be a “true” random walk with the pole equal to zero for this to happen. Spurious regression is likely whenever the low-pass cutoff period is longer than the sample length. Clearly global average temperature is such a spurious regression. It is a red noise fluctuation with no significant trend.
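    The “red” spectrum referred to here is easy to illustrate with the theoretical spectral density of the simplest autoregressive process; this is a generic textbook formula, not the spectrum fitted in the draft paper.

```python
import numpy as np

# Theoretical spectral density of an AR(1) process:
#   S(f) = sigma^2 / |1 - phi * exp(-2*pi*i*f)|^2.
# As phi -> 1 the pole approaches zero frequency and the spectrum becomes
# random-walk-like, concentrating variance at the longest periods.
def ar1_spectrum(f, phi, sigma2=1.0):
    return sigma2 / np.abs(1.0 - phi * np.exp(-2j * np.pi * f)) ** 2

f = np.linspace(0.01, 0.5, 50)       # frequency in cycles per time step
s_red = ar1_spectrum(f, phi=0.9)

# A red spectrum falls monotonically with frequency, so most of the
# variance sits at the longest resolvable periods.
print(bool(np.all(np.diff(s_red) < 0)))
```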

    My draft paper on this topic can be found at http://blackjay.net/wp-content/uploads/2016/08/MS.pdf . I am still trying to get it published in a peer-reviewed journal. I suspect editors and reviewers are unable to get their heads around the idea that the supposed upward trend in global temperature is no more than an artifact of the primitive statistical methods used by climate scientists.

    • John Reid,

      Tiny typo. Your audio amplifier has a bass control, rather than a base control.

      Cheers.

    • Why is this not its own post? There are enough legitimate issues raised that a full frontal discussion is warranted, especially from some of the usual suspects.
      I’m afraid these comments will not get the attention they deserve.
      John, have you contacted Judith about having this as a post?

      • Thank you for making your comment. I probably would not have read it if you had not.

        I second your suggestion. I think this would make an interesting post. It is way over my head, but the discussion would be interesting and I’d learn a lot from that.

        I’d urge John Reid to offer a blog post to Judith.

        It may also get some publicity so the reviewers of the submitted paper are more inclined to publish it.

      • Thank you cerescokid and Peter Lang. I would be more than happy to do that but I don’t know how to go about it.

      • I assume Judith has an email where she can be reached. If other denizens are reading this and have sent emails to Judith could you help John out? Thanks.

    • John,
      “but seem to have passed by the little cottage industry of climate science”

      Posts at this blog do not represent the pinnacle of climate science. The idea of using autoregressive models is quite commonplace in climate science.

      ” Instead of regression, an auotregressive moving average (ARMA) model is used. When such a model was fitted to the HadCRUT data, the residuals were indeed unselfcorrelated and the model could not be rejected. Such a model is stochastic in contrast to the rejected deterministic regression model. “

      I use an AR(1) model routinely to determine significance of trends. For HADCRUT, you can find that set out here. You can interrogate significance levels for various periods. Taking account of monthly autocorrelations certainly means you need longer periods to achieve significance, but it is not true at all to say that there are no such trends.

      Tamino prefers an ARMA(1,1) model. I disagree; I think he is fitting a particular oscillation rather than the random component. I have set out the general arguments, with spectra and ACFs, in a series of posts. I’d give a link, but I think here two links send you into moderation, so I’ll follow with a link to several of my blog posts, which also go into Quenouille methods for doing the modified regression.
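      A minimal sketch of a standard AR(1) adjustment of trend uncertainty, in the spirit of the methods mentioned here (the exact Quenouille implementation on Moyhu may differ): the OLS standard error of the slope is inflated by sqrt((1 + rho)/(1 - rho)), where rho is the lag-1 autocorrelation of the residuals. The data below are synthetic.

```python
import numpy as np

# Small trend buried in AR(1) red noise, then an autocorrelation-adjusted
# standard error for the OLS trend estimate.
rng = np.random.default_rng(2)

n = 240                                   # e.g. 20 years of monthly anomalies
t = np.arange(n)
noise = np.zeros(n)
for i in range(1, n):                     # AR(1) noise with phi = 0.6
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0, 0.1)
y = 0.0015 * t + noise                    # weak trend + red noise

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 residual autocorrelation

# Naive OLS standard error of the slope, then the adjusted one
se_ols = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
se_adj = se_ols * np.sqrt((1 + rho) / (1 - rho))

print(se_adj > se_ols)    # True whenever residuals are positively correlated
```

The practical effect is exactly what Nick describes: with positively autocorrelated residuals the adjusted uncertainty is wider, so longer periods are needed before a trend reaches significance.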

      • Links to various Moyhu papers on regression, trends, autocorrelation etc are here.

      • Nick I have looked at your Moyhu blog and I am not impressed. In fact it supports my statement about “cottage industry”. You seem to know all the right buzz words “Quenouille”,”Toeplitz”,”significance levels” and so on but you don’t seem to have got the hang of statistical inference as a method for gaining insight into underlying physical phenomena. There is nothing in any of your diagrams that convinces me that I am looking at anything more than short sequences of random noise.

        “I use an AR(1) model routinely to determine significance of trends.”

        So what? Why not use an AR(37) model or an ARMA(17,15) model? The important thing is not which model you use but whether your model actually fits the data. If the model does not fit the data, if the residuals are correlated, then the significance levels are meaningless.


      • John,
        I’ve read your draft paper, too, which is titled “There is no significant trend in global average temperature. “
        The claimed basis for that is (Sec 5.1) a fit for the ARMA(1,2) stochastic model, with four parameters. You quote a std err for the component c, and apply a test as if it had a univariate normal distribution. But it doesn’t, and you can’t do this. It is a multivariate distribution over 4 parameters, and what you call the std err of c is presumably the diagonal term of the covariance matrix. You can’t test this in isolation. It just isn’t univariate normal.

        “The important thing is not which model you use but whether your model actually fits the data. “
        By fits you mean that the Ljung-Box or whatever test fails to reject. And you accept your ARMA(1,2) model because it was not rejected. But there is no uniqueness there. There is a vast range of models which would have survived the test. And of varying power regarding the trend. Choosing just one of them and showing insignificant trend proves nothing, even if the test were done competently.

        I do look in some detail at the fit of AR(1), and Tamino’s corresponding ARMA(1,1), in this post. I show how the ACFs compare, and where each fits better – that is my basis for using AR(1). Having a test fail to reject is no use here – you need to see which is the better fit.

      • John Reid

        Your paper would be considerably improved by using some graphical examples. It would then be very interesting to see this as a guest post here. I am sure other denizens would have alerted Judith by now.

        Although I rarely agree with Nick Stokes I do have a high regard for him, so watching a debate between you and others would be most interesting.

        Tonyb

      • Yes, I did assume that the drift term has a marginal univariate normal distribution in a similar way to the regression coefficients when considered as random variables. Perhaps that is not justified. I will have to check it out. However I would be surprised if the drift term can be shown to be significantly different from zero.

        “By fits you mean that the Ljung-Box or whatever test fails to reject.” I also used the Breusch-Godfrey test which gave similar results. The LB test is very strong. Not all ARMA residuals pass it. For example I could not find an ARMA model for annual sunspot numbers which passes the LB test. An ARMA(1,1) model for HadCRUT fails both the Breusch-Godfrey and the Ljung-Box test.

        At least I do go to the trouble of setting up null hypotheses and testing them. I have not seen you do this once. It is a bit rich being accused of “test shopping” by someone who does no rigorous testing at all.

      • Steven Mosher

        “Nick I have looked at your Moyhu blog and I am not impressed. ”

        Looks like Mr. Reid has zero clue then

      • “At least I do go to the trouble of setting up null hypotheses and testing them”
        But the test is pointless here. You are testing the null hypothesis that the residuals are uncorrelated. Test results are useful if you can reject the null hypothesis at 95%, or whatever. But here you seem to be wanting to verify the null hypothesis, and you say that it failed to reject. But all that means is that your observation is not in the 5% tail. There is at least a 5% chance that the results could have arisen if the null hypothesis were true. That says nothing against the null hypothesis, but gives very little confidence. You say
        “None of the probabilities listed in Table 4 lie below the critical value of 0.05 and so there is no reason to reject the null hypothesis that the non-zero value of the ACF are due entirely to chance. The ARMA(1,2) model is a very good fit to the HadCRUT4 time series.”
        The conclusion is false. You have merely shown that the null hypothesis is possible, not that it is likely. For a rigorous result, you need to test and reject alternative hypotheses.

      • Matthew Marler

        John Reid: At least I do go to the trouble of setting up null hypotheses and testing them. I have not seen you do this once. It is a bit rich being accused of “test shopping” by someone who does no rigorous testing at all.

        The interchange between you and Nick Stokes has been interesting. You have shown that the data can be decomposed into a process that is autocorrelated (ARMA, with estimated parameters) plus a process that is iid normal. Post hoc, you have then proposed that the obtained decomposition is the “appropriate” null hypothesis for testing the presence of non-stochastic trends. For every ARMA model, there is a mathematically equivalent spectral decomposition (this is presented in Wayne Fuller’s book on time series analysis, and probably in many others). Thus you have shown that the data can be represented as a Fourier series or trig polynomial or harmonic regression, which has already been shown by Scafetta and many others. Their claim is that the obtained periods represent true periodic processes, mathematically equivalent to your claim that the obtained ARMA coefficients represent a true random walk (or diffusion process).

        There is, unfortunately, no good way to choose the “appropriate” null hypothesis post hoc. Just as you can always find a trend that is “statistically significant” by a bunch of post hoc model searching and model building, you can always find a null hypothesis that renders every trend “not statistically significant”.

        This is not to say that your model is necessarily “wrong”, but there is no reason to believe that it is the “truth” about the temperature process.

      • I have been looking forward to a discussion like this for quite a while. I will address your concerns in the order in which they are listed on the page:

        climatereason 5:38pm: Good idea but what sort of graphical examples?

        Steven Mosher 3:36am: clued in or not, I am “Dr” Reid. I have a PhD in physics.

        Nick Stokes: 5:37am: I compare two methods of analysing the time series, a regression model and an ARMA(1,2) model. The null hypothesis in each case is that the residuals are uncorrelated. The null hypothesis can be rejected in the first case but not in the second. This proves the first method is wrong but does not prove the second method is correct. (Under the scientific method you can never prove something is true. Frequentist statistics encapsulates this concept.) The first method shows an upward trend in the data, the second method shows no significant trend. If we are to choose between the two we must reject the first. There is no significant trend in global average temperature. That is the title of the paper. The onus is on the people who believe that there is a significant trend to demonstrate that. They have failed to do so; their regression model does not work.

        Matthew Marler 12:19pm: Discussion of spectra in terms of “trig polynomials” can be very misleading because it requires hidden deterministic assumptions. When we discuss variance spectra we must assume a stochastic process. The spectrum is, by definition, the Fourier transform of the variance/covariance function of the time series. As in all statistical descriptions we distinguish between the population value, the spectrum, which we can never really know, and estimates of it which are derived from the data. One estimate is the periodogram, others are functions of the ARMA coefficients. The periodogram is noisy because each ordinate has a chi-squared distribution with only 2 degrees of freedom however long the series may be. The various dramatic peaks in the periodogram are usually just noise and we can assess this by plotting confidence limits (based on the known chi-squared distribution) above and below the smoother ARMA spectral estimate. Peaks which do exceed the confidence limits are evidence of a deterministic sinusoidal function in the data. Peaks which do not are just noise.

        You say “Just as you can always find a trend that is “statistically significant” by a bunch of post hoc model searching and model building, you can always find a null hypothesis that renders every trend ‘not statistically significant'”. No you can’t – not if you test the residuals of such models as I have described.

        You say: “Their claim is that the obtained periods represent true periodic processes, mathematically equivalent to your claim that the obtained ARMA coefficients represent a true random walk (or diffusion process.)” They are not mathematically equivalent. A random walk is not usually the output of a diffusion process. It is usually the output of an integrating or summing process; see the Hasselmann reference in my paper.

        I am not claiming my model is true, I am claiming that the regression model is false and that the apparent trend is spurious and the outcome of a red-noise process similar to that described by Nelson and Kang.

        Steven Mosher 3:41am: Alright then, I really chose HadCRUT because it is the most respectable. You still haven’t explained why temperature cannot be a random walk. As it happens I am not claiming that it is a random walk (which is non-stationary), only that it is a centrally-biased random walk (which is stationary). We would expect something of the sort on theoretical grounds – see Hasselmann.

        Peter Lang 3:44am: Thanks Peter I shall get in touch.

        kenfritsch 12:59pm: “I have found little to no convincing evidence of semi periodical reoccurring features in these series and have best resolved these series as a non linear trend with red and white noise. The red noise can be modeled with an ARMA model and leave a residuals series that shows no dependence in a Box test.” Good. We are on the same page.

        “The residuals from a non linear trend are more readily fit with an ARMA model that produces Box test independence”. Agreed.
        “… without resorting to high orders of an AR model where if allowed an unlimited order almost any residual can be fit.” Agreed. I started out like that too.

        “If a linear or near linear response of global and large global regions temperatures to GHG levels in the atmosphere can be shown convincingly from theory then my model thinking and assumptions would have to change.” I strongly disagree. Science is empirical. If the theory cannot account for the observations then the theory is wrong.

      • “The first method shows an upward trend in the data, the second method shows no significant trend.”

        As I have said below, here, the second model makes no sense, and testing c for zero value does not test for zero trend. c can be assigned any value merely by adding a constant offset to temperature (e.g. switching from C to K).

      • John Reid: They are not mathematically equivalent.

        ARMA models and Fourier models are interconvertible, as I said. That’s proved in Fuller’s book and in lots of places. They are mathematically equivalent, so a choice of which to use for modeling has to depend on something other than the model fits.

        No you can’t – not if you test the residuals of such models as I have described.

        As you chose. Other choices are possible. Had your choice not given the result that you desired, you had alternative choices available. Your choice of residual tests is as much a potential victim of multiplicity as the choice of models in the analysis of much-analyzed data.

        I am glad that you don’t claim that your model is true. Without that caveat, a reader might think that you believe you chose the correct test of residuals.

    • Temps cannot be a random walk.
      Try again.
      This time violate no physics.

    • For Judith’s email address go to the About page. You’ll find it on her web site and on her CV.

    • I have been fast reading through this thread so I might have missed some very cogent points, but I have to relate my experiences in modeling the instrumental temperature data here with the hopes that a poster can perhaps enlighten me and answer some of the questions that have arisen from my analyses.

      My modeling has to make certain assumptions about what is deterministic trend, red and white noise, and semi-cyclical reoccurring features of the series. I have used several methods in attempting to objectively separate the components in the temperature time series and found that all required me to make choices of parameters in the method, and those choices have affected the final decomposition and modeling of the series. I have used Singular Spectrum Analysis, Empirical Mode Decomposition and a number of other smoothing methods including linear regression with and without segmented break points. I have found little to no convincing evidence of semi periodical reoccurring features in these series and have best resolved these series as a non linear trend with red and white noise. The red noise can be modeled with an ARMA model and leave a residuals series that shows no dependence in a Box test.

      I was concerned about the residuals that remained as shown in the introduction to this thread using a linear regression model – as it leaves the model used in doubt without further component analysis – and that is what motivated this post. I suppose with most temperature/forcing relationships I see that using a linear deterministic trend might be anticipated for a temperature time series with stochastic red and white noise and perhaps some semi periodical features thrown into the mix. I do realize that the decomposition and modeling of these temperature series requires some strong theoretical and/or independent empirical evidence in consideration for the assumptions made in the series decomposition and modeling. Without assigning semi- cyclical features to the series it turns out that model has a deterministic non linear trend that has a natural part and a part contributed by increasing GHG levels in the atmosphere. The residuals from a non linear trend are more readily fit with an ARMA model that produces Box test independence without resorting to high orders of an AR model where if allowed an unlimited order almost any residual can be fit.

      If a linear or near linear response of global and large global regions temperatures to GHG levels in the atmosphere can be shown convincingly from theory then my model thinking and assumptions would have to change. The question then would be whether the residuals are handled with an ARMA model or a combined ARMA and semi-cyclical model. I think that a linear regression approach is often taken in these analyses, without further component analysis, as a matter of convenience and not from a strict theoretical basis. I have heard Isaac Held express a strong opinion, as I recall, about anthropogenic, or at least GHG, forcing producing linear trends.
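      The decomposition described above (non-linear trend plus red noise that an ARMA model can whiten) can be sketched in miniature on synthetic data; the quadratic trend and the simple Yule-Walker AR(1) fit below are stand-ins for the more elaborate methods mentioned, not the actual analysis.

```python
import numpy as np

# Toy decomposition: non-linear trend + AR(1) red noise, then remove the
# trend and whiten the residuals with an estimated AR(1) coefficient.
rng = np.random.default_rng(3)

n = 500
t = np.arange(n)
trend = 1e-5 * (t - 40) ** 2              # toy non-linear (quadratic) trend
noise = np.zeros(n)
for i in range(1, n):                     # AR(1) red noise, phi = 0.5
    noise[i] = 0.5 * noise[i - 1] + rng.normal(0, 0.1)
y = trend + noise

detrended = y - np.polyval(np.polyfit(t, y, 2), t)    # remove quadratic trend
phi_hat = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]  # Yule-Walker AR(1)
white = detrended[1:] - phi_hat * detrended[:-1]      # AR(1) innovations

# The innovations are far less autocorrelated than the raw residuals,
# which is what a Box-type test on the ARMA residuals would confirm.
r_white = np.corrcoef(white[:-1], white[1:])[0, 1]
print(abs(r_white) < abs(phi_hat))
```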

      Nic Lewis, if you are still reading at this thread I will tell you that I have a post at your Abram thread at CA in moderation that concerns some of the issues of trends that I have discussed in this post.

  20. Frank Bosse wrote:
    “In a blogpost from May 2016 I did some simple investigations about the Transient Climate Response (TCR) as it’s observed. The starting point was the record of Cowtan/Way (C/W) and the Forcings due to greenhouse gases (GHG), land use and so on as they were described in IPCC AR5”

    It is impossible to calculate TCR or ECS from historical temperature data, because there is insufficient data on aerosols in the 20th century — whose emissions need to be known as a function of latitude, because that determines the angle at which they reflect sunlight.

    Period.

  21. Does anyone know of a current paper/link summary/synopsis of all the major TCR estimates (e.g. ranges, probabilities)?

    For example, a table comparing say someone like Schmidt’s low end estimate to the Lewis/Curry TCR estimate.

    Thanks.

  22. Fifteen recent analyses of ECS compared with the ECS used by IWG to estimate SCC in Figure 1 here: https://object.cato.org/sites/cato.org/files/pubs/pdf/pa801.pdf

  23. The AMO has to be one of the biggest teases ever.

    The PDO, the blue line, is heavily influencing the global SST and GMST and a baked turkey called the AMO. The AMO is simply along for the ride. It is not influencing anything; it is being influenced.

    What this means is, if the PDO is influencing the GMST, then it had a pronounced cooling influence on the GMST from around 1985 until around 2013, when the PDO stopped dragging down the GMST. One clue, once the PDO stopped dragging down the GMST, the GMST shot up like a rocket… free, free at last. It has soared.

    TCR is over 2 ℃; ECS is likely higher end … 3.5 ℃ to 4.5 ℃. Why? Because atmospheric CO2 had to overcome a pronounced downward natural pull on the global mean surface temperature from ~1985 until ~2013. ACO2 has muscles, and they are getting bigger. Natural variation, the aging champ, is aging and beat up… finished, and that’s the 21st century’s future story.

    We are in a heatwave.

    • We are in the second coldest ice age in the past half billion years!

    • JCH, “Because atmospheric CO2 had to overcome a pronounced downward natural pull on the global mean surface temperature from ~1985 until ~2013”: if this were the case I would expect quite different behavior of the residuals relative to the forcings. There we see during 1990…2013 an additional warming, not a cooling as one would expect if the PDO cooled down naturally.

      • As far as I am concerned, Tsonis has added a single important thing; it’s when trends change direction that matters. The PDO changed directions around 83-90; the GHE did not. At that point the PDO progressively contributed less warming until 2013; ACO2 had to overcome that lessening of natural warming, which it did until around 2006; hence, the so-called pause/hiatus.

        The AMO did not do anything; it neither warms nor cools the GMST in any meaningful way; it’s a hitchhiker… a complete fraud of an ocean cycle… a story of a con artist cycle and its victims.

      • At this stage of the game a change in trend has been downgraded to simple divergence, according to a very smart man.

      • You have an amazing ability to focus on that which does not matter at all. In the 20th century there was a single divergence between the direction of the PDO and the direction of the GMST. Focus on that divergence. The other divergence is absolutely a content-free waste of time.

      • Now at least you are in the Ball Park.

      • JCH: You say “The AMO did not do anything; it neither warms nor cools the GMST in any meaningful way; it’s a hitchhiker” with some remarkable sureness. Anyway, there are some papers which do not agree. See this one: http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-16-0443.1. From the conclusions:
        “Our results suggest considerable importance for the AMOC, as a pacemaker for multidecadal–scale global temperature changes”
        If you agree that the AMO is a fingerprint of the AMOC as stated in many studies and accept Isaac Held as an expert in climate science and you read this:”But AMOC warms the climate on average. You might think that a circulation transporting heat from the southern to northern hemisphere would warm the north and cool the south more or less equally, but because of the asymmetry of the land-ocean configuration, and feedback from northern ice and snow among other things, the northern warming is much larger, resulting in global mean warming with increasing AMOC” ( see https://www.gfdl.noaa.gov/blog_held/64-disequilibrium-and-the-amoc/ ) … how can you be so sure that the AMO(C) is only a hitch-hiker of the GMST? I think your “analyses” are only statistical homework without any physical meaning.

      • JCH

        ‘At that point the PDO progressively contributed less warming until 2013; ACO2 had to overcome that lessening of natural warming, which it did until around 2006; hence, the so-called pause/hiatus.”

        I don’t think it’s sensible to talk about the PDO or AMO or El Nino as ADDING warming or not adding warming.

        They are merely patterns of warming, expressions of warming, not causes

        Simple thought experiment. I heat a pot of water. As I watch the water warm with, say, an IR device, I notice a pattern, call it the center blob, where the center warms faster than the rest; then I notice a shift to another pattern, the outer blob; and so on. These patterns are not the cause of warming: they are the warming itself expressed as a spatial pattern.

        The cause of the warming is what it ever was: the flame.

      • The PDO is a proxy for the EB of the Pacific ocean. It’s a significant percentage of the earth’s ocean surface… and an area of the ocean where a great deal of upwelling can take place, or not take place. EBU in the Eastern Pacific cools the GMST… sort of like leaving the front door open in a January blizzard in North Dakota.

        England’s anomalous wind, the kamikaze, looked like this in mid-September 2010:

        “There we see during 1990…2013 an additional warming, not a cooling as one would expect if the PDO cooled down naturally. …”

        Yes, there is one divergence… the defeat of an old fighter. But it was still a fight. It’s not the AMO.

      • SM – I fully agree with Wunsch when he says the oceans never forget.

  24. The hottest places on Earth are the driest – that is, lacking H2O, supposedly the most important GHG.

    How does that fit into the graphs?

    I also point out that according to the Australian Bureau of Meteorology, reliable temperature records didn’t commence until 1910.

    Good luck with all the correlations of assumption against estimate. It means nothing without a scientific basis.

    GHGs heat nothing. The Earth is mostly molten rock, and has been cooling for four and a half billion years.

    Cheers.

    • “The hottest places on Earth are the driest – that is, lacking H2O, supposedly the most important GHG.”

      Wrong. The Dry Valleys of Antarctica are the world’s driest spot.

  25. Frank says of Tamino’s paper:
    “In contrast to the statement of Tamino that “there is a steady warming since 1976” with almost no variability,…”
    This is completely wrong, and is not at all what Tamino says.
    Tamino describes ..
    – variability with the trend removed
    – trend with the variability removed
    It is in the context of the second exercise that he uses the phrase Frank quotes.
    I trust Frank will correct this blog post by removing this error he has made.

    • There was no error: the quoted words were “there is a steady warming since 1976”, exactly as in Tamino’s text. “…with almost no variability” was not a quotation (no quote marks) but an interpretation of the words “steady warming”.

  26. ristvan,
    You wrote: “I looked at your stuff, and am quite certain it is incorrect.” Actually, my “stuff” is observations within the published CERES EBAF Ed2.7 and Ed2.8 data tables, and within the latest published energy balance diagrams. I simply observed
    – internal patterns and relationships,
    – constraints between the surface and TOA energy budgets,
    – a strong connection between the single layer total IR-opaque cloud area fraction (CERES SYN1deg data, time period 2000 – 2016) and the all-sky transfer function,
    – an equilibrium position for the Bond-albedo at 1 – sin 45°.

    Please point out which of them you are quite certain are incorrect.

    You wrote further that “Fail on Bode f equivalent model”.
    No. I only say that the temperature loop is missing from the feedback chain.
    “Lukewarmunists”, as you call yourselves, think of it like this:
    Add CO2 → increased atmospheric LW absorption → temperature adjustment → negative water vapor / cloud feedback.
    But the energy flow observation suggests this:
    Add CO2 → increased atmospheric LW absorption → direct radiative constraint from the E(SRF, clear) = 2OLR(clear) geometric requirement → immediate (instantaneous?) negative radiative water vapor feedback.

    Okay, instantaneous? Within a day? Within one hydrologic cycle (10 days)? One year? Who knows (I almost said: who cares.) The result is important: zero net GHE increase, OLR(all) = 239.4 W/m2, ULW = 399 W/m2, f(all) = 0.6, g(all) = 0.4.
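    As a minimal arithmetic check (not an endorsement of the underlying model), the flux values quoted in this comment are internally consistent with a transfer function f = OLR/ULW and a greenhouse factor g = 1 − f:

```python
# Internal-consistency check of the fluxes quoted in the comment above.
# OLR(all) and ULW are the commenter's numbers; f = OLR/ULW is the
# "transfer function" and g = 1 - f the normalized greenhouse factor.
OLR_all = 239.4  # W/m^2, outgoing longwave at the top of the atmosphere
ULW = 399.0      # W/m^2, upward longwave at the surface

f_all = OLR_all / ULW
g_all = 1.0 - f_all
print(f"f(all) = {f_all:.2f}, g(all) = {g_all:.2f}")  # f(all) = 0.60, g(all) = 0.40
```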

  27. There appears to be someone who thinks that because deserts are hot, the GHE does not exist.
    HeyHo.
    Would that person care to ponder why it is that convection cools the planet’s surface – and why it is that convection is suppressed in deserts?
    He might start by going to a local library and studying a text book on meteorology.
    He might also consider that the lack of H2O means evaporative cooling cannot occur.

    • Tony Banton,

      You are correct. The GHE doesn’t exist.

      Plenty of convection exists over deserts. Or do you deny that things like sandstorms or thermals exist?

      It is likely that evaporative cooling doesn’t occur when there is nothing to evaporate. Do you deny this as well?

      Deserts are the hottest places on Earth. Little H2O. H2O is supposed to be the most important GHG. Maybe GHGs don’t make things hotter?

      What a surprise! No GHE.

      You can’t even provide a falsifiable hypothesis in support of your bizarre assertion that CO2 can increase the temperature of anything, let alone a planet!

      Keep trying!

      Cheers.

      • Deserts and dry climates have much more cooling at night than moist ones. MF wonders why.

      • Jim D,

        Don’t be stupid. I know why dry areas cool faster at night.

        For precisely the same reason they get hotter during the day.

        Deny all you like, the common factor is a distinct lack of that most important GHG, H2O.

        You haven’t even a falsifiable hypothesis in support of your contention that conventional physics is in error, and that CO2 has miraculous planet heating properties!

        Go for it, Jim D. I say that arid tropical deserts heat up faster during the day, and cool faster at night, for the same physical reasons.

        You obviously deny this. I’d be grateful if you could provide me some facts to set me straight. Maybe I’m wrong.

        Cheers

      • Jim D | October 28, 2016 at 5:11 am |
        Deserts and dry climates have much more cooling at night than moist ones. MF wonders why.

        For the same reason as asphalt.

        The specific heat of asphalt or sand is around 0.8–0.9 kJ/(kg·°C). Water has a specific heat of around 4.19 kJ/(kg·°C).

        The lack of latent heat loss means dry sand and asphalt reach a peak temperature 20°C to 35°C higher during the day than moist ground or vegetation. The high temperature turns the subsurface into a thermal mass that must be discharged at night. Given the higher temperature at twilight and lower specific heat (than moist ground or vegetation) the surface has to cool more.

        Thought this was obvious.
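        The argument above can be turned into a toy calculation: a hotter, lower-heat-capacity dry surface radiates more and has less stored heat per degree, so it loses more degrees overnight. All parameter values below (layer depth, density, back-radiation, emissivity of 1) are illustrative assumptions, not data:

```python
# Toy nocturnal cooling comparison: dry sand (hot start, low specific
# heat) vs moist soil (cooler start, high specific heat). Evaporation,
# conduction and changing sky conditions are deliberately ignored.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
BACK_RAD = 300.0  # assumed downwelling longwave from the sky, W/m^2
DEPTH = 0.1       # assumed thermally active soil layer, m
RHO = 1600.0      # assumed soil density, kg/m^3

def overnight_drop(t0_kelvin, c_j_per_kg_k, hours=10, dt=60.0):
    """Euler-integrate dT/dt = -(sigma*T^4 - back_rad) / (rho*c*depth)."""
    heat_cap = RHO * c_j_per_kg_k * DEPTH  # areal heat capacity, J/(m^2 K)
    t = t0_kelvin
    for _ in range(int(hours * 3600 / dt)):
        net_loss = SIGMA * t**4 - BACK_RAD  # net radiative loss, W/m^2
        t -= net_loss / heat_cap * dt
    return t0_kelvin - t

dry = overnight_drop(323.15, 850.0)     # ~50 C start, c ~0.85 kJ/(kg K)
moist = overnight_drop(303.15, 1500.0)  # ~30 C start, c ~1.5 kJ/(kg K)
print(f"dry surface cools {dry:.1f} K, moist surface cools {moist:.1f} K")
```

        Even in this crude sketch the dry surface sheds considerably more degrees than the moist one, which is the qualitative point being made.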

      • “Plenty of convection exists over deserts. Or do you deny that things like sandstorms or thermals exist?”

        Deep convection.
        Sandstorms are not a permanent feature, and dust devils will only lift to a few hundred feet at best. All that is happening there is that the hot air is being stirred around, NOT taken aloft.
        Same thing happens when you get a hot spell locally over mid-latitudes. The phenomenon is called an Anticyclone. And what is the single outstanding meteorological feature of a High? Descending air. And what does descending air do? Warms and dries. So what happens when you have warm above cool (relative to a DALR)?
        Heat builds up at the surface and can’t convect beyond a few thousand feet and then only if it has reached a significantly high temp.
        It is in fact the TRUE GHE… due to the suppression of convection, with the addition of zero evaporative cooling and thermally insulating sand and/or dry soils.
        Note I said ADDITION – NOT caused by it.
        Do try to think along multiple lines my friend.
        Same as saying the GHE is the cause of all local warming. No. Meteorology comes into it. Why that does not scan with you is beyond me (unless deliberate of course).
        Critical and logical thinking is required.
        That is why I consider you something that begins with a T on here.
        And why I don’t consider entering your rabbit-hole a constructive exercise, as you do true sceptics a disservice and do nothing to advance the cause of science, due to your bizarre Sky-Dragon-slaying MO.
        I will tell you, you are wrong, in this case with the benefit of a career with the UKMO and move on. You may or may not educate yourself on the subject, but somehow I suspect not. You wear your ignorance as a badge, proud of it.
        Nothing more telling for others to see.

        As you say – keep trying (!)
        If nothing else it is instructive of the psychology of stupidity.
        Or some other psychopathy.

      • “Thought this was obvious”

        It is to any averagely educated person PA.
        Or to someone who isn’t a T.

        May I add that desert soils are dry and the delta T with depth is large, with an insulating effect confining most heat in the topmost layers, which quickly cool down at night and then insulate against the heat flux from below. The same thing happens to a marked degree over snow, especially fresh snow: a lot of trapped air, making for a very efficient radiating surface.

        Anticyclones have a very dry Troposphere and so LWIR has a clearer exit to space.
        Calm winds cause a shallow surface inversion, with the cold layer only 100 ft or so deep at most – this rapidly warms out soon after dawn.

      • Tony Banton,

        You really don’t have a clue, do you?

        No GHE. No falsifiable hypothesis. Nothing.

        The planet warming properties of CO2? None at all.

        Less GHGs – faster heating, faster cooling. As on the Moon.

        Deny all you like, it’s a fact, nevertheless.

        Cheers.

      • Less GHGs don’t affect heating (which happens to be solar, by the way) but they do affect cooling (which is IR), so you are only half right on that, MF.

      • PA, you can put a dry sand pit under a moist tropical sky and it won’t cool any faster than the moist soil next to it. The difference is from the moisture (or lack of it) in the air. If only the skeptics knew some physics, understanding GHG effects would be so much easier.

      • Jim D,

        Dry sand under a warm tropical sky? Really?

        Dry sand under a dry tropical sky seems about right.

        Warm has nothing to do with dry. Dry is dry, hot or cold.

        I wonder not. Basic physics. You agree, I presume.

        Cheers.

      • MF, you seem confused. Please keep up.

      • Jim D,

        I’m not confused at all. Places with less GHGS heat up faster, cool faster.

        Deny away. Your denial won’t change facts.

        Do you really believe arid tropical deserts heat less during the day, and cool less during the night than more humid locations?

        I’m sure the desert dwellers would be glad to hear it!

        Just for fun, you might also deny the coldest areas are also deficient in GHGs – Antarctica, say. GHG heating? Doesn’t seem so, does it?

        Cheers.

      • Explain why places with less GHGs heat up faster, then. How do GHGs affect solar energy at the surface? I thought you used to say GHGs had no effect on anything, but now you say they affect both solar and IR fluxes, so when did you change your mind and what made you do this, or is this just another part of your confusion?

      • Jim D,

        As Tyndall showed, the more energy that reaches the object, the hotter it gets. GHGs block energy. As do clouds, aerosols, sunshades and so on.

        Less GHGs, more energy reaches the surface. According to NASA, about 30% more.

        Cloudless arid tropical deserts have the highest surface temperatures.

        I merely said GHGs heat nothing.

        There is no GHE. I suspect that you may be changing your opinion, as new facts become apparent. As would any rational person.

        However, I understand that faith is preferred to fact, in many cases. If it brings you contentment, why not?

        Cheers.

      • Correct: GHGs have no effect on solar energy, but do block IR. Hence their name. Heating by the sun is unaffected, cooling at night is slowed down. I think you finally have it.

      • Jim D | October 28, 2016 at 7:55 am |
        PA, you can put a dry sand pit under a moist tropical sky and it won’t cool any faster than the moist soil next to it.

        This statement is absurd. Moist soil has a specific heat of around 1.5 kJ/(kg·°C).

        Dry soil, because of no latent heat loss, gets about 20°C warmer. At the end of a summer day the 30°C moist soil is radiating 479 W/m2, while the dry soil at 50°C is radiating 618 W/m2 and has only 2/3 the heat capacity (let alone TB’s point that dry sand is much less thermally conductive than moist sand: roughly 0.15–0.25 vs 0.25–2 W/(m·K)).

        Claiming dry sand cools at the same rate as moist dirt or vegetation is just ridiculous.

        Dew forms in the Negev desert 200 days a year. Obviously there is some atmospheric moisture, and the ground is very cold in the morning.
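        The 479 and 618 W/m2 figures quoted above are just the Stefan-Boltzmann fluxes for 30 °C and 50 °C surfaces, with emissivity taken as 1:

```python
# Reproduce the blackbody fluxes quoted in the comment (emissivity ~1).
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

fluxes = {}
for celsius in (30.0, 50.0):
    t_kelvin = celsius + 273.15
    fluxes[celsius] = SIGMA * t_kelvin**4
    print(f"{celsius:.0f} C surface radiates {fluxes[celsius]:.0f} W/m^2")
# -> 479 W/m^2 at 30 C and 618 W/m^2 at 50 C, matching the comment
```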

      • PA, the limiting factor is the sky temperature. Sure, water cools slower than land, but when the air is moist both cool less. I am not sure why you don’t agree with the important effect of moister air on cooling rates. Don’t you agree that moist tropical air reduces surface cooling rates compared to dry desert air when you put the same sand under both? I think you just misunderstood this point.

  28. Frank Bosse,
    Maybe you could ask Tamino for the separate adjustments for MEI, volcanoes and the sun.
    see:
    http://www.nature.com/ngeo/journal/v7/n3/fig_tab/ngeo2098_F1.html
    https://thsresearch.files.wordpress.com/2016/09/wwww-ths-rr-091716.pdf

  29. “The TCR estimate from observations of ~1.35 is supported by at least 3 independent records: CRU, C/W and Berkeley with a deviation of only around 6%.”

    These 3 ‘independent’ records share the same error of using radiative forcing instead of TOA net radiative flux. Spencer used CERES data and the best fit model gives TCR = 0.8 K

    http://www.drroyspencer.com/2016/10/what-do-16-years-of-ceres-data-tell-us-about-global-climate-sensitivity/

  30. Dr. Strangelove: Please remember the first sentence of Roy’s post: “Short Answer: It all depends upon how you interpret the data.” The scope of this post was an estimate of the TCR considering the adjusted data for the GMST from Tamino’s post. No more and no less…

  31. Nick Stokes | October 28, 2016 at 3:57 am |

    You are correct that these tests have a null hypothesis that the residuals are independent and rejecting it produces the alternative. I think, though, that the Box.test function in R producing p.values of 0.70 or so can be shown by simulations to give strong evidence of independence.

    My concern with what I think I saw for the ARMA(1,0,2) model is that such models, when applied to temperature series, usually run into or close to the unit root problem. I suppose that by not using a first-difference term in the ARMA model we are not talking about a random walk model here, which in my view would not be physical. Of course, a deterministic trend could cause a unit root problem, and that is why the ARMA model needs to be applied to the residuals and not the original series.

    I have linked a paper here that looks at observed temperature series with an approach totally different from the one with which I am familiar. The authors appear to me to accept a unit root and, instead of attributing it to a random walk, claim it is due to a persistent stochastic trend. Their model is claimed to account for human-generated GHG effects on temperature without resorting to a deterministic trend. On my first fast read it is as if they have defined away a deterministic trend in the series, without consideration of the physics involved.

    https://economics.indiana.edu/home/people/yoosoon/files/time-series-analysis-of-global-temperature-distributions-identifying-and-estimating-persistent-features-in-temperature-anomalies.pdf

    • “a Box.test function in R that produces p.values of 0.70 or so can be shown by simulations to give strong evidence of independence”
      The Ljung-Box p-values were much less. But anyway, I don’t think you can say that. It only says that independence is one possibility. Maybe other hypotheses lacking independence could also explain the results. That is why successful statistical testing involves successfully rejecting the null hypothesis, not failing to reject.

      In fact, the use of ARMA here is very confused.

      ARMA models are used for the residuals, not the temperatures. There is a good reason for that. ARMA is used to model something with a combination of ε’s which have zero mean. So the temperature combination would have to have zero mean. But that makes no sense unless a₁=1; temperatures can have an arbitrary offset – eg you can use C or K. If you add a constant d to y, then c has to change. You can make it zero with an appropriate offset, so there is no use testing it for zero trend.

      In fact, the fitting process does produce a near-unit root, but I think that is just an artifact of the fact that no other result makes sense for this model. And as Steven observed above, a random walk, popular in some quarters, is unphysical. It says there is nothing anchoring T to flux values; it can go anywhere, regardless of conservation of energy. And we know that that should not be so, and historically has not been. The seas have not boiled, as they would have done at some stage with a random walk.

      • Failure to reject the null hypothesis of independence, even with high p.values for the Box.test, does not by itself establish from the hypothesis testing that the residuals are independent, but it does in my view allow one to proceed with the model with some assurance that no more modeling is required to account for the residuals. Further confirming tests for a model are QQ plots that allow judgment on the normality of the ARMA residuals. A Shapiro test for normality of the ARMA residuals has a null hypothesis of normality, and thus even a high p.value does not, from hypothesis testing, say that the series is normal, but it again gives good confidence that no more modeling is required. In the case of 2 or more competing models that might appear acceptable, an AIC or BIC evaluation may be in order.

        Nick, I am curious how you proceed in evaluating a model with the intent of producing white noise residuals.
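        For reference, R’s Box.test(type = "Ljung-Box") computes the portmanteau statistic Q = n(n+2) Σ r_k²/(n−k) against a chi-square distribution. A minimal Python equivalent of that workflow (a sketch, not a replacement for the R function) looks like this:

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(x, lags=10, fitdf=0):
    """Ljung-Box portmanteau test, mirroring R's Box.test(type="Ljung-Box").
    fitdf subtracts the number of fitted ARMA parameters from the dof."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.sum(xc**2)
    q = 0.0
    for k in range(1, lags + 1):
        r_k = np.sum(xc[k:] * xc[:-k]) / denom  # lag-k sample autocorrelation
        q += r_k**2 / (n - k)
    q *= n * (n + 2)
    return q, chi2.sf(q, lags - fitdf)  # (statistic, p-value)

rng = np.random.default_rng(0)
q_white, p_white = ljung_box(rng.standard_normal(500))  # white noise
q_trend, p_trend = ljung_box(np.arange(500.0))          # trending series
print(f"white noise: p = {p_white:.2f}; trend: p = {p_trend:.3g}")
```

        For white noise the test typically fails to reject, while for a trending series the p-value is essentially zero – which is why the ARMA model is fitted to residuals rather than to the raw temperatures.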

      • “Nick, I am curious how you proceed in evaluating a model with the intent of producing white noise residuals”
        Well, there are two problems with using the p-value of a failed rejection as a measure. You don’t have a range of whatever measure of dependence (say autocorrelation) you are testing. Just an attempted black/white answer. And you don’t know (or aren’t told here) how powerful the test is. A weak test will hardly ever reject. That doesn’t affirm the null.

        One test for whiteness here is just to fit, say, AR(1), and look at the error range of the AR1 parameter. That tells whether the noise could be white, or if not, how red it could be. Another is to test various measures of non-independence. As a null, could some autocorrelated model account for the results? Then whittle down the correlation postulated.

        The AR1 parameter test is relevant here, because what you really want is the modified uncertainty of trend, and it is that parameter that you use in a Quenouille correction. So that gives error bars on the error bars.
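        The AR(1)-parameter check described above can be sketched as follows, using the usual large-sample standard error for the lag-1 autocorrelation and the Quenouille multiplier. The AR(1) series below is synthetic, with an assumed true coefficient of 0.5, purely for illustration:

```python
import numpy as np

def ar1_check(resid):
    """Estimate the AR(1) coefficient of a residual series, its approximate
    large-sample standard error, and the Quenouille variance multiplier
    (1 + r) / (1 - r) it implies for the trend uncertainty."""
    resid = np.asarray(resid, dtype=float)
    x = resid - resid.mean()
    r = np.sum(x[1:] * x[:-1]) / np.sum(x**2)  # lag-1 autocorrelation
    se = np.sqrt((1.0 - r**2) / len(x))        # approximate std. error
    quenouille = (1.0 + r) / (1.0 - r)         # trend variance multiplier
    return r, se, quenouille

# Synthetic AR(1) series with true coefficient 0.5 (illustrative only).
rng = np.random.default_rng(1)
eps = rng.standard_normal(2000)
y = np.empty_like(eps)
y[0] = eps[0]
for t in range(1, len(eps)):
    y[t] = 0.5 * y[t - 1] + eps[t]

r, se, q = ar1_check(y)
print(f"r = {r:.2f} +/- {2 * se:.2f}, trend variance inflated ~{q:.1f}x")
```

        The error range on r then carries through to the Quenouille-corrected trend uncertainty – the “error bars on the error bars”.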

      • One test for whiteness here is just to fit, say, AR(1), and look at the error range of the AR1 parameter. That tells whether the noise could be white, or if not, how red it could be. Another is to test various measures of non-independence. As a null, could some autocorrelated model account for the results? Then whittle down the correlation postulated

        You have not said whether you would use a Box test for independence to determine whether your residuals are dependent. What about using a QQ plot?

        I do additionally look at the confidence intervals for the ARMA coefficients for the ARMA model I use on the series residuals to render the ARMA residuals white. I am not clear how you are applying the test here and on what series, i.e. the original residual series or the ARMA model residuals. If applied to the ARMA model residuals, whiteness would be implied by, in your example, the AR1 coefficient CIs including zero. If applied to the original residual series, CIs for an AR1 coefficient that included zero would imply that that ARMA model was not a good fit or perhaps the original residuals are white.

        I guess you need to clearly state in some detail a hypothesis test here where rejecting the null hypothesis provides an alternative that the residuals are independent. When you say “As a null, could some autocorrelated model account for the results. Then whittle down the correlation postulated”, I am not clear on what you mean here by results, and results of what. By results do you mean the residuals from the ARMA model? If I assume that is what you are referring to here, since that was the main thrust of the discussion, I would suppose that one would propose an autocorrelated (ARMA) model to account for the ARMA residuals; the null hypothesis would be that the model accounts for the residuals, and rejecting it would lead to the alternative that that specific model does not account for the residuals. That is, however, only one of many models that could be tested. What boundary conditions do you propose to limit the models to be tested?

      • Nick, I should have added to my list of analytical techniques in determining series independence, the use of ACF and PACF plots and statistical significance of dependence. Do you use these tools?

      • “You have not said whether you would use a Box test for independence to determine whether your residuals are dependent.”
        I’ve said I don’t think such a test can determine independence. It may be able to exclude independence, but that is all. Otherwise the hypothesis test is being used backwards (failed to reject), and that can’t work. I said that if the test says independence can’t be ruled out, that is just one of many possibilities. In fact, there is an obvious continuum. If white noise could explain, then what about pale pink? Pinker? Maybe even red? You need to quantify that scale, which Box doesn’t help with.

        As an example often seen on skeptic sites of this misuse of hypothesis testing, I’ll note (from here) that from March 2014 to now HADCRUT shows no significant warming! The CIs on the trend are from -0.3 to 24.0 °C/Cen. There has actually been lots of warming, but because of the way weather might have turned out, and the short period, one can’t rule out a model with zero trend. It’s about as likely as 24 °C/Cen, which is of course very high indeed, and equally can’t be ruled out. Both are fringe – the max likelihood is about 12 °C/Cen. So saying there is no significant trend is really a misleading description of the range.

        As to a QQ plot, I’m not sure how it would be used here. John Reid’s stochastic model applied to temperatures really redefines the notion of residual. Instead of a difference between obs and regression predictor, which could be ARMA correlated, it is now a realisation of the ε’s of the model, which are anyway constructed as if independent (so is it right to then test them for independence?). I’m not even sure how they are obtained for ARMA(1,2), since they are specified by a second order recurrence, with no initial conditions.

      • Kenneth,
        “Do you use these tools?”
        Yes. There is a rather complete ACF analysis in this post and the one that follows (spectral).

      • Nick, thanks much for your replies. To be clear, my interest in this discussion involves looking at the independence of ARMA residuals, not any specific post or yet-to-be-reviewed paper. I have some familiarity with fitting ARMA models and evaluating the residuals. I have not seen an approach where a null hypothesis is used that leads on rejection to the alternative that the residuals are independent. Rather, most often seen is the use of ACF plots (which is an application of the Box test) and AIC or BIC evaluations to give good confidence that an optimum ARMA model has been selected. I like to add the determination of CIs of the model coefficients, and particularly to determine whether a unit root is within the CI range. If unit root problems are not carefully considered, a seemingly excellent-fitting model can often be found by using higher-order MA terms. The paper to which I linked above was willing to settle for a model with unit roots and talked about temperature series without deterministic trends. That is something I do not understand.

        I was disappointed that you did not provide the details of a null hypothesis test where on rejection the alternative leads to accepting the residual series as independent.

      • Ken,
        “I have some familiarity with fitting ARMA models and evaluating the residuals. I have not seen an approach where a null hypothesis is used that leads on rejection to the alternative that the residuals are independent.”

        I haven’t seen a successful one either. That’s why I can’t provide one.

        I am also interested in fitting models. I described here (with code) a way of fitting by Newton-Raphson solving just a quadratic equation in a few variables. And here there is a development of extended Quenouille corrections.

      • Thanks for letting me know that you have also not seen this alternate hypothesis test for independence, which I suppose would be more statistically robust. The heuristic tools I commonly use in showing independence can, I think, work well for my purposes – if all their limitations are considered and understood. I’ll have a look at your modeling proposal.

        By the way, as an aside, that is somewhat related to your work with ARMA modeling of temperature time series, I was surprised in using simulations that the confidence intervals (CIs) for trends derived from a time series with a negative ar coefficient and adjusted for this autocorrelation provide narrower CIs than those for the unadjusted case. I have seen a reasonable case made for this result in a paper but not a formalistic proof. Negative autocorrelations are supposed to be rare for temperature time series, but I have found such series not infrequently in the residuals extracted from series where I allow non linear trends.

      • “the confidence intervals (CIs) for trends derived from a time series with a negative ar coefficient and adjusted for this autocorrelation provide narrower CIs than those for the unadjusted case”
        You’d expect this from the Quenouille correction, which multiplies variance by (1+r)/(1-r). If r is negative, this is less than 1. I was thinking of negative r in considering random walk with drift as differenced OLS. The random term is then εₜ₊₁-εₜ, and you’d expect this to be negative auto correlated. That part-explains why random walk gives much greater spread of drift if you replace that difference by a simple iid.
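        The Quenouille multiplier quoted above, (1 + r)/(1 − r), makes the point directly: for negative lag-1 autocorrelation it falls below 1, so the adjusted trend confidence intervals are narrower than the unadjusted ones, consistent with the simulation result described:

```python
# Quenouille variance multiplier for trend uncertainty under lag-1
# autocorrelation r: below 1 for negative r, above 1 for positive r.
def quenouille(r):
    return (1.0 + r) / (1.0 - r)

for r in (-0.3, 0.0, 0.3):
    print(f"r = {r:+.1f}: variance multiplier = {quenouille(r):.2f}")
# r = -0.3 -> 0.54, r = 0.0 -> 1.00, r = +0.3 -> 1.86
```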

  32. Jim D,

    You wrote –

    “Don’t you agree that moist tropical air reduces surface cooling rates compared to dry desert air when you put the same sand under both.”

    I think you are confusing the issue. The problem is that moist air blocks more inward energy as well as outward energy. EMR is EMR, and if a body absorbs enough energy of any wavelength, its temperature will rise. Conversely, if it absorbs less than it emits, it cools.

    Microwave ovens work on this principle. The wavelengths are much longer than IR, but heat water very efficiently.

    Sunlight is more than 50% infrared anyway, so the bizarre GHE notion that the atmosphere acts as a one way energy trapper, is just silly.

    No GHE. No CO2 or H2O heating. Cooling more slowly does not increase temperatures. The Earth has cooled for four and a half billion years, as one might expect a large ball of molten rock suspended in space, about 150 million km from the Sun, to do.

    That’s life, I guess.

    Cheers.

    • The billions of years cooling crankishness is very disappointing in this day and age. Don’t you even read what the skeptics are saying about GHGs? And it is not beside the point to say that moist nights stay warmer than dry nights. You have no explanation for that observed fact while science does.

      • Jim D,

        And moist days are cooler than dry days – as in the arid tropical deserts. You might not like the fact that the Earth has cooled for four and a half billion years, but it seems to be true.

        If a Warmist ever decides to provide a falsifiable hypothesis about the magical heating properties of CO2, then it will have to include a mechanism to account for the Earth’s demonstrated cooling.

        This might explain why there’s no falsifiable hypothesis to support the nonsensical GHE proposal.

        Rattling on about “science” is pointless. Any fool can describe themselves as a scientist – as you may have noticed.

        Cheers.

      • I gave you a falsifiable hypothesis weeks ago. Move on.

      • Jim D,

        More waffle. You just can’t bring yourself to cut and paste this wondrous hypothesis. Are you scared that the Nobel Committee will hound you unmercifully, trying to thrust honours upon you?

        Don’t forget to include the part that explains four and a half billion years of cooling, prior to the heating.

        Most Warmists deny this fact, so I’m pleased you have overcome this small deficiency. I wait with bated breath.

        On the other hand, maybe you’re going to tell me that matter can be heated. You won’t get a Nobel Prize for that. Sorry, you’re a bit late. Even gases can be heated, and allowed to cool.

        Cheers.

      • Where’s your thermometer record for 4 billion years of cooling. Does that include the snowball earth period? Think.

  33. Every time I see that temperature curve in your first figure I get irritated and my blood pressure rises, because that temperature is a falsified temperature. It has been so since the late nineties; they know about it, and are doing nothing. The false warming starts in 1979 and ends in 1997, an 18-year stretch. The true temperature there should be flat, because the eighties and nineties were a hiatus period. That hiatus is shown in figure 15 of my book “What Warming?” that came out in 2010. The falsification was done by a cooperation of three temperature controllers: GISS, NCDC, and HadCRUT. What points to them is a joint temperature correction they used, in which one operative corrected all three data sets into an identical state and in the process left an identical set of sharp spikes attached to all three publicly available temperature curves. That is actually funny, because two of these spikes sit right on top of the super El Niño of 1998 and make it taller. If the correct temperature were shown, the temperature rise they show after 1980 would not exist, because it would be interrupted by a flat section for the next 18 years. It is obvious that whoever supervised this temperature job either screwed up or cooperated with the perpetrators.

    • Arno, you wrote: “The falsification was done by a cooperation of three temperature controllers: GISS, NCDC, and HadCRUT.”
      Some kind of conspiracy? Indeed, the GISS and NCDC records are not independent, because they use the same SST analyses ( see http://onlinelibrary.wiley.com/doi/10.1002/qj.2949/full ). That’s why the NOAA record was not included in fig. 1 of the post. But the Berkeley record ( ask Mosh for details… ) was not part of your supposed worldwide conspiracy, and its behavior is very similar to HadCRUT4 and C/W. So, sorry… IMO there is no falsification.

  34. “Tamino (aka Grant Foster) released a blogpost with all the data in which he introduced a “sophisticated adjustment” to eliminate the influences of ENSO, solar TSI- changes and volcanic activities on the temperatures from 1951 to the present ( 8/2016) in many records.”
    All the adjustments are made on very dubious estimates of the effects of said volcanoes, sun and ENSO.
    The rationale for making such adjustments is to remove the “excess” heat while keeping global temperature going upwards at the prescribed CO2-doubling rate.
    Hence removing them gives the built-in trend only.
    The role of natural variation is completely ignored.
    Neither method of using guessed adjustments is adequate for truly looking at the TCR, and the period of measurement is far too short.

  35. Pingback: Weekly Climate and Energy News Roundup #246 | Watts Up With That?

  36. I noticed this lack of multidecadal variability in GIStemp record years ago. The reason is in the adjustments they have done. The variability is inconvenient and has to be gradually erased in order to justify the GCM-based public scaremongering about the evilness of CO2.

  37. Pingback: Bulletin des climato-réalistes n°49 | La Terre du Futur

  38. Pingback: Prospects for a Prolonged Slowdown in Global Warming in the Early 21st Century | Climate Etc.

  39. Pingback: Prospects for a Prolonged Slowdown in Global Warming in the Early 21st Century – Enjeux énergies et environnement
