Are Climate Models Overstating Warming?

by Ross McKitrick

A number of authors, including the IPCC, have argued that climate models have systematically overstated the rate of global warming in recent decades. A recent paper by Millar et al. (2017) presented the same finding in a diagram of temperature change versus cumulative carbon emissions since 1870.

The horizontal axis is correlated with time, but by using cumulative CO2 instead of time the authors can draw a policy conclusion. The line with circles along it represents the CMIP5 ensemble mean path outlined by climate models. The vertical dashed line marks the carbon level at which two-thirds of the climate models say that much extra CO2 in the air translates into at least 1.5 °C of warming. The black cross shows the estimated historical cumulative CO2 emissions and the estimated observed warming. Notably, it lies below the model line: the models show more warming than observed, at lower emissions than have occurred. The vertical distance from the cross to the model line indicates that once the models have caught up with observed emissions they will have projected 0.3 °C more warming than has been seen, and will be very close (only seven years away) to the 1.5 °C level, which they associate with 615 GtC. With historical CO2 emissions adding up to 545 GtC, that means we can only emit another 70 GtC, the so-called “carbon budget.”
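The budget arithmetic above can be made explicit. This is a minimal sketch using only the figures quoted in the text (615 GtC at the models' 1.5 °C crossing, 545 GtC emitted to date); the helper function is my own, not from the paper:

```python
# Carbon-budget arithmetic from the text: the models associate 1.5 C of
# warming with 615 GtC of cumulative emissions, and historical emissions
# total 545 GtC. The function is a hypothetical helper for illustration.

def remaining_budget(threshold_gtc, emitted_gtc):
    """Cumulative emissions still allowed before the threshold is hit."""
    return threshold_gtc - emitted_gtc

budget = remaining_budget(615.0, 545.0)
print(budget)  # 70.0 GtC -- the "carbon budget" implied by the models
```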

Extrapolating forward based on the observed warming rate suggests that the 1.5 °C level would not be reached until cumulative emissions are more than 200 GtC above the current level, and possibly much higher. The gist of the article, therefore, is that because observations do not show the rapid warming projected by the models, there is more time to meet policy goals.

As an aside, I dislike the “carbon budget” language because it implies the existence of an arbitrary hard cap on allowable emissions, which rarely emerges as an optimal solution in models of environmental policy, and never in mainstream analyses of the climate issue except under some extreme assumptions about the nature of damages. But that’s a subject for another occasion.

Were the Millar et al. authors right to assert that climate models have overstated recent warming? They are certainly not the first to make this claim. Fyfe et al. (2013) compared the Hadley Centre temperature series (HadCRUT4) to the CMIP5 ensemble and showed that most models had higher trends over the 1998-2012 interval than were observed:

Original caption: a, 1993–2012. b, 1998–2012. Histograms of observed trends (red hatching) are from 100 reconstructions of the HadCRUT4 dataset. Histograms of model trends (grey bars) are based on 117 simulations of the models, and black curves are smoothed versions of the model trends. The ranges of observed trends reflect observational uncertainty, whereas the ranges of model trends reflect forcing uncertainty, as well as differences in individual model responses to external forcings and uncertainty arising from internal climate variability.

The IPCC’s Fifth Assessment Report also acknowledged model over-estimation of recent warming in their Figure 9.8 and accompanying discussion in Box 9.2. I have updated the IPCC chart as follows. I set the CMIP5 range to gray, and the thin white lines show the (year-by-year) central 66% and 95% of model projections. The chart uses the most recent version of the HadCRUT4 data, which goes to the end of 2016. All data are centered on 1961-1990.

Even with the 2016 El Nino event, the HadCRUT4 series does not reach the mean of the CMIP5 ensemble. Prior to 2000 the longest interval without a crossing between the red and black lines was 12 years, but the current one now runs to 18 years.
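The "longest interval without a crossing" statistic quoted above is straightforward to compute from two annual series. Here is a sketch with made-up numbers (the real comparison uses the HadCRUT4 anomalies and the CMIP5 ensemble mean):

```python
def longest_run_below(obs, model):
    """Longest consecutive run of years in which the observed series
    stays strictly below the model ensemble mean (a crossing resets it)."""
    best = run = 0
    for o, m in zip(obs, model):
        run = run + 1 if o < m else 0
        best = max(best, run)
    return best

# Toy anomaly series, purely illustrative.
obs   = [0.10, 0.20, 0.40, 0.30, 0.35, 0.50]
model = [0.20, 0.15, 0.50, 0.45, 0.50, 0.60]
print(longest_run_below(obs, model))  # 4
```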

This would appear to confirm the claim in Millar et al. that climate models display an exaggerated recent warming rate not observed in the data.

Not So Fast

Zeke Hausfather has disputed this in a posting for Carbon Brief. He presents a different-looking graph that seems to show HadCRUT4 and the other major data series lining up reasonably well with the CMIP5 (RCP4.5) runs.

How does he get this result?

Hausfather isn’t using the CMIP5 runs as shown by the IPCC; instead he is using data from a different archive that modifies the outputs in a way that tilts the post-2000 model trends down. Cowtan et al. (2015) argued that, for comparisons such as this, climate model outputs should be sampled in the same way that the HadCRUT4 (and other) surface data are sampled, namely using Surface Air Temperatures (SAT) over land, Sea Surface Temperatures (SST) over water, and with maskings that simulate the treatment of areas with missing data and with ice cover rather than open ocean. Global temperature products like HadCRUT use SST data as a proxy for Marine Air Temperature (MAT) over the oceans since MAT data are much less common than SST. Cowtan et al. note that in the models, SST warms more slowly than MAT but the CMIP5 output files used by the IPCC and others present averages constructed by blending MAT and SAT, rather than SST and SAT. Using the latter blend, and taking into account the fact that when Arctic ice coverage declines, some areas that had been sampled with SAT are replaced with SST, Cowtan et al. found that the discrepancy between models and observations declines somewhat.
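The mechanics of the Cowtan et al. adjustment can be sketched as follows. This is a hypothetical toy version: real implementations work on gridded monthly fields with area weighting, and all names and numbers here are my own stand-ins for model output:

```python
def blended_mean(sat, sst, land_frac, observed):
    """Blend model SAT (land) and SST (ocean) within each cell, then
    average over only the cells observed in the target dataset (the mask)."""
    vals = [lf * a + (1.0 - lf) * s
            for a, s, lf, ok in zip(sat, sst, land_frac, observed)
            if ok]
    return sum(vals) / len(vals)

# Four toy grid cells: all land, all ocean, coastal, and one cell that
# HadCRUT4-style masking would exclude as unobserved.
sat       = [1.2, 1.0, 1.1, 1.5]   # model surface air temperature anomalies
sst       = [0.0, 0.8, 0.9, 1.2]   # model sea surface temperature anomalies
land_frac = [1.0, 0.0, 0.5, 0.0]
observed  = [True, True, True, False]
print(round(blended_mean(sat, sst, land_frac, observed), 6))  # 1.0
```

Because model SST warms more slowly than model air temperature, this blended-and-masked average typically runs slightly cooler than the unmodified global SAT average.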

Figure 4 in Cowtan et al. shows that the use of SAT/SST (“blended”) model output data doesn’t actually close the gap by much: the majority of the reconciliation happens by using “updated forcings”, i.e. peeking at the answer post-2000.

Top: effect of applying the Cowtan et al. blending method (change from red to green line)

Bottom: effect of applying updated forcings that use post-2000 observations

Hausfather also uses a slightly later 1970-2000 baseline. With the 2016 El Nino at the end of the record, a crossing between the observations and the modified CMIP5 mean occurs.

In my version (using the unmodified CMIP5 data) the change to a 1970-2000 baseline would yield a graph like this:

The 2016 HadCRUT4 value still doesn’t match the CMIP5 mean, but they’re close. The Cowtan et al. method compresses the model data from above and below, so in Zeke’s graph the CMIP5 mean crosses through the HadCRUT4 (and other observed series’) El Nino peak. That creates the visual impression of greater agreement between models and observations, but bear in mind the models are brought down to the data, not the other way around. On a 1970-2000 centering the maximum value of the CMIP5 ensemble exceeds 1 °C in 2012, but in Hausfather’s graph that doesn’t happen until 2018.
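Switching the baseline, as discussed above, is just a constant shift of each series so that its mean over the reference window is zero; it changes the visual alignment of the curves but leaves every trend unchanged. A sketch with invented numbers:

```python
def rebaseline(years, anoms, start, end):
    """Re-center anomalies so their mean over [start, end] is zero."""
    base = [a for y, a in zip(years, anoms) if start <= y <= end]
    offset = sum(base) / len(base)
    return [a - offset for a in anoms]

years = list(range(1960, 1970))
anoms = [0.0, 0.1, 0.2, 0.1, 0.3, 0.2, 0.4, 0.3, 0.5, 0.4]
shifted = rebaseline(years, anoms, 1960, 1964)
# The mean over the new reference window is zero by construction.
print(abs(sum(shifted[:5]) / 5) < 1e-12)  # True
```

Because every series is shifted by its own constant, choosing a different baseline can move the model envelope up or down relative to the observations without altering the underlying trend comparison.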

Apples with Apples

The basic logic of the Cowtan et al. paper is sound: like should be compared with like. The question is whether their approach, as shown in the Hausfather graph, actually reconciles models and observations.

It is interesting to note that their argument relies on the premise that SST trends are lower than nearby MAT trends. This might be true in some places, but not in the tropics, at least prior to 2001. The linked paper by Christy et al. shows the opposite pattern to the one invoked by Cowtan et al.: marine buoys in the tropics show that MAT trends were negative even as the SST trended up, so a global data set using MAT would show less warming than one relying on SST, not more. In other words, if instead of apples-to-apples we did an oranges-to-oranges comparison, using the customary CMIP5 model output comprising SAT and MAT against a modified HadCRUT4 series that used MAT rather than SST, the discrepancy would be even larger, since the modified HadCRUT4 series would have an even lower trend.

More generally, if the blending issues proposed by Cowtan et al. explain the model-obs discrepancy, then if we do comparisons using measures where the issues don’t apply, there should be no discrepancy. But, as I will show, the discrepancies show up in other comparisons as well.

Extremes

Swanson (2013) compared the way CMIP3 and CMIP5 models generated extreme cold and warm events in each gridcell over time. In a warming world, towards the end of the sample each location would be expected to produce record cold events less often, and record warm events more often, than under a no-trend null. Since the comparison is done only using frequencies within individual grid cells, it doesn’t require any assumptions about blending the data. The expected pattern was found to hold in the observations and in the models, but the models showed a warm bias. The model pattern had enough dispersion in CMIP3 to encompass the observed probabilities, but in CMIP5 it had a smaller spread and no overlap with observations. In other words, the models had become more like each other but less like the observed data.

(Swanson Fig 2 Panels A and B)

The importance here is that this comparison is not affected by the issues raised by Cowtan et al, so the discrepancy shouldn’t be there. But it is.
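The Swanson-style diagnostic works per grid cell and requires no blending at all, which is why it sidesteps the Cowtan et al. issues. Here is a toy sketch of the record-counting idea; the series is synthetic (the actual study uses CMIP gridcell fields) and the function is my own illustration:

```python
def record_rates(series, tail):
    """Fractions of the final `tail` values that set a record warm value
    (exceed every earlier value) or a record cold value (fall below
    every earlier value) within the series."""
    head = series[:-tail]
    warm = sum(1 for v in series[-tail:] if v > max(head))
    cold = sum(1 for v in series[-tail:] if v < min(head))
    return warm / tail, cold / tail

# A purely trending toy cell: every late value is a warm record.
cell = [0.01 * t for t in range(480)]  # 40 years of monthly values
print(record_rates(cell, 120))  # (1.0, 0.0)
```

Comparing such per-cell rates between models and observations, as Swanson did, then reveals any warm bias without ever mixing SAT and SST fields.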

Lower Troposphere

Comparisons between model outputs for the Lower Troposphere (LT) and observations from weather satellites (using the UAH and RSS products) are not affected by the blending issues raised in Cowtan et al. Yet the LT discrepancy looks exactly like the one in the HadCRUT4/CMIP5 comparison.

The blue line is RSS, the black line is UAH, the red line is the CMIP5 mean and the grey bands show the RCP4.5 range. The thin white lines denote the central 66% and 95% ranges. The data are centered on 1979-2000. Even with the 2016 El Nino the discrepancy is visible and the observations do not cross the CMIP5 mean after 1999.

A good way to assess the discrepancy is to test for common deterministic trends using the HAC-robust Vogelsang-Franses test (see explanation here). Here are the trends and robust 95% confidence intervals for the lines shown in the above graph, including the percentile boundaries.


UAHv6.0    0.0156 °C/yr   (0.0104, 0.0208)
RSSv4.0    0.0186 °C/yr   (0.0142, 0.0230)
GCM_min    0.0252 °C/yr   (0.0191, 0.0313)
GCM_025    0.0265 °C/yr   (0.0213, 0.0317)
GCM_165    0.0264 °C/yr   (0.0200, 0.0328)
GCM_mean   0.0276 °C/yr   (0.0205, 0.0347)
GCM_835    0.0287 °C/yr   (0.0210, 0.0364)
GCM_975    0.0322 °C/yr   (0.0246, 0.0398)
GCM_max    0.0319 °C/yr   (0.0241, 0.0397)

All trends are significantly positive, but the observed trends are lower than the model range. Next I test whether the CMIP5 mean trend is the same as, respectively, the trend in the mean of UAH and RSS, in UAH alone, and in RSS alone. The test scores are below; all three reject at <1%. Note the critical values for the VF scores are 90%: 20.14, 95%: 41.53, 99%: 83.96.

H0: Trend in CMIP5 mean =
Trend in mean obs   192.302
Trend in UAH        405.876
Trend in RSS         86.352
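The Vogelsang-Franses test itself uses nonstandard fixed-b critical values (the ones quoted above), so it is not reproduced here. As a simpler stand-in illustrating the same idea of serial-correlation-robust trend inference, here is a sketch of an OLS trend with a Bartlett-weighted (Newey-West) HAC standard error; the input series is synthetic, not the actual satellite data:

```python
def hac_trend(y, lags):
    """OLS trend slope with a Newey-West (Bartlett kernel) standard
    error, robust to autocorrelation up to `lags`."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
    e = [yi - (ybar + slope * (ti - tbar)) for ti, yi in zip(t, y)]
    u = [(ti - tbar) * ei for ti, ei in zip(t, e)]  # regression scores
    omega = sum(ui * ui for ui in u) / n            # lag-0 variance
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1.0)                  # Bartlett weight
        omega += 2.0 * w * sum(u[i] * u[i - j] for i in range(j, n)) / n
    se = (n * omega) ** 0.5 / sxx
    return slope, se

# Synthetic 38-year series: 0.02/yr trend plus an alternating disturbance.
y = [0.02 * t + (0.05 if t % 2 == 0 else -0.05) for t in range(38)]
slope, se = hac_trend(y, lags=5)
print(round(slope, 3))  # 0.02
```

A trend-equality test then compares the difference of two such slopes against its robust standard error; the VF version replaces the usual normal critical values with its own, which is why the thresholds quoted above are so large.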


The Tropics

In addition to the above comparison, if the treatment of Arctic sea ice is the major problem, there should be no issues when confining attention to the tropics. Also, since models project the strongest response to GHG warming in the tropical LT, this is where models and observations ought best to agree.

Again the blue line is RSS, the black line is UAH, the red line is the CMIP5 mean, the grey bands show the RCP4.5 range and the thin white lines denote the central 66% and 95% ranges. The data are centered on 1979-2000.

Trends:

UAHv6.0    0.0102 °C/yr   (0.0037, 0.0167)
RSSv4.0    0.0139 °C/yr   (0.0085, 0.0193)
GCM_min    0.0282 °C/yr   (0.0199, 0.0365)
GCM_025    0.0277 °C/yr   (0.0210, 0.0344)
GCM_165    0.0281 °C/yr   (0.0207, 0.0355)
GCM_mean   0.0289 °C/yr   (0.0209, 0.0369)
GCM_835    0.0296 °C/yr   (0.0210, 0.0382)
GCM_975    0.0320 °C/yr   (0.0239, 0.0401)
GCM_max    0.0319 °C/yr   (0.0230, 0.0408)

H0: Trend in CMIP5 mean =
Trend in mean obs   229.683
Trend in UAH        224.190
Trend in RSS        230.100


All trends are significantly positive and the models strongly reject against the observations. Interestingly the UAH and RSS series both reject even against the (year-by-year) lower bound of the CMIP5 outputs (p<1%).

Finally, Tim Vogelsang and I showed a couple of years ago that the tropical LT (and MT) discrepancies are also present between models and the weather balloon series back to 1958.

Summary

Millar et al. attracted controversy for stating that climate models have shown too much warming in recent decades, even though others (including the IPCC) have said the same thing. Zeke Hausfather disputed this using an adjustment to model outputs developed by Cowtan et al. The combination of the adjustment and the recent El Nino creates a visual impression of coherence. But other measures not affected by the issues raised in Cowtan et al. support the existence of a warm bias in models. Gridcell extreme frequencies in CMIP5 models do not overlap with observations. And satellite-measured temperature trends in the lower troposphere run below the CMIP5 rates in the same way that the HadCRUT4 surface data do, including in the tropics. The model-observational discrepancy is real, and needs to be taken into account especially when using models for policy guidance.

Moderation note:  As with all guest posts, please keep your comments civil and relevant.


402 responses to “Are Climate Models Overstating Warming?”

  1. Is the Pope a catholic?

    As well as IPCC AR5 WG1 chapter 9, this is also discussed in chapter 11:

    “This provides evidence that some CMIP5 models have a higher transient response to GHGs and a larger response to other anthropogenic forcings (dominated by the effects of aerosols) than the real world (medium confidence). The ASK results and the initialised predictions both suggest that those CMIP5 models that warm most rapidly over the period (1986–2005) to (2016–2035) may be inconsistent with the observations.”


  3. They tried hard to get the models to predict the worst outcomes to induce policy changes, and they are trying hard now to get the models to predict less warming to reduce the disparity with observations. This is not science and they are not scientists, even if they are paid as such.

    CMIP5 RCP4.5 with 1961-1990 baseline compared to HadCRUT4 with 1961-1990 baseline, according to the KNMI Climate Explorer.

    https://i.imgur.com/CXwGBte.png

    The models believe we are stuck in a perpetual very strong El Niño that is getting worse with time. That is how wrong they are.

  4. Hi Ross,

    It won’t change your overall results (e.g. that TLT data is running a bit cooler than the models in the lower troposphere), but I would strongly suggest using CMIP5 pressure levels associated with the lower troposphere rather than the surface when doing comparisons. Conflating the surface and lower troposphere can create confusion, as they are really not that tightly coupled, particularly over short timeframes where tropospheric amplification of El Nino and La Nina effects can matter quite a bit. This recent Santer et al. paper does a pretty robust comparison of tropospheric models and observations: https://www.nature.com/ngeo/journal/v10/n7/abs/ngeo2973.html

    Similarly, there is pretty large structural uncertainty among satellite records (and large differences between recent versions of both UAH and RSS) that needs to be taken into account: https://skepticalscience.com/surface_temperature_or_satellite_brightness.html

    Regarding the apples-to-apples comparisons, I don’t think that 1979-2001 values from the tropics are necessarily representative of a global relationship between SSTs and MATs. I’d be somewhat skeptical of the homogeneity of the MAT record they used over that period given changes in measurement times and devices. The NMAT dataset they use, MOHMAT, was pretty extensively revised in its most recent form (HADNMAT2). In Cowtan et al 2015 we spent quite some time comparing the model results to behavior in both reanalysis and observations, and these comparisons can be found in our supplementary materials. In general, reanalysis shows quite similar behavior to the models, while the differences between observational NMAT and SST are an order of magnitude larger and much noisier than in either models or reanalysis, making them difficult to glean much from: http://onlinelibrary.wiley.com/store/10.1002/2015GL064888/asset/supinfo/grl53276-sup-0001-supinfo.pdf

    • Also, it’s worth mentioning that masking models to common coverage with HadCRUT4 is just as important as using model SSTs, and I can’t really see any compelling argument as to why a mask shouldn’t be used.

      Finally, I’d direct interested readers to the recent Medhaug et al paper in Nature. They did a nice job of looking at the various factors responsible for differences between surface warming in models and observations in recent years: https://www.nature.com/nature/journal/v545/n7652/full/nature22315.html
      https://s26.postimg.org/580sr6le1/Screen_Shot_2017-09-21_at_11.33.05_AM.png

      • Curious George

        The situation is much simpler than anybody here admits. There are hundreds of runs of models, creating a “spaghetti graph”, of which modelers selected some to include in AR4. Had they selected other runs – which they are selecting today – they would have created a good projection. But they did not know back then what projection would be “good”.

        It is an old trick. Lord Knowitall to butler James:
        – Think of a number between 1 and 10.
        – Eight, Your Lordship.
        – I knew you would say Eight. Read a note in the flower pot on the windowsill.
        – “I knew you would say Eight”. How did Your Lordship do it?
        – Had you said Four, I have a note with Four in a vase.

      • Had they selected other runs – which they are selecting today – they would have created a good projection. But they did not know back then what projection would be “good”.

        It wasn’t the projections (the model results) that they’re selecting now, it’s the forcing scenarios; the model inputs.

        The takeaway is that if you know what the solar and volcanic inputs are going to do over the next 30 years, then you can make a pretty good prediction of where the climate will be then.

      • George is correct – of the 1000’s of plausible, exponentially divergent, non-unique solutions for any model – the mystery of how a single solution ends up in the CMIP opportunistic ensemble is inexplicable. And I might add –
        as scientifically indefensible as it gets.

        https://watertechbyrie.files.wordpress.com/2017/04/slingo-and-palmer-e1506928933615.png
        http://rsta.royalsocietypublishing.org/content/369/1956/4751

        Pick one. James McWilliams speaks of a posteriori solution behavior – it qualitatively looks about right. That’s right – they pull it out of their asses. There is that children’s joke about the constipated mathematician – except that they have progressed from pencils to supercomputers.

        http://www.pnas.org/content/104/21/8709.full

        Then there is the surface temperature record. Cobbled together from hundreds of mostly less reliable sources with great gaps in coverage and with conspicuous latent heat artifacts.

        https://watertechbyrie.files.wordpress.com/2017/10/humidity-temp.jpg

        Forget fruit – they are comparing chimera with camels. And although it creates a titillation, a frisson in the hearts of climate commentators – it is not worth any depth of consideration. George is correct in another way – in digging down to fundamentals rather than interminably arguing with as much substance as angels on pinheads.

      • I borrowed this comment from another website about the paper, abridged a little: “Author’s response: it makes the point about the difference between emissions and concentrations. Makes another interesting argument, which is essentially that GMST is defined in terms of the HadCRUT4 dataset, and that the 1.5 °C limit is reached when HadCRUT4 reaches 1.5 °C. Therefore, under this definition we have about 0.6 °C to go.
        It does illustrate that one of the issues seems to be that we haven’t properly defined the baseline. Does it mean an actual globally averaged warming of 1.5 °C, or 1.5 °C as determined by one of the commonly used temperature datasets [HadCRUT4]?”
        Zeke, if it is he in Carbon Brief, appears to be arguing from an extremely different period: 1970 to 2015 HadCRUT instead of 1870 to 2015 HadCRUT.
        This lets him get away with comments like “In reality, the results obtained from the type of model-observation comparisons performed in the paper depend greatly on the dataset and model outputs used by the authors”, instead of asking why the 0.3 °C discrepancy exists.
        The data sets mentioned all use backward lowering of real temperatures in a method Zeke approves of. If these data sets were not adjusted so severely, the actual temperature rise over this time would be a lot less and the CO2 effect grossly overstated. It is hard to argue on his terms and data sets when they switch from what has been in common usage to “researchers have produced blended model outputs, which combine air temperatures over land with surface water temperatures over the ocean to mirror the way temperatures are actually measured”, because the traditional measures were overstating heating.
        This only needed to be produced when an actual paper was published showing the massive discrepancy.

  5. Thanks Zeke. My LT comparison does use RCP4.5 TLT levels, not TAS, from CMIP5, (though I’ll double check). I used both RSS and UAH in order to avoid having to get into the series’ differences, which turn out not to matter for this issue since RSS also rejects against the CMIP5 mean. I agree that the NMAT records have lots of problems, as do the SST products. Any issue in which the signal being sought is small relative to the size of measurement errors will defy easy resolution. That’s why I think looking at alternative measures that aren’t affected by those same problems helps clarify things.

    • Hi Ross,

      Apologies if you did use CMIP5 TLT-analogues (it wasn’t very clear in the text). Did you use Christy’s version, Santer’s version, or make your own? It turns out it is actually fairly non-trivial to construct a TLT analogue, since it’s not a single layer of the atmosphere and models don’t have any directly comparable fields (you end up needing a weighted average of different pressure levels, with different weights over land and oceans, among other complications).

      • There is some variability of the precise levels that MSU represents. But both the MSU analyses (UAH and RSS) and RAOB clearly indicate the lack of Hot Spot:
        https://climatewatcher.webs.com/HotSpot2016.png

      • That’s false, Turbulent Eddie, and you’ve been shown this again and again. So I don’t know why you keep making this false claim.

        Here’s what Spencer says the hot spot is:

        “One of the most vivid predictions of global warming theory is a “hotspot” in the tropical upper troposphere, where increased tropical convection responding to warming sea surface temperatures (SSTs) is supposed to cause enhanced warming in the upper troposphere.”
        http://www.drroyspencer.com/2015/05/new-satellite-upper-troposphere-product-still-no-tropical-hotspot/

        Multiple satellite-based analyses show that in the tropics, upper tropospheric warming is greater than surface warming; i.e. there’s a hot spot. That includes RSS (DOI: 10.1175/JCLI-D-16-0333.1; DOI: 10.1175/JCLI-D-13-00767.1), UW, NOAA/STAR (DOI: 10.1175/JCLI-D-13-00767.1), UMD (DOI: 10.1029/2005JD006392), and an analysis from Weng and Zou (DOI: 10.1007/s00382-013-2012-5). The only exception is UAH (DOI: 10.1175/JCLI-D-13-00767.1).

        Furthermore, multiple radiosonde analyses show that in the tropics, upper tropospheric warming is greater than surface warming; i.e. there’s a hot spot. That hot spot appears in all of the analyses from 30+ year periods beginning in 1959, and it shows up in every analysis (except HadAT2) for period beginning in 1979 (DOI: 10.1002/2017GL073798).

  6. I used Christy’s version. Regarding the use of masking, I concur with the reasoning but the Cowtan et al. paper Fig 4a suggests it doesn’t have a huge effect.

  7. All you really need to know about “observations” and model comparisons:

    https://stevengoddard.files.wordpress.com/2014/10/screenhunter_3233-oct-01-22-59.gif


  9. The model-observational discrepancy is real, and needs to be taken into account especially when using models for policy guidance.

    Go with observational and throw to model junk far away.
    Only observational should be used for policy.
    I believe North Carolina did pass a law that said that.
    We need a national law that says that.

  10. Dear Zeke, Dear Ross,

    First of all, thanks for your civilized discussion.
    There was at the beginning of the discussion a very simple question: how much CO2 can we inject into the atmosphere before reaching the threshold of 1.5 °C? Right?
    The answer is: it may vary by a ratio of 1 to 3.5 (245 GtCO2 vs 880 GtCO2) depending on the way you are computing the budget, right? (depending on the dataset, the baseline, etc.)
    Can we seriously make a decision impacting billions of people with such an uncertainty?
    All that I see here as a non-climate expert is low-level signal-processing science… How can we seriously conclude something? Sorry to be so abrupt.

    • I think there are more uncertainties than that. One of them is that in the Paris agreement, it is agreed that:
      «1. This Agreement, in enhancing the implementation of the Convention, including its objective, aims to strengthen the global response to the threat of climate change, in the context of sustainable development and efforts to eradicate poverty, including by:
      (a) Holding the increase in the global average temperature to well below 2°C above pre-industrial levels…»

      Cowtan et al. note that:
      «The results from the models use air temperatures over the whole globe, whereas the observations use a mix of air temperature over land and water temperature for the oceans» – Robust comparison of climate models with observations using blended land air and ocean sea surface temperatures

      That makes me wonder – which definition of temperature forms the basis for the Paris agreement?

      Taking into account that the temperature measurements in 1850 were hardly representative of any definition of global average temperature, the Paris agreement seems like an attempt to keep the highly uncertain value of a poorly defined measurand below an arbitrarily set limit.

  11. Cowtan et al are not unmotivated dispassionate observers and hence adjustments introduced (ie, Cowtanization) are necessarily suspect. One shouldn’t have to split hairs to this extent to produce coherence with the mean.

    • I agree Shub, this level of data massaging is always questionable. As this post points out, in more localized measures, the result is clearer and shows that models overstate warming. It would be a surprise if the global averages didn’t also show this. This observation casts doubt on the data massaging.

      • Also, the hindcast portion of the model runs don’t seem to need this much hand-holding and babysitting to match the observations. Talk about apples to apples, or whatever fruity analogy you can think up.

      • Also, the hindcast portion of the model runs don’t seem to need this much hand-holding and babysitting to match the observations.

        With regard to forcing, this is apples-to-apples.
        For the past, we generally know the real-world forcings that went into the hindcasts.
        For the present simulations, they can get off if we don’t use the real-world forcings.

        That’s part of the cause of the 1998-2014 temperature “slowdown”. Solar and volcanic forcings were a little cooler than expected. But running a hindcast with the real-world forcings still gives a pretty good estimate of the resulting surface temperatures.

      • David Springer

        Sorry Benjamin but estimated ECS is still 1.5 °C – 4.5 °C @ 95% confidence. By definition that’s a constant forcing. That range hasn’t been improved in 60 years of climate “science”. The low end of that range is yawn-worthy and the high side is alarming. Observed ECS is near or below the low number.

        Wake me up when climate science makes some progress in the science part of it. The narrative elements pushed by ideologues can be pushed where the sun don’t shine in the meantime, if you get my drift.

      • David Springer

        @Winchester

        We’ve been repeatedly informed by the global warming consensus that variation in solar “constant” is insignificant. The same have informed us that the CLOUD experiment has discounted any climate effect caused by solar magnetic field variation. Please provide citations which you believe support a case that solar variation caused observed warming to be less than anticipated.

        I’m not aware any greater-than-average volcanic sulfates in the stratosphere that would dampen warming more than anticipated. Please provide citations which corroborate that claim as well.

    • Cowtan et al are not unmotivated dispassionate observers and hence adjustments introduced (ie, Cowtanization) are necessarily suspect.

      /shrug. It’s science. All research is suspect and deserves skepticism. The question is whether the work is good or not.

      Applying a mask to compare apples-to-apples seems solid to me. Do you have a particular issue with their methodology, or is this a “I’m just suspicious of it” kind of objection?

      One shouldn’t have to split hairs to this extent to produce coherence with the mean.

      And yet… both the masking research and the forcings research seems to show that we do.

      I doubt the latter matters much, probably less than a tenth of a degree. But when we’re trying to “predict” climate, typically defined over 30 years or more, on 10-20 year timespans, even small forcing changes end up mattering. It’s a small margin of error.


  13. “Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.” http://www.pnas.org/content/104/21/8709.full

    This post is bog standard climate blog fare. Tinkering at the edges of climate talking points with little to no rationale or resolution in sight. The more fundamental question to ask is on the theoretical justification – or lack thereof – of opportunistic ensembles such as CMIP5.

    https://watertechbyrie.files.wordpress.com/2017/04/slingo-and-palmer-e1506451524627.png
    http://rsta.royalsocietypublishing.org/content/369/1956/4751

    The ‘systematically designed model family’ here is a schematic of a perturbed-physics model. There are potentially thousands of non-unique solutions for any model, due entirely to the sensitive dependence of the nonlinear equations at their core. There are no unique solutions – and no methodology for choosing one solution over any other, beyond its matching expectations for warming. Arbitrarily choosing a solution as a comparison with other models seems an essentially ludicrous pursuit, with the sole rationale being continued public funding. Should we expect warm choices due to subjective bias? And then they construct statistics over the motley collection based on an impossible convergence assumption.

    “Pure fluid dynamics may be at the core of the AOS modeling problem, but nature combines fluid physics with other processes, and we must look to more comprehensive model formulations to be able to assess simulation accuracy against the relevant empirical reality. Thus, we can consider the many comparison studies that show a substantial spread among the results from AOS models created by different groups, as well as in the degree of correspondence with observations. Because each of the models is created independently, such model ensembles are more opportunistically assembled than systematically designed. Furthermore, the compared models are typically being reformulated by their creators faster than they can be compared with each other. So the comparisons are more like snapshots of model differences than careful, enduring assessments…

    In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread be explained by analysis of model differences? How much is irreducible imprecision in an AOS?

    Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence…

    AOS models are members of the broader class of deterministic chaotic dynamical systems, which provides several expectations about their properties (Fig. 1). In the context of weather prediction, the generic property of sensitive dependence is well understood (4, 5). For a particular model, small differences in initial state (indistinguishable within the sampling uncertainty for atmospheric measurements) amplify with time at an exponential rate until saturating at a magnitude comparable to the range of intrinsic variability. ” op. cit

    Models are powerful tools for exploring processes in climate – but projections of surface temperature are egregiously and fundamentally hinky. And this is before we get into questions of the temporal chaos of models versus the spatio-temporal chaos of the Earth system. The reality is that prediction is not possible, and therefore whether the opportunistic ensembles are running warm or not is fundamentally irrelevant. I suggest just not going down the rabbit hole with anyone on this.

  14. There’s only one small flaw – like all academic work of this kind, this ridiculous forecast starts with the false assumption that any change that already occurred must be human-caused. The simple fact is that yes, we did change the global temperature – but not through CO2.

    At which point the whole rotten prediction is as useless as a limp lettuce.

    But it gets worse – because academics and those like NASA have been falsifying warming that doesn’t exist. So not only is the warming not due to CO2, but much of the warming is pure fiction in the first place.

    This whole climate nonsense makes many Monty Python sketches seem sane.

  15. IPCC AR4:
    Best estimate for a “high scenario” is 4.0 °C (0.4C/decade)
    Best estimate for a “low scenario” is 1.8 °C (0.18C/decade)

    Most of the observations, particularly those of thirty years length or more, are around the “low scenario”.

    The models aren’t necessarily overstating the warming,
    more like the advocates are overstating the warming.

  16. Nice guest post. Denizens may also want to refer to John Christy’s March 29, 2017 written congressional testimony, which reaches Dr. McKitrick’s conclusions using somewhat different methods. Christy goes further, showing that most CMIP5 models are actually now statistically invalidated, period. The exception is the Russian INM-CM4, which has higher ocean thermal inertia, lower water vapor feedback, and low ECS.

    • Rud, Do you know if any one has an explicit refutation of Christy’s analysis? I would like to see what people say about it who don’t agree.

      • Dpy, I am aware of none yet in the literature. (Disclaimer: am not a full-time climate junkie.) His congressional arguments are pretty bulletproof. So I think mostly just ignored. IMO no way to refute him in peer review based on present observational history.

      • Christy is pointing out ( which he has for 25 years now ) that the hot spot is not occurring as modeled. This is pretty much a matter of observation.

        This does not ( as Christy has also said for 25 years or so ) indicate that global warming is not occurring at the surface in the mean. Of course, Christy has also pointed out that increasing mean temperature doesn’t necessarily mean increasing extreme temperature, and in fact, has occurred with decreasing extreme temperatures in the US.

      • First hit:

        Daily normals linearly interpolated from pentad normals, created by harmonic synthesis of the monthly climatology, were used to quality control the observations within MOHMAT4N: see Appendix I of Parker et al. [1995b]. These data were then corrected for systematic changes in the height of ships’ decks, which have risen over time. All data were adjusted to the local average height over the 1961–1990 climatology period. The local height was assumed equal to the global average (Figure 10a) through 1970, but geographically varying heights were used thereafter based on 5-year average fields of deck heights centered on 1982 through 1995. Heights were linearly interpolated in 1971–81 and augmented at 0.14m/year after 1995 following global average trends based on the “height of the observing platform” field of WMO No. 47 [WMO, 1955–1999]. So, we differ from Bottomley et al. [1990] and Parker et al. [1995b], by applying continued adjustments up to the present (Figure 10b), entailing a global average adjustment of +0.05°C to values for the late 1990s, relative to 1961–1990 climatology. The adjustments are based on boundary-layer similarity theory [Fairall et al., 1996; A. Grant, personal communication, 2001]. They are smaller than the cooling in tropical NMAT, relative to SST, reported by Christy et al. [2001] using data without this deck-height adjustment. The new adjustments add about 0.1°C to the overall warming of NMAT since the 1860s (Figure 10b) and reduce the divergence between Southern Hemisphere SST and NMAT trends in the most recent decade (section 6.5.1).

        http://onlinelibrary.wiley.com/doi/10.1029/2002JD002670/full

      • It’s proper to adjust data sets, provided that the adjustment is applied to all previous years back to 1850. What’s right for the current goose is proper for the previous gander. In this fashion adjustments simply become a more accurate number (if the adjustment is proven correct) and in no way change the global temperature up or down. However, advocates of warming would lose an all-important tool.

  17. I have not had time fully to read, let alone digest, this enormous and very scholarly report, BUT:

    There seem to be several hidden assumptions behind it, one being that anthropogenic CO2 has been proven to be the culprit in AGW/ACC (anthropogenic climate change).

    It has not, and good cases can be made that:

    * we need more CO2, since it is a vital component of all life and,
    * pre-historically, there have been eras in which high CO2 has corresponded with vegetation much more lush than now, supporting enormous herbivores whose caloric demands could not be supported today, and
    * there have been episodes in which warming preceded CO2 rise.

    Being only an observer, I have no idea what would happen to the models and to your paper if any of these were true.

    Frank Gue,
    Professional Engineer,
    Burlington, Ontario

    • Frank Gue,

      I agree with you on this and add some more reasons.

      It seems there is a lack of valid evidence to suggest global warming would be harmful. On the other hand, there are many lines of evidence that seem to suggest global warming would be net-beneficial, not detrimental, for life on earth, human well-being and the global economy.

      1. The planet is currently in about the severest coldhouse phase it has been in since complex life began (around 650 Ma ago) (Scotese, 2016, Figure 15) https://www.researchgate.net/publication/275277369_Some_Thoughts_on_Global_Climate_Change_The_Transition_for_Icehouse_to_Hothouse_Conditions .

      2. According to Scotese, the average global temperature was around 7 °C warmer than now for the past half billion years (Scotese, 2016)

      3. Life thrived when the planet was warmer, but struggled when colder

      4. About 10 times more C is tied up in the biosphere now than at the last glacial maximum (IPCC AR4 WG1 Chapter 6)

      5. Tol, 2013, Figure 3 (bottom panel), projects that the economic impacts of global warming would be beneficial up to around 4 °C warmer than now, if the projected negative impacts of energy consumption are excluded. Empirical evidence does not appear to be consistent with the projected negative impact of GW on energy consumption – GW is likely to be positive for the global economy as well.
      https://link.springer.com/article/10.1007%2Fs10584-012-0613-3#page-1
      Or (Free access to working paper version): http://www.copenhagenconsensus.com/sites/default/files/climate_change.pdf

    • “anthropogenic CO2 has been proven to be the culprit in AGW”

      Given the uncertainties, probably never will be proof.

      However, doing a thought experiment: put a spherical shell around earth at the tropopause and consider the change of net energy flux at the shell; increased CO2 then very likely increases the mean surface temperature.

      But as you indicate, there are benefits to increased CO2. These include, as you mention, CO2 fertilization; the climate effect of increased CO2 is also thought to reduce temperature variability and the kinetic energy of storms. It would seem that for some time to come, increased CO2 is an overall benefit to humanity.

    • franksgue: There seem to be several hidden assumptions behind it, one being that anthropogenic CO2 has been proven to be the culprit in AGW ACC(anthropogenic climate change).

      I don’t think Ross McKitrick makes that assumption anywhere.

  18. It is interesting to see the enormous range in energy fluxes in the CMIP5 models (CMIP5 = Coupled Model Intercomparison Project, Phase 5) used by the IPCC in their Assessment Report 5. The energy balance over land and oceans: an assessment based on direct observations and CMIP5 climate models – Wild et al., 2014

    Here are some examples of the range of energy fluxes from the models (See Table 2: Simulated energy balance components averaged over land, oceans and the entire globe from 43 CMIP5/IPCC AR5 models at the TOA, atmosphere, and surface)

    Surface (All units: W/m2):
    Solar down: 18.6
    Solar up: 10.5
    Solar net: 17.2
    Thermal down: 18.5
    Thermal up: 11.8
    Thermal net: 15.7
    Net radiation: 17.2
    Latent heat: 13.9
    Sensible heat: 13.1
    (Averages are taken over the period 2000–2004)

    The level of current energy accumulation on earth is estimated from observation of ocean warming to be 0.6 W/m2 (“considering a global heat storage of 0.6 W m–2”, IPCC AR5 WGI, page 181, section 2.3.1 Global Mean Radiation Budget).

    By the range of energy fluxes compared to the energy accumulation, I think it is reasonable to assume that the models would have been all over the place if not constrained by heavy tuning to previous observations, such as the warming from the 1970s to 2000. That many models may have seemed right for the wrong reasons should be beyond reasonable doubt.

  19. If anthropogenic carbon dioxide is significantly affecting atmospheric CO2 levels, it should show up in the Mauna Loa CO2 data. Using Excel I plotted Mauna Loa data beginning with decimal year 1958.208 through 2017.375 and got a trend line of 9E-05x² + 0.0656x + 314.36 with an R² of 0.99304. Since there has been year-to-year fluctuation in anthropogenic CO2 release, if anthropogenic CO2 significantly affected atmospheric CO2 levels I would have expected a much lower R². Where is the anthropogenic signal in the remarkably smooth Mauna Loa data?
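    As a sketch of the fitting exercise described above (synthetic data only, not the actual Mauna Loa record; the coefficients below merely echo the commenter’s Excel fit, with x counted in months):

```python
import numpy as np

# Synthetic stand-in for the Mauna Loa monthly series: quadratic growth
# plus a seasonal cycle.  The quadratic coefficients loosely echo the
# fit quoted above (x in months); this is illustration, not real data.
months = np.arange(710)
co2 = (314.36 + 0.0656 * months + 9e-5 * months**2
       + 2.5 * np.sin(2 * np.pi * months / 12))   # seasonal wiggle

coeffs = np.polyfit(months, co2, deg=2)   # quadratic trend fit
fitted = np.polyval(coeffs, months)

# Coefficient of determination (R^2)
ss_res = np.sum((co2 - fitted) ** 2)
ss_tot = np.sum((co2 - co2.mean()) ** 2)
r_sq = 1.0 - ss_res / ss_tot              # stays high despite the wiggle
```

    Even with a seasonal wiggle on top, the quadratic captures almost all of the variance, so a high R² by itself says little about how smooth the year-to-year forcing is.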

    • This shows accumulated emissions versus accumulated CO2. They are proportional to an extent that can’t be coincidental.
      http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg

      • Source please.
        How do you know that the values on the y axis were not estimated from the values on the x axis in the first place?

      • Because they are proportional not equal, and not perfectly proportional.

      • So you are just guessing.

      • Which part don’t you believe, the emissions or the accumulated CO2 in the atmosphere? Both are easy datasets to check for yourself. Here’s another one. Coincidence that they ramp up at the same time? I think not. I thought this part was settled, but apparently there are still folks out there who haven’t even looked at data like this before, or who have some wacko theory about a coincidence occurring.
        https://s3-us-west-1.amazonaws.com/www.moyhu.org/misc/ghg/m1.png

      • It is not about what I believe. It is about what you know.

      • It seems more about what you don’t know about actual numbers. Do these plots come as a complete surprise to you? Have you seen other versions that you have believed? Did you know that our emissions (nearly 2000 GtCO2 by now) account for about double the CO2 increase in the atmosphere (about 900 GtCO2), and that these are both well known numbers? How about that 75% of these emissions, CO2 increase, and temperature rise have occurred since 1950?
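        For what it’s worth, the round numbers quoted above can be checked on the back of an envelope (a hedged sketch: the ~280 ppm pre-industrial baseline and the ppm-to-GtCO2 conversion are standard figures; the 2000 GtCO2 is the commenter’s):

```python
# One ppm of atmospheric CO2 corresponds to roughly 2.13 GtC, i.e.
# about 7.81 GtCO2 (multiplying by the 44/12 molecular-mass ratio).
GTCO2_PER_PPM = 2.13 * 44.0 / 12.0

ppm_rise = 405 - 280                       # ~pre-industrial to ~2017
atmos_increase = ppm_rise * GTCO2_PER_PPM  # GtCO2 now in the air
cumulative_emissions = 2000.0              # GtCO2, figure quoted above

# "Airborne fraction": share of emissions remaining in the atmosphere
airborne_fraction = atmos_increase / cumulative_emissions
```

        This is roughly the “about half absorbed by sinks” that comes up repeatedly in this thread.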

      • I just asked for the source. I had some questions about that figure and the basis for it.

      • You can tell from the web page that it is from Ferdinand Engelbeen, but it looks like standard data that you can find from any source.

      • Yes, you seemed completely taken by surprise by it. Anyway there it is.
        http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1900_2004.jpg

      • I’m not surprised, and I’m not believing anything; I’m just curious. I understand that the figure comes from Ferdinand Engelbeen’s site, but I still don’t know the basis for it, and I cannot find the basis for it there. It was a simple question: I just wanted a link to the source for that figure, as I expected the source might contain traceability for the numbers it represents. I have now been given the figure twice, but I still don’t have the basis for it.

      • There are places you can find the data such as this one, so it is possible to create your own plot to check if you are really interested.
        http://cdiac.ess-dive.lbl.gov/trends/emis/tre_glob_2014.html

      • http://www.drroyspencer.com/wp-content/uploads/mauna-loa-co2-vs-emissions.jpg

        Here are the growth rates in the MLO era of those cumulative emissions. As can readily be seen, the growth rates are all over the place. (The parameters in cumulative-emissions graphs are such that these details don’t show up.) Of particular note are the step rises in the carbon growth rate circa 1980 and 2000 that coincide with well-known step rises in temperature. Were it not for those step rises, trends would be nowhere near Ferdinand’s 53% ratio. So, yes, it’s easy to see that the cumulative emissions graphs ARE mere coincidence. A coincidence that is clearly identifiable as being maintained solely by the rise in temperature…

      • Annual increases are affected by the global temperature, but these don’t have a cumulative effect. The amount emitted is about twice that in the atmosphere. So you may ask why the annual increase depends on the temperature. It’s because in warmer years less can be absorbed by the natural processes that remove about half of the emissions. For example, warmer water holds less CO2 than colder water.

      • Afonzarelli, the CO2 rise does seem to track the upper bound of the temperature chart.

      • The rise in CO2 is also coincident with the rise in temperature of the Southern Ocean over and above an equilibrium state…

      • (the above wft graph shows the calculated growth rate of CO2 from temperature going back to 1850)…

      • This is just that Salby nonsense, isn’t it? He thinks all our CO2 comes from outgassing but doesn’t explain how the ocean acidifies at the same time, which is a neat trick of carbon non-conservation. He is clueless about where all that carbon could be coming from. He shows graphs and charts that fool many, but can’t answer the basic question.

      • This is just that Salby stuff of nonsense, isn’t it?

        No, jim, it’s just data…
        You seem to have a penchant for things that are highly coincidental. Well, it’s highly coincidental that temperature has been tracking with the carbon growth rate since the inception of the MLO data set over half a century ago, the pinatubo years being the lone exception. An examination of the carbon data and emissions data (spencer’s graph) shows only two data sets that trend alike, but don’t look alike. The carbon data and the sst data, on the other hand, actually look alike. THAT, my friend, is highly coincidental. Data shouldn’t be ignored when it runs counter to our biases. i’m not suggesting here that the rise isn’t anthropogenic, only that there seems to be something going on that i’ve never seen adequately explained…

      • The natural carbon sink tracks the temperature for obvious reasons, so there is no surprise there. This means CO2 increases faster in warmer years and less fast in colder years even for a constant emission. So the growth varies between fast and less fast, but growth it is, because of the emissions.

      • Jim D, are you saying emissions have to go to zero before atmospheric CO2 is stabilized? That it will be impossible to prevent further increases in atmospheric CO2 until accumulated emissions ‘flat-line’? Because that certainly is what your plot implies (assuming the proportionality it shows isn’t coincidental).

      • No, the proportionality is because of the exponential growth and the sink trying to keep up. The fraction absorbed would increase if we reduce the emission growth rate below 2% per year. If we stop emission growth, we would eventually reach an equilibrium CO2 level that depends on the emission rate.

      • Jim D, your plot doesn’t really support that assertion. For a claim about the relationship between accumulated CO2 and the exponential growth rate of emissions, you would need to plot the year-over-year increase in CO2 against the year-over-year increase in emissions.

        Taken at face value, your plot (accumulated CO2 vs accumulated emissions) implies emissions have to go to zero in order for atmospheric CO2 to stabilize. The plot shows that CO2 would still accumulate in the atmosphere even if we reduce the emission growth rate below 2% per year.

      • No, the plot implies that the absorption time scale of CO2 is longer than the current emission time scale. If we reduced the rate of emission growth, the percentage absorbed would be higher, but so far the rate has been fixed at a 33 year doubling time scale for the last century or so, so we don’t see any other fractions yet. The way I see it, natural absorption has a time scale and it can be calculated from the fraction.

      • No, Jim D, the plot has no time scale on it and therefore doesn’t imply anything about time scale. Nonetheless you seem to be claiming that the slope of the plot depends on a time scale parameter, the emissions growth rate.

        You assert with no evidence that if the growth rate were to decrease the slope of the plot would decrease. You assert with no evidence that if the growth rate were to decrease below 2%, the slope would go negative. And you also assert with no evidence that the only reason the current plot shows a constant, increasing slope is because the emission growth rate has been rock solid for over a century at a 33 year doubling time scale, which is a highly dubious contention in itself.

        Nothing in the plot supports any of these assertions.

      • I am not asserting this from the plot alone. We emit CO2 with a growth rate time scale of 25 years. The earth absorbs with a time scale of 50 years. The net is the observed CO2 growth with a time scale of 50 years. If the emission time scale and absorption time scale remain constant, you get that linear behavior. Maybe you don’t believe there is either an emission or absorption time scale and that is up to you. I am not sure what leads you to think absorption only stops when emission stops. It doesn’t. It continues until an equilibrium is reached with the ocean and biosphere.
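        The time-scale argument above can be illustrated with a toy one-box sketch (assumed parameters, not a real carbon-cycle model): exponentially growing emissions plus a sink that removes a fixed fraction of the airborne excess each year yield a near-constant ratio of airborne increase to cumulative emissions, which is why the cumulative plot looks linear.

```python
import math

r = math.log(2) / 33.0   # emissions doubling every ~33 years (as claimed)
tau = 50.0               # assumed absorption timescale, years

emission, excess, cumulative = 1.0, 0.0, 0.0
ratios = []
for year in range(200):
    excess += emission - excess / tau   # sink removes excess/tau per year
    cumulative += emission
    ratios.append(excess / cumulative)
    emission *= math.exp(r)             # exponential growth in emissions

# The ratio settles near r / (r + 1/tau), roughly one half with these
# numbers, so accumulated CO2 tracks accumulated emissions linearly.
final_ratio = ratios[-1]
```

        The linearity here is a consequence of the two time scales staying fixed; change either one and the slope of the cumulative plot changes with it.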

    • Calculations of FF emissions of CO2 are more uncertain than acknowledged. They take fuel consumption from fuel-stock transactions and multiply by emission factors involving many assumptions, with considerable variability depending upon the burning conditions.
      Why not just compare FF consumption with temperatures, since the remedy is supposed to be FF reductions?
      https://rclutz.files.wordpress.com/2017/04/wffc-and-gmtuah2.png

      https://rclutz.wordpress.com/2017/04/09/fossil-fuels-≠-global-warming/

    • Based on the 3 replies, the anthropogenic signal shows up in calculated emissions but not in actual Mauna Loa data. Unless someone wants to challenge the Mauna Loa data I would think the Mauna Loa data says either (or both) the calculations are wrong and/or CO2 atmospheric interchange is poorly understood. I hope data still trumps calculations.

      • Emissions account for double the Mauna Loa increase, so I am not sure what you are trying to account for. Do you need to understand where else the emissions are going apart from what ends up in the atmosphere? There are sinks in the ocean and biosphere.

  20. “a diagram of temperature change versus cumulative carbon emissions since 1870.”
    The problem with their projections likely derives from their reliance on cumulative emissions. Please see:
    https://ssrn.com/abstract=3000932

  21. Fifteen-year trends are somewhat unreliable because from one decade to the next they are all over the place. For example, look at this below. The models underestimated the trend prior to 1998, probably for the same reason they overestimated it after: these trends are not stable and predictable, but varied by a factor of at least three between these periods, spanning the model trends. On the other hand, thirty-year trends are very steady and have been just under 0.2 C per decade since the one centered on 1980. When evaluating models, you need at least 30 years to define a true climate trend. This came up again recently in a posted item, where they say you need 27 years because of the signal-to-noise ratio for surface temperatures. Models can’t be expected to fit the noise in a 15-year period.
    http://ar5-syr.ipcc.ch/ipcc/sites/default/files/AR5_SYR_Box_1.1_Figure_1.png
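    A hedged illustration of the point (synthetic data with an assumed AR(1) “weather” noise model, not observations): a fixed warming rate plus red noise gives widely scattered 15-year trends but tightly clustered 30-year trends.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 150
noise = np.zeros(n_years)
for i in range(1, n_years):            # AR(1) interannual variability
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
temp = 0.018 * np.arange(n_years) + noise   # steady 0.18 C/decade + noise

def window_trends(series, w):
    """OLS trend (C per decade) in every w-year window."""
    x = np.arange(w)
    return np.array([10.0 * np.polyfit(x, series[i:i + w], 1)[0]
                     for i in range(len(series) - w + 1)])

spread_15 = window_trends(temp, 15).std()
spread_30 = window_trends(temp, 30).std()
# spread_15 comes out several times larger than spread_30
```

    A single model run would show comparable 15-year scatter; averaging an ensemble cancels it, which is why the ensemble mean should not be expected to reproduce any particular 15-year trend.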

    • Fashion models can’t afford to misjudge a trend.

      Back to Gavin’s “discovery” : Apparently models can model anything. The only trouble is that we don’t know in advance what the future will bring.

    • (c)
      That’s great news.
      The first two show a bistable system. This stuff is cool.

    • Now, if we had models that showed the same barbell distributions on 14-15 year timescales.

      • It depends which 15 years you take. This shows the smoothed 15 and 30 year trends since 1950. You will see that the 30-year trend is stable to a few percent, while the 15-year trend is variable to a factor of three in the last 15 years. The 15-year noise is not likely to be predictable and when you average the models, this type of noise cancels out, but you should see this kind of variability in a single model run.
        http://woodfortrees.org/plot/gistemp/mean:240/mean:120/derivative/scale:120/from:1950/plot/gistemp/mean:120/mean:60/derivative/scale:120/from:1950
        Anyway, bottom line, pointless to try to verify 15 year trends because they are not climate, just noise, and only represent the particular 15 years they were measured, not the next or previous.

      • Jim D can you explain what you are plotting here and why you started at 1950?

      • HAS, it shows the derivative of smoothed 15-year and 30-year temperatures to show how much variability you have in 15-year trends that don’t show up in 30-year trends. The smoothing is done with a triangular filter that removes end effects in the derivatives. The derivative is scaled to a decadal trend, e.g. 0.2 C per decade. Since 1950 you can see the differences in recent decades. If you look before 1950, 15-year trends show some massive swings especially around 1940 that dominate the scaling, and further reinforce how unhelpful they are for climate trends, so I focused in on more recent decades which is what the discussion was about.
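        The recipe described above can be sketched as follows (synthetic input assumed; the woodfortrees `mean:240/mean:120` cascade is a triangular filter in months, and scaling the monthly first difference by 120 converts it to degrees C per decade):

```python
import numpy as np

def running_mean(x, w):
    """Simple boxcar running mean of width w (valid region only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Synthetic monthly series with a steady 0.18 C/decade rise
months = np.arange(720)                     # 60 years of monthly data
temp = 0.0015 * months                      # 0.0015 C per month

# Two cascaded running means = one triangular smoothing filter
smooth = running_mean(running_mean(temp, 240), 120)

# First difference (C/month) scaled by 120 months -> C per decade
trend = np.diff(smooth) * 120.0
```

        On a purely linear input the recovered trend is constant at 0.18 C/decade; on real data the diff of the smoothed series wiggles, which is exactly the variability being discussed.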

  22. Harry Twinotter

    Well I guess discussion on how well the climate model projections match actual observations is a step up from global warming denial.

  23. 
    Are Climate Models Overstating Warming?

    Has the output of climate models been so ideologically motivated and politically dictated as to be materially biased and wholly unreliable? Yup! Global warming is a Left-versus-Right issue, not science. Given their prejudices and preconceptions, the Western academics of CAGW can no longer be trusted; they believe history and the scientific method are trite. Reality is not a collection of platitudes. We cannot ignore the truth embodied in nature and the lessons of the past by simply dismissing them as trite.

  24. Pingback: Ross McKitrick: Despite Denial, Climate Models Are Running Too Hot | The Global Warming Policy Forum (GWPF)

  25. So, as regards surface temperatures, the models do indeed overestimate warming and even the occurrence of strong El Nino warming in 2015/16 has failed to bring them into line with reality. Zeke’s graph was a ‘statistical trick’, a conclusion which he appears not to have directly challenged.

    If the oceans come off the boil in the next decade or two and we see, perhaps, a preponderance of La Niña events, the AMO turning down and Arctic sea-ice increasing, it’s going to be nigh on impossible for climate scientists to convince us that urgent reductions of CO2 emissions are justified by climate projections which are increasingly divergent from reality, however many statistical sleights of hand are used to demonstrate that the models are still performing reasonably well. Will that be enough to derail a multi-trillion-dollar green industry? Somehow I doubt it, sadly.

    • Anybody can play “if”.

      If the PDO remains positive and El Niño events come even close to matching La Niña events, models would eventually appear to be running very cold.

    • Jaime,
      Was it evah about the science?

      • This whole discussion is about the wrong set of observations.
        They observed that celebrating a Paris agreement that gives a pass to the world’s largest emitter of CO2 was intellectually indefensible.
        They observed that the European policy of subsidizing wind/solar and shutting down nukes created cost without benefit while the American approach of natural gas and keeping nukes created both economic and environmental benefit.
        They observed that the American pull out of Paris meant the EU proposal to be generous with American tax dollars wasn’t going to happen.
        They observed that all these political problems go away, and you can keep the grant dollars flowing to climate research, simply by saying it isn’t as bad as they thought (but might be one day).
        So, science and observations are now in alignment.

  26. The answer to the post question is yes. In fact, CMIP5 models forecast future warming at 5 times the rate of past warming. And as Rud says, the one model closest to observations does not project alarming warming.

    https://rclutz.wordpress.com/2015/03/24/temperatures-according-to-climate-models/

  27. Betteridge says no.

  28. The earth is the receptor of radiation. The sun is the emitter. Please discuss the variations in emitted radiation.

  29. Lies, Damn Lies, and ~~Statistics~~ Climate Models

  30. Does it really matter whether the models are spot-on accurate or not? The bottom line is that all the charts show the earth getting warmer, which is not a good thing (see reports of the barrier reef dying, extreme weather, etc.). Equally, if there is some doubt about the effectiveness of approaches to fixing the problem, surely it is safer to try things like reducing CO2 emissions? After all, at best we fix warming; at worst we get longer before fossil fuels run out and have a less polluted environment.

    • Dave

      If you could have had news reports from the past they would have shown the same in the 1730s, the 1500s, much of the 1300s, much of the period 850 to 1150, much of the Roman Period and during the Bronze Age. It is nothing new.

      A good read is ‘Times of Feast, Times of Famine’ by Le Roy Ladurie or, for a more scholarly analysis, ‘Climate, History and the Modern World’ by Hubert Lamb, who established the Climatic Research Unit at UEA.

      tonyb

      • If you could have had news reports from the past they would have shown the same in the 1730’s, the 1500’s, much of the 1300’s, much of the period 850 to 1150, much of the Roman Period and during the Bronze Age. it is nothing new.

        This isn’t correct. Paleoclimate data doesn’t show warming during these periods of the same rapidity and magnitude as today.

      • I trust you are using the same temp proxies to measure both when you say that.

      • Benjamin

        Paleoclimate provides an extremely coarse filter, because the proxy data have coarse resolution, often spanning many years. They do not show the rapid annual or decadal changes which often occur.

        tonyb

      • They do not show the rapid annual or decadal changes which often occur

        And you know they “often occur” because they’re not shown, right?

        And you can provide figures for how large and rapid these “not shown” decadal changes are in order to back up your claim that current changes are “nothing new”?

      • Benjamin and Tall Guy, You can see the fast changes in the temperature data from 1910 to 1940.

      • Or from winter to summer…

      • Benjamin and Tall Guy, You can see the fast changes in the temperature data from 1910 to 1940.

        1910 to 1940 was about 0.3 degrees at 0.1/decade.

        1960 to present was about 0.9 degree at 0.16/decade. Three times larger in total, 60% faster.

        Present warming is both faster and at a total greater magnitude.

        Climatereason: as you point out, the paleoclimate proxies currently do not have the resolution to show past warming this fast (though maybe this large), which directly refutes your argument that the warming in the past was just as big and fast.

        You can’t have it both ways. Your argument is self-contradicting.

      • Benjamin, where are you getting your data from? (Most data sets peg early twentieth century warming at 0.5 C)…

      • Alforonzi, GISS temperature trend from 1910 to 1940.

      • Benjamin,

        If you look at Hadcrut4 it looks like about 0.5 degrees from 1910-1940 and about 0.5 to 0.7 from 1978-2008 or 1970-2000. Might be why the IPCC believes that more than half of the warming is anthropogenic. Not 3/4. And gee that’s about 0.16 C/decade for 1910-40. And remember there are error bars on all these. That’s why I am a lukewarmer rather than an alarmist.

      • Proxies do reveal inter-decadal changes, for example sharp cooling from solar minima during the MWP from the 790’s and from the 1120’s.
        https://wattsupwiththat.files.wordpress.com/2014/12/esperetal2014b.jpg

    • Bottom line is that all the charts show the earth getting warmer, which is not a good thing ( see reports of barrier reef dying, extreme weather etc.)

      Maybe it is a good thing.

      Benefits: use of the energy with existing infrastructure, CO2 enriched plant growth and crop yields, CO2 increased drought tolerance of plants, CO2 increased heat tolerance of plants.

      Further, increased mean temperature means reduced temperature variability. It also means less violent storms because of reduced kinetic energy.

      At some amount, which we’re probably at, global warming doesn’t matter much. At some amount beyond that, global warming may be positive. At some amount beyond that, the net benefit might phase into net detriment. But those that have fallen in love with the solution rather than the problem are ready to declare catastrophe to suit their agendas.

    • Harry Twinotter

      Dave Robinson.

      Well said. The models predict warming, and warming is what is being observed. That is certain. The model nit-pickers ignore that simple fact.

      The rational thing to do is reduce CO2 pollution.

      • Harry, it’s also a simple fact that the earth is generally either warming or cooling. So on the basis of this simple fact, models by your definition have a 50/50 chance of being correct. (even worse, it appears that climate models have been superimposed on a hundreds year period of natural warming)…

      • Harry, it’s also a simple fact that the earth is generally either warming or cooling

        It’s impossible for the Earth’s temperature to stay the same?

        Paleoclimate records show that the Earth’s temperature generally seems pretty stable. We don’t see any warming in the past 5k years as fast and as large as today’s.

      • It would seem that the rational thing to do would be to begin to protect or minimize the population’s exposure to negative consequences, not to spend multiple lifetimes and the world’s GNP guessing at what brought about those consequences.

        There’s ample evidence (not computer models) all around us of consequences that happened in just the recent past. The Sangamon, the Wisconsin, and even the current Holocene have left evidence of negative consequences of something we don’t understand, other than that our African ancestors had nothing to do with it. Sea level rose and fell multiple times in just the last 120,000 years. The Florida Keys formed less than 100,000 years ago from sand pushed onto “dead coral” and the formation of limestone (see the history of Windley Key). And an entire 50,000-year-old bald cypress forest was discovered 15 miles out in the Gulf south of Gulf Shores, AL.

        The only thing totally clear about attribution is that we and our ancestors had nothing to do with it and very likely can never “fix” or control it in the next thousand years. Such a waste of money and time would be better spent developing plans and policies to minimize loss of life and property (adaptation) for the many Miami, Houston, etc. episodes to come.

      • Harry Twinotter

        afonzarelli.

        I don’t see what your point is. The earth does receive about the same amount of energy from the sun year to year, but that is not the issue; the issue is how warm it gets at the surface and in the lower atmosphere. Greenhouse gases change the distribution of the heat.

        50/50 chance of being correct? No. It is not a coin-toss. The climate models predict a trend of rising temperatures at the surface and lower atmosphere with rising greenhouse gases, and this is exactly what is happening. “Warming or cooling” is not what is being observed, unless you are referring to natural fluctuations. But fluctuations are not a trend.

        “a hundred year period of natural warming”, that is an interesting hypothesis. I don’t see where you are getting the evidence for that. Also, if you claim a hundred years of natural warming, you might want to tell us what you think is causing the natural warming.

      • Harry Twinotter

        Phillip Parrish.

        “The only thing totally clear about attribution is that we and our ancestors had nothing to do with it and very likely can never “fix” or control it in the next thousand years”.

        No, just no. There is plenty of evidence that rising greenhouse gases do indeed change the climate. A 40% increase in CO2 in 200-300 years is huge, and I do mean huge.

      • Harry Twinotter

        Benjamin Wichchester.

        “It’s impossible for the Earth’s temperature to stay the same?”

        No, it is not impossible. It is possible for the Earth’s temperature to stay the same, once the natural variability has had enough time to cancel out. A change requires a forcing.

      • David Springer

        It’s too cold for growing crops in much of the northern hemisphere.

        The rational thing to do is keep adding CO2 to the atmosphere so all the frozen wasteland becomes productive farm land.

      • Reply to David Springer:

        Dead right, David: add CO2! Most commentators will persist in conflating CO2 with contamination, pollution, and garbage. But no, CO2 alone will PROMOTE plant (and therefore animal) life.

        Further, there apparently have been times in prehistory when temperature rise PRECEDED CO2 rise, and also times when the temperature rise reached 9 deg. C from today’s.

        Further, it is a sad fact that I can sit here and invent, if I choose, a “series” of something-or-other data, credit it to the Environment Ministry of Lower Bullroarvia, and start a big argument.

        Frank Gue, Professional Engineer, Burlington, Canada.


      • David Springer

        Harry Twinotter | September 29, 2017 at 1:02 am |

        “A 40% increase in CO2 in 200-300 years is huge, and I do mean huge.”

        A 40% increase in white blue-eyed alligators is huge, and I do mean huge.

      • Reply to David Springer and so many others:

        CO2 IS NOT A POLLUTANT.

        Frank Gue, Professional Engineer, Burlington, ON, Canada

      • Nice spin job, Harry, you misconstrued everything I said. As well, you misquoted me: a hundreds-year period of natural warming (as in 300 years)…

      • CO2 is not a pollutant.

        Frank, Burlington


      • what you think is causing the natural warming.

        Natural warming was caused by ice extent decrease since the Little Ice Age. The warming paused because the overall ice extent decrease paused.
        Albedo decrease has paused, look at earthshine and CERES data.

      • Harry Twinotter

        “Natural warming was caused by ice extent decrease since the Little Ice Age.”

        If true, what do you think the natural forcing is?

        “Albedo decrease has paused, look at earthshine and CERES data”

        If you have a link to the chart or whatever, I will have a quick look.

      • David Springer

        “If true, what do you think the natural forcing is?”

        Changes can happen without external forcings.

        Pope is describing a feedback loop that results in an oscillation over many decades. The feedback is change in albedo. When the ocean is warming it evaporates faster and that causes more snow to fall. As more snow falls albedo increases as it accumulates causing the ocean to cool. As the ocean cools less snow falls and albedo starts dropping causing the ocean to begin warming again.

        There is a decades long cycle to snow advancing and retreating. No external change in forcing required to keep it going.

        Feedback loops are common in both nature and technology.

    • This isn’t correct. Paleoclimate data doesn’t show warming during these periods of the same rapidity and magnitude as today.

      That is because the past warming records are actual real data. When time passes and current warming forecasts are recorded as real data, they will be well inside the bounds of past warming.

  31. Ross McKitrick, thank you for the essay. I think it was good.

  32. Dave, Does it really matter whether the models are spot-on accurate or not?

    Your wording is bad. “Spot on” is not what is being discussed. The problem is that the error is consistent, and large enough that the forecasts (scenarios, predictions, expectations, etc.) are undependable for formulating public policy.

    • GA, all.

      Why ON EARTH are we spending so much time and energy on anthropogenic climate change (ACC), which is the biggest “maybe” ever to arrive on the Internet?

      -especially when anthropogenic garbage accretion absolutely, without any doubt whatsoever, visible to our own eyes, easily measured, readily proved to be human-caused, right now is killing millions of food fish and other creatures and is threatening our own medium-term survival?

      For the big picture, look at the Pacific Gyre, thousands of square kilometers circulating slowly in the Pacific Ocean, much of it debris from those confounded, unnecessary personal water bottles.

      For the little picture, look at your own municipal landfill, which is probably, like ours, overflowing and causing your officials to hunt desperately for more places to throw things away, when there actually isn’t any place called “away”.

      We should quit arguing about ACC and get serious about things like Energy From Waste (EFW) and proper pricing of garbage disposal.

      – all the while trying to avoid all those hidden agendas.

      Frank Gue Professional Engineer.


  33. to whoever commented that Ross didn’t say or imply anything about AGW or ACC:

    No, and I didn’t say he did. Read my post more carefully. If it can be interpreted that way, I apologize. Ross is a careful, conservative (no, I don’t mean Conservative!), and dependable investigator.

    Frank Gue

    This is a perturbed-physics ensemble. All of the thousands of solutions of the model are first constrained to proximity to observed surface temperatures over a training period. The best that can be said is that it shows a ‘broader range’ of solutions than the IPCC opportunistic ensembles.

    https://watertechbyrie.files.wordpress.com/2014/06/rowlands-fig-1.png
    http://www.nature.com/ngeo/journal/v5/n4/abs/ngeo1430.html?foxtrotcallback=true

    To create the IPCC ensemble, modelers pick a solution from many feasible ones, and this is graphed with arbitrarily chosen solutions from other models. So how is the one solution chosen by the modelers?

    “AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.” http://www.pnas.org/content/104/21/8709.full

    That’s right. They pull it out of their arses.

    • The exam (CMIP5) may give a clue about which models were selected:
      «RCP8.5 is a so-called ‘baseline’ scenario that does not include any specific climate mitigation target. The greenhouse gas emissions and concentrations in this scenario increase considerably over time, leading to a radiative forcing of 8.5 W/m2 at the end of the century. While many scenario assumptions and results of the RCP8.5 are already well documented, we review in this paper some of the main scenario characteristics with respect to the relative positioning compared to the broader scenario literature. In addition, we summarize main methodological improvements and extensions that were necessary to make the RCP8.5 ready for its main purpose, i.e., to serve as input to the Coupled Model Intercomparison Project Phase 5 (CMIP5) of the climate community. CMIP5 forms an important element in the development of the next generation of climate projections for the forthcoming IPCC Fifth Assessment Report (AR5).»
      https://link.springer.com/content/pdf/10.1007%2Fs10584-011-0149-y.pdf

      That is not exactly a blind test; the expected answer was pretty much known up front. Obviously the models that didn’t provide something in the vicinity of the expected output would not even be suggested as a candidate for the Intercomparison Project.

      • Representative Concentration Pathway 8.5 is a scenario for forcings used as inputs to the models, not the models themselves. The models are developed by different modelling teams around the world, and because national pride is involved there is no real sign that survival of the fittest applies. All the models use the same standard RCPs.

        The RCP8.5 scenario was developed to represent the high-emissions scenarios used in the literature at the time.

        The particular real-world developments needed to get to this scenario are well-nigh impossible, so don’t use it for anything other than an upper bound.

      • According to the following paper, both RCP8.5 and RCP6.0 seem to be wildly exaggerated: The implications of fossil fuel supply constraints on climate change projections: A supply-driven analysis

        From the abstract: «The emission scenarios used by the IPCC and by mainstream climate scientists are largely derived from the predicted demand for fossil fuels, and in our view take insufficient consideration of the constrained emissions that are likely due to the depletion of these fuels. This paper, by contrast, takes a supply-side view of CO2 emission, and generates two supply-driven emission scenarios based on a comprehensive investigation of likely long-term pathways of fossil fuel production drawn from peer-reviewed literature published since 2000. The potential rapid increases in the supply of the non-conventional fossil fuels are also investigated. Climate projections calculated in this paper indicate that the future atmospheric CO2 concentration will not exceed 610 ppm in this century; and that the increase in global surface temperature will be lower than 2.6 DegC compared to pre-industrial level even if there is a significant increase in the production of non-conventional fossil fuels. Our results indicate therefore that the IPCC’s climate projections overestimate the upper-bound of climate change.»

        Figure 2 and figure 4 in the linked paper indicate that both RCP8.5 and RCP6.0 may be totally unrealistic (SD-PC = supply-driven ‘peak conventional fossil fuels’ scenario; SD-PCU = supply-driven ‘peak conventional & non-conventional fossil fuels’ scenario).

  35. Pingback: Are Climate Models Overstating Warming? | Watts Up With That?

    There is no global warming due to the CO2/GHG effect. So what the models are or are not doing amounts to nothing but noise, because none of the pillars AGW is based on has come to pass.

    They are the lower-tropospheric hot spot, the call for the evolution of a more positive AO/NAO, and the call for a decrease in OLR.

    It can be shown that it is solar activity, tied into oceanic temperatures, which governs global temperatures. Not to mention the likelihood of an increasing albedo during very low solar periods due to an increase in clouds, snow cover and major volcanic activity.

    I have said year 2017 is the year things change, because it is the first year in which my two qualifiers have come into play for solar to have a cooling effect upon the climate.

    Those being 10+ years of sub-normal solar activity in general, followed by low average-value solar parameters, which I have listed many times.
    The data and climatic history do not support AGW. I will send a graph which shows what correlates, and will continue to correlate, with global temperatures.

    If one is following overall global temperatures, the trend over the last few months has been down, and without the aid of favorable oceanic temperatures, kiss AGW goodbye.

  37. http://hockeyschtick.blogspot.com/2010/01/climate-modeling-ocean-oscillations.html

    This is how you evaluate given solar activity versus global temperatures.

    It is the ocean cooling effect tied into solar activity which gives you a given global temperature. CO2 has nothing to do with it.

    The GHG effect is a by-product of the climate/environment.

    We will know much more starting now and going forward but I am confident this correlation will hold up going forward.

  38. In my previous post, I meant to say that the overall sea surface temperature trend has been down for the last few months.

  39. The answer is yes, and has been for many years. I have been calling for the modeling system to be shut down, because a scientific prediction method that gives wrong answers consistently is clearly wrong science. But no, they keep using wrong predictions to justify billions spent on Kyoto, Copenhagen, Paris and others. This perverse insistence upon wrong methods makes climate science nothing more than a pseudo-science.

  40. I would not even begin to question the theoretical or calculated amounts of CO2 that humans are putting into the atmosphere, but I would point to 71% of the globe, the oceans (95% unexplored), that might hold some staggering potential contribution. Volcanoes are typically laughed at and quickly dismissed, but what about those submarine volcanoes and vents in the 95% unexplored areas of the oceans? We have small amounts of simple data from thousands of ARGO floats down to 2000 meters, but the average depth of all the oceans approaches 4000 meters. And even if ARGO floats could listen, water can’t boil (no bubbles) at depths below ARGO. So what is the real CO2 contribution here, and is there a measurable heat contribution also, over long time scales, that would promote release into the atmosphere? I’m not even convinced that CO2 from all sources (in the atmosphere) is being accurately captured/measured. If not, then the models are more broken than anyone suspects.

    • The oceans are not releasing CO2, they are absorbing it. CO2 from submarine volcanoes gets dissolved and transformed.

      • Full fathom five…

         Nothing that
        doth fade,
         But doth suffer
        a sea-change
         Into something
        rich and strange.

      • So there is no CO2 in the ocean and the notion that there is 60 times more CO2 in the ocean than the atmosphere is false. The ocean eats and transforms CO2 just as it ate the model predicted elevated warming over the past 20 years. OR.. The ocean can distinguish between CO2 that falls out of the atmosphere and that belched from a submarine volcano or vent. Sounds plausible to me. That shoots holes in my notion that transfer might actually happen and be quite different from one region to another. Guess I should run along and pick up some salad dressing to pour over my green leafy CO2 pollution.

      • No. We are talking about net transfer. The ocean is a net carbon sink.

        Most of the carbon in the ocean is not as CO2, but bicarbonates and carbonates.

        Being a net sink means more CO2 enters the oceans than leaves them, and more CO2 is made into bicarbonates than the opposite.

        During warming periods the oceans usually release CO2. This is a special warming period, because the oceans are taking up CO2 instead of releasing it. The reason is that the partial-pressure difference dominates over temperature-driven changes in gas solubility. After all, the oceans are warming very little.

      • Dang, Javier. Looks like we might be 70% or more in agreement after all. It’s amazing what you can entice out of someone by dragging some silly bait in front of them to draw them out of the now normal 140 character limitation mode. Thanks for the response.

      • Javier, even if aCO2 were entirely absorbed by carbon sinks (and the entire rise were natural), then nature would still be a net sink for carbon. Thus, the observation that nature (or the ocean, for that matter) is a net sink for carbon tells us nothing about attribution…

    • There is a report somewhere in this monstrous (and mostly pointless) file to the effect that Mount Pinatubo, in four days, emitted more CO2 than all of humankind for all the time we humans have been here. Similarly with that volcano in Iceland that stopped all passenger jets for several days. There are hundreds of volcanoes on earth, most far under water. Then there is the Great Atlantic Rift…

      Good grief. We humans are a pimple on the pimple of CO2 generation. Let’s spend our energies on something useful.

      Frank, Burlington Ontario

  41. A point, that someone may have already made (apologies if so) is that the Millar et al. paper is really looking at how much we have warmed for a given total emission, not for a given total change in forcing. Normally we represent climate sensitivity as a relationship between temperature changes and changes in forcing, not temperature changes and changes in total emission. It is possible (although this is not obvious) that we have warmed less than expected, given how much we’ve emitted. This, however, does not mean that climate models are warming too much relative to the change in forcing.

    • …’This, however, does not mean that climate models are warming
      too much relative to the change in forcing.’

      Doth it not?

    • It means we have been sold a 2-3°C warming that is not going to take place.
      I don’t like being shortchanged. I demand the full amount of warming that my emissions deserve.

      • Even with the Paris agreement, 2-3oC of warming is still quite likely.

      • That’s what the models say, but we already know that the models are wrong and overheating. The whole thing has been built over shaky assumptions, and then the assumptions assumed to be facts. We have seen this process many times since Malthus. Population bomb, limits to growth, acid rain, ozone hole, global warming. All built over assumptions that turn out to be incorrect. And we even see the same people moving from one false scare to the next.

      • The whole thing has been built over shaky assumptions, and then the assumptions assumed to be facts.

        Sounds like the Javier Conjecture.

        https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/#comment-813217

      • Never my conjecture. In my articles I don’t add anything to what has been published by climate scientists. 100% science based and fully referenced.

      • I am with verytallguy all the way, especially with regard to

        ” I demand the full amount of warming that my emissions deserve.”

        Frank

      • In my articles I don’t add anything to what has been published by climate scientists.

        Except when you claim that Haigh supports your position when she explicitly rejects it, of course.

        I don’t care much about scientists’ opinions or claims. 99% of scientific hypotheses proposed by scientists are eventually found to be incorrect and rejected. They are 2 cents apiece, like everybody else’s opinions. What I care about is the data and the evidence presented. If the scientist works well, the data is correct forever. It can always be interpreted to support a new hypothesis, hopefully a more correct one.

        So I don’t care much what Haigh thinks or supports. I care a lot about the evidence she is uncovering and showing in her articles. As every scientist, she will be proven right on some things and wrong on others.

        I leave faith and priests that interpret the scriptures to religion.

      • I leave faith and priests that interpret the scriptures to religion.

        Gotta admit Javier, for someone maintaining your dogma over the views of the very people whose science you claim to represent, that’s pretty funny.

      • No dogma, here. Just sticking to the evidence as much as I can. If the evidence changes I’ll change.

      • People, people, people!

        It’s SO MUCH simpler than we are making it.

        The four-step scientific method (supplemented by Frank) has been used for centuries and ALWAYS WORKS:

        0. Frank’s prelude: Agree on the question we are trying to answer.

        1. Gather and organize data.

        2. Form an hypothesis that MIGHT account for these data.

        3. Test the hypothesis. (if there is ONE failure, the hypothesis fails – Albert Einstein)

        4. Write a law. (Ohm’s Law: current = voltage / resistance – works every time)

        5. Frank’s supplement: Challenge the law constantly by obeying the adage: Skepticism is the soul of science.

        I won’t plug my numbers into this as it would be a waste of MORE time. If anyone else wants to – be my guest. The first question such a brave person must ask is: where in the four steps are we? Cuz we sure as heck ain’t close to the 4th or 5th. I am no longer even sure what the question is.

        Don’t overlook the possibility that we cannot do one or more of the steps.

        G’night. Shadrack, Meshak, and Tobedwego.

        Frank Gue, B.Sc., MBA, P.Eng. Professional Engineer


      • Harry Twinotter

        franksgue.

        “Test the hypothesis. (if there is ONE failure, the hypothesis fails – Albert Einstein) ”

        Albert Einstein did not say that.

        Anyway, name-dropping and quote-mining is not a scientific argument; it is a rhetorical argument, i.e. of zero value to science.

      • Note to Harry Twinotter:

        No, Einstein did not say those precise words. In more precise words, if memory serves me, he said:

        “Why 100 contradictions? One would have sufficed.”

        Now, if you wish to pick out one verb or one comma or one question mark that is not precisely what he said, be my guest.

        Or, in this day of fake news and truthiness, deny the whole thing. Neither you nor I can personally verify anything.

        Frank Gue.


      • > In my articles I don’t add anything

        Of course not:

        Under this interpretation temperatures do not follow solar activity.

        https://judithcurry.com/2016/09/20/impact-of-the-2400-yr-solar-cycle-on-climate-and-human-societies/#comment-812838

        Javier does not always add anything, but when he does, it’s usually interpretations he finds consistent with the data he cherrypicks.

  42. … “I know I’ve made some very poor decisions recently, but
    I can give you my complete assurance that my work will be
    back to normal. I’ve still got the greatest enthusiasm and
    confidence in the mission. And I want to help you.”

  43. Perhaps I should make a modification of my (too-)oft repeated question: “What Are They Really Counting?” to “What Are They Really Modelling?”.

    Honestly, the seemingly endless efforts to get a (or two or three or four) non-physical “measurement” to match or diverge from a portmanteau of (stochastically-generated) modelling options leave me mystified. There simply cannot be any right answer — only preferred answers.


  44. It makes no physical sense to calculate trends across the millennial temperature peak and inflection point at 2003-4.
    See Fig 4 from
    http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
    https://2.bp.blogspot.com/-ouMJV24kyY8/WcRJ4ACUIdI/AAAAAAAAAlk/WqmzMcU6BygYkYhyjNXCZBa19JFnfxrGgCLcBGAs/s1600/trend201708.png
    Fig 4. RSS trends showing the millennial cycle temperature peak at about 2003 (14)

    The RSS cooling trend in Fig. 4 and the Hadcrut4gl cooling in Fig. 5 were truncated at 2015.3 and 2014.2, respectively, because it makes no sense to start or end the analysis of a time series in the middle of major ENSO events which create ephemeral deviations from the longer term trends. By the end of August 2017, the strong El Nino temperature anomaly had declined rapidly. The cooling trend is likely to be fully restored by the end of 2019.
    Forecasts and trend calculations which ignore this turning point are clearly useless; see Fig. 12.
    https://4.bp.blogspot.com/-iSxtj9C8W_A/WKNAMFatLGI/AAAAAAAAAkM/QZezbHydyqoZjQjeSoR-NG3EN2iY93qKgCLcB/s1600/cyclesFinal-1OK122916-1Fig12.jpg

    Fig. 12. Comparative Temperature Forecasts to 2100.

    Fig. 12 compares the IPCC forecast with the Akasofu (31) forecast (red harmonic) and with the simple and most reasonable working hypothesis of this paper (green line): that the “Golden Spike” temperature peak at about 2003 is the most recent peak in the millennial cycle. Akasofu forecasts a further temperature increase to 2100 of 0.5°C ± 0.2°C, rather than the 4.0°C ± 2.0°C predicted by the IPCC, but this interpretation ignores the millennial inflexion point at 2004.

    Fig. 12 shows that the well-documented 60-year temperature cycle coincidentally also peaks at about 2003. Looking at the shorter 60±-year wavelength modulation of the millennial trend, the most straightforward hypothesis is that the cooling trends from 2003 forward will simply be a mirror image of the recent rising trends. This is illustrated by the green curve in Fig. 12, which shows cooling until 2038, slight warming to 2073 and then cooling to the end of the century, by which time almost all of the 20th-century warming will have been reversed.

  45. Let’s just pick a solution in the ‘observational’ range of TCR and see how long we can get away with it. Then they can be ensembled with other modelers’ choices within the ‘observational’ range… and a statistics of angels on pinheads produced.

    https://watertechbyrie.files.wordpress.com/2017/04/slingo-and-palmer-e1506451524627.png

    “In 1963, Lorenz published his seminal paper on ‘Deterministic non-periodic flow’, which was to change the course of weather and climate prediction profoundly over the following decades and to embed the theory of chaos at the heart of meteorology. Indeed, it could be said that his view of the atmosphere (and subsequently also the oceans) as a chaotic system has coloured our thinking of the predictability of weather and subsequently climate from thereon.

    Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic.” http://rsta.royalsocietypublishing.org/content/369/1956/4751
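The sensitivity Lorenz described in the passage quoted above can be reproduced in a few lines. The following is an illustrative sketch of my own (not code from the quoted paper), integrating the classic Lorenz-63 equations with a crude forward-Euler step (a real integration would use a higher-order scheme); two runs that start one part in 10^8 apart diverge to macroscopic separation:

```python
# Sensitive dependence on initial conditions in the Lorenz-63 system,
# the system behind the 'Deterministic non-periodic flow' paper quoted
# above. Illustrative sketch only: forward Euler, standard parameters.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance (x, y, z) one forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(p, q):
    """Euclidean distance between two states."""
    return sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # perturbed by one part in 10^8
max_sep = 0.0
for _ in range(3000):        # 30 model time units at dt = 0.01
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, separation(a, b))

print(f"peak separation of the two runs: {max_sep:.2f}")
```

The tiny initial perturbation grows by many orders of magnitude until it saturates at the size of the attractor, which is exactly why forecasts beyond a certain lead time must be treated as probabilistic.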

  46. Here is what it comes down to: will oceanic temperatures fall as we move forward, and will there be a slight uptick in albedo? I am talking 1/2 of 1% for albedo increases.

    Next is what determines or influences albedo /oceanic temperatures?

    Is it solar or the GHG effect?

    I say solar, and if this is correct then oceanic temperatures should continue to fall while the albedo increases, and there goes AGW.

    I will send the most recent data on oceanic temperatures; if one is for cooling, the past three months look quite promising.

    I have been calling for this for quite a long time.

    Without the aid of favorable oceanic temperatures, you can kiss global warming goodbye.

  47. Another year has gone by, it’s now the Fall of 2017, and it’s time once again to put up ‘Beta Blockers Parallel Offset Universe Climate Model’, a graph first posted here in the summer of 2015.

    http://i1301.photobucket.com/albums/ag108/Beta-Blocker/GMT/BBs-Parallel-Offset-Universe-Climate-Model–2100ppx_zps7iczicmy.png

    The above illustration is completely self-contained. There is nothing on it which can’t be inferred or deduced from something else that is also contained in the illustration.

    For example, for Beta Blocker’s Scenario #1, the rise of GMT of + 0.35 C per decade is nothing more than a line which starts at 2016 and which is drawn graphically parallel to the rate of increase in CO2 which occurs in the post-2016 timeframe. Scenario #1’s basic assumption is that “GMT follows CO2 from Year 2016 forward.”

    Beta Blocker’s Scenario #2 parallels Scenario #1 but delays the start of the strong upward rise in GMT through use of an intermediate, slower rate of warming from 2025 to 2060 that is also common to Scenario #3. Scenario #2’s basic assumption is that “GMT follows CO2 but with occasional pauses.”

    Beta Blocker’s Scenario #3 is simply the repeated pattern of the upward rise in GMT which occurred between 1860 and 2015. That pattern is reflected into the 2016 – 2100 timeframe, but with adjustments to account for an apparent small increase in the historical general upward rise in GMT which occurred between 1970 and 2000. Scenario #3’s basic assumption is that past patterns in the rise of GMT occurring prior to 2015 will repeat themselves, but with a slight upward turn as the 21st Century progresses.

    That’s it. That’s all there is to it. What could be more simple, eh?

    All three Beta Blocker scenarios for Year 2100 — Scenario #1 (+3 C), Scenario #2 (+2 C), and Scenario #3 (+1 C) — lie within the IPCC AR5 model boundary range, which, it should also be noted, allows the trend in GMT in the 2000–2030 timeframe to stay essentially flat while still remaining within the error margins of the IPCC AR5 projections.
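    The three scenario lines can be sketched numerically. The per-decade rates below are hypothetical readings chosen only so each straight line lands near the stated Year-2100 endpoint; Scenarios #2 and #3 are collapsed here to single average rates rather than their piecewise or patterned shapes:

```python
def scenario(year, rate_per_decade, start_year=2016, start_anom=0.0):
    """Straight-line GMT anomaly (deg C) relative to start_year.
    The rates fed in below are hypothetical readings of the graph."""
    return start_anom + rate_per_decade * (year - start_year) / 10.0

s1 = scenario(2100, 0.35)  # Scenario #1: ~ +2.9 C ("GMT follows CO2")
s2 = scenario(2100, 0.24)  # Scenario #2: ~ +2.0 C ("follows CO2 with pauses")
s3 = scenario(2100, 0.12)  # Scenario #3: ~ +1.0 C ("past pattern repeats")
```
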

    Scenario #3 should be considered the floor of the three scenarios, and it is the one I suspect is most likely to occur. The earth has been warming for more than 150 years and it isn’t going to stop warming just because some people here and on WUWT think we are somehow at or near the top of a long temperature fluctuation cycle.

    If I’m still around in the year 2031, I will take some time to update the above illustration to reflect the very latest HadCRUT numbers published through 2030, including whatever adjusted numbers the Hadley Centre might publish for the period of 1860 through 2015. In the meantime, I’ll see you all next year in the fall of 2018 when the topic of ‘Are the models running too hot’ comes around once again.

    • “The above illustration is completely self-contained.”
      Fiction then, as pure as it gets?
      Nice illustration though.

      • SorF, as long as GMT as tracked by HadCRUT4 and its eventual successors continues to rise in a statistically significant way, however slowly it may be rising, then my illustration contains an element of truth.

        Along about 2030 or so, enough time will have passed so that we can then make a proper assessment as to just how much truth my illustration does actually contain, if any. And if I’m not around at that time to write the assessment, you have my permission and encouragement to write your own.

    • No projections for carbon being reduced from the atmosphere.

      I’m here to tell you that self-reproducing robots the size of bacteria powered by sunlight will be using the carbon in the atmosphere to produce every carbon compound you can name, programmatically assemble it into structures as big & complex as anything physically possible, and it will cost essentially nothing because the robot workforce is self-reproducing.

      So I predict that, before CO2 in the atmosphere builds up to any detrimental level, emerging technology will be using it as a carbon source to produce both fuel and durable goods, and we’ll need international agreements on limiting CO2 removal from the atmosphere. In other words, atmospheric CO2 is a valuable resource for future manufacturing and we want plenty of it.

      • In other words atmospheric CO2 is a valuable resource for future manufacturing and we want plenty of it.

        Atmospheric CO2 is a valuable resource for growth of green things and making better use of water and we want plenty of it.

  48. Beta Blocker, see the 12:14 post above, and
    http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
    Why do you think the earth was created in 1860?

  49. I have read with interest, and applaud, the work by Cowtan and Hausfather in papers that make better comparisons between CMIP5-modeled and observed GMST. I have been attempting to interest those authors, whose papers show innovation in analyzing these temperature series, in an analysis I have been doing. I have been most recently interested in global mean surface temperature (GMST) series and attempts to decompose those series into secular-trend, cyclical, and noise (red and white) components. That interest was piqued by an email exchange with Tom Karl and some of the authors of their paper “Possible artifacts of data biases in the recent global warming hiatus” (Karl 2015), which can be found at the following link:

    http://science.sciencemag.org/content/348/6242/1469/tab-pdf
    During those exchanges we discussed using linear trends from regression versus other available methods for deriving trends and decomposing series that can handle non-linear trends and non-stationary series, such as Singular Spectrum Analysis (SSA) and Empirical Mode Decomposition (EMD). The Karl authors brought up the use of EMD for a future paper as a sequel to their original Karl 2015 paper. We were comparing results of SSA and EMD. I was more familiar with SSA, but after a quick survey of the literature on EMD I noted that the Karl authors were using the combined trend and most significant cycle component as the trend, without acknowledging the separation of these components by EMD. That is where the exchanges between the Karl authors and me stopped.

    I continued my work analyzing both observed and CMIP5 RCP 4.5 model GMST series. I used the combined SST (for ocean) and SAT (for land) temperatures for both observed and modeled GMST so as to avoid the apples-to-oranges comparisons sometimes seen in the literature, where observed ocean temperatures (SST) are compared with models in which the ocean air temperature (SAT, i.e. tas rather than tos) is used. I initially had some consternation about the SSA method not readily separating any cyclical components, until I began looking at simulated series in which I combined known trend, cyclical, and noise components and found that the heuristics of the SSA method could not cleanly separate the cyclical components expected of a GMST series, whereas EMD could.

    In my investigations of the use of EMD I read the Huang article linked here: http://www.academia.edu/14188939/On_the_time-varying_trend_in_global-mean_surface_temperature that uses EEMD to decompose an observed GMST series. I have read other related articles on the use of EEMD and CEEMD in decomposing time series and the use of plots of log(variance) versus log(period) for the series components and with the confidence intervals for the series noise to determine the significant components. I have been using EEMD and CEEMD from R and the library hht.
    I am of the view that the recent global warming is caused partly by anthropogenic and partly by natural effects, and that the big question going forward is the relative portions from these two sources. Estimates of the sensitivity of temperature to radiative forcings, and particularly to anthropogenic greenhouse gases, both from observations and for determining the validity of climate model output, depend critically on estimating these portions.

    If there are methods available for demonstrably improved separation of the global warming series components, and thus for better estimating these portions – such as EEMD/CEEMD – then one has to wonder whether the seeming lack of interest by the climate science community in this newer approach is due to unfamiliarity with the method. Making no claims of originality, novelty, or propriety for my work, I have put it out to the innovative authors noted above in hopes of getting the climate science community interested in looking further into this approach and its application to temperature series. Hausfather and Cowtan were among those from whom I solicited a response on my work. Thus far I have received no responses from any of the authors I solicited.

    The trends in degrees C per decade that I found for the critical 1975–2016 GHG period were, for the means of 102 CMIP5 RCP 4.5 runs, 0.115 for EEMD, 0.181 for OLS with breakpoints, and 0.192 for OLS over the 1957–2016 period; for the GHCN GMST they were 0.101 for EEMD, 0.165 for OLS with breakpoints, and 0.171 for OLS over the 1975–2016 period. These results indicate that taking reoccurring low-frequency cyclical components out of the climate-model and observed series produces smaller differences between models and observations, but it reduces the 1975–2016 trend in both cases by approximately 40%.
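    The effect described above, an OLS trend shrinking once a multidecadal cycle is removed, can be illustrated with a synthetic series. This is not EEMD itself (EEMD estimates the modes adaptively, whereas here the cycle is known by construction), and every parameter is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2017)

# Synthetic GMST-like series: secular trend + ~65-yr mode + white noise.
# Every parameter here is made up, chosen only for illustration.
true_trend = 0.008                                        # deg C / yr
cycle = 0.12 * np.sin(2 * np.pi * (years - 1995) / 65.0)  # rising through 1975-2016
series = true_trend * (years - years[0]) + cycle + rng.normal(0.0, 0.05, years.size)

window = years >= 1975
raw_slope = np.polyfit(years[window], series[window], 1)[0]
cycle_free = series - cycle   # cycle known by construction; EEMD must estimate it
cycle_free_slope = np.polyfit(years[window], cycle_free[window], 1)[0]

# The window sits on the rising phase of the cycle, so the raw OLS slope
# overstates the secular trend; removing the cycle recovers roughly 0.008 C/yr.
```
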

    • The most successful decompositions involve a fit to temperature changes expected from an exponentially growing anthropogenic CO2 component. Here is one from Vaughan Pratt.
      http://clim.stanford.edu/GC23C-1085.pdf
      Cyclic functions make no sense unless you take the background exponential out properly.

        The advantage of Ensemble Empirical Mode Decomposition (EEMD) is that no parameter assumptions need to be made a priori – it is an empirical method, as the name implies.

        Interesting that from EEMD analysis the CMIP5 models show cyclical components in the historical period where the modelers are inclined to match or attempt to match the observed GMST. The periods of the model and the observed are not necessarily the same but have the same effect of reducing the trends that might otherwise be attributed to a secular trend caused by GHGs in the critical 1975-2016 time period.

      • Cyclic behavior is an assumption that is often wrong in systems, especially growing ones.

        Almost all of the CMIP5 models (as well as the observed series) show cyclical modes in EEMD analysis of GMST, and of varying frequencies within an individual model and observed series.

        I have had one poster speculate that the climate modelers require the presence of these cycles in order to match the observed series in the historical period.

        I have not to date found a published detailed analysis disputing the findings of EEMD cyclical components in GMST series.

        Cyclic behavior is a poor approximation to exponential behavior. In fact it doesn’t work at all. There is exponential forcing in the last century or so, so it has to be included. It has also been poorly approximated as a linear secular trend, for example in the AMO definition on woodfortrees, and in some papers that do linear detrending when trying to analyze long-term cycles without the background trend.

      • Jim D, do you have a good understanding of how the method EEMD decomposes a series into periodically reoccurring, trend and noise components -and further how the results can be used to determine statistical significance? Do you know what the separated components look like? Do you have any references that critique the application of EEMD in analyzing observed and CMIP5 modeled GMST series?

      • No, but have you tried applying that method to a simple exponential function to see what it gives? I think it is not fit for purpose.

      • > I have not to date found a published detailed analysis disputing the findings of EEMD cyclical components in GMST series.

        I have yet to find any explanatory power given to EEMD, e.g.:

        The quantitative definition and predictions presented in this paper are based on an intrinsic MDV decomposed by EEMD, getting rid of interference from the secular trend and high-frequency variability to show clear physical meaning. Using an ensemble of data, this definition reduces the effects of a particular dataset and also avoids arbitrary starting and ending points or a time span, making it more reliable. An ensemble of data can be generated by combination of various data from the same or different sources. However, sometimes there is only one time series. In this case, a useful method to produce ensemble results using a single sequence is available in Huang et al. (2003).

        http://journals.ametsoc.org/doi/pdf/10.1175/JAS-D-14-0296.1

        Disputing definitions may not be to every scientist’s taste.

        I have tested EEMD using many different simulations with various known reoccurring periodic, trend, and noise components, with good results. An exponential function, in and of itself, would be decomposed as a trend.

        A series with a high level of autocorrelation can show as a reoccurring periodic component but not as part of the trend – and it is, of course, the trend that is the critical component to cleanly separate in these series.

      • It has been an increasing trend for a century, and unless you have a suitable function, you would be deceiving yourself.

      • Willard, from that article you linked above (which I have previously read) the author states in the conclusions what my initial post here on this topic is asking, i.e. more attention to the MDV.

        As the trend has been steady since the early twentieth century, the MDV becomes a strong predictor for climate multidecadal change and it deserves more attention.

      • Cyclic behavior is an assumption that is often wrong in systems,

        When cyclic behavior appears throughout all of the past fifty million years, you can be sure it will appear in future behavior.

      • > the author states in the conclusions what my initial post here on this topic is asking, i.e. more attention to the MDV.

        The authors also admit in the introduction that there’s no real explanation for the MDV to date. Could be ENSO. Could be Jupiter and Saturn.
        Lots of theories.

        They don’t mention stadium waves, however.

        My point was to recall that EEMD’s just a tool. Just like any tool, it has limitations. You could go for MEEMD for all I care:

        The EMD/EEMD has found wide applications in one-dimensional data analysis. However, many physical systems, including the Earth’s climate system, are spatio-temporally four dimensional. Such physical systems are often not static or periodic in spatial or temporal domains but rather evolve with different levels of spatio-temporal coherence on different timescales. An analysis method for one-dimensional data, such as EMD/EEMD, has limitation in extracting information of spatio-temporal evolution in multidimensional data. In climate science, currently there exist some matrix-based eigenvalue–eigenfunction calculation methods to deal with spatio-temporally coherent structures, such as EOF analysis, which in many other fields is called PCA, and principal oscillation pattern analysis. In these methods, spatial structures and temporal evolutions are assumed separable and the spatial structures remain unchanged throughout the climate system evolution. However, the high sensitivity of the results obtained using these methods to both spatial and temporal domains cast shadows on the validity of this assumption. Indeed, such methods often meet difficulties in interpreting physical meanings of the results.

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4792406/

        There might be other limitations.

        As far as we’re interested in a scientific explanation, it’s the MDV that matters.

        Willard, you make two points that are obvious and often stated in papers on these subjects but bear repeating: (1) the theoretical basis and physical meaning of periodically occurring components of climate-model and observed GMST series have been subject to speculation but not hard proof, and (2) the EEMD analysis is limited to temporal series. I do not think either point detracts from the results of these EEMD analyses, or from the thought that more work by the mainstream climate science community is required in this area.

        MEEMD may be able to overcome some of the problems that can occur with EOF/PCA as the excerpt from your link says here: “EOF analysis, which in many other fields is called PCA, and principal oscillation pattern analysis. In these methods, spatial structures and temporal evolutions are assumed separable and the spatial structures remain unchanged throughout the climate system evolution. However, the high sensitivity of the results obtained using these methods to both spatial and temporal domains cast shadows on the validity of this assumption. Indeed, such methods often meet difficulties in interpreting physical meanings of the results.” Certainly better analysis methods of spatial components could lead to a better understanding of temperature changes and perhaps insights into temporal evolution of these changes.

        I would much prefer looking at temperatures with EEMD, and perhaps MEEMD, than taking the conventional approach of using linear OLS trends without regard to what is GHG-caused versus naturally caused, or to whether the series trends might well be non-linear.

      • Steven Mosher

        Ken.
        No amount of statistical analysis will show the physical thing you want to show. Physics will show that. Trends and cycles..are…not..in..the…data. they are in the model of the data.

      • Jim D: The most successful decompositions involve a fit to temperature changes expected from an exponentially growing anthropogenic CO2 component. Here is one from Vaughan Pratt.

        All of these fitted models, that by Vaughan Pratt, the EEMD results, etc, entail forecasts/projections/models/expectations/predictions/etc for the future. Some like Pratt’s are dependent on the future evolution of CO2 concentration, some are not. None should be considered successful before a few decades of (future) out of sample data have been obtained. Right now, you can get high model fits with or without CO2 in the model. Every model that fits well has at least a quasi-periodic component, either in the estimate of the signal or in the spectral decomposition of the estimate of the residual (“noise”). If there is a cause for the cyclical component, it isn’t very completely or accurately characterized.

        I am looking forward to updates of model forecasts (when dependent on CO2 evolution, as is Pratt’s model), and to updates of model-to-data comparisons as time goes by. I also look forward to newer models. I expect to have to wait a couple of decades before I see a model that has been accurate for long enough to produce confidence in its predictions for what comes after that.

        Meanwhile, model forecasts are highly dependent on modeling assumptions that have not been “stringently” tested.

      • Pratt did not model the future. The exponential function of CO2 level is a fit, and a good one, to the past. It is needed to do a better detrending than some of the linear efforts, or just plain ignoring trends, and a correct detrending helps to show the non-GHG part of the variation. Mann also does this in his PMO paper.

      • Steven Mosher | October 3, 2017 at 10:42 pm |

        No amount of statistical analysis will show the physical thing you want to show. Physics will show that. Trends and cycles..are…not..in..the…data. they are in the model of the data.

        If the physics could clearly explain the trends and cycles of observed and climate-model GMST series from first principles, I would agree with your implication that the physics stands alone. If the data were intractable to modeling, as we see in many attempts to model economic data, I would agree that we need some a priori insights in analyzing climate phenomena like temperature change. The present state of climate science lends itself to (better) analyses of evidentiary data in attempts to confirm or reject general suppositions from the realm of physics. That we can all agree, from first principles of physics, that GHGs in the atmosphere will lead to global warming is not sufficient to reveal how much warming.

      • > I do not think either of these points detracts from these analyses results from EEMD or the thoughts that more work by the mainstream climate science community is required in this area.

        More work is always required in most if not all areas, KenF, even those that would encounter technical difficulties. In this area, I suspect we’ll never really get “hard proofs,” but maybe it’s just a vocabulary thing. In any case, the social network of those who use EEMD still looks rather small.

        Suppose we got an MDV signal that would bigly explain a generalized MWP. What would that entail regarding climate sensitivity?

      • In any case, the social network of those who use EEMD still looks rather small.
        Suppose we got an MDV signal that would bigly explain a generalized MWP. What would that entail regarding climate sensitivity?

        Willard, your first comment above is part of my puzzlement on this issue. It is not so much that the practitioners form a small group but rather that those in the mainstream climate science community have not bothered to critique the method or even discuss it informally. Recall from my original post that it was the authors of the Karl (2015) paper “Possible artifacts of data biases in the recent global warming hiatus” who were going to use EMD as part of an analysis in a follow-on paper. When I pointed out that they were using the combined trend and MDV as the trend, our email correspondence stopped. I have not seen that follow-on paper published to date.

        I have not ventured into what the removal of the MDV per EEMD would do for climate sensitivity estimates. It would depend on the starting and ending points used in the estimation and where those periods fall in the MDV cycle.

        Also recall from one of my posts above that the argument of this thread, at least for the historical period, would be rendered somewhat different, since removal of the MDV per EEMD puts the trends for the observed series and the ensemble of climate models in close proximity. It should also be noted that under EEMD analysis some of the CMIP5 models’ trends in the critical 1975–2016 period are significantly different from one another (where multiple runs of the individual model were performed), and in some cases the observed trends would fall outside the 95% CIs of a model with multiple runs. Comparing individual models to the observed series is my preferred mode of analysis.

      • Jim D: Pratt did not model the future.

        Pratt’s model entails a model of the future. The only really good test of model fitting is how well the resultant model predicts out of sample data, in this case future global mean temperature, conditional on the evolution of atmospheric CO2. Without “passing” the test, the model has no basis for being relied upon for guiding policy — it’s just another model, not necessarily better than a high-order polynomial, where those are known to be absurd for out of sample predictions.

      • Clearly you did not see Pratt’s AGU work on this. The future is not mentioned, and to extrapolate emissions based on past growth is not a good way to predict a scenario. Pratt’s paper was to fit past temperatures to millikelvin accuracy. A large part of the variation was removed by the assumption of exponential emission growth and the log response to it in temperature.

      • > It is not so much that the practitioners form a small group but rather that those in the main stream climate science community have not bothered to critique the method or even discuss it informally.

        That was quite clear from the start, Ken. What’s also quite clear: those who are into “intrinsic MDV” are into cyclology. From the paper you read:

        Multidecadal variability (MDV), with an approximate period of 60–70 years (Schlesinger and Ramankutty 1994), in the global climate system drew the attention of climate researchers 20 years ago. MDV may be a quasiperiodic climate variation related to the AMO, which is internally caused by the thermohaline circulation variability (Knight et al. 2009; Zhang 2010) or externally forced by aerosols (Nagashima et al. 2006; Booth et al. 2012), or it may be a natural oscillation synchronized to the orbital periods of Jupiter and Saturn (Scafetta 2010, 2012, 2013).

        http://journals.ametsoc.org/doi/pdf/10.1175/JAS-D-14-0296.1

        Also note that the authors are even talking about “the so-called accelerated warming at the end of the twentieth century”!

        I mean, come on.

        Since I need to hone my Contrarian Matrix, I care about Scafetta’s astrology. Not all researchers have the luxury of studying farfetched ideas. They can safely ignore most of this crap.

        Science is more a race than a boxing match.

        ***

        Here’s Richard Alley on the MWP:

        https://neverendingaudit.tumblr.com/post/45187548866

      • Jim D: The future is not mentioned, and to extrapolate emissions based on past growth is not a good way to predict a scenario.

        That’s why I wrote that the forecasts are conditional on the CO2: as CO2 evolves we can calculate what Pratt’s model gives for that time and concentration.

      • The only parameter to possibly use going forwards would be 2.83 C per doubling that he used to fit the last century or so of temperature trend to the exponentially growing CO2 additions.

      • The only parameter to possibly use going forwards would be 2.83 C per doubling that he used to fit the last century or so of temperature trend to the exponentially growing CO2 additions.

        Basically, Yes. But there is also his model for the residual process.

      • That would assume a repetition of the pattern after 150 years, which would not be easy to justify. However, Wally Broecker did something similar in his global warming paper and turned out not far wrong.

      • Jim D: That would assume a repetition of the pattern after 150 years, which would not be easy to justify.

        What is that about?

        The hypothesis that would be tested is that the model is an accurate representation of the process that generated the data — in this case the quantitative effect of CO2 accumulation.

      • The residual non-CO2-proportional part is not predictable, so the main process can only project to +/-0.1 C, and only that if we know how much CO2 will be emitted in the intervening time. That is, if you can tell me the CO2 level in a given year, this model can project its suitably smoothed (see Pratt) temperature to within 0.1 C. This works for any year in the past as Pratt showed. However, if the growth of emission deviates much from the past’s 2% per year, I expect that there would be some added uncertainty.
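        The arithmetic of this exchange can be sketched directly. The sketch assumes only the 2.83 °C-per-doubling figure quoted above and a conventional 280 ppm pre-industrial reference level; the function name and reference value are illustrative assumptions, not Pratt’s actual code:

```python
import math

def delta_t(co2_ppm, co2_ref=280.0, sensitivity=2.83):
    """Smoothed warming (deg C) above the reference CO2 level, using the
    2.83 C-per-doubling figure from the thread; the 280 ppm reference
    is a conventional pre-industrial value assumed for illustration."""
    return sensitivity * math.log2(co2_ppm / co2_ref)

# Because log2 of an exponentially growing CO2 level is linear in time,
# the fitted temperature ramp comes out nearly linear too.
warming_at_400 = delta_t(400.0)  # ~1.46 C above the 280 ppm baseline
warming_at_560 = delta_t(560.0)  # one full doubling: 2.83 C
```

        Given a CO2 level for some year, this relation returns the smoothed temperature; the unpredictable residual is what limits the projection to roughly ±0.1 C.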

      • Willard, can you provide some details on Richard Alley’s excerpted comment that you linked, on how subtracting a 60–70 year cycle from the current modern-warming-period trend, which some have attributed mainly to increasing GHG levels in the atmosphere, will suggest greater future variability.

        The part where Alley states “..that would suggest stronger amplifying feedbacks and larger future warming from rising greenhouse gases than originally estimated” is an interesting comment, given that we are unsure of the cause of the cyclical portion of the temperature series. Would there not need to be some potential natural sources presented, with at least hypothetical feedback processes?

  50. I have read with interest and applaud the work by Cowtan and Hausfather in papers that deal with making better comparisons between CMIP5 model and observed GMST. I have been attempting to generate interest among those authors whose papers show an interest and innovation in analyzing these temperature series in an analysis I have been doing. I have been most recently interested in the global mean surface temperature series (GMST) and attempts to decompose those series into a secular trend, cyclical and noise (red and white) components. That interest was piqued by an email exchange with Tom Karl and some of the authors of their paper “Possible artifacts of data biases in the recent global warming hiatus” (Karl 2015) which can be found at the following link:

    http://science.sciencemag.org/content/348/6242/1469/tab-pdf

    During those exchanges we discussed using linear trends from regression versus other available methods for deriving tends and decomposing series that can handle non-linear trends and non-stationary series such as Singular Spectrum Analysis (SSA) and Empirical Mode Decomposition (EMD). The Karl authors brought up the use of EMD for a future paper as a sequel to their original Karl 2015 paper. We were comparing results of SSA and EMD. I was more familiar with SSA, but I noted after doing a quick survey of the literature on EMD that the Karl authors were using the combined trend and most significant cycle component as the trend without acknowledging the separation of these components by EMD. That is where the exchanges between the Karl authors and me stopped.

    I continued my work analyzing both observed and CMIP5 RCP 4.5 model GMST series. I used the combined SST (for ocean) and SAT (for land) temperatures for observed and modeled GMST so that I would avoid the apples to oranges comparisons that are sometimes seen in the literature where the observed ocean temperatures (SST) are used in a comparison with models where the ocean (SAT or tas and not tos) are used. I initially had some consternation about the SSA method not readily separating any cyclical components until I began looking at simulated series where I combined known trend, cyclical and noise series and found that the heuristics of the SSA method could not make a clear separation of cyclical components that would be expected from a GMST series – whereas EMD could.

    In my investigations of the use of EMD I read the Huang article linked here: http://www.academia.edu/14188939/On_the_time-varying_trend_in_global-mean_surface_temperature that uses EEMD to decompose an observed GMST series. I have read other related articles on the use of EEMD and CEEMD in decomposing time series and the use of plots of log(variance) versus log(period) for the series components and with the confidence intervals for the series noise to determine the significant components. I have been using EEMD and CEEMD from R and the library hht.
    I am of the view that the recent global warming is caused partly by anthropogenic and partly by natural effects and that the big question going forward is the relative portions from these two sources. Estimations of the sensitivity of temperature to radiative forcings, and particularly anthropogenic sourced green house gases, from observations and for determining the validity of climate model output depend critically on estimating these portions.

    If there are methods available for demonstratively improved separation of the global warming series components and to thus better estimate these portions – such as EEMD/ CEEMD – then one has to wonder whether the seeming lack of interest by the climate science community to this newer approach is due to unfamiliarity with that method. Making no claims of originality, novelty or propriety for my work I have put it out to innovative authors noted above with hopes of getting the climate science community interested in looking further into this approach and its application to temperature series. Hausfather and Cowtan were among those from whom I solicited a response on my work. Thus far the I have received no responses from any of those authors I solicited.

    The trends in degrees C per decade that I found for the critical 1975-2016 GHG period for the mean of 102 CMIP5 RCP 4.5 runs were 0.115 for EEMD, 0.181 for OLS with breakpoints, and 0.192 for OLS over the 1957-2016 period; for the GHCN GMST they were 0.101 for EEMD, 0.165 for OLS with breakpoints, and 0.171 for OLS over the 1975-2016 period. These results indicate that taking recurring low-frequency cyclical components out of the climate model and observed series yields smaller differences between models and observations, but it also reduces the 1975-2016 trend in both cases by approximately 40%.
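To make concrete why removing a low-frequency cyclical component changes a fitted trend, here is a minimal Python sketch (all parameters are illustrative assumptions; this is not the commenter's R/hht analysis): a synthetic GMST-like series is built from a known linear trend, a partial multidecadal cycle, and noise, and OLS trends are fitted with and without the cycle removed.

```python
# Toy illustration (assumed, illustrative parameters only): build a
# synthetic GMST-like series from a known linear trend, a multidecadal
# cycle and noise, then fit OLS trends with and without the cycle removed.
import math
import random

random.seed(42)
n = 42                                   # the 1975-2016 window length
true_slope = 0.018                       # deg C per year (0.18 per decade)

x = list(range(n))
cycle = [0.1 * math.sin(2 * math.pi * i / 60.0) for i in x]   # ~60-yr cycle
noise = [random.gauss(0.0, 0.05) for _ in x]
series = [true_slope * i + c + e for i, c, e in zip(x, cycle, noise)]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

raw = ols_slope(x, series)                               # cycle left in
decycled = ols_slope(x, [s - c for s, c in zip(series, cycle)])

print(f"raw OLS trend:       {10 * raw:.3f} deg C/decade")
print(f"cycle-removed trend: {10 * decycled:.3f} deg C/decade")
```

Because only part of the ~60-year cycle falls inside the 42-year window, it projects onto the fitted slope, so the raw and cycle-removed trends differ in the same qualitative way as the OLS and EEMD trends quoted above.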

  51. Kenfritsch, see post at 12:14 pm above:
    “It makes no physical sense to calculate trends across the millennial temperature peak and inflection point at 2003-4.”
    See Fig 4 from
    http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html

  52. Geoff Sherrington

    Many of the model/observation graphs contain projections that increase with time. Most people would know that most of Nature is bounded; increases do not continue forever, because the upward curve eventually reaches an inflection, then a downturn.
    It has never been clear to me why these models have not included this concession to reality, observation and conservatism by projecting an eventual downturn. If it is fossil fuel consumption that causes concern, there are ways to predict a downturn, and the reality is that one will happen in time. It will be approximate, it will have errors, but it is surely better than a phantom ongoing increase.
    Where would Denizens here put the global temperature downturn on these model graphs if they were asked to do this? Reasoning? Geoff.

    • Good buckets don’t stop filling to capacity until something happens to the components, like: one, the source of water is stopped; two, the water level reaches the top of the bucket; three, something tips the bucket.

      I listen and I listen, and all I hear is unicorns and the laughable “it’s the El Niño” nonsense.

      • JCH: I listen and I listen, and all I hear is unicorns and the laughable “it’s the El Niño” nonsense.

        One of the good things about now is that there are many projections/predictions/expectations/etc for the next decades.

        Exactly which claims about the El Niño do you call “nonsense”, and which do you call “sense”? The 1997-1998 El Niño was followed by a dramatic transient downturn in global temperature, then a step change (increase) in global mean temperature; what follows the 2015-2016 El Niño isn’t known yet. The model ensemble mean is close to the 2015-2016 El Niño peak, but not to what came before; how close it comes to the 2017-2026 decade isn’t known yet.

      • Despite your crutch, the kimikamikaze, 1998’s GISS anomaly was equaled or exceeded 12 times: 2002; 2005; 2006; 2007; 2009; 2010; 2012; 2013; 2014; 2015; 2016; and 2017.

        Pray for wind.

      • JCH: Despite your crutch, the kimikamikaze, 1998’s GISS anomaly was equaled or exceeded 12 times: 2002; 2005; 2006; 2007; 2009; 2010; 2012; 2013; 2014; 2015; 2016; and 2017.

        What is that about?

      • The period was marked by anomalously strong winds. Stronger than normal.

        Maybe they’ll come back.

      • The 1997-1998 El Niño was followed by a dramatic transient downturn in global temperature, then a step change (increase) in global mean temperature; what follows the 2015-2016 El Niño isn’t known yet.

        These are just fluctuations of the surface temperature, caused by the ocean surfaces warming or cooling based on the distribution of the Pacific Warm Pool.

        Meanwhile, the ocean warming continues unabated.

        ENSO is the largest source of short-term variability, but it’s just that: short-term variation. It doesn’t actually cause any long-term changes. It’s not a forcing. Instead, it’s variations around a mean.

        If you want to look at the real trend, either look at ocean heat content or look at the longer-term surface temps.

      • Benjamin Winchester: These are just fluctuations of the surface temperature,

        The models have been running hot. The most recent El Niño fluctuation has closed the gap, possibly transiently.

      • “ENSO is the largest source of short-term variability, but it’s just that: short-term variation. It doesn’t actually cause any long-term changes. It’s not a forcing. Instead, it’s variations around a mean.”

        El Nino events occurring during slow solar wind periods could explain why their frequency increases during solar minima.
        https://snag.gy/dZ00l.jpg

    • Any kind of downturn in fossil fuel use requires some form of mitigation, because population growth and development between them require at least a 1% growth rate in emissions for business to remain as usual. If fossil fuels can’t sustain 1% growth, that would itself require mitigation: finding ways to replace a dwindling energy source as its costs soar. People should be skeptical that fossil fuels can be cost-competitive for the next few decades given their self-limiting nature, which is often overlooked. That’s a conversation the skeptics don’t seem to want; we certainly don’t see it here. Maybe because it is another argument for replacement.

  53. Pingback: New Boost For Healthy Climate Scepticism | Principia Scientific International

  54. I’ve been in this global warming debate going on 20 years now. The only way any of us stop arguing about it is when we die. Nothing has been settled in all that time. ECS still the same. The ability to predict regional weather or climate has not improved. No alarming rate of sea level rise where the cost of dealing with it exceeds the benefit of abundant low-cost fuel that keeps the industrialized world functioning.

    The planet however has gotten measurably greener in those 20 years according to NASA. Crop production has never been better. Energy-intensive fish farming, made less expensive by low-cost fuel, relieves pressure on wild seafood populations. Fewer killer frosts for agriculture. Plants need less water under higher CO2, so less irrigation is needed. Growing seasons are getting longer. Arable land gained through warming exceeds arable land lost to sea level rise.

    I’m just not seeing a net downside to fossil fuel use. The only downside to it is it’s not in infinite supply. Sunlight however is in great abundance and there will arrive a technological solution to inexpensive harvesting and storage via gas & liquid hydrocarbon fuels compatible with existing infrastructure. The atmosphere is the carbon source for those fuels. The more carbon in the atmosphere the more efficient it is to pull it out to produce synthetic fuels.

    I’m not alarmed by the energy future. Synthetic biology is almost certainly within 50 years of making hydrocarbon fuel production cheaper than fossil fuels ever were. Organisms that thrive in brackish water fertilized with municipal wastewater, that can dominate natural competitors in open ponds, and whose metabolisms are tweaked so they excrete alcohols and fuel oils are close at hand (one or two human generations away at most, I’d say).

    I’m glad the adults finally got put in control of energy & environment in the USA. The future has never been brighter.

    • Amen, David.

      Frank Gue

    • Nice work, Springer. Judith can shut this thing down now. I told you all way last year (I think it was early Spring) that Trump Rules! would render this interminable climate argument moot.

      • David Springer

        I’ve been on the Trump train since it became clear he was sweeping the primaries. I still have a recurring donation going to his campaign and spend a lot of time on Twitter in support. November 8th 2016 was probably the best evening of my life surrounded by cheering conservative friends all of us in a state of disbelief over what had happened. When he quickly put Rick Perry in charge of energy I knew good times were ahead. I couldn’t possibly be happier with the job he’s doing. I’m pretty unhappy with congress along with 85-90% of everyone else according to congress approval polls. Thanks for all you’ve done, Monfort, you called it early and my hat is off to you.

      • Glad to see you back, Don. I scan the blog for your posts. Someone needs to keep yimmyd honest. He has run amok.

      • Serfs think it’s not over till the fat lady sings,
        and when might that be? The lo-o-ng war betwixt
        open and closed society never ends. Art has form,
        your Grecian Urn, or Hamlet in Three Acts, whereas
        life does not, geological, historical … herewith political,
        e.g. Gramsci’s long march, tramp… tramp… tramp
        through the institutions, par example, in the serf’s Great
        Southern Land, the retreat from reliable ol’ King Coal
        fossil fuel that released us serfs from slavery, with
        18C Law restraints on free speech, Shukla-wise,
        K-12 education role-play-mind-arson, skills kaput!
        Little choice betwixt political parties’ renewable energy
        future, climate science? ‘Twas never about the science.

        Like David Springer says at 11.27 pm, nothing’s been settled
        (this despite conn-sens-suss certainties).

        The point I’m making: Judith Curry’s Climate Etc is
        an e-salon, or messy city square, versus the CNN,
        BBC, ABC, Guardian, E.U., U.N. bureaucracy-approved
        message.

    • I’m still pinching myself.

      If not for all the noise, watching the left in convulsions would be hilarious. My FB news feed is full of anti-Trump hate. Amazing how hateful the people who supposedly hate hate are. But they are blind.

      Now if we could only get the nukes back online I’d feel much better, but it’s good anyway.

      • The WH is very entertaining. We see him losing cabinet members and WH staff at the same rate as in The Apprentice, except instead of firing Meatloaf, he now fires people like his health secretary. The left is amused by comments like when he says Puerto Rico is surrounded by big water. Yes, heap big water, also known as the sea. What a tool.

      • jimd

        Let me help you. The president already has 2 big accomplishments.

        1. He beat Hillary. The significance of this event cannot be overestimated.
        2. Gorsuch

        He could do nothing for the next 3 years and I would be satisfied.
        The dems are really hurting: all those state legislative seats, governors, the house, the senate, and now the POTUS.
        Fossil fuels are back, and economic growth will follow.

        Meanwhile, the dems keep shooting themselves in the foot with identity politics instead of getting their act together. It’s wonderful to watch. The only thing worse for you is if RBG kicks the bucket. I can’t wait for football Sunday. It’s demoncat suicide day! :)

      • He’s well on track to do nothing, unless he manages to start a nuclear war. Congressional Republicans, as the gang who can’t shoot straight, are not doing much either, which is a good thing too.

      • Let’s not forget:

        Saving thousands of young men from being persecuted by academic kangaroo courts.

        Getting GDP above 3%.

        Getting NATO to pay their fair share.

        Withdrawing from the Paris agreement.

        Beginning the process of downsizing the State Department and the EPA.

        Getting the NFL to cry uncle.

        Getting China to REALLY lean on North Korea. This has never been done before and may be the biggest diplomatic coup in a hundred years.

        Telling a bunch of socialists at the UN to go $@% themselves.

        Appointing multiple constitutionalists to Federal district courts.

        Revisiting our relationship with Cuba.

        Did I mention withdrawing from the Paris Agreement?

        Unprecedented exemplary handling of disaster relief and recovery in Texas, Florida, and Puerto Rico.

        Decimating ISIS.

        Changing the rules of engagement so troops on the front line can be successful.

        Rebranding of the military from social experimentation center to fighting force.

        Returning the balance of power to the Federal Government.

        Massive reduction in illegal immigration.

        Returning the White House to behaving within constitutional constraints.

        This is just off the top of my head. So yeah, Jim D, he just hasn’t accomplished anything.

  55. Without the aid of favorable sea surface temperatures, AGW will cease.

    Up until recently, sea surface temperatures have been quite favorable for warmth, which AGW hijacked.

  56. Models all include an arbitrarily chosen positive feedback to the CO2 GH warming. Since atmospheric ‘positive feedback’ to CO2 GH alone is zero, none of the models can be expected to reflect the effect of CO2 GH warming.

  57. As always, without El Niño we’d be at the 5% abyss. As always, they leave out satellite data sets when comparing tuned models to adjusted data sets.

    • Satellite data.

      1. is the most heavily adjusted data.
      2. is actually MODELED data, in some cases relying on GCM output.

      That said THE MOST IMPORTANT DATA, satellite data that tracks
      stratospheric cooling, proves the AGW hypothesis.

      • Yeah – like I said – nuts. The satellites are new and difficult technologies – and were originally cobbled together from equipment not designed for the purpose. They have been adjusted – and every adjustment brings more precision. Nor are climate models used to globally integrate data sets to derive atmospheric temperature.

        On the other hand Mosh fails to understand even the need for humidity adjustments in surface records.

        https://watertechbyrie.files.wordpress.com/2017/10/humidity-temp.jpg

        “The relationship between temperature and specific humidity for global land surface stations. The dot colors represents the heat content trend as in Figure 1. (a) Generally the warmer the annual mean temperature, the higher the increases in humidity. (b) Conversely, examination reveals that the greater the annual mean specific humidity, the lower the temperature trend. Figures 2a and 2b have a line showing an exponential fit to the data with the best fit equation shown along with the RMS error of the residuals given in brackets.”

        Models are useful for process investigation – but projecting nonlinear divergences into the far future and claiming that it shows AGW is mathematically risible. But then Mosh has very little grasp of math or modelling.

      • A joke.

        None of the pillars of the pathetic theory has come to be: a lower-tropospheric hot spot, an evolution to a more positive AO/NAO, a decrease in OLR.

        In addition, the climate is not even close to being unique.

        As for stratospheric cooling, that has all but stopped for at least 10 years.

        In addition, water vapor and volcanic activity play a big role in stratospheric temperatures, as does solar activity.

      • Salvatore del Prete | September 30, 2017 at 8:17 pm |
        A joke .
        Steven is allowed to make a joke like
        “satellite data that tracks stratospheric cooling, proves the AGW hypothesis.”
        Conveniently cherry picking one bit he feels fits his narrative but therefore admitting that the other 97% of the satellite data disproves AGW, otherwise he would have quoted that.
        Worse since he admits it proves him wrong he is forced to attack it by claiming
        “It is the most heavily adjusted data.”
        The data is there, it has calculations made on it to give a temperature outcome. Mosher deliberately chooses to confuse adjustments, which is what torturing the data is called after one has the data, with collecting the data in the first place.
        Something he should be well aware of working in the torturing department of data he does. I am not allowed to call it the data adjusting department because he says they do not adjust data. Again reading between the lines, as one has to do, means the data adjustments have all been done by the person in line in front of him. He just uses it. Denial is a strange bedmate.

      • Steven Mosher: That said THE MOST IMPORTANT DATA, satellite data that tracks stratospheric cooling, proves the AGW hypothesis.

        The stratospheric cooling supports the AGW hypothesis, but there are other problems that are unresolved, such as whether surface warming actually causes any damages, and whether surface and troposphere warm at the same rate.

        Why is the satellite tracking of stratospheric cooling MORE IMPORTANT DATA than the satellite tracking of tropospheric temperature change? Does stratospheric cooling have any effect on Earth surface life?

      • The latest RSS adjustment has the troposphere tracking the surface.

      • Satellite era, 3.29 mm p/yr:

        https://i.imgur.com/8HTbOKb.png

        20-year data, 3.32 mm p/yr:

        https://i.imgur.com/0miX0yz.png

        15-year data, 3.54 mm p/yr:

        https://i.imgur.com/5PRbplD.png

        10-year data, 4.24 mm p/yr:

        https://i.imgur.com/0EFyaMZ.png

        5-year data, 4.57 mm p/yr:

        https://i.imgur.com/fuKixTy.png
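As a hedged aside on the windowed rates listed above: fitting a linear rate to progressively shorter trailing windows of a noisy series gives increasingly scattered estimates. This Python sketch uses synthetic numbers (an assumed 3.3 mm/yr underlying rate plus noise), not the plotted altimetry data.

```python
# Sketch with synthetic data: fit a linear rate over progressively
# shorter trailing windows of a noisy sea-level-like series. Short
# windows scatter more widely around the long-term rate.
import random

random.seed(0)
rate_true = 3.3                       # mm per year, roughly satellite era
years = list(range(1993, 2018))       # 25 annual values
level = [rate_true * i + random.gauss(0.0, 5.0) for i in range(len(years))]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

rates = {w: ols_slope(years[-w:], level[-w:]) for w in (25, 20, 15, 10, 5)}
for w, r in rates.items():
    print(f"last {w:2d} yr: {r:5.2f} mm/yr")
```

Only the longest window reliably recovers the underlying rate; whether the real short-window rates reflect acceleration or noise is exactly the point in dispute.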

  58. What happened in the years 1976/77 and 1998/99 in the Pacific was so unusual that scientists spoke of abrupt climate changes. They referred to a sudden warming of the tropical Pacific in the mid-1970s and rapid cooling in the late 1990s. Both events turned the world’s climate topsy-turvy and are clearly reflected in the average temperature of the Earth. Today we know that the cause is the interaction between ocean and atmosphere…

    Prof. Latif cautions against too much optimism regarding short-term regional climate predictions: “Since the reliability of those predictions is still at about 50%, you might as well flip a coin.” https://www.geomar.de/en/news/article/klimavorhersagen-ueber-mehrere-jahre-moeglich/

    Not only are climate models unable to project anything but non-linear divergence – climate is intrinsically unpredictable – and this is all just totally nuts. I know you all have been thinking about it for a long time – just very badly.

    • Mojib Latif, a climate expert at the Leibniz Institute at Kiel University in Germany, said he “cannot understand” reports that used his research to question the scientific consensus on climate change.

      He told the Guardian: “It comes as a surprise to me that people would try to use my statements to try to dispute the nature of global warming. I believe in manmade global warming. I have said that if my name was not Mojib Latif it would be global warming.”

    • “The chances of correctly predicting such variations are much better than the weather for the next few weeks, because the climate is far less chaotic than the rapidly changing weather conditions,” said Latif. This is due to the slow changes in ocean currents which affect climate parameters such as air temperature and precipitation. “The fluctuations of the currents bring order to the weather chaos”.

      So predictions for next year’s winter are a coin flip. Predictions across decades are better than that, and predictions for 2100? He’s Mr. Global Warming.

    • Like I said – nuts.

      Latif has done interesting work on decadal variability over at least the last decade.

      e.g. http://journals.ametsoc.org/doi/abs/10.1175/JCLI3945.1

      “Thus, some useful decadal predictability of economic value may exist in the Atlantic–European sector. To exploit this decadal predictability, however, a suitable ocean observing system must be installed since the memory of the climate system resides in the Atlantic Ocean. In particular, the thermohaline circulation should be monitored carefully since its variations are most interesting in light of decadal predictability in the Atlantic sector. The current ocean observing system, however, is not suitable to produce ocean analyses that would provide a realistic description of the state of the MOC. A lot of effort should be spent in the next years to implement an “adequate” ocean observing system so that the decadal predictability potential can be exploited. A multimodel approach may be an effective way of sampling model uncertainties. Dealing with the uncertainties of anthropogenic climate forcing seems a bigger problem. But, as 20 years ago people may have wondered if seasonal forecasting would ever be possible, in 20 years from now routine decadal to multidecadal predictions may become accepted.”

      Seasonal forecasting is a reality in a probabilistic sense for hydrology. And while greenhouse gases may be a wild card in the four-dimensional, spatio/temporal chaos of climate – it is by no means certain that climate has or will fundamentally shift as a result.

      Perhaps he should call himself Mr. Abrupt and Uncertain. But claiming things are uncertain is a sure way of provoking rabid responses from poorly informed climate fanatics like JC.

      There are new observing systems

      https://watertechbyrie.files.wordpress.com/2014/06/26-degree-n-array.png

      And interesting data emerging.


      But it is not really about science, is it JC? Or rather, it is a science that JC doesn’t get, substituting feral science instead.

      • STFU.

      • Predicting a cold winter in Germany is impossible – predicting multi-decadal climate shifts in the Pacific is as accurate as tossing a coin. Predicting temperature over the 21st century? Yeah right.

      • Steven Mosher

        Prediction of how long it will take a rock to fall is pretty accurate. Prediction of how it will tumble through the air…
        Really tough.
        Global stats, those governed by energy balance, are easy.
        It’s been done with accuracy since Hansen.

      • Prediction of flow fields in Earth’s spatio/temporal chaos is impossible – and energy balance is the wrong term. Energy dynamics in the maximum entropy system is far more relevant. There are large changes in radiant flux at TOA with ocean and atmospheric circulation – and circulation changes in regimes in the spatio/temporal…

        Mosh and his absurd metaphor notwithstanding.

      • Steven Mosher

        “Prediction of flow fields in Earth’s spatio/temporal chaos is impossible – and energy balance is the wrong term. Energy dynamics in the maximum entropy system is far more relevant. There are large changes in radiant flux at TOA with ocean and atmospheric circulation – and circulation changes in regimes in the spatio/temporal…

        Mosh and his absurd metaphor notwithstanding.”

        No, prediction is very simple. Skillful prediction is a matter of degrees.
        Energy balance is the right term.
        We know, for example, that if the sun should go out, the temperature will decrease. Chaos NOTWITHSTANDING. Lack of accurate prediction of flow fields, NOTWITHSTANDING.

        At the highest levels the climate is not chaotic. It obeys very simple laws.
        Like that rock falling: it will obey the law of gravity. Now we might be able to predict the first few tumbles, but the flow field around the rock, and how it tumbles second by second, will not be predictable with any significant or actionable skill.

        You would not bet on it.

        Nevertheless, we know that just as the rock must fall, the planet must cool if radiative forcing decreases. At the limit we can predict the temperature of the earth should the sun extinguish. The global value, the lowest-order metric of the system, is highly predictable.

        The exact pathway from 15C down to the temperature of space… will probably not be predictable with actionable skill. But we know that, despite chaos at the smaller scales, at the largest scales the outcome is certain.
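The "highest level" energy-balance claim above can be illustrated with textbook zero-dimensional physics (a sketch with round-number assumed values for albedo and the solar constant; not anyone's model in this thread):

```python
# Hedged sketch of the zero-dimensional energy-balance point being made:
# the planetary equilibrium temperature follows from absorbed solar flux
# and the Stefan-Boltzmann law, regardless of chaotic details.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2
ALBEDO = 0.3             # planetary albedo (assumed round number)

def equilibrium_temp(solar_constant, albedo):
    """Effective radiating temperature of a planet in energy balance."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0   # W m^-2, sphere-averaged
    return (absorbed / SIGMA) ** 0.25

t_eff = equilibrium_temp(S0, ALBEDO)
print(f"effective temperature: {t_eff:.1f} K")   # ~255 K without greenhouse
# If the sun dims, the equilibrium temperature drops with it:
print(f"sun dimmed to 1%: {equilibrium_temp(0.01 * S0, ALBEDO):.1f} K")
```

The chaotic details of circulation never enter this bulk calculation, which is the sense in which the lowest-order metric is predictable even when flow fields are not.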

      • “Prediction of how long it will take a rock to fall is pretty accurate. ”
        “No prediction is very simple”
        JCH, good to see you winning 2 months in a row.
        Time to adjust that satellite sensor; it seems to have drifted hot.

      • “…we know that just as the rock must fall…”

        If it was a feather, it would be more interesting.

      • I’ve been winning since 2011-2012.

      • “… nevertheless, we know that just as the rock must fall, the planet must cool if radiative forcing decreases…”

        One of the simple rules of chaos is that turbulent flows are chaotic at scales from micro-eddies to planets and stars. But my answer is that it depends on the planetary response to changes in control variables. Climate is a dynamical mechanism at scales from thunderstorms to millennial patterns of flood and drought. It tends to equilibrium, and thus to maximum entropy, largely through the simple physics of the Planck response. But it is massively more complex than a tumbling rock. Climate shifts at 20 to 30 year intervals over millennia.

        Energy dynamics is the right term, given the large changes in TOA radiant flux with changes in ocean and atmospheric circulation.

    • Robert, I am glad you bring sense to the conversation.

      • these changes we are talking about, these short-term changes, you know, their amplitudes are much smaller than the long-term warming trends.

        Right. So the guy you’re quoting is saying that the long-term warming trends will swamp out the short-term fluctuations.

        We’ll still have short-term fluctuations in a century, just like we do now. But when people here are talking about the “failing models”, they’re literally talking about surface temperature changes of 0.1 or 0.2 degrees. That’s how far off the models are. Well, that’s how far off they “were”, anyway, before the last few years.

        Meanwhile, the expected warming is 2-3 degrees.

        So DeSmogBlog is your go-to science source? I was actually being ironic.

        We predicted multi-decadal climate shifts – said that they added to late-century warming and caused the pause. We do not know just how much was natural and how much anthropogenic – but natural seems large if we can believe the ERBS and ISCCP data. Nor is there any basis for expectations of temperature at all in the far future – although I am pretty sure that the 20th century ‘fluctuations’ – i.e. abrupt climate shifts – won’t be repeated.

        Inept post hoc rationalizations notwithstanding – it seems clear that climate is nothing like your memes. The next climate shift will be in a 2018-2028 window. Will that be a surprise – given that the climate shifts of 1976/77 and the late 1990s are what the study was about?

        Nor is there any basis for expectations of temperature at all in the far future

        On the contrary — thus far, if you have solar, GHG, and volcanic forcings, you can do a pretty good job of predicting the climate.

        So uncertainties in future solar and volcanic forcings translate into uncertainties in future temperatures. But the expected changes in CO2 totally swamp out those uncertainties.

      • Anastasios Tsonis, of the Atmospheric Sciences Group at University of Wisconsin, Milwaukee, and colleagues used a mathematical network approach to analyse abrupt climate change on decadal timescales. Ocean and atmospheric indices – in this case the El Niño Southern Oscillation, the Pacific Decadal Oscillation, the North Atlantic Oscillation and the North Pacific Oscillation – can be thought of as chaotic oscillators that capture the major modes of climate variability. Tsonis and colleagues calculated the ‘distance’ between the indices. It was found that they would synchronise at certain times and then shift into a new state.

        It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. ‘Our interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

        As you almost said – these are ‘fluctuations’ in surface temperature caused by shifting sea surface temperature in the eastern Pacific at 20 to 30 year intervals.

        The trajectory of surface temps during the 20th century – warmer, cooler, warmer – was determined by these regimes.

    • You misread Latif, and you are too dishonest to admit it.

      • I doubt if JC has read much of anything at all – and has no depth to speak of. It is all just seeing if he can interpret words to agree with his preconceptions – and not understanding what’s really going on at all. When he is not just totally talking through his arse.

      • RAZ: Now, your research, Dr. Latif, has been cited by climate change skeptics here in the U.S., by for example, George Will, a conservative columnist with the Washington Post, to show that the Earth actually goes through natural warming and cooling trends and that climate change is really being overhyped. Do you think your work is being misused?

        Dr. LATIF: Yes. It is misused. I must say this, unfortunately, because these changes we are talking about, these short-term changes, you know, their amplitudes are much smaller than the long-term warming trends. So we are talking about a hold, okay, in the last 10 years. We are not talking about a net cooling to, say, (unintelligible) temperatures, (unintelligible), you know, which we observed 100 years ago or so. Okay, and also what we predicted for the future is basically that this hold may continue for another 10 years or so, okay, but we did not predict a cooling. We basically said that we would stay for some more years on this plateau. [1]

        It seems a bit difficult to misinterpret – but JC manages it. It is all fairly obvious – when they are not busily denying it.

  59. The model should not be including, and hence following, the AMO-assisted warming from 1995 to 2005. It was bound to overshoot once the AMO leveled off in its warm phase.

  60. Dear friends,
    After the April USA solar-induced blackout, McGill University started an international campaign to
    “create solar shields against coronal mass ejections”, and an extinction!
    http://www.mcgill.ca/iasl/files/iasl/global_space_governance_-executive_summary_and_key_proposed_actions.pdf

    8 institutions have already started the simulations on the ultimate truce tool: Laser Earth Shield vs all deadly space threats, such as asteroids, comets and solar superstorms that may blow up all nuclear, as we thankfully escaped in July 2012! But there’s still no world congress on TRUCE and Shield, even though there are ALREADY 30 superlasers* that may start shielding humankind!

    Shall we wait for the next medium solar storm to cause another April-like USA blackout???

    8 institutions in 3 countries started Earth laser-Shield simulations.
    https://phys.org/news/2017-07-scientists-laboratory-astrophysical.html Princeton, Rochester NY, Michigan, New Hampshire.
    https://thewire.in/159826/tifr-tabletop-laser-plasma-omega-parker-aditya

    To produce the wave, scientists used a laser to create a high-energy plasma—a form of matter composed of atoms and charged atomic particles—that expanded into a pre-existing magnetized plasma.

    * petawatt class lasers currently operational, under construction or in the planning phase. http://journals.cambridge.org/download.php?file=%2F4331_9B60F139B12FCF681605BFFDDE518E50_journals__HPL_HPL3_S2095471914000528a.pdf&cover=Y&code=9c75a0e12c4828629f1f376d4cb1ef35

  61. 1. Climate models have 0 predictive power. They are invalid; they do not rank as scientific theories, but as no more than scientific hypotheses.
    2. What models have shown is that George Simpson, head of the MetOffice, was correct 80 years ago when he told Guy Callendar “it was impossible to solve the problem of temperature distribution in the atmosphere by working out the radiation.” Climate prediction (weasel word: projections) from Radiative Forcing (Greenhouse Effect, aka the Callendar Effect) is a dead horse.
    3. Since the models fail to account for temperature-sensitive albedo and for the oceanic regulatory effect on atmospheric CO2 concentration (Henry’s Law) – two dominant feedbacks missing from the models of the climate – the models are the wildest kind of scientific conjecture: the kind in need of repair, teetering on pseudoscience.

    • > Climate models have 0 predictive power. They are invalid; they do not rank as scientific theories, but no more than scientific hypotheses.

      Modulz. Theories. Please learn the difference, JeffG.

      Modulz are indeed realizations of hypotheses.

      It’s not like we’ve never been there before.

      • Willard, you may not have seen this.

        Science creates models of the real world to predict the real world. These models come in four flavors: conjectures, hypotheses, theories, and laws. Conjectures are incomplete, not ready for experiment. Hypotheses are complete, describing an experiment unambiguously, but as yet no novel, non-trivial part empirically validated. Theories were once hypotheses with some non-trivial prediction experimentally validated. Laws are theories which have had all possible predictions from the model validated.

        Computer models are only a method of computation, but with sufficient text become an integral part of a scientific model. GCMs are computer models. With a dozen or so volumes of text, they become an integral part of the Radiation Forcing model of climate, which has been invalidated — invalidated on its prediction of the Equilibrium Climate Sensitivity.

      • > you may not have seen this.

        But now I did, Jeff, and I can’t unsee it.

        Your wordology is interesting, but too foreign to what I’m used to.

        Is it based on any work in formal epistemology?

      • Steven Mosher

        No, Willard, Jeff has his own private language and epistemology.

        The key is that Jeff gets the “prediction” of ECS wrong.

        Nothing in observations “invalidates” the models. And particularly not ECS.

        He also gets Law wrong, but where do you start with junk philosophy of science?

      • Willard says he can’t unsee it (a good thing), finds my wordology interesting, and asks, Is it based on any work in formal epistemology?

        My goodness, yes, Willard. Notwithstanding Mosher’s off-the-wall mud balls, the dialogue is not some personal epistemology. It’s Modern Science, alive and well: key works by Aristotle, Bacon, Kant, Neyman-Pearson and Kolmogorov, illuminated by the great works of science and further illuminated by its popular deconstruction by Popper and Fisher. But this thread is no place to hash out epistemology. We need to take this outside.

        Maybe you could be specific, as in saying what part of what I described you find unnerving. And don’t pull an empty Mosher by saying x is wrong.

      • > Maybe you could be specific

        I already told you. You’re conflating modulz with theories.

        To keep it simple, a model is a realisation of a theory. See for instance how to model Gettier cases:

        https://plato.stanford.edu/entries/formal-epistemology/#ModGetCas

        Models are just not theories, so your “[models] do not rank as scientific theories” is pure gibberish.

        Your distinction between a hypothesis and a conjecture makes little sense to a Popperian, BTW.

        Hope this helps.

    • Well said, Jeff.

      Permit me to add:

      The comment you so accurately criticize takes us only to Step 2 of the Scientific Method’s four steps. Proceeding farther is impossible; we humans cannot do Step 3 (test the hypothesis) because a test of this hypothesis would require centuries or millennia.

      Frank Gue, B.Sc., MBA, P.Eng. Professional Engineer


    • Agree completely Jeff.

    • The reason Callendar became famous and nobody has heard of Simpson is because Simpson was wrong and Callendar turned out to be right. It is said Simpson rejected the Arrhenius theory too. No one today rejects Arrhenius, so Simpson was on the wrong side of history.

      • “It is said” is senseless. What was said follows.

        Callendar (1938) said that Arrhenius thought CO2 radiation absorption was negligible, and disagreed with him (p. 223). Callendar provided Table VI of results, which happened to disagree with Arrhenius’ results, and Callendar explained why (p. 232). Callendar (1938) did not reject Arrhenius.

        In that distant past, reviewers’ comments appeared in the Discussion section of technical papers (an excellent practice, now lost), and Callendar (1938) includes Simpson’s snarky remarks (p. 237). People who actually rely on scientific papers, instead of just puffing out opinions, have “heard” of Simpson, coincident with hearing of Callendar.

        You are right to this extent, no one (in this arena) today rejects Arrhenius. Instead, IPCC relegates him to a single historical footnote, once in AR4 and once in AR5.

        Callendar was right to this extent: the Callendar Effect (Greenhouse Effect) exists. Simpson was entirely right, it isn’t sufficient to solve the problem of global temperature, as IPCC has demonstrated at great expense and political disruption. And we now know why it isn’t sufficient: cloud albedo regulates surface temperature, and ocean absorption regulates atmospheric CO2 concentration, two essential feedbacks not modeled in the GCMs.

      • Callendar did not include the water vapor feedback that Arrhenius had, so his greenhouse effect was conservative by today’s standards. Only by including the water vapor feedback can today’s warmth be explained, given the greenhouse effect of the added CO2.

      • Jim D. … today’s standards, i.e., models that predict measured ECS with 3% confidence, and that predict a catastrophe in a century if we don’t immediately cough up the treasury and reduce our economy to third-world standards. This is junk science, Jim D.

        The problem with the original Callendar Effect is not the omission of water vapor feedback. It’s the same with the AGW model, and changing the name to Greenhouse Effect didn’t help. A sufficient cause for the model failure is that CO2 does not accumulate in the atmosphere.

        The atmospheric concentration is regulated by the Global Average Surface Temperature of the ocean. The GE has the causation vector backwards. GAST causes CO2 concentration, not the reverse. A principle of science missing here is that in the real world a cause must precede every one of its effects. Of course, GCMs can do whatever the programmer lets them do. Models are as good as their predictions, except for scaring the policymakers.

      • No, you’re just regurgitating the debunked Salby stuff. I have no patience for that.

      • With no specifics, much less support, Jim D 10/5/17 @ 8:59 am claims, No, you’re just regurgitating the debunked Salby stuff.

        Salby deserved to be debunked for referring to heat being trapped, to Earth doing things to maintain equilibrium, or, for that matter, to anything in the climate system ever being in equilibrium. Heat and equilibrium are defined words in thermodynamics, and Salby’s references abuse those definitions. IPCC does as well, using words off the grid without stipulating their novel meanings. I have never had any call to reference any of Salby’s work, but if I did it would be to debunk such junk.

        On the other hand, Salby does recognize that the ocean breathes CO2, and that cloud albedo is a variable. Are you claiming that those rather basic bits of physics, omitted by IPCC-type climatologists, were part of the “debunked Salby stuff”?

        What I regurgitated were principles of science, like the two omitted feedbacks of CO2 and temperature, and causation. Are you contending that these rather elementary principles were part of the “debunked Salby stuff”?

        This is what happens, Jim D., when you fire blanks.

      • Explain how the ocean acidifies (gains carbon) while also outgassing. The carbon is clearly coming from fossil fuels. Explain where the fossil fuel carbon is going if not to the ocean and atmosphere. You have clearly not thought it through from a carbon budget perspective, and Salby never provided a budget either.

  62. Judith Curry,

    I appreciate your statement ”how much warming is caused by humans is THE key issue of debate”.

    There are two key points we have to learn to understand:
    First, the CO2 content of the atmosphere is determined by a dynamic balance between all CO2 emissions to the atmosphere and all CO2 absorptions from the atmosphere into other parts of the environment.
    Second, longer trends of CO2 changes in the atmosphere follow changes in climate temperature trends, and not vice versa.

    As far as I am aware, you have stated that the climate sensitivity – i.e. the increase of climate temperature as the CO2 content of the atmosphere doubles – adopted by IPCC is highly uncertain and exaggerated. In addition, I have understood that you et al. have stated that the climate sensitivity is about half of the value adopted by IPCC, when the climate model calculations are replaced with the real temperature observations available. As to ”THE key issue” of yours, you seem to require a more exact result for the issue of ”how much warming is caused by humans”.

    Even though Svante Arrhenius proved that CO2 is a greenhouse gas, he did not regard it as any threat of climate warming. Instead, he regarded any increase of the CO2 content of the atmosphere as useful for vegetation.

    There are a lot of scientists who, on the basis of their own experience, agree with your statement on the ‘uncertain and exaggerated’ climate sensitivity adopted by IPCC. I especially appreciate Nir Shaviv’s principle, too: do not believe any solution to a problem if you have not scrutinized it yourself! And concerning the climate sensitivity, he agrees with you: the climate sensitivity adopted by IPCC is highly uncertain and exaggerated. Scafetta and Lindzen say that it is lower than 1 and 0.5 C respectively, but they do not state any lowest value. Wojick, Arrak and Cripwell have said that it cannot be distinguished from zero, with which I agree, too.

    I have in my earlier comment https://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992 stated:

    ”Salby’s abstract for his talk agrees with what Tom V. Segalstad has been claiming already during two last decades, according to which maximum 4% of air CO2 has anthropogenic origin, found by carbon isotope mass balance calculations.”

    Concerning the key point on the dynamic balance between all CO2 emissions to the atmosphere and all CO2 absorptions from the atmosphere into other parts of the environment, in my comment https://judithcurry.com/2017/02/10/the-new-climate-denial/#comment-838229 I have further written:
    ”In the same way as [water from] a small ditch has only minimal influence on lake flooding compared to all the water coming into the lake, the small amount of 4 % CO2 from fossil fuels in the total CO2 emissions to the atmosphere has only minimal influence on the total increase of CO2 content in the atmosphere.
    This proves that CO2 from fossil fuels has recently controlled only about 4 % of the total increase of recent CO2 content in the atmosphere at the most; https://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992. In the same comment I have proved that warming of sea surface water in the oceans – especially in the areas where the sea-surface CO2 sinks are – dominates the recent increase of CO2 content in the atmosphere.
    In addition, [even] the more minimal influence of the total CO2 content of the atmosphere on climate warming has been proved by observations, according [to] which trends of increase of CO2 content in the atmosphere follow trends of climate temperature, and not vice versa: according to geological observations over the last 100 million years, in 10-million-year periods, CO2 contents in the atmosphere have followed changes of climate temperature; during glacials and interglacials, trends of CO2 content in the atmosphere have followed trends of temperature changes in climate; and the last nearly two decades show that CO2 content in the atmosphere has been increasing with a lag after climate warming. This all should convince anyone that the climate sensitivity is so minimal that it cannot be distinguished from zero.”

  63. “For example, from the beginning of this century until recently, the tropical Pacific was locked into a negative Interdecadal Pacific Oscillation phase (that is, low-frequency La Niña-like pattern) in association with increased Walker and Hadley circulation winds and eastern Pacific cooling2,3. The La Niña-like pattern is thought to be a factor contributing to the recent so-called ‘warming hiatus’2,3 and earlier twentieth century cool and warm decades4. Therefore, our analysis of multicentury hydroclimate variability suggests that projections of tropical rainfall patterns, and global temperature extremes, will remain uncertain until paleoclimate records and models consistently capture the lower-frequency variability, and associated feedbacks, in the tropical Pacific.” https://www.nature.com/articles/ncomms11719

    Large changes in top of atmosphere radiant flux result from changes in atmosphere and ocean circulation. The changes dominate global energy dynamics over recent decades – despite the carbon dioxide idée fixe. It is presumed that radiative effects from albedo change influence climate over a very long time as well – in a way that is not limited to ice sheet feedbacks.

    Models do not capture the low frequency modes of climate change at all. They have their own low frequency modes due to ‘Hurst effects’ emerging from the temporal chaotic dynamics of the core equations – but otherwise have zero connection to the spatio/temporal chaos of the climate system.

    The Hurst effect emerged from very long-term Nile River flow data. It is the result of regime behavior in a dynamical system.

    “Here we show that one of the most seminal models of deterministic chaotic dynamics, Lorenz’s 1963 model19 which he abstracted from Rayleigh-Benard convection, exhibits the Hurst effect. We show that the origin of the Hurst effect in the Lorenz model is regime behavior.” https://www.nature.com/articles/srep09068
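    The quoted result is easy to illustrate numerically. A minimal sketch (mine, not the paper’s code; the parameter values are the standard Lorenz 1963 ones): integrate the system and record how long the trajectory stays in each lobe of the attractor, using the sign of x as the regime indicator. The residence times vary irregularly, which is the regime behavior the paper identifies as the origin of the Hurst effect.

```python
# Sketch: regime residence times in the Lorenz 1963 system.
# The trajectory lingers in one lobe of the attractor (sign of x) for
# irregular stretches before abruptly switching lobes.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz 1963 equations."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def residence_times(n_steps=50000, dt=0.01):
    """Lengths (in steps) of consecutive runs with the same sign of x."""
    state = (1.0, 1.0, 1.0)
    runs, current, sign = [], 0, 1
    for _ in range(n_steps):
        state = lorenz_step(state, dt)
        s = 1 if state[0] >= 0.0 else -1
        if s == sign:
            current += 1
        else:
            runs.append(current)
            sign, current = s, 1
    runs.append(current)
    return runs

runs = residence_times()
print(len(runs), min(runs), max(runs))
```

    Feeding the resulting regime series into a rescaled-range analysis is how the cited paper connects this behavior to the Hurst effect.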

    Regime behavior is of course found in ocean and atmospheric circulation.

    “The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” http://rsta.royalsocietypublishing.org/content/369/1956/4751

    Climate reality is that the atmosphere and oceans exhibit regime behavior, for which the dynamical systems paradigm provides the best explanation of abrupt shifts in climate time series.

    https://watertechbyrie.files.wordpress.com/2015/10/ghil-sensitivity.png

    Climate is neither cyclical nor periodic around a trend. The next climate shift is due in a 2018-2028 window – and no one knows where it will go. It will be marked by a break in the Pacific Ocean state.

    • If very low solar activity does impact the climate, through overall oceanic cooling and an increase in albedo, then the next climate shift in 2018-2028 is going to be toward colder global temperatures:

      overall oceanic cooling due to less UV light

      albedo: a slight increase due to

      a more meridional atmospheric circulation (N.H.)

      an increase in global ice, snow and cloud coverage

      an increase in major volcanic activity

      I think the shift has already started.

  64. Ross McKitrick,

    Thank you for this interesting post, and for this recent paper:
    Kevin Dayaratna, Ross McKitrick, David Kreutzer (2016). Empirically-Constrained Climate Sensitivity and the Social Cost of Carbon.

    My interest is in what is needed to justify climate policies. Some points I’d make are outlined below.

    ECS, projected future emissions rates, and projected future temperatures are important inputs to the Integrated Assessment models, which are used to project the economic impacts of projected global warming. However, these inputs are relatively unimportant for policy analysis if the impacts of global warming, over the 21st Century, would be beneficial rather than damaging. IPCC AR5 WG3 Chapter 3 mentions ‘damage functions’ eighteen times. It says, for example:
    • “Damage functions in existing Integrated Assessment Models (IAMs) are of low reliability (high confidence).” [3.9, 3.12]
    • “Our general conclusion is that the reliability of damage functions in current IAMs is low.” [p247]
    • “… the aggregate damage functions used in many IAMs are generated from a remarkable paucity of data and are thus of low reliability.”

    It seems there is a lack of valid evidence to support the contention that global warming would be harmful. On the other hand, several lines of evidence seem to suggest that, overall, global warming would be net-beneficial not detrimental for life on earth, human well-being and the global economy.

    1. According to Scotese (2016), the planet is currently in about the severest coldhouse phase since complex life began, around 600 Ma ago (see Figure 15): https://www.researchgate.net/publication/275277369_Some_Thoughts_on_Global_Climate_Change_The_Transition_for_Icehouse_to_Hothouse_Conditions .

    2. According to Scotese (2016), the average global temperature was around 7°C warmer than now for the past half billion years.

    3. Life thrived when the planet was warmer, but struggled when colder.

    4. About 10 times more carbon is tied up in the biosphere now than at the last glacial maximum (IPCC AR4 WG1 Chapter 6).

    5. Arguably, the optimum for life on Earth is around the average global temperature over the period complex life has existed.

    6. Global warming of 3°C would not get the planet even half way up to the average temperature that existed during the period complex life began – arguably the optimum for life on Earth.

    7. Tol, 2013, Figure 3 (bottom panel), projects that the economic impacts of global warming would be beneficial up to around 4°C warmer than 1900, if the projected negative impacts on energy consumption are excluded
    https://link.springer.com/article/10.1007%2Fs10584-012-0613-3#page-1
    Or (Free access to working paper version): http://www.copenhagenconsensus.com/sites/default/files/climate_change.pdf.

    8. However, empirical evidence does not appear to be consistent with the projected negative impact of global warming on energy consumption. Empirical evidence seems to indicate global warming would decrease not increase expenditure on energy; this means the economic impact would be positive not negative for the global economy.

    9. In which case, contrary to the commonly held belief that global warming of 2°C or more would be harmful and must be avoided at all costs, the economic impacts of global warming would be beneficial, not harmful, up to around 4°C (relative to 1900).

    • Peter Lang,

      your main reference seems to be an unfinished, single-author paper on Researchgate, not subject to peer review. We can note:

      (Stopped writing for now 05/10/2015)

      On Researchgate, Wiki reveals

      The New York Times described the site as a mashup of Facebook, Twitter and LinkedIn.[4]

      This unpromising start continues. Let’s for the sake of argument assume this paper is bona fide – Scotese does appear to be a reputable researcher. From the abstract:

      During the following Interglacial period, the average global temperature slowly rose to 13.8˚C. Since 1880, it has increased another 0.6 degrees to 14.4˚C (as of 2015). This rate of warming is 50 times faster than what occurred during the previous 21,000 years.

      The bolded sentence is the clue as to why your oft-repeated argument on the relevance of temperature ranges over 600M years is entirely clueless.

      • VTG,
        Scotese is a professor of Geology and has been responsible for the PaleoMap Project since the 1970s. It has been supported by NASA, the USGS and many of the main geological organisations and universities. He is a world authority on plate-tectonic paleohistory and paleoclimate over the Phanerozoic Eon. In 2014, Gavin Schmidt had a post on RealClimate stating that it was a serious deficiency that the climate research community could not provide a good chart of the average global temperature profile for the Phanerozoic Eon. Scotese responded with an updated version of his famous schematic. The 2016 working paper I linked to contains that updated chart and a full explanation of how it was produced. The post is to be part of a book he is writing.

        Scotese accepts the consensus view that global warming will occur as projected by IPCC. The projection is shown in the chart. However, this is not data; it is speculation.

        The validity of the comparison you highlighted in bold has been discussed many times before on Climate Etc. and elsewhere. If you’d followed those discussions you’d understand it is not valid to compare rates of change over 20,000 years with rates of change over decades and centuries.

    • To your question, “what is needed to justify climate policies” — as with any policy, a demonstration that benefits exceed costs. As to the form of policy, I prefer a temperature-indexed carbon tax for reasons given here: http://www.rossmckitrick.com/temperature-indexed-tax.html.

      Regarding the cost-benefit question, what must be considered is not just whether warming is taking place, but what the tax system looks like, how the policy is structured, whether it is used in isolation or in the presence of other regulations, etc. See
      http://www.uoguelph.ca/economics/repec/workingpapers/2017/2017-05.pdf
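      The indexing mechanism can be sketched in a few lines. This is a toy illustration only – the base rate, the sensitivity, and the anomaly series are hypothetical numbers of mine, not taken from McKitrick’s proposal:

```python
def indexed_carbon_tax(base_rate, anomaly, sensitivity):
    """Toy temperature-indexed carbon tax (all parameters hypothetical):
    the rate per tonne rises and falls with the observed temperature
    anomaly, so the tax self-adjusts as evidence accumulates."""
    return max(0.0, base_rate * (1.0 + sensitivity * anomaly))

# Hypothetical anomalies (deg C) observed at successive review dates.
anomalies = [0.2, 0.35, 0.3, 0.5]
rates = [indexed_carbon_tax(20.0, a, 2.0) for a in anomalies]
print(rates)  # the rate tracks the observed warming up and down
```

      If warming is rapid the rate escalates; if warming stalls, so does the tax – the self-correcting property of indexing.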

      • Ross McKitrick,

        Thank you for your reply. I have read many of your papers since your 2007 book “Taken by Storm”, which you co-authored with Christopher Essex, and which I have on my bookshelf. I read your paper on an evidence-based approach to CO2 pricing when it was published, and your GWPF presentation. My concern is that most of the proposed policy solutions are based on the premise that global warming would be net-damaging – i.e., that the costs of global warming would exceed the benefits.

        Regarding the cost-benefit question, what must be considered is not just whether warming is taking place, but what the tax system looks like, how the policy is structured, whether it is used in isolation or in the presence of other regulations, etc.

        This seems to leave out the most important step, which is investigating the cost-benefit question itself – whether warming would be net-detrimental or net-beneficial after taking into account adaptation, which would inevitably occur. If global warming would be net-beneficial, mitigation policies other than ‘no regrets’ policies are not justified. I suggest the key questions that need to be addressed are:

        1. Is global warming taking place, how much is projected over the 21st Century, probabilities, uncertainties, etc.?
        2. Is warming net-detrimental or beneficial?
        3. Quantify the costs and benefits per degree of GMST change, and the uncertainties, etc.
        4. Only after that is established should we be evaluating, advocating and implementing mitigation policies.

        A great deal of work has been done on #1, but large uncertainties remain, as this post and many other researchers have been pointing out for at least a decade. However, much less work has been done to answer questions #2 and #3, and the uncertainties are huge, as stated in IPCC AR5 WG3 Chapter 3.

        I realise you know much more about this than I do. However, having been following the climate policy debates for 26 years since I was an energy policy adviser (during and after the period Australia was preparing its position for the 1992 Rio ‘Earth Summit’), I remain concerned that the assumption that global warming would be detrimental to life on Earth, economic growth and human wellbeing is not justified. The world is spending an enormous amount of money and effort on climate policies. I suspect this is retarding economic growth and improvements in human wellbeing globally for no benefit.

  65. If you’d followed those discussion you’d understand it is not valid to compare rates of change over 20,000 years with rates of change over decades and centuries.

    Indeed. And equally you’d understand that comparing average temperatures over 600 million years with changes over a century or two is clueless.

    That’s seven orders of magnitude difference; about the same as the difference between the size of the earth and the size of Peter Lang.

    You do have a point (though the opposite of the one you’re trying to make) with economic models, but your obsession with comparing the impacts of a geologically instantaneous temperature rise with eonal mean temperatures is frankly idiotic.

    • VTG,

      Thank you for your personal opinion. Do you have valid evidence demonstrating that the overall impact of global warming, up to say 3C relative to 1900 over a century, would be negative for life on Earth, for the global economy and for human well-being? I am not interested in statements of your beliefs and innuendo, just the evidence, please.

      And do you have evidence to refute the damage functions for agriculture and health used in FUND?

      And, can you show that the impacts of global warming since the Little Ice Age have been negative for humanity and life?

      • Peter,

        thank you also for your personal opinions, which are extremely insightful!

        If you can forgive me, I will note but not respond to your *new* questions, preferring to attempt a resolution of your *existing*, extremely novel and rather problematic thesis: the relevance of the average temperature of the past “half a billion years” to current changes. Perhaps we could then move on to the other doubtless weighty and fascinating topics you raise?

        Now, you propose that

        global warming would be net-beneficial not detrimental for life on earth, human well-being and the global economy.

        and base that proposition on
        5. Arguably, the optimum for life on Earth is around the average global temperature over the period complex life has existed.

        6. Global warming of 3°C would not get the planet even half way up to the average temperature that existed during the period complex life began – arguably the optimum for life on Earth.

        Now, I’m sure this is not merely your “personal opinion”, so please, can you provide evidence of its relevance to the assessment of the impact of current and projected climatic change on the timescale of a short number of centuries?

        I expect, given your insistence on others providing PDFs for damages, that you have this, so it should be trivial for you to provide; but *any* reputable peer-reviewed research on the irrelevance of the seven-orders-of-magnitude difference in rate that you dismiss would be welcome.

        Thank you!

  66. Apples with Apples

    Plot this modern temperature against the warming during all the other warming periods in the past ten thousand years.

    This modern warming is well below the upper bounds of many other warming periods.

    It is a natural cycle, and the causes have not changed in ten thousand or even millions of years. One molecule of manmade CO2 has not changed the temperature of ten thousand other molecules enough to measure.

    Get Real!

    • Herman Alexander Pope,

      This modern warming is well below the upper bounds of many other warming periods.

      I agree. Life has flourished during rapid warm periods (other than when GMST is near the high end of the temperature range, which is about 13C above present). Coxon and McCarron (2009) http://eprints.maynoothuniversity.ie/1983/ , Figure 15:21, show that rapid warmings from near ice-age to near current temperatures occurred in 9 years and 7 years, about 14,500 and 11,500 years ago, in Ireland, Iceland and Greenland. The text explains how life flourished during these rapid warming periods. While this climate change was regional, not global, the point is that life is impacted by local climate change, not global, and life flourished during these rapid warmings, as it has since the Little Ice Age.

      There does not seem to be valid evidence to show that global warming this century will be harmful rather than beneficial.

      Tol (2013), using FUND3.6n, projected that global warming would be beneficial for agriculture and health up to about 4C of warming relative to 1900. Most other impact sectors, other than energy (which, as I mentioned in my earlier comments, appears to be incorrect), are projected to be little impacted – including sea level rise, fresh water, storms and extreme weather.

  67. The warmist problem is that climate scientists have made a basic mistake no competent engineer would make. Dissenters were kicked out of Academia and publication of dissenting papers was banned. They then grabbed grants and promotions by pushing this fake science, the precursor of fake news.

    And as for the mistake: it is to fail to understand that Max Planck’s 1913 treatise is based on an assumption of equilibrium between radiative emitters in a vacuum. The atmosphere is not a vacuum. This explains the details of OLR, and means an extra extinction coefficient for rain/convective clouds. NASA claims that this is due to lots of small droplets when it is large drops that create dark undersides and high albedo.

    It creates 40% more radiative transfer in the atmosphere than reality. They then do a work-around using a big mistake in cloud physics – van de Hulst in 1967, Hansen in 1969. Thus they pretend that rain and convective clouds being dark underneath – high albedo – is because the droplets are small, when it is a large-droplet phenomenon; ultimate scientific incompetence that was used as the basis for the NASA/IPCC fraud.

    Sorry folks, but allegedly even Stephen Hawking is involved in the scam and has been protected by colleagues for 25 years. Planck’s vacuum assumption is fine, but it prohibits the use of his theory for much of the radiative energy transfer at the junction between a solid surface and a ghg-containing gas.

    Myles Allen is escaping by blowing the whistle to survive the denouement of the incompetents.

  68. Four multi-decadal climate shifts were identified in the last century, coinciding with changes in the surface temperature trajectory: warming from 1909 to the mid-1940s, cooling to the late 1970s, warming to 1998, and less warming since.

    The surface temperature record is replete with poor data, inadequate coverage and artifacts related to soil moisture. e.g. http://onlinelibrary.wiley.com/doi/10.1029/2011GL048442/full

    But let’s assume that it roughly gets the trajectories right. The warmer and cooler regimes are evident and associated with warmer and cooler sea surface temperatures in the tropical and sub-tropical eastern Pacific. There are more, thicker and lower clouds with cold SSTs, and vice versa. It is a natural feedback to regime change in the Pacific.

    The moisture artifact is more significant this century, as seen in the land/ocean temperature divergence, and is based on the very simple physics of surface heat flux described in the study linked above. It is implicated in the recent spike in surface temperature. Regardless of the denial around satellite records, the less pronounced spike in the satellite records is a smoking gun for the obvious and simple physics of reduced latent heat flux – in drought conditions – and higher surface temperatures.

    The changes in surface temperature trajectory are the result of Hurst effects, regime change, quasi-standing waves – however you want to describe this – in the spatiotemporal chaos of Earth’s turbulent flow fields. The difference in temperatures between the end of the warm regime in the mid-1940s and the end of the late-century warming was about 0.4 deg C. Seemingly negligible in the scheme of things. It is the maximum that can be attributed to greenhouse gases – but unless you first take into account ‘the natural variability of climate’ you are inevitably talking out of your ass. Starting your count in a cold regime in 1950 seems especially absurd.

    The latest climate regime that started at the turn of the century has not finished yet – perhaps. It seems very likely to shift again within a decade – and imagining that this can be modeled realistically is tragically laughable. No one knows where it will go to or by how much. These are not cycles – they are shifts in the multi-dimensional state space of the climate system. This notion puts paid to the simple memes of both sides of the climate war. The fundamentals are well understood and the future paradoxically even less obvious.

    • Don’t you think the shift to a cooler climate regime is more likely?

      I sure do.

    • I sure don’t know. If we perturb flow in the system – this will change many or few quasi-standing waves. It depends. Greenhouse gases are a wild card in a probabilistic deck.

      • “… this will change many or few quasi-standing waves. It depends.” Tomas

      • Robert: “I sure don’t know.”

        I remember watching a television documentary about the search for the Higgs boson made a year or so before its existence was confirmed. The documentary attempted to explain how the investigation was being conducted, what would constitute a positive result, what was at stake, etc.

        The documentary-maker conducted an informal poll of the experimental and theoretical physicists at CERN on whether they hoped the experiments being conducted there would confirm theory, and therefore buttress the ‘standard model’, or produce a negative result and throw their field into disarray. Like good physicists, each answered that they cared not a fig one way or the other – they just wanted to know the answer, whatever it may be. Like a good documentary-maker, he pressed each respondent on which outcome they would find more aesthetically pleasing. On this question opinion was divided 50/50: half of the physicists thought that a negative result would be more pleasing because it would shake the elaborately constructed foundations on which their current understanding was based and leave them clueless about how they might begin to construct another.

        All scientists are interested in answers, but good scientists find questions equally, if not more, compelling. Put another way, they are far more anxious to discover the precise boundaries of their knowledge than to have their assumptions confirmed. They do not presume that nature will give up her secrets easily, or that she will have conveniently organised herself to gratify their petty concerns, especially their vanity.

        When half of climatologists regard the prospect of proving themselves fundamentally wrong as a fabulous and deeply gratifying outcome, the practice of climatology will be worthy of being considered ‘scientific’. As far as I can see it is moving further from, rather than closer to, that destination.

      • The Higgs boson story tells us that scientists don’t ignore a 5-sigma signal especially when a theory predicts it, and there are few deniers left when the signal reaches that size. Likewise with temperature, the signal is already at an equivalent to 5-sigma for the centennial scale temperature rise, and the 100-year-old theory would have predicted that. In both cases the exact size of the signal was not predicted, but a confirmatory signal was there. E.g.
        https://www.livescience.com/44950-global-warming-natural-fluctuation.html
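As a rough illustration of what a “5-sigma” claim means statistically – using made-up numbers for the warming and the natural variability, not measured values – the signal-to-noise ratio and its Gaussian tail probability can be sketched as:

```python
from math import erfc, sqrt

# Illustrative, assumed numbers only - not measured values
delta_T = 1.0        # hypothetical centennial warming, deg C
sigma_nat = 0.2      # hypothetical std of unforced centennial variability, deg C

z = delta_T / sigma_nat                # signal-to-noise ratio ("sigma level")
p_one_sided = 0.5 * erfc(z / sqrt(2))  # chance of a fluctuation this large from noise alone

print(z)            # 5.0
print(p_one_sided)  # ~2.9e-07
```

A 5-sigma fluctuation arising by chance is roughly a 1-in-3.5-million event, which is why particle physics treats it as the discovery threshold; whether centennial warming genuinely clears that bar depends entirely on the assumed noise level.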

      • The ‘trend’ over the last two 20-30 year – warmer and cooler – climate regimes. Not so much warming in the scheme of things.

        https://watertechbyrie.files.wordpress.com/2017/02/berkley-trend.png

        Is it all anthropogenic? What will the next regime bring? Where is Mr. T?

      • The 30-year temperature smooths out the non-climate noise.
        http://woodfortrees.org/plot/gistemp/mean:240/mean:120/plot/gistemp/from:1986/trend
        Seems obvious which way things are going and why. Still too early to tell, according to some who haven’t figured out why it is warming so much, having ruled out the dominant, rapidly rising GHG forcing for some reason known only to themselves.
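A minimal sketch of the kind of long-window smoothing described above, on synthetic data – the trend and noise levels here are assumptions for illustration, not real GISTEMP values:

```python
import numpy as np

def rolling_mean(series, window):
    """Centered moving average over `window` points (output is shorter by window-1)."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

# Synthetic monthly anomalies: an assumed linear trend plus random noise
rng = np.random.default_rng(0)
months = np.arange(360)                          # 30 years of monthly data
anoms = 0.0015 * months + rng.normal(0.0, 0.1, months.size)

smoothed = rolling_mean(anoms, 120)              # 10-year smoothing window

# The smoothed series is far less jittery, but the underlying trend survives
print(np.std(np.diff(anoms)) > np.std(np.diff(smoothed)))   # True
print(smoothed[-1] > smoothed[0])                           # True
```

The point of contention in the thread is exactly what such smoothing removes: to one side it is “non-climate noise”, to the other it is multi-decadal regime behaviour that does not self-cancel.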

      • With Jimmy it is all ‘non-climate noise’.

      • Anything that doesn’t show up in the 30-year temperature is not climate. Don’t let that stuff distract you. It self-cancels within 30 years – noise.

      • Robert I Ellison: I sure don’t know. If we perturb flow in the system – this will change many or few quasi-standing waves. It depends. Greenhouse gases are a wild card in a probabilistic deck.

        I welcome this rare expression of humility. It has been pointed out many times that your many citations of dynamical systems theory do not address the effects of accumulating CO2, of a warming Earth surface, or of a changing hydrological cycle.

      • There is internal variability on multi-decadal scales influencing surface temps and hydrology – and no it doesn’t cancel out.

      • And, here comes the haughty humility:

        https://i.imgur.com/6HY0Bg3.png

      • It is inevitably commentary from these people that is technically far too silly to bother with – and they perversely always make a point of what? That I consider their pompous and pedantic little points not worth bothering with? Matthew’s pretentious and muddled noise with a vaguely sciencey tone – but without much in the way of a knowledge base. Jimmy’s perpetual recourse to stock-standard memes and narratives about how dumb skeptics are. JCH’s feral science in which he claims to foretell the future by examining climate auguries. Hard done by, all of them, by me thinking that they are all a bit obtuse? Sorry – but I haven’t found any of these to be conspicuously polite – so it seems a little ingenuous to complain yet again about me not taking them at their own estimation. Not much I can do about that. Nor do I follow any of them around dropping quite pointless comments of a personal nature – I ignore them all as a rule.

      • RIE, the linearity of the trend for the last few decades is not natural. Look again.
        http://woodfortrees.org/plot/gistemp/mean:240/mean:120/plot/gistemp/from:1986/trend

      • that is technically far too silly to bother with

        And yet, you always do. You’re a spoon bender.

      • Robert I Ellison: Hard done by all of them by me thinking that they are all a bit obtuse? Sorry – but I haven’t found any of these to be conspicuously polite – so it seems a little ingenuous to complain yet again about me not taking them at their own estimation.

        I sure don’t know. If we perturb flow in the system – this will change many or few quasi-standing waves. It depends. Greenhouse gases are a wild card in a probabilistic deck.

        You do not know what will happen if we perturb flow in the system. Full stop.

      • The next climate shift is due within a decade – whether that’s enough time for Matthew to get a clue is unknown.

      • Keep right on ignoring that the PDO flipped positive in 2014 and, for the first time in the record, has remained positive for 44 straight months – soon to be 45.

        The PDO is swallowing La Niña events before they can learn to walk:

        http://www.ospo.noaa.gov/data/sst/anomaly/2017/anomnight.9.21.2017.gif

        http://www.ospo.noaa.gov/data/sst/anomaly/2017/anomnight.10.5.2017.gif

        The current positive phase of the PDO could last a few years or a few decades.
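The “44 straight months” figure is just the length of a run of consecutive positive index values. As a sketch – with made-up index values, not real PDO data – counting such a run looks like:

```python
def longest_positive_run(values):
    """Length of the longest run of consecutive values > 0."""
    best = run = 0
    for v in values:
        run = run + 1 if v > 0 else 0   # extend the run, or reset on a negative value
        best = max(best, run)
    return best

# Illustrative monthly PDO-style index values (not real data)
index = [-0.5, 0.2, 0.8, 1.1, -0.1, 0.3, 0.4, 0.9]
print(longest_positive_run(index))  # 3
```

On real monthly index data the same loop would report how many consecutive months the index has stayed positive.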

      • “Intensified circulation in subtropical gyre systems enhances upwelling and productivity in the California and Peru systems. Strengthened easterly trade winds increase equatorial current circulation in the Pacific.” http://www.mdpi.com/2225-1154/3/4/833/htm

        It is hypothesised that upwelling in the Pacific Ocean is modulated by solar activity over periods of decades to millennia – with profound impacts on communities and ecosystems globally. The great resonant systems of the Pacific respond at variable periods – the tempo increased last century for instance – of La Niña and El Niño alternation. There are variations in this tempo at 20 to 30 years that – throwing this out there – suggest a trigger in the 22 year Hale solar cycle of magnetic reversals. The solar butterfly flutters and the cyclone ensues. Longer term indicators of solar activity show changes over millennia that mirror the state of upwelling in the Pacific. The mechanism proposed is a spinning up of the Pacific trade winds and surface gyres as a result of colder and denser polar air. Low solar activity spins up the gyres producing more frequent La Niña and a cooler northeastern Pacific (more upwelling) – and vice versa. With a cooling Sun – it suggests that the next climate shift – due in a 2018-2028 window – will be to yet cooler conditions in the Pacific Ocean. This has implications for global heat content, hydrology and biology.

        If JCH did anything but eyeball sea surface temps – and make silly little rhetorical flourishes about lunches – if there were any knowledge of the physical linkages – it might be different. As things stand it is all just utter nonsense he pulls out of his ass.

      • Robert I Ellison: The next climate shift is due within a decade – whether that’s enough time for Matthew to get a clue is unknown.

        I sure don’t know. If we perturb flow in the system – this will change many or few quasi-standing waves. It depends. Greenhouse gases are a wild card in a probabilistic deck.

        You do not know what will happen if we perturb flow in the system. Full Stop.


  70. russellseitz

    Condolences to Ross on the recent tragedy at Las Vegas’s favorite climate conference venue :
    https://vvattsupwiththat.blogspot.com/2017/10/on-road-to-mandalay-bay.html

  72. It’s always about temperature, isn’t it? Is that all there is to climate? Is that all the models do? Aren’t there any “Global Averages” for anything else?

    How are the models doing on other stuff – like humidity? What’s the Global Average Air Moisture Content nowadays? Or Wind? How about Cloud Cover? We can measure that, can’t we? Do the projections match the observations?

    Don’t get me wrong, I like temperature, and thermometers! But I think a more holistic approach is justified before reaching a verdict here. It seems that an awful lot of effort is given to wringing a fraction of a degree out of an error bar – when there is just so much more to be appreciated in the air!

    cheers, ken

  72. I have a totally harmless but pertinent reply in moderation. Can I get some help.

    • Words can be hurtful and we need to be protected from them, no matter how harmless they may appear to be!

  73. KenW: Very good question! For humidity see https://www.earth-syst-dynam.net/8/719/2017/ . ” None of the CMIP5 models or experiments capture the observed behaviour of the relative or specific humidity over the entire study period.”
    For wind see http://www.sciencedirect.com/science/article/pii/S1463500315002127 . “We identify that high performance of GCMs for ‘standard’ climate variables does not imply high performance for GCM forced wave simulations.”
    For the spatial resolution of the temperatures (SST):
    https://i.imgur.com/WywpoQr.jpg
    BUT: Climate models are valid for the future climate! :-)

  74. In 2001, whilst on my way to hear a speech in Westminster Hall by the Labour Science Minister, Lord Sainsbury, I discussed this issue with FoE people. They claimed we were to build an EU-wide virtual power station – solely renewables. I told them it would never work above 20%. The rest should be nuclear to be CO2-free.

    This Green dream is a Marxist ego trip by technologically ignorant bullies backed up by corrupt climate scientists pushing false science based on two fundamental mistakes. Only now the experiment has been done have they realised their big mistake. The likes of Tesla’s batteries, as a way of damping grid oscillation in short bursts, are simply a means of Tesla’s corporate survival now that electric cars have been proved uncommercial without owners bludging subsidies. Their use in SA is Musk using Obama’s subsidies to bail out SA temporarily, whilst SA deindustrialises to the point that the average voter realises their technologically ignorant MPs have forced their power system to revert to diesel – low efficiency, high pollution. CA is embarking on the same experiment but has more hydro.

    OK, like SA, they are now desperately installing high-efficiency CCGTs but fail to realise these must be operated at >60% of maximum output, otherwise you lose the steam cycle. So, over 15 years they will have converted a cheap coal baseload system into an expensive gas baseload system even less able to cope with the wind and PV supply fluctuations.

    In the UK we face the same level of dogma in Theresa May and the people who run power in her government. With a bit of luck she’ll soon be gone to be replaced by a non-Marxist pragmatist, willing to take the heat by confronting the technologically ignorant, including the National Grid Company. Our Grid is now on the verge of becoming SA-II. Huge diesel packs in government buildings and old quarries do the power humping whilst the Marxist politicians boast about the windmills, essentially a modern version of Easter Island statue virtue signalling.

    Central Europe is using Czech and French nuclear baseload. Our nukes are nearly gone. The social effects are now being felt as Brexit progresses. However, that will allow us to operate independently of EU Marxists to keep unemployment <10%.

  75. Once Marxism takes hold there is no such thing as science. Only political correctness.

    • Marxism is the principle “From each according to his ability, to each according to his needs.”

      If you’re a wealthy capitalist like me, good job spreading the anti-Marx gospel, brother.

      If you’re not, yet you remain convinced that Marx’s principle is unsound, that puts you in the camp of those easily brainwashed into voting against their own interests.

      In either case we capitalists depend on the easily brainwashed. Without them it would be a lot harder to widen the gap between rich and poor. ;)

      • And the difference is your fair share.

        Q: Who determines the from and to parts?
        A: The Federal Government.

        In Marxism, money is the property of the government. You get the use of what it thinks you need, and the gov. just keeps your fair share.

    • pochas94,

      Dead right. And it’s coming again. An excellent article by Paul Kelly in the Weekend Australian is here: http://www.theaustralian.com.au/opinion/columnists/paul-kelly/conservative-failures-open-door-to-return-of-socialism/news-story/811914e125decfa1227cc44c720257a6 . I’ll try to post it, but it may be too long and get caught by the SPAM filter.

      Conservative failures open door to return of socialism

      “Within the Anglosphere, the ideological lines are being drawn sharply and Britain leads the way. Speeches from Tory leader Theresa May and Labour leader Jeremy Corbyn have raised the existential question: Are we returning to socialism?
      The euphoria of British Labour, facilitated by Conservative Party traumas, has been on display. A slicker Corbyn told a rousing party conference his agenda is that of a “modern progressive socialist party that has rediscovered its roots and its purpose”.
      The politics of the left has been in turmoil in the Anglosphere. Virtually all parties have moved to more radical positions tempered by different degrees of pragmatism: the ALP is among the most pragmatic and British Labour the most extreme.
      Mocking the May government, Corbyn says Labour is now “winning the arguments” about a new direction for Britain with its agenda of expanded public services, higher taxes on the rich and corporates, renationalisation of railways, energy companies and water, rejection of welfare cuts, abolition of tuition fees and a transformation of power in British society away from elites and back to the people, reminiscent of Donald Trump on the hustings.
      The Economist magazine says “the unthinkable image of a left-wing firebrand in 10 Downing Street is increasingly plausible”. It reports the bookies have Corbyn favourite to be the next prime minister, probably an exaggerated call. Incredibly, The Economist even offers a semi-apologia for Labour, saying there is a “good Corbyn” as well as a “bad Corbyn”. Yet it concludes with harsh realism that PM Corbyn would do “serious and lasting harm” to Britain.
      The cultural phenomenon in the Anglosphere is the resurrection of socialist ideas seeded by the decade-old global financial crisis and the monumental ineptitude of ruling elites, unmatched since the 1930s. Corbyn says the centre ground of politics is not “where it was 20 or 30 years ago” and that “a new consensus is emerging from the great economic crash and the years of austerity”.
      He says the conservative mantra of deregulate, privatise, cut taxes for the rich, austerity and restrictions on the public sector is unpopular and consigned to the dustbin of history. Have no doubt, Corbyn’s propaganda is powerful. Yet the real story is how conservative failure has given a bankrupt agenda new life — an insight into the deeper cultural crisis of the Anglosphere, where the Enlightenment legacy seems finished.
      In a recent interview with Unherd.com, commentator, academic and historian Niall Ferguson warns Britain and the US are making a major historical blunder by misinterpreting the current age and the causes of their economic malaise originating in the GFC. Ferguson repudiates the entrenched dogma that the problem is a crisis of capitalism; he says it comes instead from bad public policy and government failures.
      Ferguson warns of the consequences arising from a false diagnosis seen in Britain, above all, in the rise of Corbyn and in the US in the rise of Bernie Sanders with similar views.
      “The real worry is not that capitalism is in crisis,” Ferguson says. “It’s not. The real worry is that socialism is making a comeback, and that’s different. The reason it’s concerning is we thought we had killed it off about 1989 but as I discover every day students have no memory of that great battle in the 1980s that led to the defeat of the communist states and socialism as an ideology.
      “They (students) are ready to lap it up as though the 1980s never happened. If I were living in Britain now I would be worried, above all, by the rise of Jeremy Corbyn as Labour leader from being a kind of lunatic fringe Militant Tendency element.”
      Ferguson calls Corbyn an “unreconstructed leftist who I think is tempting younger voters with implausible promises along the lines of jam today, jam tomorrow and regulate anything you don’t like”.
      He says the lesson from history is that “the more you attempt to pursue socialist policies in pursuit of egalitarian outcomes the worse it goes for the economy and potentially for political freedom”. Yet this is the direction being taken by the left.
      The British Labour Party has marched far down this road, intoxicated by legions of activists. The related fear is that the Democratic Party in the US might follow. The data shows this is a generational event. The proposition yet to be tested is obvious: is Australia any different?
      Ferguson says “we’ve seen this movie before” — witness Britain in the 70s with high taxes, strong government intervention and powerful trade unions. He says the issue is whether there is a reversion to the failed policies of mid-20th-century socialism. Will the Anglosphere decide to forget its history?
      Ferguson highlights that Trump, not Sanders, became US President. This was not a vote against capitalism — given Trump is a capitalist. It was, rather, a vote saying globalisation had “overshot” and gone too far. Trump was a critic of globalisation: of free trade, open immigration, outsourcing US jobs.
      The US public wanted globalisation “dialled back” in its economic benefit, not the demise of capitalism or any lurch to socialism. This was completely understandable. Addressing the malaise facing the US and British economies, Ferguson says the “central problem” has been “bad government policy”, citing the West’s fiscal crisis, the decline in educational standards and problems in health systems.
      The data shows the problems in the US public school system are “shocking” and the main problem is not funding but teacher unions. The left’s agenda of huge increases in social spending is unlikely to deliver better services.
      For Ferguson, this evidence is clear: he predicts that if robust policy reforms are instituted by governments, the upshot will be productivity gains and stronger economic growth. What is overdue is “taking a long, hard look” at fundamental state policy failures. Good luck with that.
      In her blunder-ridden speech to the Conservative Party conference this week, May tried to rally a dispirited party, aware of the great ideological war now at hand. She said she aspired “to root out injustice and to give everyone in this country a voice”, and then, aiming at Corbyn, declared her mission was to win the battle of ideas “in a new generation all over again”.
      “Free and open markets” she said, had raised living standards for everybody, had entrenched freedoms and had been central to Britain’s role “as a force for good in the world”.
      “So don’t try and tell me that free markets are no longer fit for purpose, that somehow they’re holding people back,” May said. “The free market and the values of freedom, equality, rights, responsibilities and the rule of law that lie at its heart remains the greatest agent of collective human progress ever created. So let us win this argument for a new generation and defend free and open markets with all our might.”
      After her disastrous effort at the Conservative conference, May’s leadership hangs by a thread. The issues are: when will she be replaced, and by whom? The crisis of the Conservatives suggests they are ill-equipped for any battle of ideas. This gifts Corbyn an immense opportunity, yet it is also possible Labour has reached its zenith and that British common sense will deny him the PM’s office, given the next election is a long way off.
      May’s speech revealed the huge problems the Conservatives face in the coming battle of ideas.
      As part of her defensive tactics, May pledges more funds to education, a better health system, urgent action to confront housing affordability (the average British home cost is eight times average earnings), price caps on energy bills and intervention to limit student debt.
      But Corbyn merely says if these interventions are essential, then let Labour do the job properly.
      He mocks May by demanding she “go the whole hog, end austerity, abolish tuition fees, scrap the public sector pay cap”.
      He says the Tories won’t change. They stand for “rampant inequality, the hollowing out of our public services, disdain for the powerless”.
      The language is reminiscent of Bill Shorten in our 2016 campaign. Corbyn says British Labour has become the “political mainstream” and that its once denounced manifesto is now “popular”. The harsh truth, however, is that Corbyn’s history suggests that he stands to the left of even the formal manifesto.
      Given the length of time Corbyn has espoused disreputable positions, there is no certainty he will be tempered by electoral pragmatism. Corbyn has long been sceptical about NATO and Britain’s membership; he attributes much of the Islamist terrorist problem to the foreign policy of successive British governments; he is a sceptic about globalisation; and as a fan of the former Venezuelan leader Hugo Chavez, he refused to condemn his hero even when the economy collapsed and there was warfare in the streets.
      The Economist suggests Corbyn “would instinctively line up against America in a geopolitical emergency and he would see a financial crisis as Act One in the collapse of capitalism”.
      Yet sections of the Australian left take him seriously.
      Corbyn says his aim in Britain is not just redistribution but a transformation in the system and nature of power in British society. Nationalisation of utilities and an expanded public sector are the heart of this construct.
      His slogan is that “socialism for the 21st century” is “for the many not the few”.
      British conservatives have made two shocking mistakes this year. First, they overestimated public abhorrence towards a radical left agenda, along with their own ability to demonise Corbyn. Second, their astonishing and accumulated ineptitude — as shown in politics, policy and business — has created the serious option of the most decisive leftward shift for more than half a century. The dominant story of the times is conservative failure.
      May’s central dilemma is an insight into the problems faced by most Anglosphere incumbents: Malcolm Turnbull is a relevant example. The public is hostile to the status quo: weak growth, poor wages, high energy prices, inept government service delivery and failing schools.
      Because of chronic failures of public policy, May must also present as an agent of change. She must defend the kingdom against the socialists while seeking to re-position the kingdom with new policies. In May’s case, this is a task beyond her. It is the exact challenge facing Turnbull.
      The arrogance of both sides is astounding. The Conservatives have simply assumed for too long the mad and radical ideas involved in the return to socialism make their oppositions unelectable. The British left, on the other hand, is infatuated with the quasi-revolutionary belief the status quo is indefensible and that they represent, as they always have, the “right side of history”.
      The Corbynites have made a mistake Shorten would never entertain: the delusion of the revolutionary mindset. It is likely to be their downfall.
      Yet there is one message the ALP is probably absorbing, that British Labour may be more defined as the party of youth than the party of the working class.
      If true, it is an unprecedented event with both pluses and negatives. It injects Labour with great energy, another aspect of the Sanders phenomenon in the US. British party activists carry a sometimes embarrassed Corbyn as their hero, a process likely to be as fickle as it is fashionable.
      There is nothing inevitable about the victory of the more radical left. May won the 2017 poll against Corbyn. New Zealand PM Bill English far outpolled Labour in the recent election and should — we cannot be certain — form a new government. Trump, not Sanders, became US President. Turnbull, not Shorten, won the 2016 election in Australia.
      In each case, the opponents on the left, representing various degrees of radicalism, were defeated. Yet their momentum was unmistakable. The issues for the future are how strong this tide runs and whether the optimal strategy for the radicals lies in pragmatic adaptation or tougher quasi-socialism.
      Invoking a mood of ideological anger, May denounced Corbyn as a politician who wants to pile taxes on business when business investment is essential; borrow hundreds of billions of pounds to nationalise industry; strip Britain of its nuclear deterrent; and play with progressive culture’s descent into anti-Semitism and intolerant hatreds.
      Such rhetoric was once a winning position. But it will no longer suffice. The vault of conservative scare campaigns — justified or not — is increasingly ineffective. Young people are unconcerned and unpersuaded.
      They live in the present, not the politics of the past.
      Meanwhile, the dilemma for conservative incumbents arises in their repeated failures to manage the economy and spread the benefits, the traditional hallmark of conservative success.
      They live in the extended shadow of the GFC, the growth showdown of the past decade, the excesses of the rich 1 per cent and their own inability to devise policies to revive their economies.
      It is tempting to think much of the current battle of ideas is fraudulent. This is not like the 20th century, when socialists and communists offered an alternative model to democratic capitalism and a number of nations originating with the Soviet Union embraced this alternative.
      On display today is something very different: the failure of conservative management, the romantic and self-interested impulse of radical politics unable to reinvent itself and the delusions of large sections of Western populations, angry, frustrated, looking for a better way and susceptible to the greatest hoax of the previous century.”

  76. My latest –> The “Climate Science Special Report” demands a “red team”
    http://www.cfact.org/2017/10/03/the-climate-science-special-report-demands-a-red-team/

    Here is the beginning (important links not included): “Several months ago there was a brief furor when the New York Times leaked the final draft of the upcoming Climate Science Special Report (CSSR), which is extremely alarmist. The CSSR is being prepared by the federal Global Change Research Program (USGCRP) and has been in the works for several years, mostly under Obama and still staffed by die-hard alarmists.

    The USGCRP consists of the 13 federal agencies that do climate science and it was speculated that one of these, especially the EPA under Pruitt, might block the CSSR. This has not happened and the Report is now scheduled for release next month. The CSSR is far more alarmist than any IPCC report, as most USGCRP reports have been, thanks in particular to NOAA. It will be an official Federal report. Even worse, the Report is slated to be Volume I of the National Climate Assessment (NCA), which is due out late next year. The NCA is mandated by law, which gives the CSSR even more status as federal policy.

    It would be ironic indeed if the skeptical Trump Administration were to simply issue this alarmist report as federal policy on climate change science. In fact it would be tragic, a major defeat for climate realism and sound science. But there is a simple way to turn this looming defeat into a major victory. The solution is to do an official Red Team critique of the CSSR.” (end of quote)

    There is more to the article of course.

    David

    Here is my crowd funded project to teach the debate, not alarmism:
    https://www.gofundme.com/climate-change-debate-education

  77. J’accuse

    Cess 1976 made a fundamental physics mistake – claiming OLR/theoretical ground IR emission was Earth’s radiative emissivity. This was puerile and falsely increased the ground IR emission absorbed by the atmosphere by 40%.

    Also in 1976, NASA claimed this was offset by an opposite-flow negative convection, which is impossible. Hansen later admitted this was a ‘fudge’. So GISS has admitted the 1976 fraud.

    To hide the fraud, which was obvious to any professional, the Met Office devised a fake Kirchhoff’s-law claim based on faulty cloud physics, which also creates false positive feedback. The person responsible for this fraud was Julia Slingo’s late husband at the University of Reading. There is a hitherto undiscovered increase of the extinction coefficient in rain clouds missed by Hansen in 1969. This gives fake positive feedback, when in reality low-level cloud removal of latent heat from the surface, plus the peculiar properties of 16–23 micron water vapour, nearly exactly offsets GHG warming.

    So, we have a story of deliberate fraud in climate modelling since 1976. It is proven. Reality is now entry to a new LIA as TSI falls.

    • Reality is now entry to a new LIA as TSI falls.

      And there’s the effect of albedo rise caused by tinfoil hats..

      On Climate Etc particularly one should *never* underestimate the numbers of tinfoil hats being worn.

      Down those temperatures will plummet.

      • Can you imagine any ‘science professional’ claiming Earth’s radiant emissivity is −18 deg C OLR and 15 deg C surface IR emission?

        Physics 101 requires equal temperatures; to claim otherwise is to betray at the very least substantial scientific ignorance.

        This outrageous error needs to be corrected, reducing atmospheric IR absorption by 40%. The GISS modelling fraud in 1976, invoking fallacious negative convection, later covered up by Julia Slingo’s late husband to give fake positive gain, is another shameful fraud. It may appear to work, but it’s still wrong; hence the models ‘run hot’.

        All in all I am ashamed that so many people accept false science – the end justifying the Marxist means.

      • No, the radiated emission is observably -18 C and the surface is observably +15 C. This is the measured value of the greenhouse effect. It works like insulation and there is no mystery about these observations.

      • Sorry: taking the quotient of OLR and surface Planckian emission, assuming it is the Planckian vacuum level, is wrong.

        The GISS 2D model using fictional negative convection to offset the imaginary 40% extra IR heating of the atmosphere was in 1976. 25 years later Hansen admitted ‘this was a fudge’. It partially cleared his conscience, but the bandwagon could not admit climate models were a Goebbels-like propaganda ploy, with lefty politicians strutting around, claiming only peer-reviewed science approved by scientific ignorami.

        I feel ashamed that ignorant people are forcing energy poverty onto our populations to assuage their Marxist beliefs.

      • Those are observations. Your problem must therefore be with the instruments rather than the science, but you have not said that.

      • @JimD

        Tell me any physics text outside of fake climate science that claims a radiant emissivity calculation can be done with different temperatures – it needs the ratio of the real IR emitter to a black body at the same temperature.

        To inform you further, the real science involves the real IR self absorption data, but I suspect that you may be ignorant of this: it’s tough science.

      • real IR self absorption data,

        Well, “self absorption” seems about right.

        Keep going Alexander, you fit right in, though it’s a bit OTT, you’re in danger of sounding like satire.

      • Does your house radiate with an external temperature the same as its internal temperature when the inside is warmer? No, its external radiation is less. There is a temperature gradient sustained by insulation. If you know how insulation works, you see how the physics works for the atmosphere because GHGs insulate the surface. Plus, you are still saying that the actual observations are impossible for you to understand.
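The −18 °C / +15 °C figures traded in this exchange are just Stefan–Boltzmann arithmetic; a minimal check, using the standard ~240 W/m² global-mean OLR (a handbook value, not a number given in the thread):

```python
# Stefan-Boltzmann check of the -18 C / +15 C figures quoted above.
# The ~240 W/m^2 OLR value is the standard global-mean estimate
# (an assumption here, not a number given in the thread).

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def effective_temperature(flux_w_m2: float) -> float:
    """Blackbody temperature (K) that would emit the given flux."""
    return (flux_w_m2 / SIGMA) ** 0.25


olr = 240.0        # observed global-mean outgoing longwave, W/m^2
t_emission = effective_temperature(olr)
t_surface = 288.0  # global-mean surface temperature, K

print(f"effective emission temperature: {t_emission:.0f} K "
      f"({t_emission - 273.15:.0f} C)")
print(f"surface temperature:            {t_surface:.0f} K "
      f"({t_surface - 273.15:.0f} C)")
```

Running this recovers an effective emission temperature of about 255 K (roughly −18 °C) against a 288 K (+15 °C) surface, which is the observed pair being argued over.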

    • Alexander Davidson: Cess 1976 made a fundamental physics mistake – claiming OLR/theoretical ground IR emission was Earth’s radiative emissivity. This was puerile and falsely increased the ground IR emission absorbed by the atmosphere by 40%.

      Also in 1976, NASA claimed this was offset by an opposite-flow negative convection, which is impossible. Hansen later admitted this was a ‘fudge’. So GISS has admitted the 1976 fraud.

      Links or exact citations of sources would be helpful.

  78. I wanted to post this again, and add the solar parameters I think are needed. These parameters are not in full force, but close and coming I believe. My other condition was 10+ years of sub solar activity in general which is in place.

    POTENTIAL COOLING

    What I am trying to say is La Nina does not factor into my cooling ,although it will cause cooling.

    Here is what I am looking for to accompany the potential cooling if my thoughts may be correct.

    Factors with the cooling if it comes that will make me confident.

    1. Very low solar activity

    a. solar wind 350 km/sec or lower

    b. ap index 5 or lower with random very high spikes

    c. cosmic ray counts 6500 units or greater

    d. euv light 100 units or lower, uv light off 5% or more

    e. solar flux sub 90

    f. imf field 4.2 nt or lower

    g. solar irradiance off .05% or more

    2. Overall lower sea surface temperatures on a global basis

    3. A slight increase in albedo

    a. due to an increase in global cloud coverage

    b. due to an increase in global snow coverage, sea ice

    c. due to a N.H more meridional atmospheric circulation

    d. due to an increase in major volcanic activity.
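The threshold list above (items 1a–1g) can be encoded directly as a checklist. A minimal sketch, taking the commenter's own numbers and units at face value; it is purely illustrative and not an accepted forecasting scheme:

```python
# A literal encoding of the commenter's "potential cooling" thresholds
# (items 1a-1g above). Illustrative only: the criteria, values, and
# units are the commenter's own.

COOLING_CRITERIA = {
    "solar_wind_km_s":  lambda v: v <= 350,   # 1a
    "ap_index":         lambda v: v <= 5,     # 1b
    "cosmic_ray_count": lambda v: v >= 6500,  # 1c
    "euv_units":        lambda v: v <= 100,   # 1d
    "solar_flux_sfu":   lambda v: v < 90,     # 1e
    "imf_nt":           lambda v: v <= 4.2,   # 1f
    "tsi_drop_pct":     lambda v: v >= 0.05,  # 1g
}


def criteria_met(obs: dict) -> list:
    """Return the names of the criteria satisfied by the observations."""
    return [name for name, test in COOLING_CRITERIA.items()
            if name in obs and test(obs[name])]


# Hypothetical observation set, for illustration only.
sample = {"solar_wind_km_s": 340, "ap_index": 4, "cosmic_ray_count": 6600,
          "euv_units": 95, "solar_flux_sfu": 85, "imf_nt": 4.0,
          "tsi_drop_pct": 0.06}
print(len(criteria_met(sample)), "of", len(COOLING_CRITERIA), "criteria met")
```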

  79. Willard, 10/5/17 @ 12:57 pm, said I’m conflating modulz with theories. The word you’re looking for is models, not modulz, which uncapitalized seems to be a stand-in variable in a computer program pronounced like its parent, modules.

    W. said, a model is a realization of a theory as in to model Gettier cases. A Tinkertoy representation of a molecule is a model in science. A computer program that simulates something external to the computer is sometimes a model in science. A scientific proposition that objectively accounts for something in the real world is again a model, this time a scientific model in its full form.

    Science creates models of the Real World. Its propositions define experiments, mapping existing facts onto future facts, where facts are observations reduced to measurements and compared to standards. Gettier cases are from philosophy, in particular epistemology, dealing with truth values in knowledge. And truth is a metaphysical notion, not an epistemological one. Scientific propositions are on the reals; they are real valued, not truth valued. When a metaphysical proposition happens to have scientific validity, that is a Gettier case. It’s a matter of luck, not knowledge. See Ichikawa, J.J., The Analysis of Knowledge, SEP, 3/7/2017.

    For a fair discussion of what science is, you might want to read Jennings, B.K., On the Nature of Science, La Physique au Canada, V. 63, No. 1, February 2007. He has a good grip on scientific models, along with their forms as conjectures, hypotheses, theories, and laws.

    W. worried that the distinction between hypothesis and a conjecture makes little sense to a Popperian.

    What Willard and Jennings didn’t know was that, in philosophical terms, Popper is irrational. Stove, D.C., Popper and After: Four Modern Irrationalists, December 1982. Stove set up two delightful but erudite criteria for what it means to be irrational, and provided ample examples from the scribblings of Popper, Kuhn, Lakatos, and Feyerabend in support. Popper was a metaphysician.

    Here is an even more fundamental indication of irrational behavior: Popper didn’t hold with definitions. Now he wasn’t above using definitions here and there when it suited his theses, but that just adds inconsistency into the dense mix of his irrational quirks.

    Popper didn’t understand induction, yet wrote volumes about the Problem of Induction. Popper criticized Bacon for advocating Aristotle’s infinite regression induction, but never understood that Bacon specifically replaced that form of induction with what he called real induction, i.e., deduction, a word that was just coming into use in the 17th Century.

    Popper thought all scientific propositions were either Universal Generalizations or equivalent to one. “All Ravens Are Black” was his model of a scientific proposition that helped launch his fame. No scientist worthy of the title would propose such a scientific proposition. Popper knew that a UG cannot be empirically affirmed but can be disproved by a single counterexample. So he created falsification as a necessary equivalent statement, one that conceivably could be empirically tested, for every scientific proposition. Moreover, Popper was implying that scientific propositions are truth valued. They never are.

    Falsification solved Popper’s imagined empirical problem, but he went further to claim that it solved his demarcation problem of separating science from nonscience. His need for a demarcation problem existed because he lacked the knowledge and discipline to define science at the outset. Then, for icing on the cake, Popper simply assumed that propositions from a nonscience field could never be UGs.

    So Popper was forced to conclude that only false scientific propositions could be validated empirically, but never true ones. Falsification cannot be a method for validating a true model! So Popper removed empirical testing from his model of science, replacing it with intersubjective testing, consisting of the triad of peer review, publication, and consensus, (all within a certified community, of course). Perfect!

    Perfect for Publish-or-Perish academia. It was like pushing a car downhill. Scientific propositions no longer actually had to work. Deconstruction was complete, and Post Modern Science was born. Popper would confess, “I am not a scientist”, but he wasn’t above deconstructing Modern Science into Post Modern Science, which has captivated and captured most of academic science. AGW is one of the consequences, doomed to vanish like facilitated communications mired in molasses.

    Popper was a rank crackpot and a crank. Pity his followers.

  80. Dear readers, it’s about time I nailed the failure of R. D. Cess, backed up by fallacious GISS 2-D modelling in the same year, 1976, to use the right definition of emissivity, aka emittance.

    What is the definition of surface emissivity?
    A Combustion File downloaded from the IFRF Online Combustion Handbook, ISSN 1607-9116

    “The emissivity of a surface may be defined as the ratio of the radiance from the surface to that from a black body viewed under identical optical and geometrical conditions and at the same temperature.”

    I’m a metallurgical engineer who has designed radiant heating furnaces. Cess’ claim that Earth’s emissivity is OLR (−18 deg C) divided by Planckian surface emission (15 deg C), giving rise to 40% extra atmospheric IR thermalisation and a 33 K GHE, is as wrong as can be imagined.

    Thus the climate models are wrong from the ground up, and must be withdrawn from the literature. This is an appeal for objective science to reject the ignorant and the fraudsters who have made a corrupt living from pretending the climate models are accurate. I do hope that Climate Etc now realises it has simple proof of how climate science has been corrupted for 40 years. There are two more major errors, including a hitherto undiscovered aspect of the extinction coefficient of rain clouds missed by Hansen in 1969 and van de Hulst in 1967. I’m working on the real physics.
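Whatever one calls the disputed quantity, its arithmetic is easy to reproduce. The sketch below only computes the numbers; whether the resulting ratio may be called an "emissivity" under the quoted same-temperature definition is exactly what is in dispute, and the code takes no position on that:

```python
# Arithmetic behind the disputed ratio: OLR at an effective -18 C
# versus Planck emission from a +15 C surface. Computes numbers only;
# it does not settle whether the ratio is properly an "emissivity"
# (the quoted IFRF definition requires equal temperatures).

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4


def planck_flux(t_kelvin: float) -> float:
    """Blackbody emission (W/m^2) at the given temperature."""
    return SIGMA * t_kelvin ** 4


olr = planck_flux(273.15 - 18)      # ~240 W/m^2
surface = planck_flux(273.15 + 15)  # ~391 W/m^2
ratio = olr / surface

print(f"OLR: {olr:.0f} W/m^2, surface emission: {surface:.0f} W/m^2")
print(f"ratio: {ratio:.2f}")
```

The ratio comes out near 0.61, which is the figure implicit in the "40%" complaints in this sub-thread.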

  81. Jim D, 10/5/2017 @ 1:49 pm asked,

    Explain how the ocean acidifies (gains carbon) while also outgassing. The carbon is clearly coming from fossil fuels. Explain where the fossil fuel carbon is going if not to the ocean and atmosphere. You have clearly not thought it through from a carbon budget perspective, and Salby never provided a budget either.

    IPCC defines acidification as a consequence of the equations of Marine Carbon Chemistry, which it attributes to Zeebe & Wolf-Gladrow (2001). It also relies on this statement: Equilibration of surface ocean and atmosphere occurs on a time scale of roughly one year. AR4 §7.3.4, pp. 528-9.

    Zeebe & Wolf-Gladrow (2001) derives the marine carbonate equations under the assumption of thermodynamic equilibrium with gaseous carbon dioxide [sic, bold added], and where the concentration of dissolved CO2 is given by Henry’s law … with … the solubility coefficient of CO2 in seawater. P. 2.

    A detailed understanding of the carbon dioxide system in seawater is a prerequisite for the investigation of a variety of phenomena in the marine environment. … the storage capacity of the ocean for … CO2 … chemistry in the global carbon cycle is the CO2 exchange between ocean and atmosphere. … ocean chemistry between glacial and interglacial periods. While a description by equilibrium properties is reasonable for most of the mentioned phenomena an analogous treatment of processes on small time and length scales is inadequate. Bold added, Zeebe, R.E., D.A. Wolf-Gladrow, & H. Jansen, On the time required to establish chemical and isotopic equilibrium in the carbon dioxide system in seawater, Marine Chem. V. 65, 1999, 135-153, pp. 136-7.

    Oceanic uptake of anthropogenic CO2 leads to a decrease in seawater pH and thus lowers the saturation state for carbonate minerals such as calcite and aragonite (CaCO3). This process, termed ocean acidification, … . … carbon emissions must be reduced to avoid these consequences. Bold added, Zeebe, R.E, et al., Carbon Emissions and Acidification, Science, V. 321, 7/4/2008, p. 51.

    First, a definitional problem: these authorities define acidification as a change of pH caused by added CO2. That is not observable, and hence not measurable and not factual. In the science of AGW hysteria, acidification is measured by scattered reports of decreased oceanic pH. The idea that pH change has any relation to CO2 uptake is a bald assumption. The model that ocean pH depends on CO2 uptake is a conjecture. Conflating pH with pH due to CO2 is like confusing CO2 following a temperature change with CO2 as a cause of temperature change. These are a matched pair of bone-headed errors for anyone working in science.

    Second, treating natural and anthropogenic CO2 differently, per Zeebe, is unsupported. The only difference climatologists claim between the two species of CO2, other than their origin, is in the mix of the three isotopic forms of CO2, hence a different isotopic weight. This is far from the state of the art in specifying Henry’s coefficient for CO2 in any kind of water.

    Third, no part of Earth’s climate system is in thermodynamic equilibrium, much less the surface layer of the ocean. And surely that is where investigators make most of their pH measurements, because that is the only part of the ocean where the scare tactic is operational. Thermodynamic equilibrium is a specific state between a thermodynamic system and its environment. Zeebe & Wolf-Gladrow’s reference to thermodynamic equilibrium with gaseous CO2 requires some quite friendly parsing to have any meaning. Nonetheless, thermodynamic equilibrium is the state of simultaneous mechanical, chemical, and thermal equilibrium. Not one of these conditions is even close to holding for the surface layer of the ocean, presuming a colloquial distance in disequilibrium.

    Neither a valid a posteriori (empirical) nor a valid a priori (theoretical) reason exists to believe that dissolution of CO2 causes a lowering of ocean pH. Nothing is left to explain.

    As for Salby, my response 10/5 @ 1:43 pm and Jim D’s question just after at 1:49 pm seem to have crossed in the ether.

    As to the fate of CO2 emissions, they just get mixed with the CO2 in the several reservoirs and in flux. And it’s not the carbon coming from fossil fuels that counts; it’s the entire flow, from over 90 GtC/yr (to the ocean) to 485 GtC/yr (including leaf water and all reservoirs). Atmospheric CO2 on climate time scales follows Henry’s Law, and the coefficient of solubility depends on the temperature of the solvent, recognizing that atmospheric pressure, salinity, and isotopic weights are negligible. The carbon budget is determined by long-term global surface ocean temperature, and that’s about as thought through as anything gets in this climate business.
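The temperature dependence of Henry's-law solubility invoked in the comment above can be illustrated with the standard van 't Hoff form. The constants below are handbook values for CO2 in water (assumptions here, not figures from the thread):

```python
import math

# Van 't Hoff temperature dependence of Henry's-law solubility for CO2
# in water: kH(T) = kH(298) * exp(C * (1/T - 1/298)).
# kH(298 K) ~ 3.4e-2 mol L^-1 atm^-1 and C ~ 2400 K are standard
# handbook values (assumptions, not from the thread).

KH_298 = 3.4e-2       # mol L^-1 atm^-1 at 25 C
C_VANT_HOFF = 2400.0  # K


def henry_kh(t_kelvin: float) -> float:
    """Henry's-law solubility of CO2 in water at temperature T (K)."""
    return KH_298 * math.exp(C_VANT_HOFF * (1.0 / t_kelvin - 1.0 / 298.15))


for t_c in (5, 15, 25):
    print(f"{t_c:2d} C: kH = {henry_kh(273.15 + t_c):.3f} mol/(L atm)")
```

The output falls with temperature, i.e. warmer water dissolves less CO2, which is the solubility-temperature relationship the comment leans on (the comment's further carbon-budget inference is its own claim, not established by this arithmetic).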

    • Your carbon budget goes something like this. The atmosphere’s CO2 level goes up by x. We emit 2x. Those 2x are absorbed by something you can’t specify and 3x come from somewhere else you can’t specify to make the increase of x. The view in real science is that 2x is emitted, the atmosphere gains a net x while the land and ocean also gain a net x. The ocean and atmosphere have to obey Henry’s Law so both have to increase in some proportion to maintain the chemical equilibrium.
      You also seem confused about acidification which is defined from the level of H+ ions. As CO2 is absorbed by the ocean, H+ ions do increase because CO2 plus water is carbonic acid (H2CO3). As carbonic acid increases, so does acidification. Carbonated drinks are acidic. Imagine carbonating seawater. So pH has nothing to do with how the H+ ions increased, but only measures the fact that they have increased.
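The CO2 → carbonic acid → H+ chain described in the reply above can be made quantitative for the simplest case: pure water with only the first dissociation. The constants are handbook values (assumptions, not from the thread), and seawater is carbonate-buffered, so real ocean pH shifts are smaller than this sketch suggests:

```python
import math

# pH of pure water equilibrated with atmospheric CO2, via Henry's law
# and the first dissociation of carbonic acid only. Handbook constants
# at 25 C (assumptions); seawater is buffered, so actual ocean pH
# changes are smaller, but the direction is the same.

KH = 3.4e-2    # Henry's law constant for CO2, mol L^-1 atm^-1
KA1 = 4.45e-7  # first dissociation constant of carbonic acid


def ph_from_co2(pco2_ppm: float) -> float:
    """pH of pure water in equilibrium with the given CO2 level."""
    co2_aq = KH * pco2_ppm * 1e-6     # dissolved CO2, mol/L
    h_plus = math.sqrt(KA1 * co2_aq)  # [H+] from CO2 alone
    return -math.log10(h_plus)


print(f"280 ppm -> pH {ph_from_co2(280):.2f}")
print(f"400 ppm -> pH {ph_from_co2(400):.2f}")
```

This recovers the familiar ~5.6 pH of CO2-equilibrated pure water and shows pH falling as pCO2 rises, i.e. H+ increasing with dissolved CO2, as the reply states.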

  82. “Oceanic uptake of anthropogenic CO2 leads to a decrease in seawater pH and thus lowers the saturation state for carbonate minerals such as calcite and aragonite…”

    There is a multi-phase chemical equilibrium process. The end state is a reduction in carbonate oversaturation – and a presumed impact on shell formation. It leads to models that forecast under-saturation of aragonite in southern oceans towards the end of the century.

    Oceans retain the chemical reactions that maintain the state of carbonate oversaturation with an altered stoichiometry. In practical terms – it means that more carbonate is remobilized in the photic zone of open oceans – or from benthic deposits of carbonates in shallow waters – and neutralizing carbonic acid. There is a massive supply of carbonate available in the Earth’s oceans to neutralize carbonic acid and retain carbonate oversaturation.

    This doesn’t mean that changing ocean chemistry is not a problem – with unknowable impacts on ocean ecology. In the end – we always come up against the unknowable. Get used to it. Just that the problem is not properly posed.

  83. Pingback: Weekly Climate and Energy News Roundup #286 | Watts Up With That?

  84. Oh, here we go again.

    McKitrick says:
    “the majority of the reconciliation happens by using “updated forcings”, i.e. peeking at the answer post-2000”

    Calling it “peeking” is a distortion and a misleading insinuation. The “inputted forcings” are not some unfair “peeking”; they are a justified explanation of why model projections can differ from observations (DOI: 10.1175/JCLI-D-16-0333.1, page 379; DOI: 10.1038/NGEO2973). Maybe some people don’t like that explanation because it doesn’t suit their politically motivated narrative?

    There are still at least four possible explanations for the differences between the models and the observations:

    1) Observational uncertainty / heterogeneities in the observational data
    2) Internal variability / chance fluctuations
    3) Error in inputted forcings
    4) Model error (ex: models over-estimate climate sensitivity)

    If McKitrick really wants to claim that the models over-estimate warming (i.e. there’s something wrong with the models), then he needs to justify explanation 4. Simply pointing out model-data discrepancies is not enough, since discrepancies are often explained using explanations 1, 2, or 3. And he can’t dodge the other explanations by calling them “peeking”.

    To illustrate this point, look at how explanation 1 addresses some of the model vs. observations discrepancies McKitrick discusses for satellite-based analyses:

    RSS admits that their version 4.0 TLT likely under-estimates lower tropospheric warming (DOI: 10.1175/JCLI-D-16-0768.1, page 7715). And I don’t see how folks can take UAH version 6.0 TLT seriously when:

    1) UAH’s TLT trend is about the same as ERA-I’s (DOI: 10.3402/tellusa.v68.31503, figure 3), even though the ERA-I team admits that they under-estimate lower tropospheric warming (DOI: 10.1002/qj.2317; DOI: 10.1002/qj.2949).
    2) UAH has a history of under-estimating tropospheric temperature (for example, DOI: 10.1126/science.1216273).
    3) There’s continuing criticism of UAH’s homogenization methods (for example, DOI: 10.1175/JTECH-D-16-0121.1).

    The UAH analysis is so bad that some researchers aren’t even bothering with it anymore (DOI: 10.1002/2017JD027158).

    So UAH and RSS both under-estimate lower tropospheric warming, and thus explanation 1 contributes to the satellite-based discrepancies McKitrick discusses.

  85. McKitrick says:
    “Finally, Tim Vogelsang and I showed a couple of years ago that the tropical LT (and MT) discrepancies are also present between models and the weather balloon series back to 1958.”

    The paper in question doesn’t really help McKitrick’s case. To see why, note that there are still at least four possible explanations for the differences between the models and the observations (DOI: 10.1175/JCLI-D-16-0333.1, page 379; DOI: 10.1038/NGEO2973):

    1) Observational uncertainty / heterogeneities in the observational data
    2) Internal variability / chance fluctuations
    3) Error in inputted forcings
    4) Model error (ex: models over-estimate climate sensitivity)

    If McKitrick wants to claim that the models over-estimate warming (i.e. there’s something wrong with the models), then he needs to justify explanation 4. Simply pointing out model-data discrepancies is not enough, since discrepancies are often explained using explanations 1, 2, or 3.

    Discrepancies between models and radiosonde trends largely stem from internal variability and heterogeneities in the radiosonde data (DOI: 10.1002/2017GL073798). Basically, a 1980s shift in radiosonde equipment caused 1980s radiosonde tropospheric temperatures to be artificially high, affecting radiosonde-based tropospheric temperature trends (DOI: 10.1175/JCLI3717.1; DOI: 10.1126/science.1115640; DOI: 10.1175/2008JCLI1929.1).

    So much of the discrepancy between models and radiosondes is due to explanations 1, 2, and 3, not explanation 4. That’s been known for years. McKitrick’s paper does nothing to show otherwise (DOI: 10.1002/env.2294). In fact, as far as I can tell, McKitrick’s paper doesn’t mention the previous scientific literature in support of explanations 1, 2, and 3. I guess that makes it easier for him to lead readers towards accepting his preferred conclusion of explanation 4.

    • None of your citations offer anything close to conclusive evidence and your 4 cited reasons are merely guesses.

      Nowhere is it mentioned that CO2 just does not cause the warming that is assumed, assumed, assumed – junk science.

      All of the papers you refer to provide no conclusive evidence for any claims including McKitrick’s.

      What we have is another field polluted by cancerous thinking of mathematics replacing physics.

  86. Pingback: Energy & Environmental Newsletter: October 16, 2017 - Master Resource

  87. Pingback: Energy And Environmental Newsletter – October 17th 2017 | PA Pundits - International