Will a return of rising temperatures validate the climate models?

by Donald C. Morton

The coincidence of the current plateau in global surface temperatures with the continuing rise in the atmospheric concentration of carbon dioxide has raised many questions about the climate models and their forecasts of serious anthropogenic global warming.

This article presents multiple reasons why any future increase in temperature should not be regarded as a vindication of the current models and their predictions. Indefinite time scales, natural contributions, many adjustable parameters, an uncertain response to CO2, the averaging of model outputs, nonlinearity, chaos and the absence of successful predictions are all reasons to continue to challenge the present models. This essay concludes with some suggestions for useful immediate actions during this time of uncertainty.

1.  Introduction 

What if the global climate began to warm again? Would all the criticisms of the climate models be nullified and the dire predictions based on them be confirmed? No one knows when the present plateau in the mean surface air temperature will end nor whether the change will be warmer or cooler. This essay will argue that the climate models and their predictions should not be trusted regardless of the direction of future temperatures.

Global temperatures usually are described in terms of the surface air temperature anomaly, the deviation of the temperature at each site from a mean of many years that is averaged over the whole world, both land and oceans. The plots in Fig. 1 show how this average has changed since 1880 while the concentration of carbon dioxide (CO2) has steadily increased. The temperature rise that ran from 1978 to 1998 has stopped, contrary to expectations, as shown in Fig. 2 from the latest report of the Intergovernmental Panel on Climate Change (IPCC 2013). Some climatologists like to claim this discrepancy is not sufficient to invalidate their theories and models, but the recent proliferation of papers trying to explain it demonstrates that this plateau is a serious challenge to the claims of global disaster.

In this essay I will refer to the present leveling of the global temperature as a plateau rather than a pause or hiatus because the latter two imply we know temperatures will rise again soon. Also I prefer to describe CO2, methane (CH4), nitrous oxide (N2O), ozone (O3), and the chlorofluorocarbons (CFCs) as minor absorbing gases rather than greenhouse gases because glass houses become hot mainly by keeping the heated air from mixing with cooler air outside rather than by absorption in the glass. Atmospheric absorption by these gases definitely does warm the earth. The controversy is about how important they are compared with natural causes. We must remember that the effect of CO2 is proportional to the logarithm of its concentration while CH4 and N2O contribute according to the square root of concentration but are less abundant by factors of 200 and 1200 respectively. The CFCs act linearly, but the ones still increasing have less than a millionth the abundance of CO2.
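
The logarithmic and square-root dependences just mentioned can be put into numbers with the widely quoted simplified forcing expressions of Myhre et al. (1998) that earlier IPCC reports adopted. The short sketch below is only an illustration: the CH4-N2O band-overlap correction is omitted and the pre-industrial baselines are nominal values.

```python
import math

def forcing_co2(c_ppm, c0_ppm=280.0):
    """CO2 forcing in W/m^2: logarithmic in concentration (ppm)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def forcing_ch4(m_ppb, m0_ppb=722.0):
    """Approximate CH4 forcing in W/m^2: square root of concentration (ppb),
    with the CH4-N2O band-overlap correction omitted."""
    return 0.036 * (math.sqrt(m_ppb) - math.sqrt(m0_ppb))

def forcing_n2o(n_ppb, n0_ppb=270.0):
    """Approximate N2O forcing in W/m^2: square root of concentration (ppb),
    with the overlap correction omitted."""
    return 0.12 * (math.sqrt(n_ppb) - math.sqrt(n0_ppb))

# Each successive doubling of CO2 adds the same increment (~3.7 W/m^2),
# which is what the logarithmic dependence implies.
print(round(forcing_co2(560.0), 2))                        # first doubling
print(round(forcing_co2(1120.0) - forcing_co2(560.0), 2))  # second doubling
```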

IPCC (2013) prefers the term projections rather than predictions for future changes in temperature, but everyone who wishes to emphasize alarming consequences treats the projections as predictions so I will do so here.

Fig. 1. Global Average Temperature Anomaly (°C) (upper), and CO2 concentration (ppm) on Mauna Loa (lower) from http://www.climate.gov/maps-data by the U.S. National Oceanic and Atmospheric Administration. The CO2 curve is extended with ice-core data from the Antarctic Law Dome showing a gradual increase from 291 ppm in 1880 to 334 ppm in 1978. See ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/law/law_co2.txt.

Skeptics have used this continuing plateau to question whether CO2 is the primary driver of climate, so if temperatures begin to rise again, we can expect many claims of vindication by those who have concluded that human activity dominates. An increase is possible as we continue adding CO2 and similar absorbing gases to the atmosphere, while natural variability or a continuing decrease in solar activity might instead result in lower temperatures. It is a puzzle to know exactly what physical processes are maintaining such a remarkable balance among all the contributing effects since the beginning of the 21st century.

Here then are some reasons to continue to distrust the predictions of climate models regardless of what happens to global temperatures.

 2.  Time Scales

How long do we need to wait to separate a climate change from the usual variability of weather from year to year? The gradual rise in the global surface temperature from 1978 to 1998 appeared to confirm the statement in IPCC2007 p. 10 that, “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”. Then in 2009 when the temperature plateau became too obvious to ignore, Knight et al. (2009), in a report on climate by the American Meteorological Society, asked the rhetorical question “Do global temperature trends over the last decade falsify climate predictions?” Their response was “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

Fig. 2. Model Predictions and Temperature Observations from IPCC (2013 Fig. 11.9). Beginning in 2006, RCP 4.5 (Representative Concentration Pathway 4.5) labels a set of models for a modest rise in anthropogenic greenhouse gases corresponding to an additional radiative forcing of 4.5 W m-2 (about 1.3% of the globally averaged incoming solar radiation).

As the plateau continued climatologists extended the time scale. Santer et al. (2011) concluded that at least 17 years are required to identify human contributions. Whether one begins counting in 1998 or waits until 2001 because of a strong El Niño peak in 1998, the 15- and 17-year criteria are no longer useful. To identify extreme events that could be attributed to climate change in a locality, the World Meteorological Organization has adopted a 30-year interval (https://www.wmo.int/pages/themes/climate/climate_variability_extremes.php) while the American Meteorological Society defines Climate Change as “Any systematic change in the long-term statistics of climate elements (such as temperature, pressure or winds) sustained over several decades or longer” (http://glossary.ametsoc.org/wiki/Statistics). Now Solomon, as reported by Tollefson (2014), is saying that 50 to 100 years are needed to recognize a change in climate.

The temperature curve in Fig. 1 does have a net increase from 1880 to 2014, but if we are free to choose both the start date and the interval, wide ranges of slopes and differences are possible, so any comparison with climate models becomes rather subjective. If we do not understand the time scale, even if it differs from place to place, we cannot distinguish between the natural variations in weather and a climate change in which we want to identify the human component.
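
The sensitivity to the chosen start date is easy to demonstrate. The sketch below fits ordinary least-squares trends from several start years to a synthetic anomaly series (a nominal drift of 0.1 degree per decade plus noise, not the actual NOAA or HadCRUT record) and shows how widely the short-interval slopes scatter around the underlying rate.

```python
import numpy as np

# Synthetic annual anomalies: a steady 0.1 C/decade drift plus
# ENSO-like year-to-year noise.  Illustrative only, not observed data.
rng = np.random.default_rng(0)
years = np.arange(1950, 2015)
anoms = 0.01 * (years - 1950) + 0.12 * rng.standard_normal(years.size)

def decadal_trend(start, end=2014):
    """Ordinary least-squares trend in C per decade over [start, end]."""
    mask = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[mask], anoms[mask], 1)[0]

# Short intervals give slopes that scatter widely around the true 0.1,
# so the apparent trend depends strongly on the chosen start year.
for start in (1960, 1980, 1995, 2000, 2005):
    print(start, round(decadal_trend(start), 3))
```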

3.  Natural Versus Anthropogenic Contributions to Climate Change 

Among the multitude of explanations for the temperature plateau there are many that are based on natural causes not fully incorporated in the models. These effects include

  • a decreasing concentration of stratospheric water vapor that slowed the rise in surface temperatures (Solomon et al. 2010),
  • decadal climate variability (IPCC2013 SPM-10),
  • uncertainties in the contributions of clouds (IPCC 2013 9-3; McLean 2014),
  • the effects of other liquid and solid aerosols (IPCC 2013 8-4),
  • El Niño warming and La Niña cooling in the South Pacific Ocean (de Freitas and McLean, 2013 and references therein; Kosaka and Xie, 2013),
  • a multidecadal deep ocean sink for the missing heat (Trenberth and Fasullo, 2013; Chen and Tung, 2014),
  • the Atlantic multidecadal oscillation (Tung and Zhou, 2013),
  • a multidecadal climate signal with many inputs propagating across the Northern Hemisphere like a stadium wave (Kravtsov et al. 2014),
  • SO2 aerosols from moderate volcanic eruptions (Neely et al., 2013; Santer et al., 2014),
  • a decrease in solar activity (Stauning 2014), and
  • aerosols in pine forests (Ehn et al. 2014).

Also, as proposed by Lu (2013) and Estrada et al. (2013), there could be an indirect human effect of reduced absorption by CFCs resulting from the Montreal Protocol constraining their use. It is not the purpose of this essay to discuss the merits of specific hypotheses, but rather to list them as evidence of incompleteness in the present models. For example, Fig. 3 shows the dominance of El Niño warming events from 1978 to 1998 that could account for some of the temperature increase after 1978 as well as the 1998 spike.

When the rising temperatures of the 1980s coincided with an increasing concentration of CO2, the model makers assumed that human activity was the primary cause, never thoroughly investigating natural contributions. The next step is to assess which natural effects are significant and add them to the models. Climate predictions that do not account for the relative importance of natural and human effects are useless because we cannot tell whether any proposed change in human activity will have a noticeable effect.

Fig. 3. Multivariate index for the El Niño Southern Oscillation in the Pacific Ocean from the U.S. National Oceanic and Atmospheric Administration. The index combines sea-level air pressure, the components of the surface wind, sea surface and air temperatures, and the cloudiness fraction. The upward-pointing red areas indicate El Niño warming intervals and the downward-pointing blue ones La Niña cooling.

 4.  Parameterization in Place of Physics

One often sees the claim that the climate models are based on solid physical principles. This is true in a broad sense, but there are many phenomena that are too complicated or have too small a scale for direct coding. Instead each General Circulation Model (GCM) presented by the IPCC depends on hundreds of parameters that are adjusted (tuned) to produce a reasonable match to the real world. According to IPCC2013 (9-9), “The complexity of each process representation is constrained by observations, computational resources, and current knowledge.” The availability of time on supercomputers limits the ranges of parameters and the types of models, so subjective choices could have influenced the available selection.

IPCC2013 (9-10) further elaborates on the challenges of parameterization, stating, “With very few exceptions modeling centres do not routinely describe in detail how they tune their models. Therefore the complete list of observational constraints toward which a particular model is tuned is generally not available.” and “It has been shown for at least one model that the tuning process does not necessarily lead to a single, unique set of parameters for a given model, but that different combinations of parameters can yield equally plausible models.”

Parameters are necessary in complex climate modeling, but they carry the risk of producing a false model that happens to fit existing observations but incorrectly predicts future conditions. As noted below in Sect. 8, a model cannot be trusted if it does not make correct predictions of observations not used in determining the parameters.

5.  Uncertainty in the Climate Sensitivity

The contribution of CO2 to global temperatures usually is quantified as a climate sensitivity, either the Equilibrium Climate Sensitivity (ECS) or the Transient Climate Response (TCR). ECS is the increase in the global annual mean surface temperature caused by an instantaneous doubling of the atmospheric concentration of CO2 relative to the pre-industrial level, after the model relaxes to radiative equilibrium, while TCR is the temperature increase averaged over the 20 years centered on the time of doubling when CO2 increases at 1% per year, compounded. The appropriate perturbation of a climate model can generate these numbers once the parameters are chosen. The TCR is a more useful indicator for predictions over the next century because reaching equilibrium can take a few hundred years.
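
A small piece of arithmetic, included here only to make these definitions concrete, shows where the 20-year TCR averaging window sits in such an idealized 1% per year run.

```python
import math

# At a compounded 1% per year increase, CO2 doubles after
# ln(2) / ln(1.01) years, i.e. roughly year 70 of the run.
years_to_double = math.log(2.0) / math.log(1.01)
print(round(years_to_double, 1))  # ~69.7

# TCR is the global-mean warming averaged over the 20 years centered
# on that doubling (roughly years 60-80), whereas ECS requires running
# the model on to a new radiative equilibrium, which can take centuries.
tcr_window = (years_to_double - 10.0, years_to_double + 10.0)
print(tuple(round(y, 1) for y in tcr_window))
```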

IPCC2013 Table 9.5 quotes a mean TCR = 1.8° (1.2°-2.4°) C and ECS = 3.2° (1.9°-4.5°) C with 90% confidence intervals for a selection of models. This ECS is close to the most likely value of 3° and range of 2.0° to 4.5° adopted in IPCC2007 SPM-12, while IPCC2013 SPM-11 widened the range to 1.5° to 4.5° C, presumably in recognition of the temperature plateau. IPCC2013 SPM-10 admitted there may be “in some models, an overestimate of the response to increasing greenhouse gas and other anthropogenic forcing (dominated by the effects of aerosols)”, but retained the alarming upper limit of 4.5° C from IPCC2007.

Alternative estimates are possible directly from the observed changes in temperature with the increasing concentration of CO2. Huber and Knutti (2012) obtained TCR = 3.6° (1.7°-6.5° 90%) consistent with the models, but others derived lower values. Otto et al. (2012) reported TCR = 1.3° (1.2°-2.4° 95%), ECS = 2.0° (1.2°-3.9° 95%), Lewis and Curry (2014) derived TCR = 1.33° (0.90°-2.50° 95%), ECS = 1.64° (1.05°-4.05° 95%), and Skeie et al. (2014) found TCR = 1.4° (0.79°-2.2° 90%), ECS = 1.8° (0.9°-3.2° 90%). As expected, the TCR values always were less than the ECS results.

These wide uncertainties show that we do not yet know how effective CO2 will be in raising global temperatures. If ECS and TCR really are close to the lower end of the quoted ranges, temperatures will continue to increase with CO2 but at a rate we could adapt to without serious economic damage. The recession of 2008 did not have a noticeable effect on the CO2 curve in Fig. 1.
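
To see roughly what the low and high ends of these ranges imply, one can use the transient scaling ΔT ≈ TCR × log2(C/C0), a back-of-the-envelope relation that ignores non-CO2 forcings, aerosols and ocean lags. The sketch below (illustrative concentrations only) compares the warming implied by a rise from about 400 ppm today to 560 ppm, double the pre-industrial level.

```python
import math

def transient_warming(tcr, c_ppm, c0_ppm):
    """Rough transient warming (C) for a CO2 rise from c0 to c, scaling
    the TCR by the number of doublings.  Ignores non-CO2 forcings,
    aerosols and ocean lags -- a back-of-the-envelope check only."""
    return tcr * math.log(c_ppm / c0_ppm) / math.log(2.0)

# Warming implied for a rise from ~400 ppm today to 560 ppm,
# for low, central and high TCR values quoted above.
for tcr in (1.33, 1.8, 2.4):
    print(tcr, round(transient_warming(tcr, 560.0, 400.0), 2))
```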

 6.  Applying Statistics to Biased Samples of Models

Fig. 4 is typical of the IPCC plots of model outputs showing a wide range of possibilities, some of which already represent averages for perturbed parameters of individual models (IPCC 2013 Box 9.1 iii and Sect. 9.2.2.2). Contrary to the basic principles of statistics, the authors of the IPCC Report presumed that the thick red lines representing arithmetic averages have meaning. Averaging is appropriate for an unbiased sample of measurements, not for a selected set of numerical models. Physicists do average the results of Monte-Carlo calculations, but then the inputs must be chosen randomly over the region of interest. This is not so for these climate models because, as noted in Sect. 4, we do not know the selection criteria for most of them. Also, referring to these multimodel ensembles (MME), IPCC (2013 9-17) states, “the sample size of MME’s is small, and is confounded because some climate models have been developed by sharing model components leading to shared biases. Thus, MME members cannot be treated as purely independent.” The following page in the IPCC report continues with “This complexity creates challenges for how best to make quantitative inferences of future climate.”
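
A toy Monte Carlo calculation (purely illustrative numbers) makes the point about shared components: the independent part of the model errors shrinks as the ensemble grows, but any bias the models have in common survives the averaging untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 0.0          # the quantity the ensemble is trying to estimate
shared_bias = 0.5    # an error common to models built from shared components
n_models = 30

# Each "model" = truth + shared bias + its own independent error.
independent = rng.normal(0.0, 0.3, n_models)
ensemble = truth + shared_bias + independent

print(round(ensemble.mean(), 3))     # stays near 0.5: the shared bias remains
print(round(independent.mean(), 3))  # near 0: only this part averages away
```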

Fig. 4. In this plot from IPCC (2013 Fig. 9.8), the thin colored lines represent individual models from the Climate Model Intercomparison Project 5 (CMIP5) and the simpler Earth System Models of Intermediate Complexity (EMIC) and the thick red lines their means, while the thick black lines represent three observed temperature sequences. After 2005 the models were extended with the modest RCP 4.5 scenario used in Fig. 2. The horizontal lines at the right-hand side of each graph represent the mean temperature of each model from 1961 to 1990 before all models were shifted to the same mean for the temperature anomaly scale at the left. The vertical dashed lines indicate major volcano eruptions.

Knutti et al. (2010) discussed the values of multimodel comparisons and averages and added the cautionary statement, “Model agreement is often interpreted as increasing the confidence in the newer model. However, there is no obvious way to quantify whether agreement across models and their ability to simulate the present or the past implies skill for predicting the future.” Swanson (2013) provided the example of a possible selection bias in the CMIP5 models due to a desire to match the recent arctic warming and remarked that, “Curiously, in going from the CMIP3 to the CMIP5 projects, not only model simulations that are anomalously weak in their climate warming but also those that are anomalously strong in their warming are suppressed”.

Furthermore the comparisons in Fig. 4 as well as in Fig. 2 attempt to relate the model temperatures to the observations by calculating temperature anomalies for the models without accounting for all the extrapolations of the measurements to cover poorly sampled regions and epochs. Essex, McKitrick and Andresen (2007) have questioned the validity of the global temperature anomaly as an indicator of climate, but since the IPCC continues to compare it with climate models, we should expect agreement. Instead, outside the calibration interval, we see systematic deviations as well as fluctuations in the models that exceed those in the observations.

 7.  Nonlinearity and Chaos in the Physics of Climate

Climate depends on a multitude of non-linear processes such as the transfer of carbon from the atmosphere to the oceans, the earth and plants, but the models used by the IPCC depend on many simplifying assumptions of linearity between causes and effects in order to make the computation feasible. Rial et al. (2004) have discussed the evidence for nonlinear behavior in the paleoclimate proxies for temperature and in the powerful ocean-atmosphere interactions of the North Atlantic Oscillation, the Pacific Decadal Oscillation and the El Niño Southern Oscillation in Fig. 3.

Frigg et al. (2013) have described some of the approximations in the GCM and their application to derivative models for regional prediction and concluded that “Since the relevant climate models are nonlinear, it follows that even if the model assumptions were close to the truth this would not automatically warrant trust in the model outputs. In fact, the outputs for relevant lead times 50 years from now could still be seriously misleading.” Linearity can be a useful approximation for short-term effects when changes are small as in some weather forecasting, but certainly not for the long-term predictions from climate models.

When the development of a physical system changes radically with small changes in the initial conditions, it is classed as chaotic. Weather systems involving convection and turbulence are good examples; their chaotic behavior is why computer forecasts become unreliable after a week or two. It was an early attempt to model weather that led the meteorologist Edward Lorenz (1963) to develop the theory of chaos. The IPCC Report (2013 1-25) recognizes the problem with the statement “There are fundamental limits to just how precisely annual temperatures can be projected, because of the chaotic nature of the climate system.” However, there is no indication of how long the models are valid even though predictions often are shown to the year 2100.
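
The sensitivity Lorenz found can be reproduced in a few lines. The sketch below integrates his 1963 three-variable system with a simple fixed-step scheme (adequate only for illustration) from two initial states that differ by one part in a million and prints how quickly they separate.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# Two runs whose initial x values differ by one part in a million.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0 + 1e-6, 1.0, 1.0])
for step in range(1, 8001):          # 40 time units at dt = 0.005
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        print(step, round(float(np.linalg.norm(a - b)), 5))
# The separation grows by orders of magnitude, so beyond a certain lead
# time the two forecasts are no more alike than two randomly chosen states.
```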

The difficulties with non-linear chaotic systems are especially serious because of the inevitable structural errors in the models due to the necessary approximations and incomplete representations of physical processes. Frigg et al. (2014) have described how a non-linear model even with small deviations from the desired dynamics can produce false probabilistic predictions.

 8.  The Validation of Climate Models

How do we know that the models representing global or regional climate are sufficiently reliable for predictions of future conditions? First they must reproduce existing observations, a test current models are failing as the global temperatures remain nearly constant. Initiatives such as the Coupled Model Intercomparison Project 5 (CMIP5) can be useful but do not test basic assumptions such as linearity and feedback common to most models. Matching available past and present observations is a necessary condition, but never can validate a model because incorrect assumptions also could fit past data, particularly when there are many adjustable parameters. One incorrect parameter could compensate for another incorrect one.
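
The danger posed by many adjustable parameters can be illustrated with a toy fit (synthetic data, not a climate model): a model with many tunable coefficients matches the calibration period better than a simpler one, yet does far worse on the data it never saw, which is the only test that matters.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(2)

# A synthetic record: a slow drift of 0.1 per decade plus noise.
t_cal = np.arange(0, 60)                 # calibration period
t_new = np.arange(60, 80)                # data withheld from the fit
signal = lambda t: 0.01 * t
observed = signal(t_cal) + 0.1 * rng.standard_normal(t_cal.size)

# Many adjustable parameters (degree 9) versus a simple trend (degree 1).
tuned = Polynomial.fit(t_cal, observed, 9)
simple = Polynomial.fit(t_cal, observed, 1)
rms = lambda model, t, truth: float(np.sqrt(np.mean((model(t) - truth) ** 2)))

print(rms(tuned, t_cal, observed) < rms(simple, t_cal, observed))  # True
print(round(rms(tuned, t_new, signal(t_new)), 2),                  # large
      round(rms(simple, t_new, signal(t_new)), 2))                 # small
```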

The essential test of any physical theory is to make predictions of observations not used in developing the theory. (Of course success in future predictions never is sufficient in the mathematical sense.) In the complicated case of atmospheric models, it is imperative to predict future observations because all past data could subtly influence the permitted ranges of parameters. Unfortunately this test will take the time needed to improve the models, make new predictions, and then wait to see what the climate actually does. Fyfe et al. (2013) aptly described the situation at the end of their paper. “Ultimately the causes of this inconsistency will only be understood after careful comparison of simulated internal climate variability and climate model forcings with observations from the past two decades, and by waiting to see how global temperature responds over the coming decades.”

Because natural contributions were included inadequately, we have lost the recent decades as an opportunity for comparing predictions with observations. It is time for a new start.

 9.  What Should We Do Now?

Whether global temperatures rise or fall during the next decade or two, we will have no confidence in the predictions of climate models. Should we wait, doing nothing to constrain our production of CO2 and similar gases while risking serious future consequences? If the climate sensitivity to CO2 is near the lower edge of the estimates, the temperature rise could be manageable. Even so, many people would argue for major investments to reduce our carbon footprints as insurance. However, as in all situations of risk, we can purchase too much insurance, leaving no resources to cope with unforeseen developments in climate or other environmental problems. Instead, until there is a new rise in the temperature curve, we have time to pause and assess which projects can be useful and which may be ineffective or even harmful. Here are some proposals.

1) Return to rational discussion, listening to challenges to our favorite ideas and revising or abandoning them as needed, realizing that is how science progresses.

2) Discuss what are optimum global temperatures and CO2 concentrations before we set limits because cold weather causes more fatalities than hot weather and there is evidence that the present warm climate and enhanced CO2 are contributing to increased plant growth (Bigelow et al. 2014).

3) Consider a topic often avoided – the effect of increasing population on the production of CO2 and general sustainability.

4) Cancel subsidies and tax benefits for biofuels because corn competes with food production in temperate zones and palm trees reduce jungle habitat in the tropics.

5) Increase the use of nuclear power, which produces no CO2.

6) Stop asserting that carbon emissions by the industrialized countries are the primary cause of previous warming or sea level rise because undeveloped countries are claiming reparations on this unvalidated premise. (United Nations Climate Change Conference, Warsaw, Poland, 2013 Nov. 11-23).

7) Cease claiming that rising temperatures are causing more occurrences of extreme weather because the evidence is not there (Pielke 2014).

8) Admit that we do not yet understand our climate well enough to say that the science of global warming is settled.

At this stage in the development of the climate models used by the IPCC, it is unclear whether they ever will be useful predictors because of the extra computing power needed to reduce the grid scale and time steps in order to include adequately more of the processes that determine weather and climate. There has been no improvement since the ECS estimate of 1.5° to 4.5° C in IPCC (1990 Sect. 5.2.1) in spite of an exponential increase in computer capabilities over 23 years. Curry (2014) has emphasized the need to investigate alternatives to the deterministic GCMs. Including the stochastic nature of climate would be an important next step.

The present temperature plateau has been helpful in identifying the need to consider natural contributions to a changing climate, but the basic problems with the models have been present since their beginning. Whether or not the plateau continues, the current models used by the IPCC are unreliable for predicting future climate.

References [link]

Biosketch. Donald Morton has a Ph.D. in Astrophysics from Princeton, and served as the Director General of the Herzberg Institute for Astrophysics of the National Research Council of Canada.  His web page with cv and list of publications is found here [link].  During more than 60 years, his research has followed a variety of topics including spectroscopic binaries, stellar structure, mass transfer in binaries, stellar atmospheres, the interstellar medium, UV spectra from space, QSO spectra, instrumentation for space and ground-based telescopes, observatory management, and the compilation of atomic and molecular data. Now as a researcher emeritus his current interests are theoretical atomic physics, solar physics and astronomical contributions to climate change.

JC note:  As with all guest posts, keep your comments relevant and civil.

598 responses to “Will a return of rising temperatures validate the climate models?”

  1. Reblogged this on Kirk M. Maxey: Blog and Website.

  2. Our current arguments cluster around assumed correlations that even if perfect are not necessarily indications of causality. It might be interesting to take an entirely different approach that essentially makes no assumptions. We could try using fuzzy logic (hate the term, love the math) to seek causal relationships among climate response variables and possible climate-forcing variables, alone and in combination. It would require a great deal of computing power but would have the advantage of eliminating selection biases. There may certainly be problems with non-linearities, but as long as any relationship was monotonic the technique could work.

  3. Concerning #5, nuclear power clearly lies in the future. But whether we should be investing now in gen 3 (e.g. the Westinghouse AP 1000 design), or investing now in USC coal, CCGT, and nuclear research into Gen 4 designs for the future is an unresolved energy policy question. With the new observational estimates of TCR, it appears that China’s choice of primarily USC coal now plus gen 4 nuclear research (they are building the world’s first pilot scale LFTR) is a wise one. Taking Xi’s commitment to Obama at face value (yes, a stretch) suggests China will be ‘going nuclear’ by 2030 after its gen 4 nuclear research matures.
    For several gen 4 possibilities that appear underfunded in the US see essay Going Nuclear in Blowing Smoke: essays on energy and climate. They could all be funded from existing research budgets to at least pilot scale (including the Skunkworks high B modular fusion invention) by simply cancelling the funding for the NIF and ITER fusion boondoggles highlighted in the essay.

    • Leonard Weinstein

      You have left out LENR, which seems to be coming out in a short time frame. Go to e-cat.com for more information. There is also possible regional distributed use of small scale nuclear reactors (DEER) that seem unconditionally safe and also minimize long range power transmission needs.

      • Curious George

        It is always in a short time frame. e-cat dramatically switched to a completely different nuclear reaction in the last year, without changing anything – not even fuel – in their apparatus. Flexibility is a big virtue.

  4. Thank you, Dr. Morton, for this learned commentary. A few months ago I reviewed an article on the same topic by Cato Institute scientists Patrick Michaels and Chip Knappenberger. They looked at how well IPCC models would match observations over the 80-year period from 1951 to 2030 under three scenarios of how global surface temperature might behave between now and 2030.

    In Scenario 1, the “plateau” continues and the warming rate in 2014-2030 remains what it was in 2001-2013 (essentially zero). In Scenario 2, warming resumes at the long-term 1951-2012 rate (0.107ºC/decade). In Scenario 3, warming resumes at the 1977-1998 rate (0.17ºC/decade) — the rate from the start of recent global warming until the plateau. They found that even in the warmest scenario, fewer than 5% of model simulations of the long-term, 80-year trend agree with observations by 2020 and fewer than 2.5% agree by 2030. Their analysis is posted here: http://www.cato.org/blog/clear-example-ipcc-ideology-trumping-fact

    What, I wondered, would be the result if warming resumes at 0.265ºC/decade — the rate during 1984-1998, which the IPCC identifies as the 15-year period with the most rapid warming?

    I asked Mr. Knappenberger to test the models’ agreement with long-term observations using an alternative scenario in which warming resumes at 0.265ºC/decade. He kindly obliged.

    Here’s what he found. If warming resumes at the 1984-1998 rate, the models never reach outright statistical failure (<2.5%). Nonetheless, they perform very poorly, matching observations less than 5% of the time in 2020 and continuing to do so through 2030. My blog post provides a bit more detail and helpful graphics: http://www.globalwarming.org/2014/08/17/can-natural-variability-save-climate-models/
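
    (For readers who want a rough sense of how these scenarios translate into long-term trends, here is a back-of-the-envelope sketch. It is not the Michaels/Knappenberger model comparison, only the arithmetic of splicing a flat plateau and an assumed resumption rate onto the long-term record and refitting the 1951-2030 trend; the rates are the ones quoted above and the dates are approximate.)

    ```python
    import numpy as np

    def trend_1951_2030(resume_rate, base_rate=0.107,
                        plateau_start=2001, resume_start=2014):
        """Rough 1951-2030 trend (C/decade) if warming at base_rate is
        interrupted by a flat plateau and then resumes at resume_rate.
        Back-of-the-envelope only, not the Michaels/Knappenberger analysis."""
        years = np.arange(1951, 2031)
        warmed = np.minimum(years, plateau_start) - 1951   # years of early warming
        resumed = np.maximum(years - resume_start, 0)      # years after resumption
        temps = base_rate / 10.0 * warmed + resume_rate / 10.0 * resumed
        return 10.0 * np.polyfit(years, temps, 1)[0]

    # The resumption rates discussed above, in C per decade.
    for rate in (0.0, 0.107, 0.17, 0.265):
        print(rate, round(trend_1951_2030(rate), 3))
    ```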

    • Indeed, Marlo, I would think that a necessary condition for validation is that the models can explain the pause, which is never going away, even if we get another spurt of warming. Science is about specifics.

    • there is no pause. At least no statistically significant pause.

  5. Thanks for the great essay Dr. Morton

  6. The climate establishment may have to keep concerns about warming simmering on the back burner for another decade or more; and, the EU falling off a cliff and becoming a Russian puppet state by then will either be a sobering wake-up call to the rest of the West or simply be looked at as Eurocommie government planning ending up as planned: out of gas.

    • Wagathon said

      ‘….and, the EU falling off a cliff and becoming a Russian puppet state by then will either be a sobering wake-up call to the rest of the West or simply be looked at as Eurocommie government planning ending up as planned: out of gas.’

      What??!!!

      tonyb

      • Something has changed since the UN-IPCC showcased the iconic ‘hockey stick’ graph of Michael Mann that essentially blamed America as the unapologetic creator of evil modernity’s CO2, which beyond doubt was the cause of global warming. Some things have not changed: most school teachers sided with Mann and the IPCC and most still do, despite facts to the contrary. There’s a price to be paid — psychologically and socio-economically — for the Left’s Big Lie.

      • Max_OK, Citizen Scientist

        That will teach Tony not to ask you questions.

    • Wagathon,

      Yes, a dog’s breakfast mix of leftist orgs has adopted the global warming narrative as it’s own, as it provides plausible cover to their agenda.

      Question: Just what is a “dog’s breakfast” anyway? I know it’s not good.

      • Regurgitated food from the previous day’s meal, ready-to-eat again today!

      • Something you cooked up and didn’t turn out so well so the dog gets it.

      • Where I came from, dogs were generally too poor to get breakfast. They only got “dinner” around 5pm.

        Whatever. I always thought “a dogs dinner” was what the bowl and surrounding area looked like a few seconds after you had put a full bowl of food on the floor in front of said dog: An indescribable mess. Probably like Jackson Pollock’s bedroom.

  7. Nice essay, good job hitting many of the big points.
    Of course, even if we had a perfect model, that model would have to be discretized to solve, and controlling the error in integrating such a nonlinear system forward in time would be extremely challenging, most likely impossible.
    “But the averages are correct even if the solution isn’t,” the claim goes…. And the mathematical theory for why this is so? Nonexistent, because it’s not true.

    • The “mathematical theory” is primarily the First Law of Thermodynamics — conservation of energy.

      • So you claim that any equilibrium CO2 sensitivity “validates” the models? With that kind of standard, there are tons of models all over science that are “validated.”

        Just because there are some (very misguided) people in this forum that believe that the CO2 sensitivity is zero doesn’t mean any nonzero value “validates” GCMs.

      • The First Law of Thermodynamics is the mathematical theory by which one can claim that the average of a collection of models is correct, even though none of the individual models can be shown to be the one with correct physics and they don’t agree?

        I would be very interested in hearing you expand on that further. Not expecting it, but it would be interesting.

      • “The “mathematical theory” is primarily the First Law of Thermodynamics — conservation of energy.”

        Huh?

      • So all conservation laws are perfectly solvable?

      • Obviously I forgot average temperature is a conserved quantity, hate it when I do that.

      • David Springer

        Conservation of energy isn’t a theory. No wonder you’re unemployable.

  8. Donald: Short intervals aren’t indicative of GHG-caused climate change. How do you expect climate models to predict a short interval (20 yrs) without knowing the ENSOs that will occur, the volcanic eruptions, changes in solar variability, etc, in that time period, which can be significant in the short-term but not in the long-term?

    In short, how do you expect modelers to foresee the immediate future?

    • We never expected climate modelers to foresee the immediate future, davey. We have not been disappointed. Did the modelers tell us that we should not be surprised, if there was a 17 year pause? Or did they tell us that we were in store for accelerating warming that was already in the pipeline? And we were to expect stronger and more frequent El Ninos, regular Katrinas, Manhattan flooded, droughts, famine , plague, cats sleeping with dogs? Don’t you see the problem here, davey? The pause is killing the cause.

    • David
      Thank you for the question. I do not expect the climate models to predict short-term changes, though as my quote from Knight et al (2009) shows, proponents recently had thought 15 years would be sufficient. My challenge to the model makers is to tell us now how many years are needed for a valid average and not after the next shift in the temperature pattern.
      Don

      • A couple of follow on points.
        (1) It isn’t clear to me that the modellers have fully specified how the models are to be verified, and that is something that should be intrinsic to model design. Part of this is an operational definition of “climate”.
        (2) With the recent history the policy imperative is to understand how the weather evolves on decadal timescales rather than the ~100-year timescales that GCMs are now being held out as addressing. We probably know enough about the latter given the circumstances we now find ourselves in. This suggests a significant reduction of effort into the current GCMs and more effort into the short-term understanding.

      • The payoff table is the most crucial issue. There is vanishingly small justification for high-cost ineffectual expenditures now to avoid un-empirical events misattributed to unobserved warming. The smart bet (both odds and benefit) would be on cooling, and coping with it.

      • Knight et al would require trends adjusted for ENSO.

    • Not to put too fine a point on it, but per Dr. Spencer there are somewhere in the vicinity of 90 models with a temperature spread of more than 3 degrees. So, which one are we talking about, or are we talking about the average? The spread alone should cause a pause (pun intended) in some of the more outrageous comments. If you are going to pick one to defend, perhaps it would be wise to shut your eyes and pick the one that most closely agrees with observations. No matter which you select, it should not be called science.

    • Well, before the pause, you guys were harping that CO2 was the driver of warming and would swamp any natural variability. What changed?

    • “How do you expect climate models to predict a short interval (20 yrs) without knowing the ENSOs that will occur, the volcanic eruptions, changes in solar variability, etc, in that time period, which can be significant in the short-term but not in the long-term? ”

      Because they told us that ENSO was noise, that volcanic eruptions were random, and that solar variability was insignificant. They told us that all these things averaged out over the long term.
      They didn’t specify what “long term” meant. It appears to have started at 10 years (according to Gavin @ RC), then changed to 15, then 17. What is it now? 30 years? 50? Was it always 30 years and they just “winged it”? Who knows?
      None of which means they are wrong, let alone evil. However, those of us in the real world know what scam-artists are like – from our own experiences. And they are sure ACTING like scam-artists…

    • David Springer

      David Appell | December 15, 2014 at 12:13 pm | Reply

      Donald: Short intervals aren’t indicative of GHG-caused climate change. How do you expect climate models to predict a short interval (20 yrs) without knowing the ENSOs that will occur, the volcanic eruptions, changes in solar variability, etc, in that time period, which can be significant in the short-term but not in the long-term?

      In short, how do you expect modelers to foresee the immediate future?

      How do you know that the warming from 1980 – 2000 wasn’t due to a below average number of volcanic eruptions, above average solar activity, ENSO, and so forth?

      Duh.

    • You seem to describe the ENSO as an input, but your conclusion (excuse!) seems to indicate that it makes the GCMs incomplete. Which is it? If it is an input, then as it unfolds put it into the GCM machinery and evaluate the skill of the model. Otherwise you are saying that if there were no ENSO the GCMs would be skillful.

  9. John Smith (it's my real name)

    thanks for this paper
    “Discuss what are optimum global temperatures and CO2…”
    Tonyb has asked this question many times and is usually ignored
    at least I’ve never seen an answer
    I don’t think there are such numbers
    even if there were…say we pick 1880
    how ya’ gonna get there and how ya’ gonna stay when ya’ do?
    we should be careful what we wish for
    I think there is a profound philosophical deficit surrounding this subject
    imagining that we can micromanage the biosphere to this extent is fantastic hubris IMHO
    unintended and unforeseen consequences lurk

    • I’d be careful of wishing Africa back to the 1970s, Australia back to the half-century after 1895, Texas to the 1950s, California to the whole medieval period, most of the world back to the late 1870s (ugh)…and all the world (but China especially) back to the 1930s. I’d be extra careful of sending India back to any of those periods of major monsoon failure (1770s, 1630s etc etc)

      As for the LIA, it was very real, and you don’t want to go there at all. Especially China. Mind you, Western Europe in the lead-up to the Hundred Years War or the French Revolution was likely no place to be, if you like to eat. Better dial in some other climate, maybe the MWP, the period that got squashed by a hockeystick. Or maybe just stick with this climate?

      But most commentators and experts on climate change are so little interested in actual climate or actual climate change it is vain even to raise these matters. They reserve their knowingness and certainties for stuff that hasn’t happened yet. Stuff that actually happened just gets in the way of what they bizarrely call their science.

      Observation and history are sooooo Last Enlightenment.

      • It is interesting to note that the LIA led to significant innovations in farming technology: enclosure (end of the commons), crop rotation, green manures, legumes, root crops, etc. Only the French hung on to archaic practices, which, some argue, led to the bloodletting and rampant murder – by consensus, of course, like so many lynchings – of the French Revolution.

        Adaptation is good. Anyone care to guess what will be the most significant innovations between now and 2100?

      • Better Heat Pumps? Works fine in warm and cool climates!

      • Justin, I guess there will be direct responses to actual problems, along with the usual blunderings.

        The problem with guessing future innovations is that not one person who has ever lived has ever known what future climate will be. This, for the very obvious reason that climate is fantastically fluid, vast and complex…and what gets called climate science is so stubbornly based in narrow dogma and theory. For example, nearly all the earth beyond the crust and upper hydrosphere is off limits to mention, let alone contemplation. If there’s a bit of levelling or cooling (not that the Pause matters) someone might mumble vaguely of volcanoes – just in that tight period! – to explain away some unwelcome numbers. Now volcanoes are new, like Lake Effects and polar vortices!

        Maybe we have more in common with the Medieval Warming than just the warming.

      • I prefer to call the Little Ice Age by its other non climate related name.

        The Renaissance.

      • Long Renaissance!

      • bob droege,

        15th & 16th century Renaissance – followed by 17th century Enlightenment

      • Three centuries of dance, play, deep thought and fine art!

        Take us back there now, climate diallers.

    • Good point. Note from the graph, in the link below, that CO2 has been at an historic minimum for the Phanerozoic during our current inter-glacial. The PT extinction was bad news but its cause is still under study. The earth never turned into Venus, and the location of tectonic plates is very different now from the days of Pangea. I wonder about the limits placed on ocean currents by the modern configuration of the continents.

      http://deforestation.geologist-1011.net/PhanerozoicCO2-Temperatures.png

  10. Matthew R Marler

    Thank you Dr Morton. This is a good essay.

  11. This is a great summation and background essay. I especially appreciate reading the thoughts of someone who has such an extensive knowledge of this subject.

  12. The whole premise of this article is that there is a pause in global warming. If there is no real pause, the article is meaningless.
    In fact there is no pause. Tamino at Open Mind has looked to see whether the global temperature trend has changed since 1970 in the following post:

    http://tamino.wordpress.com/2014/12/09/is-earths-temperature-about-to-soar/

    The statistics say that since 1970, there has been no change in the global warming trend, and all the fluctuations in short term rates of change are due to noise. The plateau that we can observe in the maximum annual temperatures observed in the last few years is negated by a recent increase in the minimum observed temperatures.

    Another look at this question is described on the RealClimate web site by Stefan Rahmstorf, who shows a similar analysis by the mathematician Niamh Cahill of the School of Mathematical Sciences, University College Dublin, which he calls change point analysis. The only breaks in temperature trends detected by this analysis are the years 1912, 1940 and 1970.

    http://www.realclimate.org/index.php/archives/2014/12/recent-global-warming-trends-significant-or-paused-or-what/

    In addition, the uptake of heat in the ocean between 0 and 2000 meters shows that the earth is still heating up without a pause. Most of the heat retained by the earth is absorbed in the ocean.

    It is surprising that an astrophysicist neglects to mention these very important points and simply eyeballs the data to see what he wants.

    • Wherever did Dr. Morton get the idea that there has been a pause in the alleged global warming?

      https://search.yahoo.com/yhs/search?p=pause+in+global+warming&ei=UTF-8&hspart=mozilla&hsimp=yhs-001

      Tamino and realclimate are outliers. Make that outliars.

    • eadler2
      Whether or not there is a pause, the present trend in the global mean surface temperature is not what was expected from a steadily rising concentration of CO2 in our atmosphere, demonstrating that the models failed to include some important natural effects. Nevertheless, the main point of my essay is that even if the models happen to agree with the present observed temperature curve, there are many reasons we should not trust the models to predict future climate.
      Don

      • We don’t need sophisticated models to know the rate at which the earth is warming. The change in ocean heat content tells the story. The oceans absorb 90% of the heat retained by the earth. The amount of ocean heating down to 2000 m tells the story that the earth has been gaining heat and the rate has not decreased since 1990.
        http://www.realclimate.org/index.php/archives/2013/09/what-ocean-heating-reveals-about-global-warming/

      • Eadler2

        You argue that the oceans are heating as the result of increased CO2?

        The following is from your realclimate link:

        “If the greenhouse effect (that checks the exit of longwave radiation from Earth into space) or the amount of absorbed sunlight diminished, one would see a slowing in the heat uptake of the oceans. The measurements show that this is not the case.”

        Do you believe this? The greenhouse effect checks the exit of LW into space?

        Keep Warm,

        Richard

      • On top of the ocean heating, we can look at the outgoing radiation from the atmosphere, by satellite, to see that frequencies associated with water vapor and CO2 have reduced upward emissions. These same frequencies are associated with enhanced downward emissions toward the surface, showing that the greenhouse effect has been intensified.

        The difficult thing for the models to predict is ocean currents. One paper which includes actual ENSO data in addition to the models shows that this accounts for the temperature variations that some people like to call a “pause” in global warming.
        http://www.nature.com/news/tropical-ocean-key-to-global-warming-hiatus-1.13620

      • ‘One important development since the TAR is the apparent unexpectedly large changes in tropical mean radiation flux reported by ERBS (Wielicki et al., 2002a,b). It appears to be related in part to changes in the nature of tropical clouds (Wielicki et al., 2002a), based on the smaller changes in the clear-sky component of the radiative fluxes (Wong et al., 2000; Allan and Slingo, 2002), and appears to be statistically distinct from the spatial signals associated with ENSO (Allan and Slingo, 2002; Chen et al., 2002). A recent reanalysis of the ERBS active-cavity broadband data corrects for a 20 km change in satellite altitude between 1985 and 1999 and changes in the SW filter dome (Wong et al., 2006). Based upon the revised (Edition 3_Rev1) ERBS record (Figure 3.23), outgoing LW radiation over the tropics appears to have increased by about 0.7 W m–2 while the reflected SW radiation decreased by roughly 2.1 W m–2 from the 1980s to 1990s (Table 3.5)…

        Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’ http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch3s3-4-4-1.html

        ‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf

        It seems unlikely that they have the slightest clue.

      • I think I’ve never heard so loud
        The quiet message in a cloud.
        ======================

      • The steadily rising forcing leads to a combination of increasing surface temperature and rising ocean heat content. The rise in ocean heat content in the last 20 years has been monotonic and this is the most obvious impact of the changing forcing in the period. A post on surface temperature is not complete without taking this into consideration, otherwise people are misled into believing the forcing is having no effect at all. We have to drum into people that they need these twin measures. The effect is not just one-dimensional. Neither alone tells the full story, but between them, they show most of the effect of forcing.
        http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content2000mwerrpent.png

      • Donald Morton | December 15, 2014 at 2:10 pm |
        even if the models happen to agree with the present observed temperature curve there are many reasons we should not trust the models to predict future climate.
        Don

        IMHO the main reason to not trust the models is they are primarily CO2 driven and there is no data showing temperature is driven by CO2, especially anthropogenic CO2. Indeed the data that does exist suggests the CO2 is driven by temperature. Absent data showing causation, which are the arbiter of scientific disputes, little can be learned from temperatures going up or down.

      • http://curryja.files.wordpress.com/2014/01/presentation3.jpg

        ‘Time series of annual average global integrals of upper ocean heat content anomaly (10^21 J, or ZJ) for (a) 0–100 m, (b) 0–300 m, (c) 0–700 m, and (d) 0–1800 m. Time series are shown using ZIF estimates relative to both ClimArgo (dashed grey lines) and Clim1950 (dashed black lines). Time series are also shown using REP estimate (black solid lines), which are not affected by shifts in the mean climatology (B11). Thin vertical lines denote when the coverage (Fig. 3) reaches 50% for (a) 0–100 m, (b) 100– 300 m, (c) 300–700 m, and (d) 900–1800 m.’

        Pre-Argo – it all seems a lot of nonsense. The bridge between datasets in the early 2000’s seems a lot of nonsense as well.

        So there seems to be a bit of an increase in the late 1990’s – mostly due to cloud radiative forcing.

        https://watertechbyrie.files.wordpress.com/2014/06/wong2006figure71.gif

        The increase in the early 2000’s – as I said – seems most unlikely.

        Seriously – if they had any clue at all.

      • Don (Morton), thanks for a simply wonderful essay. I enjoyed everything about it.

        You make a very clear, well-reasoned case for the irrationality of using current numerical climate models fitted to past average temperature data to predict or project future average temperatures, let alone temperature distributions or extreme weather events. Anyone with a reasonable understanding of statistics and data analysis should be able to follow your arguments.

        Section 9 on “What Should We Do Now” is right on the money. Unfortunately there are too many scientists who believe that their political goals justify obfuscation and taking short cuts.

        Thank you for adding another voice of reason to the debate. Clearly you have had a long and storied career as a respected member of the science community. Brace yourself for the ad hominem attacks that will likely come your way.

    • eadler2: I agree that Tamino’s presentation is quite convincing, and all the better for its simplicity. Add to it the recent finding that volcanic eruptions caused 0.05 – 0.12 C of cooling since 2000, and there is little-to-no pause at all.

      http://onlinelibrary.wiley.com/doi/10.1002/2014GL061541/abstract

      • It is more like Tamino grasped at volcanic straws to wave and hide the pause. Post hoc arm waving seems to be one of the major sports of climate alarmism.

      • John Vonderlin

        David,
        I read the Abstract and have a question you might be able to answer. If recent volcanic eruptions caused .05 to .12 C relative cooling since 2000, what did pre-2000 volcanic eruptions do to whatever the average temp might have been? Has there been a significant increase in volcanic eruptions in regards to aerosol injection into the atmosphere in the 2000-2014 period as compared to the 1980-2000 period? If not, wouldn’t this paper be irrelevant in evaluating whether there was a statistical pause in global warming? Where is my thinking wrong? If volcanic aerosols have increased in the recent period could you provide a citation or link? Thank you.

      • John Vonderlin

        David,
        Being curious, I did some research on recent volcanic eruptions. While there have been a few papers claiming that recent volcanic eruptions, particularly tropical ones, may be part of the pause, I think they are engaged in blowing smoke. Here are the facts I found:
        Are volcanic eruptions on the rise?
        Not according to Lee Siebert, director of the Smithsonian Global Volcanism Program (GVP). Charged with documenting, analyzing and disseminating information about Earth’s active volcanoes, the GVP boasts 40 years of data to indicate it’s business as usual under the crust.
        Checking the Volcanic Explosivity Index (Wikipedia) for the 1980-2000 and 2000-2014 periods we find that while there has been a swarm of volcanic eruptions since 2000 they have been relatively small; all 4 or below on the log scale. The 1980-2000 period had a 6, a 5+ and three 5s. Larger eruptions not only eject more material, they tend to eject it higher into our atmosphere, causing more long-lasting effects.
        Perhaps more relevant: an astronomer noted that during the period since 1995 when the brightness of the Full Moon is measured (he claims this is a good indicator of atmospheric volcanic aerosols) it has been at the highest levels since the 60s.
        If these are true I’m curious of any caveats you can supply or failing that your explanation of the paper’s relevance to the issue being discussed in this post.

      • David, you (and Tamino) are entitled to your own opinions, but not your own facts. For the actual volcanic facts on aerosols and albedo, see essay Blowing Smoke in the ebook of same name.

    • Matthew R Marler

      eadler: The statistics say that since 1970, there has been no change in the global warming trend, and all the fluctuations in short term rates of change are due to noise.

      That’s just one more post-hoc rationalization of an observed trend. Every “warmer” or “alarmist” who had the opportunity up through 1998-2002 (say) to predict a leveling off by whatever known or unknown mechanism did not do so. Having been so clearly wrong, how much more work should alarmists do to re-establish credibility? Dr Morton has presented his outline. It included the requirement that models accurately predict “out of sample data”; how well has Tamino predicted out of sample data with his model? I’d recommend modesty for that model until after the next 20 years worth of data confirm his prediction for them.

      • “…to predict a leveling off by whatever known or unknown mechanism did not do so.”

        How could anyone have done so, without knowing what ENSOs (especially) would happen in the next 15-20 years, the volcanic eruptions, changes in solar variability, etc?

      • Yet we profess to “project” the future of climate and related “weather events” using the same lacking criteria, via modeling? Isn’t that troublesome?

      • David, “How could anyone have done so, without knowing what ENSOs (especially) would happen in the next 15-20 years, the volcanic eruptions, changes in solar variability, etc?”

      ENSO should have been considered as part of the model uncertainty, and since there hasn’t been any volcanic activity that would have been considered “significant”, unless the models were assuming there should have been some, they wouldn’t be high, would they?

        Now if you are just agreeing the models aren’t up to the task, welcome aboard.

      • davey, davey

        “…to predict a leveling off by whatever known or unknown mechanism did not do so.”

        No they were too busy predicting stronger and more frequent El Ninos. Bigger and better Katrinas. All sorts of weather calamities that would be wrought by the continuing buildup of CO2, and all that heat about to gush out of the pipeline. Hey, the First Law of Thermodynamics — conservation of energy. How could they possibly be wrong?

        The only thing they were right about was the continuing buildup of CO2 in the atmosphere. Now, where’s the heat?

        You can’t reason with pause deniers.

      • Pinky and the Brain

        ‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’ http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract

        It was evident a decade or more ago – and more likely than not to persist for a decade or so more.

      • Matthew R Marler

        David Appell: How could anyone have done so, without knowing what ENSOs (especially) would happen in the next 15-20 years, the volcanic eruptions, changes in solar variability, etc?

        You are conceding that at the time the IPCC and Hansen were confidently predicting warming in the early 21st century, they did not know enough to predict, yet they were confident. Some, like Hansen, are still confident in their predictions – is that reasonable to you? What will be learned in the next 20 years to explain why this year’s predictions don’t turn out well?

      • Morton claimed that models failed to predict a pause in global warming and therefore are not reliable. In fact, the statistics show there is no pause.

        The claim of a pause is noise made by global warming deniers who are misinforming themselves and others. Then they claim that the models should have predicted this non existent pause.

        There are no models that can predict ENSO, volcanoes and solar intensity. In addition, economic development in China has produced a large amount of sulfate aerosols. There are no functions in climate models that can cover these phenomena. It is unreasonable to claim that models are useless because of this. These fluctuations will eventually average out.

        All real-life economic decisions involve uncertainty. There is no way to avoid uncertainty in a decision about how to tackle climate change, which is an economic decision with huge implications for the planet and for future generations.

      • Matthew R Marler

        eadler2: All real-life economic decisions involve uncertainty.

        Given the lack of surface warming in the 21st century so far, and the contrast between that and the climate predictions, how certain are you that the 21st century will witness more than a 1C warming, and how confident are you that any warming will be the result of human CO2, and how confident are you that such a warming will be “too much” warming for the good of the planet? Everyone agrees that there is uncertainty — it is the certainty of the global warming prognosticators that is being questioned here.

    • I find that I keep being surprised by the nonsense that is published, and is supposedly derived from surface station measurements by “scientists”.

    • Tamino is always fun because you have to think to see the scam. What his analysis shows is that if you take the long term trend then there is a statistical measure such that the pause is small. One can see that by inspection, without the change point gimmick. Does this mean there is no pause? Of course not. At least the scientific community recognizes that there is a pause and it must be explained. Tamino not so much. Good job he is not doing science.

      • “What his analysis shows is that if you take the long term trend then there is a statistical measure such that the pause is small”

        His analysis shows that the so-called “pause” isn’t statistically significant. Deniers will often trot out a trend since 2003 or so, without providing the confidence interval for that trend. And claim it is a pause.

        “At least the scientific community recognizes that there is a pause”

        They’ve been misled by deniers

      • OMG! Little tomas has thrown the scientific community under the bus. He says they have had the wool pulled over their eyes by the deniers. Appeals to authority on the climate science shall henceforth be backed up by the credibility of some armwaving character, who calls himself tamino. It was dem ballcanoes what done it.

      • @tomas
        Really? The entire scientific community, including any number of “warmists” have been successfully hoodwinked by the likes of Anthony Watts et al? Do you also blame your dog when you have a bad day at work?

      • David, can you give us a numerical definition of the pause, with error bounds?
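        One way to make that question concrete, as a minimal sketch rather than anyone’s official definition: fit an ordinary least-squares trend from a chosen start year and report twice its standard error as the error bound. The series below is synthetic, and the white-noise assumption understates the real uncertainty (autocorrelation would widen the interval), so the numbers are purely illustrative.

```python
# Minimal sketch: OLS trend and ~2-sigma error bounds from a chosen start year.
# Synthetic annual anomalies stand in for a real series; residual autocorrelation
# is ignored, so real-world error bounds would be wider.
import numpy as np
from scipy import stats

def trend_with_bounds(years, anoms, start_year):
    mask = years >= start_year
    res = stats.linregress(years[mask], anoms[mask])
    return res.slope * 10.0, 2.0 * res.stderr * 10.0   # C/decade, half-width

years = np.arange(1979, 2015)
rng = np.random.default_rng(0)
anoms = 0.0175 * (years - 1979) + rng.normal(0.0, 0.1, years.size)  # synthetic stand-in

for start in (1979, 1998, 2003):
    t, b = trend_with_bounds(years, anoms, start)
    print(f"trend since {start}: {t:+.3f} +/- {b:.3f} C/decade")
```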

      • David Springer

        “Scientific community misled by deniers.”

        That’s the last thing!

        I’ve officially heard everything now.

        Thank you !!!!

      • “They’ve (the scientific community) been misled by deniers” This was the best laugh I’ve had in a long time. Thanks Tomas.

  13. Great article. I have always wondered about the chaotic nature of the mathematics at the core of the models. Do they depend on Navier-Stokes (a million-dollar opportunity right there – a prize from the Clay Math Inst.)? If they are chaotic all bets are off – future states of the phase space are unknowable without perfect information – and statistics are meaningless. Given that all, all statistical hoo haa hurled at us are .

    • Meant to say:

      Given all that, all the statistical hoo haa hurled at us is just bullpucky.

    • I believe the statistics are what makes climate a “boundary value” problem. For a chaotic problem initial values are critical and there is a limited range of validity.

      The problem with a boundary value problem is knowing when you have exited the initial value phase and moved into the boundary zone. Science of Doom has a good post on that.

      http://scienceofdoom.com/2014/11/29/natural-variability-and-chaos-four-the-thirty-year-myth/

      Santer’s increasing the minimum trend length from 15 to 17 years is an attempt to get out of the initial-value region, but it is likely that climate models could require 50 to 60 years before the boundaries come into play, which would make them a bit limited as a forecasting tool for quite some time.
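      The initial-value versus boundary-value distinction can be illustrated with a toy system rather than a GCM. The hedged sketch below integrates the Lorenz-63 equations from two nearly identical starting points: the trajectories diverge quickly (initial-value sensitivity), while long-run time averages of each run stay comparatively close (the kind of statistics the boundary-value view appeals to). It is an analogy only, not a claim about how real climate models behave.

```python
# Hedged sketch: Lorenz-63 runs from nearly identical initial conditions.
# Trajectories diverge rapidly, but long-run averages remain comparatively close.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 200.0, 40001)
a = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-9)
b = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.000001], t_eval=t_eval, rtol=1e-9, atol=1e-9)

sep = np.linalg.norm(a.y - b.y, axis=0)
print("separation at t=25 (order of the attractor size):", sep[np.searchsorted(t_eval, 25.0)])
print("long-run mean z:", a.y[2].mean(), "vs", b.y[2].mean())
```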

      • Cap.

        Ok, I scanned the sod post and this jumps out at me:

        “Over a long time period the statistics of the system are well-defined”

        I’m skeptical of that statement; it looks like “…and then a miracle occurred…”. The blogger claims this has been shown in the previous blog post, which I also scanned, but the statement had changed between the two posts and, anyway, referred to a particular system. I am not sure that statement is true. If it is true, I am not sure he/she knows it is true, rather than just guessing.

    • JustinWonder, it makes sense to me, but the boundaries can be pretty large depending on what the real range of natural variability actually is. Assuming natural variability is less than +/- 0.1 C looks a bit overly optimistic. If it is closer to +/- 0.5 C, then the current warming would fit into Judith’s 50% +/- 30%, unless the LIA, at about 1 C lower than today, just magically ended in 1900, which I tend to doubt.

      The interesting thing about long-term recovery is that the water vapor response doesn’t care what causes an SST increase; it just responds, and would apply an effective forcing all its own. That makes it kind of hard to determine cause and effect.

      Based on most of the reconstructions of the tropical oceans I have seen, the current rate of ocean heat uptake is perfectly consistent with long term recovery.
      https://lh5.googleusercontent.com/-6Tf2glKTcu0/VG330klOFZI/AAAAAAAALxM/M60OPJJ6ITs/w858-h528-no/curry%2Btalk.png

      If there is a Stadium Wave take two, it would likely include multi-century variability.

    • JW, my understanding is that the models are chaotic at small scales but these unpredictable wiggles do not affect the long term projections. I am inclined to think that the climate is also chaotic at large scales.

      • “I am inclined to think that the climate is also chaotic at large scales.”

        Yep.

        Fractals, scale invariant self-similarity and all that good stuff.

      • The underlying problem is the statistical assumption that climate is trying to converge long term to an average, without the theoretical basis to show that such an average exists.

        The paleo data suggest that there is no mathematical basis for a long term average. The average changes at all time scales.

  14. Interesting article about the PETM. Paleoclimatology is just fascinating.

    http://phys.org/news/2014-12-global-similar-today-size-duration.html

    • Well, at the current time the earth has land at the south pole, a semi-enclosed ocean at the north pole, two land bridges from the arctic to the antarctic (disrupting ocean circulation), and perhaps the highest semi-tropical mountain range in history. All these features cool the earth. Once Antarctica got isolated at the south pole and the circumpolar current started 40 million years ago, the temperature has been downhill ever since.

      http://esseacourses.strategies.org/module.php?module_id=167
      http://www.ees.rochester.edu/ees119/reading4a.pdf
      The PETM is interesting. The Antarctic coast was about 16°C (doesn’t seem that tropical) and the ocean kept the temperature stable. Pangaea was breaking up and there was a lot of volcanic activity. The background concentration of CO2 was 1000 ppm before the event and the event added 750 – 26000 ppm of CO2 to the atmosphere.

      There isn’t much to say about the PETM until the CO2 increase gets bounded better than 750-26000 ppm. That kind of estimate can’t even be called a guess. 26000 ppm would cause a 4.5-5°C rise (just CO2 forcing, no feedback) even without the volcanoes, which would mean that 2100 could be about 0.35-0.5°C warmer than today.
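      For a sense of why the 750-26000 ppm range matters so much, here is a hedged back-of-the-envelope sketch using the common simplified logarithmic forcing expression (about 5.35 ln(C/C0) W/m^2) and an assumed no-feedback sensitivity of roughly 0.3 K per W/m^2. Both numbers are conventional approximations, not values taken from the paper above.

```python
# Hedged sketch: logarithmic CO2 forcing and a no-feedback temperature response
# for the two ends of the quoted PETM range.  5.35*ln(C/C0) and 0.3 K/(W/m^2)
# are conventional approximations assumed here for illustration only.
import math

def no_feedback_warming(c0_ppm, added_ppm, lam=0.3):
    forcing = 5.35 * math.log((c0_ppm + added_ppm) / c0_ppm)  # W/m^2
    return forcing, forcing * lam                              # K, Planck-only

for added in (750.0, 26000.0):
    f, dT = no_feedback_warming(1000.0, added)
    print(f"+{added:7.0f} ppm on a 1000 ppm base: {f:5.1f} W/m^2, ~{dT:.1f} C no-feedback warming")
```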

    • You are welcome Justin. I didn’t know anything about the PETM and your post was educational. The 26,000 is accurate – the 1200 to 2000 usually quoted are values in the middle of the range.

      I did find this:
      http://hidethedecline.eu/media/BLANDET/PETMtimescale.jpg

      The Apectodinium is a temperature proxy and would indicate that the temperature increased about 3000 years before the CO2 rise, and that after 10,000 years of high temperatures the temperature decreased while the CO2 was still rising. The sedimentation rate is 10 cm per millennium.

  15. Stefan Rahmstorf points out that a trend analysis of the Hadcrut4 data enhanced by satellite data in the fast warming Arctic, where there are few surface stations, shows a trend of 0.175 +/- 0.045 DegC/decade since the satellite era began. Shorter intervals show a huge uncertainty in the trend, for instance, since 1998, we get 0.116 +/- 0.137 DegC/decade. This means we can’t tell whether there was a deviation from the long term trend.
    We can see that the claims of a pause or plateau cannot be supported by statistics.
    http://www.realclimate.org/index.php/archives/2014/12/recent-global-warming-trends-significant-or-paused-or-what/

    The human mind is a funny thing. Even very sophisticated and educated people can find a way to ignore basic statistics if they want to believe the opposite of what the statistics tell them.
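    The contrast between the two intervals quoted above (a tight bound for the full satellite era, a very wide one since 1998) is mostly a window-length effect. The hedged sketch below generates a synthetic series with a fixed 0.175 C/decade trend plus AR(1) noise and shows how the roughly-95% bound balloons as the window shrinks; the autocorrelation correction is the usual effective-sample-size adjustment, and the numbers are illustrative, not a reproduction of Rahmstorf’s analysis.

```python
# Hedged sketch: the same underlying trend estimated over shorter windows
# carries a much wider uncertainty.  Synthetic series (fixed trend + AR(1)
# noise); n_eff = n*(1-r1)/(1+r1) is the usual effective-sample-size fix.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1979, 2015)
noise = np.zeros(years.size)
for i in range(1, years.size):                        # AR(1) noise, r = 0.5
    noise[i] = 0.5 * noise[i - 1] + rng.normal(0.0, 0.09)
series = 0.0175 * (years - 1979) + noise              # 0.175 C/decade trend

def trend_ci(x, y):
    res = stats.linregress(x, y)
    resid = y - (res.intercept + res.slope * x)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = max(x.size * (1.0 - r1) / (1.0 + r1), 3.0)
    se = res.stderr * np.sqrt((x.size - 2) / (n_eff - 2.0))
    return res.slope * 10.0, 2.0 * se * 10.0          # C/decade, ~95% half-width

for start in (1979, 1998, 2003):
    m = years >= start
    t, ci = trend_ci(years[m].astype(float), series[m])
    print(f"since {start}: {t:+.3f} +/- {ci:.3f} C/decade")
```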

    • John Smith (it's my real name)

      “…fast warming Arctic, where there are few surface stations, shows a trend of 0.175 +/- 0.045 DegC/decade since the satellite era began. Shorter intervals show a huge uncertainty in the trend, for instance, since 1998, we get 0.116 +/- 0.137 DegC/decade.”

      call me uneducated and unsophisticated
      those look like really small numbers to me
      0.1 per decade for the whole arctic for 3-4 decades?
      so it’s 39 below zero as opposed to 40 below zero?
      I am afraid
      and sure that measurement is accurate to tenths and hundredths

      • John Smith,
        “call me uneducated and unsophisticated
        those look like really small numbers to me
        0.1 per decade for the whole arctic for 3-4 decades?
        so it’s 39 below zero as opposed to 40 below zero?
        I am afraid
        and sure that measurement is accurate to tenths and hundredths”
        Where did you get your figure of 0.1 C/decade rate of increase for the Arctic? It seems to me that you made this up.

        The 0.175 C/decade was the average for the earth since 1979. The rate of surface temperature increase for the Arctic is about double that, or about 0.35C/decade. The Arctic is the area which warms the fastest because heat from the tropics is transported there by ocean and air currents.
        Can you explain the mental process by which you obtained your numbers? I don’t see how lack of education or sophistication explains it.

      • John Smith (it's my real name)

        eadler2
        misplaced reply below

    • eadler2, It is pretty amazing how HADCRUT.C&W has that nifty Arctic Winter Warming Spike starting around 2005.

      https://lh6.googleusercontent.com/-lzsGQU1RjO4/VI9V215PMhI/AAAAAAAAL6Y/UWtjuhH8Osc/w661-h461-no/65-90%2Bc%26w.png

      Looks like it can shoot up there quick and fall back just about as fast. Could be lots of noise in those temperatures down close to -30 C degrees.

      • Yes, a little more ice than there has been lately and it dives.

      • The Arctic is about 5% of the earth’s surface area. One can expect a lot of noise in annual average temperatures over a small segment of the globe.
        Recent research has shown that temperature changes in the Arctic are magnified by feedback over and above the effect of changing albedo due to reduced snow and ice cover.

        http://phys.org/news/2014-02-temperature-feedback-magnifying-climate-arctic.html

        “Normally, they explain, changing weather patterns (such as thunderstorms) in other parts of the world keep atmospheric air churning, which in turn allows heat closer to the ground to be moved higher, allowing some of it to escape into space. Things are very different in the Arctic—there is very little churning, which means that warm air close to ground (just one to two kilometers thick) remains where it is, trapped by a heavy layered atmosphere.”

      • “Recent research has shown that temperature changes in the Arctic are magnified by feedback over and above the effect of changing albedo due to reduced snow and ice cover.”

        Well, I imagine -30 C would have an effective energy of about 197 W m-2, which is half of the “global” average. So when you dig out the highest-latitude temperature anomalies to pad the “global” record, you are getting a twofer. When you consider the latent “global” average, you are getting a threefer. Duh, there is “polar” amplification. Since GHG forcing is in terms of energy and guesstimated as a temperature impact, you can push the limits of the zeroth law and show some super impressive temperature response that is just about meaningless in terms of “global” impacts.

        Then once you start considering the sea surface as “land”, you can make tiny energy anomalies over less than a percent of the global surface appear significant. When you assume some things are “negligible” and “decide” others are not, your bias tends to show, doncha know.
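        The blackbody arithmetic behind the “half the global average” remark can be checked directly. The hedged sketch below applies the Stefan-Boltzmann law with no emissivity or atmospheric corrections, so it is only an order-of-magnitude illustration of why a 1 K anomaly represents less energy at Arctic-winter temperatures than at the global-mean surface temperature.

```python
# Hedged sketch: bare Stefan-Boltzmann emission at -30 C vs ~15 C, and the
# flux change per 1 K of anomaly at each temperature (4*sigma*T^3).
# No emissivity or atmosphere; illustration only.
SIGMA = 5.670374419e-8  # W m^-2 K^-4

for label, t_c in (("Arctic winter", -30.0), ("global mean", 15.0)):
    t_k = t_c + 273.15
    flux = SIGMA * t_k ** 4
    per_kelvin = 4.0 * SIGMA * t_k ** 3
    print(f"{label:13s} {t_c:+5.1f} C: {flux:6.1f} W/m^2, {per_kelvin:.2f} W/m^2 per K of anomaly")
```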

      • I wasn’t actually referring to the albedo. I was referring to the amount of atmosphere exposed to open water.

        “Based on the simulated ocean heat budget it is found that the heat transport into the western Barents Sea sets the boundary of the ice-free Atlantic domain and, hence, the sea ice extent. The regional heat content and heat loss to the atmosphere scale with the area of open ocean as a consequence. Recent sea ice loss is thus largely caused by an increasing “Atlantification” of the Barents Sea.”

        http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00466.1

    • All this talk of broken models, and yet the mean model trend of ~0.2C/decade is still contained within the error bars of the Cowtan and Way trend stated by eadler2…

    • John Smith (it's my real name)

      eadler2
      “…satellite data in the fast warming Arctic, where there are few surface stations, shows a trend of 0.175 +/- 0.045 DegC/decade since the satellite era began.”

      this part
      “in the fast warming Arctic, where…trend of 0.175”

      don’t see a gosh darn thing about “the average for the entire earth since 1979”

      also, don’t quite understand how “average for the entire earth” relates to an “average” for the Arctic

    • Statistics on time-series are notoriously unreliable. The values are time dependent and statistical theory is based on independence of events.

      For time-series, you need to prove either stationarity or ergodicity. But I doubt that has been done.

      Thus I will continue to regard any trend analysis as interesting, but likely wrong.
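      The independence point is easy to demonstrate numerically. The hedged sketch below feeds trendless AR(1) series to a naive least-squares significance test; the test flags a “significant” trend far more often than its nominal 5% rate, which is the commenter’s warning about serial dependence in miniature. Synthetic data, illustration only.

```python
# Hedged sketch: a naive OLS trend test applied to trendless AR(1) noise
# (r = 0.8) rejects the no-trend null far more often than the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_series, n_points, r = 2000, 30, 0.8
x = np.arange(n_points, dtype=float)
false_positives = 0

for _ in range(n_series):
    y = np.zeros(n_points)
    for i in range(1, n_points):
        y[i] = r * y[i - 1] + rng.normal()
    if stats.linregress(x, y).pvalue < 0.05:
        false_positives += 1

print(f"nominal 5% test fired on {100.0 * false_positives / n_series:.0f}% of trendless series")
```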

  16. Dr. Morton,

    Thank you for your essay and associated works.

    I’m not knowledgeable about models, but based on my limited understanding I have wondered if “reverse engineering” of a model is possible. I see where backtesting is performed as an attempt to substantiate its forward-looking projections. Regarding “reverse engineering”, it seems we have a number of observable variables for, say, the most recent 30-year window, including the “plateau”. Using these data, can a model be constructed in reverse based on the knowns, then applied in a forward-looking fashion? Would this change the comfort level with the future projections? I suppose some of the answer would lie in knowing whether only one model resulted from the reverse engineering, or multiple variations.

    • The only period for which the models work is the 20-yr period to 1998 on which they were “trained”. They show “no skill” at either hindcasting or forecasting outside that. The obvious conclusion is that they are ##))#&(&$#*#*A!#&#)#(#)$(@#&#^_(*#^%@(@&^-ed up.

      • Further, the models share much code and many assumptions. They are really just grad-student variations on one model.

      • Brian,

        I can see how you could arrive at that conclusion! :)

        We build models that project (not predict), then back-test them to substantiate the supposed reliability of the projections. But we have the most well-rounded data set that we can have. We have the actual weather events, temperature data sets (including the “plateau”), volcanic activity, solar activity, etc., so I’m seriously wondering if one (much more capable than I) could reverse engineer a model. It seems it would take much in processing capability, but should that effort follow the observable historic evidence we have in hand, it might lead to knowing what we are (or are not) doing to our climate via a process of elimination.

      • “Further, the models share much code and many assumptions. They are really just grad-student variations on one model.”

        Wrong.

        “The only period for which the models work is the 20-yr period to 1998 on which they were “trained”.”

        Very Wrong

      • Curious George

        Steven – I only have experience with the CAM5.1 model. Why is this model not very wrong?

    • Dear Danny
      Your idea of reverse engineering in a sense does occur in the hindcast mode when climate models tuned to recent observations are used to predict ancient temperatures. If I remember correctly the tests have not been notably successful. Furthermore there always is the risk of subconsciously biasing the parameters because you know what you are looking for.
      Don

      • Dr. Morton,

        Thank you, and please forgive my ignorance. I’m having a hard time putting my meaning into words, but it’s different from hind/backcasting. I may need a better understanding of hindcasting, but I see that as taking a model that we point towards the future, pointing it back in history (where we have observable evidence), and using the quality of the hindcast results to support the presumed reliability of the forward projections.

        Instead, I’m trying to superimpose the “reverse engineering” of a highly complex mechanical system (motor/gears): building one after taking the original apart. It seems that this method, once an “accurate” model was established, could be verified, repeated and falsified. Ignorance of the process may make this a futile thought/thread, so apologies if my differentiation is poor.

    • Danny
      If it is reverse engineering in the sense of looking at the details inside a black box system and reproducing it, then I think that is possible with most climate models because the computer code is generally available. The difficulty would be in reproducing the parameters if they are not completely specified. See my note in Sect. 4. Of course one also would need a supercomputer to run the code.
      Don

      • Don,
        You can get a PC version of GISS Model II from Columbia
        http://edgcm.columbia.edu/

      • Instead, I’m trying to superimpose “reverse engineering” of a highly complex mechanical system (motor/gears)
        =====
        There are an infinite number of “black boxes” that will recreate the past perfectly, but have no skill at forecasting the future. Thus, one cannot use hind-casting skill as a measure of forecasting skill.

        for example, take N data points from the past. There exists a polynomial of degree N+1 that will exactly fit all N data points, but it will only correctly predict the future if the physical laws that determine the future just happen to exactly match the polynomial. Since the odds of this being correct are by chance vanishingly small, while the polynomial will hind-cast perfectly, it will almost certainly show no forecasting skill.
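        That hindcast-versus-forecast point is easy to demonstrate. The hedged sketch below fits an exact-interpolation polynomial to twenty synthetic “past” points: the hindcast error is tiny, while the extrapolated “forecast” is typically wildly off. Illustration only; nobody tunes a climate model this way, the point is simply that hindcast fit does not guarantee forecast skill.

```python
# Hedged sketch: an exact-fit polynomial reproduces the past (hindcast) almost
# perfectly but extrapolates wildly (no forecast skill).  Synthetic data.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(3)
x_past = np.arange(20, dtype=float)
y_past = 0.02 * x_past + rng.normal(0.0, 0.1, x_past.size)   # gentle trend + noise

p = Polynomial.fit(x_past, y_past, deg=x_past.size - 1)      # exact interpolation
hindcast_error = np.abs(p(x_past) - y_past).max()
forecast = p(np.array([25.0, 30.0]))                         # extrapolation

print("max hindcast error (tiny):", hindcast_error)
print("forecast at x = 25 and 30 (typically huge):", forecast)
```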

      • This is what Webby is doing with Seasalt (not the N+1, but he used an equation solver to generate a polynomial that used climate signals to generate a matching temp series).

  17. William McClenney

    Wouldn’t it be just wonderful if all we needed to resolve was the AGW “bump” on an essentially never-ending Holocene? You know, like the 50 kyr “long interglacial ahead” of Loutre and Berger, 2003? The result of a 2D model of intermediate complexity run in CLIMBER2, which has long been disavowed?

    “Recent research has focused on MIS 11 as a possible analog for the present interglacial [e.g., Loutre and Berger, 2003; EPICA community members, 2004] because both occur during times of low eccentricity. The LR04 age model establishes that MIS 11 spans two precession cycles, with δ18O values below 3.6‰ for 20 kyr, from 398-418 ka. In comparison, stages 9 and 5 remained below 3.6‰ for 13 and 12 kyr, respectively, and the Holocene interglacial has lasted 11 kyr so far. In the LR04 age model, the average LSR of 29 sites is the same from 398-418 ka as from 250-650 ka; consequently, stage 11 is unlikely to be artificially stretched. However, the June 21 insolation minimum at 65°N during MIS 11 is only 489 W/m2, much less pronounced than the present minimum of 474 W/m2. In addition, current insolation values are not predicted to return to the high values of late MIS 11 for another 65 kyr. We propose that this effectively precludes a ‘double precession-cycle’ interglacial [e.g., Raymo, 1997] in the Holocene without human influence.”

    http://large.stanford.edu/publications/coal/references/docs/Lisiecki_Raymo_2005_Pal.pdf

    But what if the 11,717-year-old Holocene interglacial is now half a precession cycle old? Which it is. Would we even be able to measure an AGW effect at all during the climatic “madhouse” that is glacial inception? You know, say SRES marker A1FI in AR4, the upper error-bound estimate of sea level rise by 2099 of +0.6 meters if we do nothing about CO2? That is just 10% of the lower estimate for the sea level highstand achieved during the second thermal pulse of glacial inception at the end of the Eemian. How does one see a signal that is at best 10% of the normal natural climate noise at the extreme end of an interglacial?

    And what if Ruddiman’s Early Anthropogenic Hypothesis is correct and that the reason we are still enjoying interglacial conditions is BECAUSE of our AGW emissions? If the Anthropocene is the extension of the now over Holocene, wouldn’t ending the Anthropocene leave but one other climate state? You know, the long cold one?

    “The possible explanation as to why we are still in an interglacial relates to the early anthropogenic hypothesis of Ruddiman (2003, 2005). According to that hypothesis, the anomalous increase of CO2 and CH4 concentrations in the atmosphere as observed in mid- to late Holocene ice-cores results from anthropogenic deforestation and rice irrigation, which started in the early Neolithic at 8000 and 5000 yr BP, respectively. Ruddiman proposes that these early human greenhouse gas emissions prevented the inception of an overdue glacial that otherwise would have already started.”

    conclude Muller and Pross (2007) http://folk.uib.no/abo007/share/papers/eemian_and_lgi/mueller_pross07.qsr.pdf

    “We will illustrate our case with reference to a debate currently taking place in the circle of Quaternary climate scientists. The climate history of the past few million years is characterised by repeated transitions between `cold’ (glacial) and `warm’ (interglacial) climates. The first modern men were hunting mammoth during the last glacial era. This era culminated around 20,000 years ago [3] and then declined rapidly. By 9,000 years ago climate was close to the modern one. The current interglacial, called the Holocene, should now be coming to an end, when compared to previous interglacials, yet clearly it is not. The debate is about when to expect the next glacial inception, setting aside human activities, which may well have perturbed natural cycles.”

    Crucifix and Rougier (2009) http://arxiv.org/pdf/0906.3625.pdf

    It is hard to even get remotely worried about the pause given the signal-to-noise ratio issues AGW represents at such an inauspicious moment as a half-precession-cycle-old interglacial, and especially if the reason we are still enjoying interglacial warmth is that the warmists are right about AGW.

    Go ahead, end the Anthropocene. Best thing that could happen to the present swollen human species, and might be the best thing for the genus too :-)

  18. Dr. Morton, with regard to rising population, there’s another empirical plateau you might like to consider: the only UN Population Survey variant ever even close to accurate is the Low Band, now dubbed the Low Fertility Band. It projects peak population at about 8 bn in about 2045, declining to 6+ bn. by 2100. The implications are immense.

  19. Unlikely.
    Most of the solar heat energy is absorbed in the ocean’s equatorial belt. The only warming of any note in recent years took place in the Indian Ocean. If the so-called ‘tele-connection’ is a real factor, then it appears that the equatorial Atlantic and Pacific follow suit, and only in the second part of the year.
    http://www.vukcevic.talktalk.net/EquatorialSST.gif
    It is up to the major ocean currents how energy is then transported pole-ward.
    Note: the equatorial Atlantic is the only one showing any sign of the ‘60-year cycle’, and only in the first half of the year.

  20. Excellent post with several significant points, but a focus on tropospheric temperatures as “validation” for the models is still a very weak argument. Ocean heat content is the very best single proxy we can use for gains in Earth’s climate system energy levels, and in that regard, there has been neither a hiatus, pause, nor plateau, but a persistent rise covering many decades. It is not a question of the “warming resuming”, for it never stopped if you look at the full system, which is the best approach. The troposphere is subject to far too many other influences, especially from the ocean, for the models to ever track properly. The models will always be wrong, and even if they knew every single dynamic and feedback that could be quantified they’d be wrong. There is far too much natural variability and noise in the system over decadal and multidecadal scales for the models to get every bit. The very best we can hope for is that the models will get the decadal averages approximately right. A resumption of tropospheric warming (except in the case of a major volcano or two) will happen, but natural variability will dictate when and how much over shorter time frames.

    • For a counterargument (against using ocean heat content), see this post at RealClimate by Stefan Rahmstorf: http://www.realclimate.org/index.php/archives/2014/10/ocean-heat-storage-a-particularly-lousy-policy-target/

      • Indeed, ocean heat storage poses no threat so no action is called for. Go ocean!

      • I would partially agree with Stefan on his points, but one in particular I completely and strongly disagree with– point #2: Ocean heat content has no direct relation to any impacts.

        Ocean heat content:

        1) Affects the strength of hurricanes, especially in the step up to super-typhoons. The churning of the top 300m of the ocean mixed layer that a hurricane causes brings up additional heat (if any) from the lower depths. If the heat isn’t there– no super typhoon will form, no matter how supportive the atmospheric conditions are. The hurricane/typhoon will not develop into a super status without the heat at depth– surface layer heat will quickly be exhausted. The record high heat content of the Western Pacific over the past few years has led to the direct development of several super-typhoons. These super-typhoons, as we’ve seen this November, can go on to become extra-tropical cyclones which in turn can have extreme effects on planetary scale Rossby wave activity and global weather patterns.

        2) Warmer oceans lead to more heat being advected via ocean currents to the polar regions, leading to more ice loss and general warming.

        Stefan is correct to point out the high thermal inertia of the oceans, thus making it tough to use it for policymaking, as no one wants to wait 1,000 years to see a policy have some effect, but it can be used, and should still be used in the science community for a gauge of overall energy in the Earth’s climate system. The climate can’t warm if the oceans are not warming– the oceans drive the atmosphere, not the other way around.

        Also, Stefan’s final point about ocean heat content being hard to measure is absolutely correct – but that is no excuse for not trying, nor for not aiming for more and more accuracy as a goal. Deep ARGO is just now starting to ramp up. In the next decade, the “hard to measure” argument will become increasingly less of an excuse.

        Atmospheric proxies for total energy in the climate system are fickle, weak, and inaccurate at times, and subject to ocean cycles. They may be the best we have now, but they are so noisy and the atmosphere has such low thermal inertia, that the data is only good on decadal or longer averages anyway.

      • R. Gates – how many m/s of hurricane wind speed will an additional 0.1 C cause?

      • To be fair, Rahmstorf is arguing against using ocean heat content in the context of a “climate policy target”, R Gates has suggested it in the context of “gains in Earth’s climate system energy levels”. It clearly is an important measure in the latter context.

      • RGates

        Good grief! You nearly gave me a heart attack when you said you agreed with Stefan, having just seen two of his comments on the side bar.

        However on closer reading it seems you were not talking about Stefan the Denier but Stefan Rahmstorf. Bearing in mind Rahmstorf’s argument about using ocean heat content though, which of the stefans do you agree with most?

        tonyb

      • “1. Ocean heat content is extremely unresponsive to policy.

        While the increase in global temperature could indeed be stopped within decades by reducing emissions, ocean heat content will continue to increase for at least a thousand years after we have reached zero emissions. Ocean heat content is one of the most inert components of the climate system, second only to the huge ice sheets on Greenland and Antarctica (hopefully at least – if the latter are not more unstable than we think).”
        http://www.realclimate.org/index.php/archives/2014/10/ocean-heat-storage-a-particularly-lousy-policy-target/#sthash.WVQ7QVqT.dpuf

        Something not bat-crazy from RealClimate, wow.
        To say ocean heat content is extremely unresponsive to policy is precisely correct. And to tie ocean heat content to the other immovable elements, the polar caps, makes me wonder if the guys have lowered their consumption of recreational drugs.
        The polar caps are not only a similar “problem”, they are tied to ocean heat content.
        Or: you can’t significantly affect one without affecting the other – and of course, neither can be affected by the puny human influence.
        Though I see they are still concerned about our future thousands of years from now, which is of course adorable.
        And thousands of years is also a good clue to remind some that it took thousands of years in the last interglacial to get the oceans warmer than they are today. It is one thing to hope that within a century or two we might get back to MWP-type warm conditions – grapes growing in the UK and farming in Greenland – but such warming would have little effect upon the ocean’s temperature.

      • Tell you what, R Gates, why don’t you amble over to RC and put them straight on the matter?

      • Your alleged counterargument is sophistry.

        You are misstating the argument made by Rahmstorf. He agrees that the increase in ocean heat content shows that the earth has continued to gain energy during the so-called “pause” or “hiatus”.

        He argues against its use as a policy target, because it is not directly related to what is experienced by people living on the earth’s surface, and the surface temperature data base goes back further in time and is more extensive.

      • John Smith (it's my real name)

        Tonyb
        please stop reading Gates
        need you to finish that M.Paris stuff
        prior to heart attack
        thanks

      • John

        It’s a mammoth task to collate all the material I have on the medieval period into a readable form, so I am continually looking for distractions. In any case I am hoping that Santa will be bringing me a couple of specialist climate-related books, as so much material is behind paywalls.

        Tonyb

    • The MEI of Klaus Wolter was posted above. It shows a multi-decadal variability in ENSO. Should the system return to El Niño dominance in future, it is likely that energy will flow from the Pacific Ocean to the atmosphere and then to space. It is a simple matter of the area of warm ocean.

      https://watertechbyrie.files.wordpress.com/2014/06/loeb2012-fig1.png

      Is a return to El Niño dominance guaranteed? Perhaps not. More salt in the Law Dome ice core means La Niña.

      https://watertechbyrie.files.wordpress.com/2014/06/vance2012-antartica-law-dome-ice-core-salt-content.png

      But this is just one of many complications in the Earth energy dynamic. There is very little to suggest – against a background of considerable variability – that the planet is net warming still from the combination of factors.

    • Ocean heat content is the very best single proxy we can use for gains in Earth’s climate system energy levels, and in that regard, there has been neither a hiatus, pause, nor plateau

      As the OHC trend is not ubiquitous (i.e. there is no trend in either the tropics or the NH), you need another mechanism (as opposed to increased subsurface observation), e.g. Sutton and Roemmich:

      The warming of the oceans follows different patterns in the upper 400 m than deeper in the water column. Surface layer temperature tracks interannual El Niño/Southern Oscillation (ENSO) fluctuations (Roemmich and Gilson, 2011), but with the 0 – 100 m surface layer variations partially offset in heat content by opposite variability from 100 – 400 m. The deeper layers have a steadier decadal warming trend with maximum at about 1000 m. In the most recent observations from 2013-2014, the upper layers’ compensatory variability has given way to warming over the entire water column from 0 – 2000 m. The spatial distribution of the 2006 – 2014 warming indicates that all of the heat content increase during that period is in the southern hemisphere ocean (60°S to 20°S), with no significant trend in the tropics (20°S to 20°N) or the northern hemisphere (20°N to 60°N).

      Obviously your boundless lines are not stronger than blowing wind.

  21. Generalissimo Skippy

    ‘Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.’ http://rsta.royalsocietypublishing.org/content/roypta/369/1956/4751.full

    Such small-scale modelling of the climate system is computationally impossible. But is it even necessary? In hydrodynamics we look at grid sizes – sometimes sub-grids within grids – that give useful information on the required scale. For this purpose, high-frequency micro-eddies are much less interesting than large-scale macro structures – such as convection in the atmosphere – that might be usefully approximated.

    Here’s a study – still in review – that looks at Lorenz’s convection model as a ‘metaphor’ for climate.

    http://www.nonlin-processes-geophys-discuss.net/1/1905/2014/npgd-1-1905-2014.html

    Does the Lorenz model work at all as a metaphor? Or is the change in cloud, ice, snow, biology, atmosphere and ocean a fundamentally different type of system? The scales of interest in the latter are metres to globe-spanning regime-like structures, and decades to millennia.

    Although the Earth system does seem to share behaviours with these nonlinear sets of equations – is this merely coincidental? Is calling climate chaotic useful information or misleading? Does it require a different maths approach (networks?) at these large scales in time and space? We are on a climate modelling path – but it is perhaps not the right one. It may not be able to say anything useful about the future at all.

    There is however one fundamental reality that can only be approached at the level of data.
    ‘The climate system has jumped from one mode of operation to another in the past. We are trying to understand how the earth’s climate system is engineered, so we can understand what it takes to trigger mode switches. Until we do, we cannot make good predictions about future climate change.’ Wally Broecker

    https://watertechbyrie.files.wordpress.com/2014/06/gisp2_vostok1.png

    Climate has made major shifts in as little as a decade – and we have little idea of how and why the balance of the physical components of the system change. It suggests that climate is highly sensitive to small changes in conditions in dynamic ways that are still quite obscure. The best we can seem to do is a metaphor. Breakpoints – btw – have been identified around 1912, the mid 1940’s, the late 1970’s and 1998/2001.

    This post hand waves towards chaos in climate without comprehending the implications of abrupt climate change given modest initial changes. And I am not much interested in ad hoc rationalizations about small or nonexistent risk given how little is certainly known. Nor is there much understanding of the nonlinear equations at the core of climate models – and why that curtails climate prediction.

    On the other hand there are policy approaches that link social and economic development with environmental progress – at negligible additional cost.

    e.g. – http://www.copenhagenconsensus.com/post-2015-consensus

    As well as the potential to build low carbon and low cost energy sources. Ultimately the only rational response is and always has been technological innovation.

  22. I’ve often stated that skeptical investment in the “pause” talking point is dubious. Since temperature trend direction is a coin toss even at the decadal scale, it plays the game at the cherry-pick level, which is what alarmist “science” is all about. So the pause helped contradict the junk models, but the premise of models – “better models” – is preserved even if they wait and wait and wait for the next upswing in temperature to claim predictive victory. All the while hedging “change” into the past “warming” meme. It’s all rubbish at the public press-release level. Why validate the Gruber assumptions by accepting this from the start without addressing the core corruption of climate advocacy??

    Regardless, it’s all a side show to serious skepticism that focuses on the political agenda of the bought science community and the machine it’s part of: Global Socialism (U.N.), similar U.S. Statists and government authority advocates (Greenshirts), and crony rentseekers, all with the backing of the left-wing media and academic alliance. Too many skeptics can’t go there for many complex social reasons…..the sham of AGW advances for that reason more than any other.

    If you’re arguing as if the models were ever a serious science talking point in the debate then you are largely clueless (willfully perhaps) to begin with. The “pause” is a suspect point since a short-term trend can always change. Just as when a hurricane hits a populated area it becomes time for the “change” propaganda effort. So if one convoluted model hits the numbers and an upward warming trend returns, “climate change” again becomes “warming” and we have to relive the mid-90’s again with crap hockey sticks?? The point on temperature is that nothing is outside historical norms (natural variation) and the physical claims about human CO2 are purely speculative and unquantified, with no hard-science rules linking human emissions. Climate science is “opinion” science, more like psychology than mathematics, chemistry or physics. Hence, it was an easy mark for political use and quickly became a partisan enclave (Greenshirt/leftist). Model “success” isn’t going to change that, and this is what truly matters.

    If the pause changes in nature to something less useful to skeptics…..move on…..it was always a side show, but perhaps framed a little too much to accommodate the cherry-picking culture built around very small temperature changes. Play their game and 20+ years of almost no temperature change despite record CO2 goes down the memory hole.

    • I agree with you cwon14 that there is too much clutching at straws going on in the climate debate. The political decision-making horizon is generally aimed at winning the next election, which is too short for making the long-term investment in research that the study of climate truly deserves.

      The correct timescale of climate change studies seems closer to 100 years than 30 years and events such as the observed current stasis in regional and global temperature series or any observed increase or decrease in the prevalence of storms are merely weather.

      • Too much of the skeptical community is devoted to “in the weeds” topical points and thereby enables the green political monolith to advance or even survive. For example: while Lima was toothless and pointless in substance, the dog and pony show nevertheless rewards the agenda and continues the propaganda offensive. That in a past time everything (advocacy-wise) was bet on “models” predicting “warming”, and has failed for so long, illustrates the importance of controlling the goal posts and measures. If academia and the media are ideologically corrupt….and they are….you will never get a reasoned consensus from the same people.

        Instead of focusing on failed model predictions the actual consensus should be on the political corruption that has always driven the green movement and AGW advocacy specifically. A better summary of where the debate is found here and by similar voices;

        http://wattsupwiththat.com/2014/12/12/gruber-thinking-in-climate-science-disconnect-between-academia-and-the-real-world/

        Arguing over temperature stats and spaghetti charts, as a main point, enables the underlying political goals of the AGW culture to fly under the radar.

  23. “Not only is there no valid evidence that the “pause” has lasted 25 years or longer, there’s not even any valid evidence that a “pause” happened at all. Heck, there’s not even any valid evidence that there’s been a slowdown — let alone a “pause” — in this data set.”
    https://tamino.files.wordpress.com/2014/12/rssrate2.jpeg

    And that’s RSS

  24. When it rains, it pours… but soon, the sun shines again. Stay positive. Better days are on their way.

    • I thought weather was chaotic. So when it rains, it pours… but soon, it rains forever more.

      • Or at least for 40 days and 40 nights ;-).

      • It remains to ask what rate of warming there is to return to. The following shows the results of numerical analysis of breakpoints in atmospheric and oceanic indices.

        https://watertechbyrie.files.wordpress.com/2014/12/swanson-et-al-2009.gif

        ‘Synchronization as measured by the root-mean-square correlation coefficient between all pairs of modes over a 7-year running window. Note the reversed ordinate; synchronization increases downward in the plot. High synchronization at the p = 0.95 level is denoted by shading, tested by generation of surrogate data as described by Tsonis et al. [2007]. (middle) Coupling as measured by the fraction of consistently increasing or decreasing mode time series as described in the text. The shaded region denotes coupling at the p = 0.95 level as calculated from the surrogate data used for the confidence intervals in Figure 1 (top). (bottom) HadCRUT3g global mean temperature over the 20th century, with approximate breaks in temperature indicated. The cross-hatched areas indicated time periods when synchronization is accompanied by increasing coupling.’ http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/full

        It is no coincidence that shifts in ocean and atmospheric indices occur at the same time as changes in the trajectory of global surface temperature. ‘Our interest is to understand – first the natural variability of climate – and then take it from there. So we were very excited when we realized a lot of changes in the past century from warmer to cooler and then back to warmer were all natural,’ Tsonis said.

        Four multi-decadal climate shifts were identified in the last century coinciding with changes in the surface temperature trajectory. Warming from 1912 to 1944, cooling to 1976, warming to 1998 and declining – or at least not rising – since.

        It provides the beginnings of a rational starting point to disentangle anthropogenic forcing from natural variability. The temperature rise between 1944 and 1998 was 0.4K at 0.07K/decade. Starting elsewhere – outside of the regimes of natural warming and cooling – seems a trifle arbitrary.

        Nor is it likely – all other things being equal – that the warming will continue, eventually perhaps, at this extreme rate as the Sun cools from a 1000-year high. And ENSO shifts – coincidentally – from a 1000-year peak in El Niño frequency and intensity to the more common La Niña dominance.
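        For readers wondering what a “numerical analysis of breakpoints” can look like in its simplest form, here is a hedged sketch of a brute-force change-point search on synthetic data: fit two separate linear trends around every candidate break and keep the split with the smallest total squared residual. This is a generic illustration, not the synchronization method of Swanson and Tsonis quoted above.

```python
# Hedged sketch: brute-force single change-point search.  Two linear fits are
# tried around every candidate break; the split with the lowest total squared
# residual wins.  Synthetic series with a real break at index 60.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(100, dtype=float)
y = np.where(t < 60, 0.015 * t, 0.9) + rng.normal(0.0, 0.08, t.size)  # warming then flat

def sse_two_segments(k):
    total = 0.0
    for seg_t, seg_y in ((t[:k], y[:k]), (t[k:], y[k:])):
        coef = np.polyfit(seg_t, seg_y, 1)
        total += np.sum((seg_y - np.polyval(coef, seg_t)) ** 2)
    return total

best = min(range(10, 90), key=sse_two_segments)   # keep both segments >= 10 points
print("estimated break index:", best, "(true break at 60)")
```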

      • Dear Pinky
        Thank you for telling me about this paper. I often have wondered if the abrupt changes in the slope of the temperature plot were evidence of changes of state in a chaotic system. Such important clues to the nature of our climate system will be masked if one insists on smoothing with a decade-long filter.
        Don

      • The implications of the 16 year plateau are this:

        a) the IPCC detection arguments rely on a clear separation between the signals from forced climate change and natural internal variability. Numerous climate model analyses find that it is very unlikely that a plateau or period of cooling extends beyond 15-17 years in the presence of anthropogenic global warming.

        b) failure of the climate models to predict a >17 year plateau raises questions about the suitability of the climate models for detection and attribution analyses, particularly in terms of accounting adequately for multidecadal modes of climate variability…

        ~Judith Curry (2012, emphasis added)

      • John Smith (it's my real name)

        in a few years it might be difficult to keep arguing that the world is ending
        oops… guess someone said that in 2012

      • However Swanson and Tsonis say the new regimes due to natural cycles can last for decades. This doesn’t conflict with the fact that the earth is warming due to GHG emissions produced by human industry.

        http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/full

      • @ eadler2

        “This doesn’t conflict with the fact that the earth is warming due to GHG emissions produced by human industry.”

        Nor does it establish the ‘fact’. In ‘fact’, the only evidence that emissions produced by human industry have any measurable effect on the ‘Temperature of the Earth’ is that it is proclaimed loudly and often that this is an unchallenged and unchallengeable fact, and every effort is made to demonize and/or ridicule and/or prosecute as a criminal anyone who dares to question, no matter how mildly, that it is in fact a ‘fact’.

      • The role of CO2 and other GHG’s in causing global warming is accepted as a fact by 97% of climate researchers. The deniers are outliers.

        The original hypothesis was made by Joseph Fourier in 1828, when he developed the theory of IR radiation. The first measurements of IR absorption were made by John Tyndall in 1859, and the first estimates of climate sensitivity were made in 1896. Very accurate measurements of the effects in clear air were made in 1959 by Plass. Of course the feedback mechanisms in the climate system are complex and climate sensitivity has a considerable uncertainty, but the role of CO2 is nearly universally accepted.

        The burden of proof is on the deniers, and they have not been able to meet it.

      • Climate Researcher 

        Yes but not one among those 97% can produce any valid physics that shows carbon dioxide and the main “greenhouse gas” water vapor can warm the surface. Empirical evidence shows water vapor cools and you cannot produce any study showing it warms by nearly 30 degrees as the IPCC want you to be gullible enough to believe.

        A planet’s surface temperature cannot be explained just with radiation calculations. The solar radiation reaching Earth’s surface (163W/m^2) could only support a surface temperature of about -40°C. Without water vapour, clouds shading the surface or the current 20% of incident solar radiation being absorbed on the way down through the atmosphere, then nearly twice as much solar radiation would strike the surface, making it more like +5°C.

        But none of this enables you to explain the surface temperature of Earth, let alone Venus, or how these surface temperatures get the required thermal energy to rise in temperature during the day. Radiation between the surface and the atmosphere can only cool the surface and it can never help the Sun to reach a higher daily maximum temperature – nowhere on no planet. If you don’t understand why, then you need to study graduate level physics.

      • I see. You are a genius, and the 97% of climate researchers who disagree with you are wankers. Also according to you, three of the greatest pioneering physicists of the 19th century and early 20th century, Fourier, Tyndall and Arrhenius were wrong about the role of water vapor and CO2 in warming the earth.

        I will not hold my breath waiting for you to win a Nobel prize for your earth-shaking work in physics showing that everything we thought we knew about this is wrong. Your thinking is clearly flawed.

        Energy arrives at the earth from the sun continuously. The reason the earth doesn’t heat up catastrophically is that it loses heat to outer space by IR radiation.
        The way GHGs work is to block radiation from escaping directly into space from the earth’s surface by absorbing it and radiating it back to the surface. The radiation in the absorption spectrum of GHGs finally escapes into outer space directly from higher altitudes, where the gases are cooler and the rate of radiation is reduced. If the radiation into space is at a lower rate than the arrival of radiant energy from the sun, the planet heats up.

        It is not clear whether you are intellectually challenged or are suffering from confirmation bias and refuse to accept the simple physics which explains how GHG warming works.
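        The mechanism described above is usually summarised with a simple energy-balance sketch. The hedged toy calculation below fixes the effective emission temperature from absorbed sunlight and then asks how much the whole column, surface included, must warm if added absorbers raise the mean emission level in an atmosphere with a roughly 6.5 K/km lapse rate. All numbers are textbook round figures, with no feedbacks, used only to make the arithmetic of the argument concrete.

```python
# Hedged toy sketch: absorbed solar flux sets an effective emission temperature;
# raising the mean emission level by dz in a ~6.5 K/km lapse-rate atmosphere
# implies roughly lapse_rate*dz of surface warming to restore balance.
SIGMA = 5.670374419e-8   # W m^-2 K^-4
S0, ALBEDO, LAPSE = 1361.0, 0.30, 6.5   # W/m^2, unitless, K per km (round figures)

absorbed = S0 * (1.0 - ALBEDO) / 4.0                  # ~238 W/m^2 averaged over the sphere
t_emit = (absorbed / SIGMA) ** 0.25                   # ~255 K effective emission temperature
print(f"absorbed solar {absorbed:.0f} W/m^2 -> emission temperature {t_emit:.0f} K")

for dz_km in (0.1, 0.5, 1.0):
    print(f"raise emission level by {dz_km:3.1f} km -> roughly {LAPSE * dz_km:.1f} K of surface warming")
```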

      • The global warming establishment is the pharaonic construction of Western academia to serve the political interests of the Left.

      • Max_OK, Citizen Scientist

        I see you figured out the most intelligent people lean left.

      • Max

        Your elitism is showing. Perhaps you equate intelligence and high SAT scores with good thinking? Western academia is not a gathering of smart people but rather a gathering of 1960s-70s radicals and their students. It is a gathering of intelligent anti-American paranoids with a penchant for re-writing history and a philosophy of relativism.

        Richard

      • Max_OK, Citizen Scientist

        rls, I’m flattered you see me as among the elite, defined in the Oxford Dictionary as

        “A select group that is superior in terms of ability or qualities to the rest of a group or society”

        Uh Oh, my head is starting to swell ! I better not forget where I came from.

        rls, I can confirm what you suspected. I do equate intelligence with good thinking.

        The notion that academics aren’t smart is not good thinking, but it may appeal to people who envy the elite.

      • Max

        There are no intelligent people that make stupid decisions? And no people of average intelligence that make smart decisions?

        I have some experience and training in leadership. Sound decisions are best achieved not from an individual but rather when two or several individuals collaborate in a mutually respectful atmosphere. If a leader believes he/she has the only answer and refuses to listen to others, bad decisions are more likely. There are examples:

        1. Abraham Lincoln did not shy from conversations of differing views; I’m not sure of his intelligence.

        2. General William Westmoreland graduated from West Point at/near top of his class but had a reputation for not listening to underlings and making bad decisions.

        I suggest also that academics seek out those with differing views and participate in rational discussions, putting biases at the door. Listening to Ward Churchill, Bill Ayers, and Cornel West, I hear intelligent but illogical professors; people who will not engage in rational discussion; who prefer re-writing history.

        Richard

      • Max_OK, Citizen Scientist

        rls, household pets can make good decisions and PhDs can make bad decisions, but as a rule intelligence is more an asset than a liability when it comes to making decisions.

        I believe scientists listen to what other scientists have to say. I believe they are open to what people outside of science have to say about science, providing it’s relevant.

        Regarding your comment “sound decisions are best achieved not from an individual but rather when two or several individuals collaborate…,” I tend to agree, but many here at ClimateEtc believe “group think” isn’t a good thing.

      • Maxy proceeds from an assumption that progressives are smarter than the alternative.

        It is demonstrably a false assumption.

        Perhaps one of the greatest conundrums of classical liberal philosophy is the tolerance that classical liberalism exhibits towards those who choose to follow opposite ideologies, such as communism, socialism, communitarianism and, more recently, “greenism”. It is a conundrum for the simple fact that it appears counter-productive for the sake of survival ever to tolerate subversive elements. Of all political philosophies, classical liberalism tolerates, permits—and even encourages—people to make their own political choices and follow their political conscience. The same cannot be said of any left-leaning political philosophy or party, even the Australian Labor Party. Political philosophies of the Left do not tolerate diversity of political thought.

        Classic liberals are demonstrably much smarter, much nicer and much more environmentally sensitive.

        https://watertechbyrie.files.wordpress.com/2014/06/blue-rose3-e1418789192370.jpg

      • Going from 2001 or 2002 cherry-picks a highpoint to highpoint trend.

        The numerical analysis suggests the new multi-decadal climate regime started in 2002. This is supported by both satellite and telescope techniques. The trend from 2002 is not meaningful in itself – other than suggesting that we are still on track for a 20 to 40 year cool Pacific regime.

        http://www.woodfortrees.org/plot/hadcrut4gl/from:1975/plot/hadcrut4gl/mean:12/from:1975/plot/hadcrut4gl/from:2002/trend

        Indeed – the same periodicities are found in NH data – suggesting an external but relatively subtle cause.

        https://watertechbyrie.files.wordpress.com/2014/06/smeed-fig-71.png

        UV modulation of the polar annular modes – in the Hale cycle – spinning up sub-polar gyres has been suggested.

        But per your request, taking out the ENSO cycle, lets start at 2000 (below).
        Still an undeniable upward trend the past 15 years…

        Only reason it is slower is because most of that period we’ve been in the Cold Phase of the PDO.

        The PDO and ENSO are part of the Pacific regime.

        ‘This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.’ http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract

        Something that has been evident for decades.

        So despite the cool phase of the PDO, we continued to warm, albeit at a slower rate. Well that is about to change, because the PDO has begun to shift in the past few months, and in response 2014 has been warming rapidly. I won’t bore you with the “2014 will be warmest year of all” hype, but it is almost certain, and with a greater likelihood of an accelerated warming trend well into the next decade at twice the rate of the previous one, similar to the 0.2 deg C/decade projection.

        Indeed – we continue to not warm, these regimes persist for 20 to 40 years in the long proxy records and there is no guarantee that climate will shift to yet warmer conditions at the next climate shift. What is being indulged in is technically known as clutching at straws with little to no scientific justification.
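        Since so much of this argument turns on where the trend line starts, a minimal sketch of that sensitivity may be useful. The series below is synthetic (a small trend plus a slow wiggle plus noise) and stands in for no real record; only the start-date dependence is the point.

        ```python
        # How sensitive a short ordinary-least-squares trend is to the chosen start year.
        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1975, 2015)
        anom = (0.015 * (years - 1975)
                + 0.1 * np.sin(2 * np.pi * (years - 1975) / 20.0)
                + rng.normal(0.0, 0.08, years.size))

        for start in (1975, 1998, 2000, 2002):
            mask = years >= start
            slope = np.polyfit(years[mask], anom[mask], 1)[0]   # deg C per year
            print(f"start {start}: trend = {slope * 10:+.3f} C/decade over {mask.sum()} yr")
        ```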

      • Max_OK, Citizen Scientist

        Delta Dawn fails to mention Classical Liberals are market worshipers who believe it’s a sin to violate free-market principles, and will insist on putting the market first regardless of the damage this may do to people. Delta Dawn sees this as proof Classical Liberals are smart. I see it as proof they are nuttier than fruit cakes.

      • Max

        “Delta Dawn fails to mention Classical Liberals are market worshipers who believe it’s a sin to violate free-market principles, and will insist on putting the market first regardless of the damage this may do to people.”

        That statement is not accurate. They believe that limited regulation is required.

        Richard

      • Max_OK, Citizen Scientist

        Exactly what limited regulations do they believe are required?

      • Max

        “When the regulation, therefore, is in support of the workman, it is always just and equitable; but it is sometimes otherwise when in favour of the masters.” – Adam Smith

        Richard

      • ‘Delta Dawn fails to mention Classical Liberals are market worshipers who believe it’s a sin to violate free-market principles, and will insist on putting the market first regardless of the damage this may do to people. Delta Dawn sees this as proof Classical Liberals are smart. I see it as proof they are nuttier than fruit cakes. Maxy

        Classic liberals in fact have a commitment to democracy and the rule of law. These are the essential freedoms long fought for and hard won. The role of government is the protection of the citizenry against the brutal – with police and armies. It includes civil defence against the ravages of nature. Laws evolve in the cut and thrust of democracy to make markets fair, to protect consumers and workers, and to protect natural environments, amongst other things. The optimum size of government to maximise economic growth is some 22% of GDP.

        In economics the role of governments is to enforce rules that maintain fair markets – and to set the pace of the market through management of interest rates.

        All this is a little bit subtle for Maxy – who is the epitome of the progressive intellect. Enough said.

        https://watertechbyrie.files.wordpress.com/2014/06/blue-rose3-e1418789192370.jpg

      • Richard,

        ‘Blue roses do not occur in nature, at least not the absolute blue roses. Roses lack the pigment that produces blue color. Our blue roses have been painstakingly created and imbued with a special meaning.

        Much like its mysterious origin, the blue rose means mystery. An appreciation for the enigmatic, the inexplicable is expressed by blue roses. A tantalizing vision that cannot be totally pinned down, a mystery that cannot be fully unraveled is the blue rose…’

        http://www.roseforlove.com/the-meanings-of-blue-roses-ezp-39

      • Max_OK, Citizen Scientist

        Delta Dawn said “Classic liberals in fact have a commitment to democracy and the rule of law.”
        _______

        Especially laws that hold down the poor. Adam Smith admitted it.

        “Civil government, so far as it is instituted for the security of property, is in reality instituted for the defence of the rich against the poor, or of those who have some property against those who have none at all.”
        Adam Smith, The Wealth of Nations: An Inquiry into the Nature & Causes of the Wealth of Nations

        Classical Liberals are cranky misfits who can’t adjust to modern times.
        Little about today’s world pleases them and they pine for an earlier era (pre-20th Century). Their chronic complaining is a drag on society, and one wishes they were living on another planet.

      • So property laws are aimed at stopping you getting robbed? What a surprise.

        Freedom can only flourish in a capitalist framework – it is the key to all strands of western economic thought throughout the past 200 years – and to the economic success of western nations.

        Classic liberals have a commitment to democracy, open and transparent access to the protection of the law and, necessarily, to the social contract that evolves in the cut and thrust of politics. Even the politics of such oblivious throwbacks as Maxy. That is the thing that distinguishes classic liberals. I’ve got an idea Maxy. Let’s vote on it.

        https://watertechbyrie.files.wordpress.com/2014/06/blue-rose3-e1418789192370.jpg

      • Max

        “Especially laws that hold down the poor. Adam Smith admitted it.”

        It must be intellectually comforting for progressives to read what is not there. Perhaps they do not lie all the time; perhaps the fondness for Marx makes them see and think things that are not there. There has to be a name for this condition! Marxian Dyslexia?

        Richard

      • Max
        Are you trying to say Smith was an advocate of what you quoted from him, or that capitalism is a proponent of what you cited Smith as saying?

      • Max_OK, Citizen Scientist

        Are you sure Classical Liberals favored the minimum wage and the occupational health and safety regulations?

      • Delta Dawn

        Like the photo. Are there blue roses in Oz? Photoshop? Like your name too.

        Richard

      • eadler2

        Thank you for the reply. I was expecting a negative critique but it appears you agree with what was written; I wasn’t trying to teach. I take your reply as a compliment to a lowly BSEE.

        Regards,

        Richard

      • Max

        I agree that intelligence can be useful, and there are many intelligent people that deserve respect. Generally, scientists stand out as honorable people that unselfishly contribute to knowledge and progress. I see this in Judith Curry and Tony Brown; they are tireless. However academics such as Ward Churchill, Bill Ayers, and Cornell West are too common.

        Just read an article that was complimentary of Samantha Power; of note was her retraction of things she wrote and said while in academia. Her statement was: “Serving in the executive branch is very different than sounding off from an academic perch.”

        Regards,

        Richard

      • rls

        Here is your thinking put in practice in the analysis of someone in all our lives.

      • Wagathon

        It is also a construct of the progressive media. Today Brian Beutler wrote a sloppy article in the New Republic titled “Obama Approach to Bush Torture Has Failed”. It is all unsubstantiated and slanted opinion written as fact. The comments at the New Republic are all negative and point to a problem for the progressive movement, and the CAGW movement. Of particular interest to me was this comment:

        “It must be hard to wake up and face another day of trying to promote the ole progressive viewpoint these days.  Still the biggest deal right now is that the country rejected progressive politics and politicians in the mid-terms.  So, like weird flak cannons throwing up pretty baubles, the left has run a string of big stories to divert and deflect attention from the mid-term loss.  We had the illegal Hispanic amnesty.  We had the talk of a phantom GOP shutdown.  We had the big climate agreement giveaway with China.  We had the Ferguson and Garner riots and looting.  We had the UVA non-rape case. And, now, the big torture program cynical self-flagellation. 

        Not one of the stories grabbed the national interest and held it.  This one won’t either.  The electorate is one stubborn group that seemingly is smarter than the Democrats Gruber deceived.  They are fed up with a bunch of white environmentalists and oppressed do-gooders running the country into ruin by ignoring the fundamentals of jobs, wage growth, and personal and family security in favor of reduced coal burning, reduced oil drilling, no oil pipelines, no fracking, endangered species over the human kind, and attempts to control the long-term climate at tremendous cost to the citizenry.

        So, happy story-telling, Brian.  This will probably occupy the leftist readership here for a few days and keep them distracted from the dissolution of their party…  Run Liz, Run!”

        Keep yea warm,

        Richard

      • rls

        Whoa! What could possibly be added to that? Think how bad off the left would be without the MSM. Of course, the people of the MSM came right out of the left, as did most of the education establishment. If it weren’t for reality, “conservatives” (def. anyone not in the left) would be in a world of hurt.

      • Is threading broken?

        The AGW denier establishment is built on opposition to government regulation by right wing libertarians and the corporate big energy establishment that funds Heartland, Heritage, Cato and other so-called right wing “think tanks”. They invented phoney science to oppose regulation of tobacco, sulphate pollution, and GHG emissions.

        http://www.cbsnews.com/news/nyff-review-the-documentary-merchants-of-doubt/

        Your post is nonsense. You cannot determine the temperature of a surface receiving radiation solely by the incoming solar radiation flux. You must also look at the radiation arriving at the surface from other sources, and the flux of radiation leaving the surface from all sources. One estimate of the earth’s radiation budget shows that the earth’s surface is gaining energy at a rate of 0.9 W/m2. Most of this flows into the ocean, but some of it heats up the surface and the atmosphere.

        http://www.windows2universe.org/earth/Atmosphere/earth_radiation_budget.html

      • eadler2, “One estimate of the earth’s radiation budget shows that the earth’s surface is gaining energy at a rate of 0.9W/M2 ”

        Yep that would be one estimate.

        http://curryja.files.wordpress.com/2012/11/stephens2.gif

        0.6 Wm-2 is another and I believe there are some as low as 0.32 Wm-2.

        That 0.6 estimate by Stephens et al. is pretty interesting with that +/-17.0 Wm-2 uncertainty at the “surface”. What was your point again?

      • James Hansen quotes a value of 0.58 +/- 0.15 W/M2, during a period of solar minimum.
        http://www.giss.nasa.gov/research/briefs/hansen_16/
        “However, given that the imbalance of 0.58±0.15 W/m2 was measured during a deep solar minimum, it is probably necessary to increase radiation to space by closer to 0.75 W/m2, which would require reducing CO2 to ~345 ppm, other forcings being unchanged. Thus the Earth’s energy imbalance confirms an earlier estimate on other grounds that CO2 must be reduced to about 350 ppm or less to stabilize climate (Hansen et al., 2008).”

        I don’t know where you get such a large uncertainty from Stephens et al.
        The paper of his that I found,
        http://www.clas.ufl.edu/users/prwaylen/GEO2200%20Readings/Readings/Radiation%20balance/An%20update%20on%20Earth's%20energy%20balance%20in%20light%20of%20latest%20global%20observations.pdf

        shows a diagram that gives a value of 0.6 ± 0.4 W/m2. The text says that using ocean heating observations, the error in the satellite observations can be drastically reduced to the ±0.4 W/m2 that he quoted.

        “…This suggests that the intrinsic precision of CERES is able to resolve the small imbalances on interannual timescales12,16, thus providing a basis for constraining the balance of the measured radiation fluxes to time-varying changes in OHC (Supplementary Information). The average annual excess of net TOA radiation constrained by OHC is 0.6±0.4 Wm–2 (90% confidence) since 2005 when Argo data14 became available, before which the OHC data are much more uncertain14. The uncertainty on this estimated imbalance is based on the combination of both the Argo OHC and CERES net flux data16….”

        Is there a scientific reason that you and Dr Curry chose to ignore this information and quote a much larger uncertainty?
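        For readers keeping track of the units in this exchange: the top-of-atmosphere imbalance is quoted in W/m2, while ocean heat content changes are usually quoted in ZJ per year, and the two are linked by nothing more than the earth’s surface area and the length of a year. A minimal conversion sketch with round constants (the 0.6 W/m2 figure is the Stephens et al. value quoted above; the 8 ZJ/yr is just an example input):

        ```python
        # Back-of-envelope link between a global-mean flux imbalance (W/m^2) and an
        # ocean-heat-content tendency (ZJ/yr). Constants are approximate.
        EARTH_AREA = 5.10e14        # m^2
        SECONDS_PER_YEAR = 3.156e7
        ZJ = 1e21                   # joules per zettajoule

        def imbalance_to_zj_per_year(flux_w_m2):
            return flux_w_m2 * EARTH_AREA * SECONDS_PER_YEAR / ZJ

        def zj_per_year_to_imbalance(zj_per_year):
            return zj_per_year * ZJ / (EARTH_AREA * SECONDS_PER_YEAR)

        print(imbalance_to_zj_per_year(0.6))    # ~9.7 ZJ/yr
        print(zj_per_year_to_imbalance(8.0))    # ~0.5 W/m^2
        ```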

        Temperature is an intensive variable: a global temperature does not actually exist, and an average global temperature is a meaningless concept. A free people must learn to see and refuse to be led by liars. Global warming is nothing but a hoax and scare tactic: an unprovable hypothesis that is useful in the Left’s propaganda war against Americanism. The Left would embrace the use of nuclear energy to generate electricity if they really believed burning fossil fuels caused climate change.

        eadler2, Stephens et al. have both TOA and “surface” uncertainties. Since “surface” temperature is the metric used for “sensitivity”, you need both to determine impact. All that land amplification, polar amplification and ocean heat uptake efficiency change tends to complicate things.

        If you would like a “scientific” reason,

        https://lh6.googleusercontent.com/-0Apkk3K8V6M/VJIY-EDzLAI/AAAAAAAAL7o/j9nGC6DMxfM/w775-h473-no/so%2Bfar%2Bthis%2Bcentury.png

        There is a comparison of a few temperature series with CMIP5 model projections. It is fairly obvious there is a slight divergence. Stephens et al. has ±0.4 at the TOA and ±17 at the “surface”, which is more believable than ±0.15 based on models that are currently diverging from observational reality.

      • SkepticGoneWild

        Eadler2 stated:

        “James Hansen quotes a value of 0.58 +/- 0.15 W/M2, during a period of solar minimum

        Have you read the latest Hansen paper? I guess not.

        James Hansen published a peer review paper entitled:

        “Earth’s energy imbalance and implications (2011)”

        http://www.atmos-chem-phys.net/11/13421/2011/acp-11-13421-2011.pdf

        Hansen starts out describing the greenhouse effect:

        “The basic physics underlying this global warming, the greenhouse effect, is simple. An increase of gases such as CO2 makes the atmosphere more opaque at infrared wavelengths. This added opacity causes the planet’s heat radiation to space to arise from higher, colder levels in the atmosphere, thus reducing emission of heat energy to space. The temporary imbalance between the energy absorbed from the Sun and heat emission to space, causes the planet to warm until planetary energy balance is restored.”

        So we have this alleged energy imbalance that is causing the planet to warm. The scientific method requires that one conduct experiments or perform measurements to confirm a hypothesis. So we have satellites that measure EM radiation.

        Hansen goes on to state:

        “Earth’s energy imbalance and its changes will determine the future of Earth’s climate. It is thus imperative to measure Earth’s energy imbalance and the factors that are changing it.”

        This is true. Because if you don’t measure the imbalance, you have not confirmed the hypothesis.

        Hansen goes on further:

        “The required measurement accuracy is ∼ 0.1 W m−2, in view of the fact that estimated current (2005–2010) energy imbalance is 0.58 W m-2″.

        Hansen then begins to admit the problems with performing the measurements.

        “The difficulty with the satellite approach becomes clear by considering first the suggestion of measuring Earth’s reflected sunlight and emitted heat from a satellite at the Lagrange L1 point, which is a location between the Sun and Earth at which the gravitational pulls from these bodies are equal and opposite. From this location the satellite would continually stare at the sunlit half of Earth. The notion that a single satellite at this point could measure Earth’s energy imbalance to 0.1 W m −2 is prima facie preposterous. Earth emits and scatters radiation in all directions, i.e., into 4π steradians. How can measurement of radiation in a single direction provide a proxy for radiation in all directions?”

        Hansen goes on further:

        “It is implausible that changes in the angular distribution of radiation could be modeled to the needed accuracy, and the objective is to measure the imbalance, not guess at it. There is also the difficulty of maintaining sensor calibrations to accuracy 0.1 W m−2, i.e., 0.04 percent. That accuracy is beyond the state-of-the art, even for short periods, and that accuracy would need to be maintained for decades”

        The above satellite was in a fixed location. Hansen goes on:

        “These same problems, the changing angular distribution of the scattered and emitted radiation fields and maintaining extreme precision of sensors over long periods, must be faced by Earth-orbiting satellites”

        He then states:

        “The precision achieved by the most advanced generation of radiation budget satellites is indicated by the planetary energy imbalance measured by the ongoing CERES (Clouds and the Earth’s Radiant Energy System) instrument (Loeb et al., 2009), which finds a measured 5-yr-mean imbalance of 6.5 W m−2 (Loeb et al., 2009). Because this result is implausible, instrumentation calibration factors were introduced to reduce the imbalance to the imbalance suggested by climate models, 0.85 W m-2″

        Do you see what they did?!!! The CERES satellite measured an energy imbalance of 6.5 W m−2. The value suggested by climate models is 0.85 W m-2. The measurement error is over 600%. So what do they do? They introduce a phony instrument “calibration” to make their measurement read what the climate models suggest. What the hell?? Putting your thumb on the scale to get the measurement you want? This is on top of the problem that the satellites cannot even provide the required accuracy. I mean who is fooling who? This is science?
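        The scale problem behind that complaint can be seen with simple arithmetic: the imbalance is a fraction of a W/m2 sitting on fluxes of roughly 240 W/m2, so even tiny fractional calibration errors swamp it, which is what Hansen’s 0.04 percent figure refers to. A rough sketch with round numbers (the 0.6 W/m2 imbalance is illustrative):

        ```python
        # Why an absolute satellite measurement of a ~0.5-1 W/m^2 imbalance is so hard:
        # it is the small difference of two large fluxes. Round illustrative values.
        absorbed_solar = 240.6   # W/m^2, global-mean absorbed sunlight (illustrative)
        outgoing_lw = 240.0      # W/m^2, chosen to leave a 0.6 W/m^2 imbalance

        imbalance = absorbed_solar - outgoing_lw
        for cal_err_pct in (0.04, 0.5, 1.0):
            err = outgoing_lw * cal_err_pct / 100.0
            print(f"{cal_err_pct:4.2f}% calibration error = {err:4.2f} W/m^2 "
                  f"vs a {imbalance:.1f} W/m^2 imbalance")
        ```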

      • It is hard to continue reading much less take seriously someone who starts with the 97% meme.

      • O that 1997 meme! Put the bleme on meme, boys!

      • Response ter PMHinSC gone wild.

      • Yeah, anyone who buys the 97% line either isn’t reading some very basic stuff they really need to read, or doesn’t give a damn about the facts, only a ‘higher truth’.

      • Please give the details about the basic stuff I need to read. If I am not familiar with it, I promise to learn about it. I am quite capable of doing so, because I have a PHD in Physics.

      • eadler2, “Please give the details about the basic stuff I need to read. If I am not familiar with it, I promise to learn about it. I am quite capable of doing so, because I have a PHD in Physics.”

        It seems to me that climate science shelf life is down to around 5 years for a lot of things. I am more into paleo so I stop by Climate of the Past quite often to try and find current work.

        http://www.clim-past.net/volumes_and_issues.html

        It is open access and has online discussions during peer review.

        In Judith’s side bar are quite a few online references that are great and I highly recommend Issac Held’s blog.

        Climate Dialog has an interesting blog format and tends to focus on current “controversies” which can be illuminating, also on Judith’s sidebar.

        Science of Doom also has a lot of good posts. For the energy budgets I noticed a few errors; one was the water vapor continuum, which IMO is a big factor to consider for impact.

        And I dig up current theses/papers dealing with subjects I expect are in need of some touch up, like cloud modeling, especially mixed phase clouds, which are a huge source of uncertainty.

      • We won’t hold that against you

      • SkepticGoneWild

        O M G. Tyndall did not measure “absorptivity”. Based upon his 1861 paper and the description of his apparatus, what he measured was “opacity”, not absorption. Secondly, Tyndall held to the discredited theory of aethereal heat transfer. Sure, there were estimates of climate sensitivity in 1896, but these were just hypotheses that were never rigorously tested per the tenets of the scientific method.

        One of the alleged pioneers of AGW theory is Guy S. Callendar. He published a paper in 1937 entitled:

        “THE ARTIFICIAL PRODUCTION OF CARBON DIOXIDE AND ITS INFLUENCE ON TEMPERATURE” [//onlinelibrary.wiley.com/doi/10.1002/qj.49706427503/pdf]

        Of course he predicted that global temperatures would rise with increasing CO2 concentrations.

        At the conclusion of his paper, he stated:

        “The course of world temperature during the next twenty years should afford valuable evidence as to the accuracy of the calculated effect of atmospheric carbon dioxide.” – G.S. Callendar, 1937

        To his credit, his statement accurately reflects the requirements of the scientific method, something that is completely missing in the modern world of climate pseudo-science. But back to his paper. Let’s look at the “valuable evidence” that would reflect the “accuracy” of the calculated effect of increasing CO2 per Callendar’s hypothesis. Here are global temperatures from 1937 to 1957:

        http://woodfortrees.org/plot/hadcrut4gl/from:1937/to:1957/mean:12/plot/hadcrut4gl/from:1937/to:1957/trend

        Hey wait a minute! Let’s give the “Guy” a break. He really meant 30 years!

        http://woodfortrees.org/plot/hadcrut4gl/from:1937/to:1967/mean:12/plot/hadcrut4gl/from:1937/to:1967/trend

        40 years?

        //woodfortrees.org/plot/hadcrut4gl/from:1937/to:1977/mean:12/plot/hadcrut4gl/from:1937/to:1977/trend

        As the great physicist Richard Feynman stated:

        “It does not make any difference how smart you are, who made the guess, or what his name is – if it [the hypothesis] disagrees with experiment it is wrong.”

        Callendar’s AGW hypothesis examined a time period when CO2 increases were marginal. The 20 to 40 year period after his paper was published saw marked increases in CO2 concentrations, yet global temperatures fell.

        It does not matter what is “universally accepted”. The scientific method does not concern itself with the “consensus”. No, the burden of proof remains with scientists proposing their hypotheses. And so far their hypotheses do not hold up under the rigors of the scientific method.

      • Thanks SGW, great post! I like your style too.

      • You are silly. Opacity is the result of absorption. It doesn’t matter that the existence of an aether as a medium of transmission of electromagnetic waves was a popular theory at the time. It did not invalidate the measurements that Tyndall made or the conclusions that he drew.

        The fact that Callendar didn’t make accurate predictions in the late 1930’s doesn’t invalidate the GHG theory. There are many other factors involved in the evolution of the climate that he would have had to know about in order to make accurate predictions.

        http://www.newscientist.com/article/dn11639-climate-myths-the-cooling-after-1940-shows-co2-does-not-cause-warming.html#.VJHiadK1yjc

        “After rising rapidly during the first part of the 20th century, global average temperatures did cool by about 0.2°C after 1940 and remained low until 1970, after which they began to climb rapidly again.

        The mid-century cooling appears to have been largely due to a high concentration of sulphate aerosols in the atmosphere, emitted by industrial activities and volcanic eruptions. Sulphate aerosols have a cooling effect on the climate because they scatter light from the Sun, reflecting its energy back out into space.

        The rise in sulphate aerosols was largely due to the increase in industrial activities at the end of the second world war. In addition, the large eruption of Mount Agung in 1963 produced aerosols which cooled the lower atmosphere by about 0.5°C, while solar activity levelled off after increasing at the beginning of the century..”

        You would like to discredit these guys, but your arguments are irrelevant and failures in reasoning.

      • The burden of proof is on the deniers, and they have not been able to meet it.

        Are you channeling Trenberth?

      • The apparent lack of a proximate cause behind the halt in warming post 2001/02 challenges our understanding of the climate system, specifically the physical reasoning and causal links between longer time-scale modes of internal climate variability and the impact of such modes upon global temperature. ~Swanson and Tsonis

        –e.g., our fixation on CO2 clouds our vision but to get this research funded I still must acknowledge that despite such uncertainty we still take as an article of faith that whatever is happening, America’s CO2 is forcing it to happen and the consequences for all humanity will be dire indeed.

      • These abrupt changes often do not last long. To quote the authors,
        “While in the observations such breaks in temperature trend are clearly superimposed upon a century time-scale warming presumably due to anthropogenic forcing, those breaks result in significant departures from that warming over time periods spanning multiple decades.”

        Of course if people don’t want to see the forest and prefer looking at the trees, because it interests them more, that is OK. That doesn’t mean the forest has disappeared.

      • Pinky and The Brain

        ‘A vigorous spectrum of interdecadal internal variability presents numerous challenges to our current understanding of the climate. First, it suggests that climate models in general still have difficulty reproducing the magnitude and spatiotemporal patterns of internal variability necessary to capture the observed character of the 20th century climate trajectory. Presumably, this is due primarily to deficiencies in ocean dynamics. Moving toward higher resolution, eddy resolving oceanic models should help reduce this deficiency. Second, theoretical arguments suggest that a more variable climate is a more sensitive climate to imposed forcings (13). Viewed in this light, the lack of modeled compared to observed interdecadal variability (Fig. 2B) may indicate that current models underestimate climate sensitivity. Finally, the presence of vigorous climate variability presents significant challenges to near-term climate prediction (25, 26), leaving open the possibility of steady or even declining global mean surface temperatures over the next several decades that could present a significant empirical obstacle to the implementation of policies directed at reducing greenhouse gas emissions (27). However, global warming could likewise suddenly and without any ostensive cause accelerate due to internal variability. To paraphrase C. S. Lewis, the climate system appears wild, and may continue to hold many surprises if pressed.’ http://www.pnas.org/content/106/38/16120.full

        http://www.pnas.org/content/106/38/16120/F3.large.jpg

        You forgot the most important bit. Confusing ain’t it?

        The rate of warming in the 20th century was 0.07 K/decade and hardly likely – all things being equal – to continue at that extreme rate in the 21st, after a hiatus of 20 to 40 years perhaps, as natural variability shifts to cooler after a 1000 year high. I would pretty much count on all other things not being equal.

      • Pinky, There is an Indian Ocean temperature reconstruction that appears to be fairly accurate that you might want to look at.

        https://lh3.googleusercontent.com/-Z_Bt6YEkLjA/VEqqqJ81jBI/AAAAAAAALo4/lCQvF-e8tRM/w577-h344-no/oppo%2Bsmooth.png

        I have compared it with a number of things, but this one that extends the current rate of ocean warming to 700 meters is one of my faves.

        https://lh5.googleusercontent.com/-6Tf2glKTcu0/VG330klOFZI/AAAAAAAALxM/M60OPJJ6ITs/w858-h528-no/curry%2Btalk.png

      • Pinky and The Brain

        ‘The global climate system is composed of a number of subsystems – atmosphere, biosphere, cryosphere, hydrosphere and lithosphere – each of which has distinct characteristic times, from days and weeks to centuries and millennia. Each subsystem, moreover, has its own internal variability, all other things being constant, over a fairly broad range of time scales. These ranges overlap between one subsystem and another. The interactions between the subsystems thus give rise to climate variability on all time scales.’ http://web.atmos.ucla.edu/tcd/PREPRINTS/Ghil-A_Met_Soc_refs-rev%27d_vf-black_only.pdf

        Actually what they suggest is abrupt shifts between climate states as the internal components realign. Emphatically not cycles.

        The US National Academy of Sciences (NAS) defined abrupt climate change as a new climate paradigm as long ago as 2002. A paradigm in the scientific sense is a theory that explains observations. A new science paradigm is one that better explains data – in this case climate data – than the old theory. The new theory says that climate change occurs as discrete jumps in the system. Climate is more like a kaleidoscope – shake it up and a new pattern emerges – than a control knob with a linear gain.

        The theory of abrupt climate change is the most modern – and powerful – in climate science and has profound implications for the evolution of climate this century and beyond. The system is pushed by changes in greenhouse gases, solar intensity or orbital eccentricity. The climate response is internally generated – with changes in cloud, ice, dust and biology – and proceeds at a pace determined by the system itself. The old theory of climate suggests that warming is inevitable. The new theory suggests that global warming is not guaranteed and that climate surprises are inevitable.

        ‘Recent scientific evidence shows that major and widespread climate changes have occurred with startling speed. For example, roughly half the north Atlantic warming since the last ice age was achieved in only a decade, and it was accompanied by significant climatic changes across most of the globe. Similar events, including local warmings as large as 16°C, occurred repeatedly during the slide into and climb out of the last ice age. Human civilizations arose after those extreme, global ice-age climate jumps. Severe droughts and other regional climate events during the current warm period have shown similar tendencies of abrupt onset and great persistence, often with adverse effects on societies.’ http://www.nap.edu/openbook.php?record_id=10136&page=1

        There are obviously scientists who do have a clue – unfortunately the same cannot be said of the clueless camp followers who merely imagine that they have the inside track on climate science.

      • Pinky,
        It is clear that you are confused, because you want to be.

        Looking at the graph you provide, it seems silly to characterize the increase in temperature during the 20th century as a single trend characterized by a single number, 0.07 C/decade. Trend analysis shows breakpoints at 1912, 1940 and 1970. We understand what the principal drivers of these trends are. There is no mystery here. Increases in solar irradiance drove the rapid increase in temperature between 1912 and 1940.
        http://www.biocab.org/Solar_Irradiance_English.jpg

        Solar irradiance topped out around 1940. Sulphate aerosols and CO2 have opposite effects on energy uptake by the earth, and the temperature increase pretty much flattened out until the 1970’s, when CO2 continued to increase while sulphate aerosols topped out and were reduced by pollution controls in the industrialized countries, and global warming took off.
        http://tamino.wordpress.com/2010/08/23/antrhopogenic-global-cooling/

        The best estimate of the trend since then is 0.175 ± 0.045 C/decade.
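        As an illustration of the segment-wise trend idea invoked here, a minimal sketch that fits a separate ordinary-least-squares slope between assumed breakpoints. The series is synthetic and the breakpoint years are simply taken from the comment above, not re-derived:

        ```python
        # Fit a separate OLS slope to each interval between assumed breakpoints.
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1880, 2014)
        anom = np.cumsum(rng.normal(0.005, 0.05, years.size))   # stand-in anomaly series

        breaks = [1880, 1912, 1940, 1970, 2013]
        for lo, hi in zip(breaks[:-1], breaks[1:]):
            mask = (years >= lo) & (years <= hi)
            slope = np.polyfit(years[mask], anom[mask], 1)[0]
            print(f"{lo}-{hi}: {slope * 10:+.3f} C/decade")
        ```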

        To people with a control system background – the oceans’ high heat capacity acts like integral feedback.

        The 20th century insolation increase looks like a step change in forcing.

        The response of a system with a lot of integral feedback to a step change is a ramp.

        The high thermal inertia of the oceans means there is a slow climb in temperature to the new equilibrium point.

        That explains most of the 20th century climate.
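        A minimal zero-dimensional sketch of that argument – a single well-mixed heat reservoir obeying C dT/dt = F - lambda*T – shows the slow, ramp-like approach to a new equilibrium after a step in forcing. The parameter values are illustrative only, not a claim about the real ocean:

        ```python
        # Toy reservoir: C dT/dt = F - LAM*T, integrated with daily Euler steps.
        C = 8.0e8          # effective heat capacity, J m^-2 K^-1 (~200 m of ocean, rough)
        LAM = 1.2          # feedback parameter, W m^-2 K^-1 (illustrative)
        F = 1.0            # step increase in forcing, W m^-2
        DT = 86400.0       # time step, s (one day)

        T = 0.0
        for block in range(6):                 # report every 20 years
            print(f"year {block * 20:3d}: T = {T:.2f} K (equilibrium {F / LAM:.2f} K)")
            for _ in range(20 * 365):
                T += DT * (F - LAM * T) / C
        ```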

        The previous post was addressing eadler2’s TSI graph.

      • PA,
        The situation you describe in control system theory doesn’t seem to apply to the reaction of the earth’s climate to changes in solar irradiation.

        The problem with your claim is that the reaction of the climate system does not have the delay that you claim in response to changes in solar irradiation. The response of temperature to the eleven year oscillations in solar irradiation associated with sunspots is almost immediate.

        What you have is a 30 year ramp-up of solar irradiance followed by a 60 year plateau in average solar irradiation, with the 11 year oscillations of solar irradiance associated with sunspots riding on top of that. The response of the climate system to anthropogenic sources, solar irradiance and volcanic eruptions has been studied in the following paper by Judith Lean. Check out Fig. 5 of the paper:
        http://www.agci.org/docs/lean.pdf

        The long delays in response to solar irradiance that you claim, which would cause a ramp lasting many decades in response to a step change, are not found.
        The increase in global surface temperature in the first half of the 20th century was about 50% due to solar and 50% due to anthropogenic factors.
        Since 1970, 100% of the increase in temperature was due to anthropogenic factors.
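        Both sides here are implicitly arguing about time constants, so a sketch may help. Driving the same kind of toy reservoir used above (C dT/dt = F - lambda*T) with an 11-year sinusoidal forcing shows how the amplitude of the periodic response depends on the assumed heat capacity; this is only an illustration of the scaling with made-up parameters, not a resolution of the disagreement:

        ```python
        # Response of a toy heat reservoir to an 11-year sinusoidal forcing for two
        # assumed effective depths. Illustrative parameters only.
        import numpy as np

        LAM = 1.2                       # W m^-2 K^-1 (illustrative)
        AMP = 0.12                      # W m^-2, rough amplitude of solar-cycle forcing
        PERIOD = 11.0 * 3.156e7         # s
        DT = 86400.0                    # s

        for depth_m, label in ((50.0, "shallow mixed layer"), (2000.0, "deep ocean column")):
            C = 4.2e6 * depth_m         # J m^-2 K^-1 for a water column of that depth
            t = np.arange(0.0, 10 * PERIOD, DT)
            T = np.zeros(t.size)
            for i in range(1, t.size):
                F = AMP * np.sin(2 * np.pi * t[i - 1] / PERIOD)
                T[i] = T[i - 1] + DT * (F - LAM * T[i - 1]) / C
            peak = T[t.size // 2:].max()    # peak response after the transient dies out
            print(f"{label:20s}: peak response ~{peak * 1000:.1f} mK to a {AMP} W/m^2 cycle")
        ```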

      • eadler2 commented

        Since 1970, 100% of the increase in temperature was due to anthropogenic factors

        Nonsense, almost all of the changes since 1970 are regional changes to Min temp.
        http://content.science20.com/files/images/GB%20Mn%20Mx%20Diff_1.png
        Diff is the annual average of the day over day changes in Min temp, and MXDiff is the annual average of the day over day changes in Max temp.
        Northern Hemisphere Continents
        http://content.science20.com/files/images/Northern%20Continents_0.png
        Southern Hemisphere Continents
        http://content.science20.com/files/images/Southern%20Continents_0.png

      • Micro wrote:

        http://judithcurry.com/2014/12/15/will-a-return-of-rising-temperatures-validate-the-climate-models/#comment-656085

        “Nonsense, almost all of the changes since 1970 are regional changes to Min temp.”

        If what you say is true, you are confirming the theory that GHG’s are the source of warming. When the sun is no longer shining the surface of the earth will cool because upward radiation is continuing, and IR is taking energy from the earth into outer space. GHG’s hinder the progress of this radiation into outer space directly from the warm surface of the earth, absorbing and radiating 1/2 of the absorbed energy back toward the earth’s surface. More GHG’s in the atmosphere would increase the surface temperatures at night. This was understood in 1828 by Joseph Fourier, and verified by John Tyndall’s experiments with IR absorption in 1859.

      • eadler2 commented on

        More GHG’s in the atmosphere would increase the surface temperatures at night. This was understood in 1828 by Joseph Fourier, and verified by John Tyndall’s experiments with IR absorption in 1859.

        Except when you look at yesterday’s temp rise and compare it to last night’s falling temps, you get something like this:
        YEAR RISING FALLING
        1950 18.92050989 19.04083917
        1951 18.57146076 18.5824181
        1952 18.13525869 18.31130786
        1953 17.82293032 17.85546676
        1954 17.37874226 17.54219243
        1955 17.35886761 17.41322352
        1956 17.49403237 17.5128109
        1957 17.32463934 17.48400653
        1958 17.8007922 17.82826202
        1959 17.42684609 17.59911514
        1960 17.44499532 17.63915625
        1961 17.79018057 17.95813771
        1962 18.0106431 18.27484099
        1963 18.36431864 18.52935222
        1964 17.6737505 17.78752672
        1965 16.82202477 17.08854006
        1966 17.09444771 17.27294385
        1967 17.08224932 17.25138179
        1968 17.1355002 17.20610306
        1969 17.45810359 17.62495587
        1970 17.91987811 18.06958317
        1971 17.14239614 17.12400035
        1972 17.28088128 17.27055483
        1973 17.40699643 17.74821952
        1974 17.34863356 17.56297915
        1975 17.64498828 17.76143356
        1976 17.76924206 18.01606395
        1977 17.63217802 17.85530611
        1978 17.41397723 17.69134026
        1979 17.60990484 17.83886668
        1980 17.47200107 17.76277996
        1981 17.57813869 17.80495907
        1982 17.2756724 17.37584863
        1983 17.2251366 17.28294307
        1984 17.22913325 17.26005321
        1985 17.27491182 17.34171981
        1986 17.19416256 17.34655525
        1987 17.20772097 17.28463402
        1988 17.48544416 17.52294106
        1989 17.57343029 17.62143428
        1990 17.41235551 17.48392811
        1991 16.75187665 16.86799808
        1992 16.68316131 16.83950923
        1993 16.83427424 17.01873129
        1994 17.42155322 17.55040208
        1995 17.14702755 17.23301663
        1996 17.05783232 17.1198493
        1997 17.16277616 17.19990807
        1998 17.08504304 17.14207068
        1999 17.61853025 17.66280313
        2000 17.5794153 17.64004889
        2001 17.87340637 17.89497488
        2002 17.70576698 17.75310414
        2003 17.96045386 17.99200218
        2004 17.70235463 17.73471391
        2005 17.45612756 17.46222204
        2006 17.87285698 17.91873944
        2007 17.93335232 17.95839775
        2008 17.90006438 17.91606502
        2009 17.93078087 17.94478324
        2010 17.62333747 17.64184523
        2011 17.92601114 17.93265255
        2012 18.04235465 18.08422982
        2013 17.96744525 17.97490912

        Overall average: Rise = 17.51624357 F, Fall = 17.61363578 F

        So, there’s been zero loss of nightly cooling since the 50’s as a minimum.
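        One plausible reading of the rise/fall metric tabulated above is the daytime rise as Tmax minus that morning’s Tmin, and the nightly fall as Tmax minus the following morning’s Tmin, averaged over each year. That definition is my assumption rather than the commenter’s documented method, and the data in this sketch are synthetic:

        ```python
        # Day-over-day rise/fall metric on synthetic daily Tmin/Tmax (degrees F).
        import numpy as np

        rng = np.random.default_rng(2)
        days = 365
        tmin = 40.0 + 15.0 * np.sin(2 * np.pi * np.arange(days) / 365.0) + rng.normal(0, 3, days)
        tmax = tmin + 18.0 + rng.normal(0, 3, days)

        daytime_rise = tmax[:-1] - tmin[:-1]        # today's warm-up
        nightly_fall = tmax[:-1] - tmin[1:]         # cooling into the next morning

        print(f"mean rise {daytime_rise.mean():.2f} F, mean fall {nightly_fall.mean():.2f} F")
        ```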

      • eadler2, Almost immediate? Really?

        There is an “almost immediate” atmospheric response with a roughly 90 day lag; then there is a roughly 27 month lagged response, which would be the coupled atmosphere/ocean response related to the QBO; then there is a roughly 8.5 to 10 year response, which would be ocean basin circulation related. Then there is the more mysterious THC trying to equalize the hemispheres over not very well known periods. Lots of fluid dynamics.

        To make things interesting, the northern hemisphere ocean/land ratio is different than the southern hemisphere ocean/land ratio and land tends to amplify SST variations by an average factor of two. So to compare the “oscillations” you may have to resort to a bit of data torture by “normalizing” to standard deviation if you want to “see” some of the lagged relationships.

        https://lh6.googleusercontent.com/-SjUFfiUWGD4/VHe_kRL5NVI/AAAAAAAAL3s/UF17ZrxKO0Y/w671-h443-no/tmin%2Bmax%2Band%2Bsst%2Bcasacade.png

        That is just “global” SST with HadCRUT Tmax and Tmin. The rather large sawtooth would be an ocean-related “oscillation” that would impact land surface temperatures.

        It is a lot more interesting puzzle than you make it out to be :)
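        For what the “normalizing to standard deviation” step looks like in practice, here is a minimal sketch: z-score two series and scan a lagged correlation. Both series are synthetic stand-ins, and the 24-month lag is baked in purely so there is something to find:

        ```python
        # Z-score two series and scan the cross-correlation over a range of lags.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 600                                       # e.g. 50 years of monthly values
        driver = np.cumsum(rng.normal(0, 0.1, n))     # slowly varying "ocean" series
        response = np.roll(driver, 24) * 2.0 + rng.normal(0, 0.3, n)   # lagged, amplified

        def zscore(x):
            return (x - x.mean()) / x.std()

        a, b = zscore(driver), zscore(response)
        corrs = [np.corrcoef(a[:n - lag], b[lag:])[0, 1] for lag in range(1, 61)]
        best = int(np.argmax(corrs)) + 1
        print(f"best lag ~{best} months, correlation {corrs[best - 1]:.2f}")
        ```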

      • eadler2

        Here is just the ocean using ERSSTv3b for 90S-30S, 30S-30N and 30N-90N normalized but not detrended. If there were no significant lags everything would be like the blue curve. There is a big difference between the below 30S dynamics and the above 30N dynamics so there are different lags.

        https://lh3.googleusercontent.com/-zMLEba1gCCw/VJGsRIX0liI/AAAAAAAAL7U/47RRZuYzmOs/w784-h498-no/Ocean%2Bregions%2Bnorm%2Bsd.png

        Sea ice dynamics in the NH makes things more interesting, but the SH has some sea ice dynamics as well that can tend to vary the Antarctic Circumpolar Current (ACC); both tend to impact the thermohaline circulation (THC).

        Because of the THC, there are weakly damped “oscillations” that can last a few centuries. With land amplification and also polar amplification to consider.

        https://lh6.googleusercontent.com/-Ld74qLnVG6Q/VAtVsQrEKZI/AAAAAAAALfU/HylTK4m3bUI/w709-h471-no/co2%2Btemp%2Band%2Boppo.png

        Kinda fun stuff.

      • “Kinda fun stuff.”
        Kinda irrelevant. The graphs I showed were average global temperatures. The fact that heating of the earth is not uniform is well known to scientists. In fact the Arctic on average has been heating twice as fast as the rest of the earth, so what? We know that the sun has not been the cause of global warming since 1970. Its activity has remained relatively level since 1940, and even if there are changes in conditions that are delayed 8.5 to 10 years, it is generally agreed by climate scientists, and shown in the latest IPCC AR5 report, that GHG’s are responsible for 100% of global warming since 1970.
        http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/sep/27/global-warming-ipcc-report-humans

      • eadler2, “We know that the sun has not been the cause of global warming since 1970.”

        If the “We” you are referring to are climate modelers, then “we” are clueless. Before the current generation of climate models there was Milankovic with his orbital cycle theory. With December maximum solar insolation and the southern hemisphere having the majority of ocean surface, there is no reason to believe that some portion of the TOA imbalance is not due to the current orbital position. Using 65N maximum solar insolation for glacial melt, which is mainly a NH situation, doesn’t account for the opposite situation, where there is increased ocean heat to provide the energy for the snows required to create the glacial mass to begin with.

        https://lh5.googleusercontent.com/-6Tf2glKTcu0/VG330klOFZI/AAAAAAAALxM/M60OPJJ6ITs/w858-h528-no/curry%2Btalk.png

        The current rate of ocean heat uptake is perfectly consistent with that theory and recovery from a little ice age period as far as ocean heat content goes. Most of climate “sensitivity” is due to snow field/ glacial albedo and as that goes to a minimum so does “global” sensitivity.

        Best estimates of “sensitivity” are decreasing; I am afraid you will have to learn to live with that. You will also need to learn that “since <some cherry-picked date>” isn’t particularly “scientific” for a water world. There is a lot of thermal inertia that has to be dealt with. Or are you forgetting that “the oceans ate my warming” is one of about 50 excuses for the non-existent “pause”?

      • Basil Newmerzhycky

        “50 excuses for the non-existent “pause”

        What Pause are you referring to?
        The one that allegedly began right after 1998?
        Can’t seem to find it when a STATISTICALLY VALID trendline is drawn.

        http://www.woodfortrees.org/plot/hadcrut4gl/from:1980/mean:12/plot/hadcrut4gl/from:1999/trend

        Basil, the “pause” is a UK thing: the UK Met Office announced that at least half of the years after 2009 would be the warmest years EVAH, and a writer for the Mail published an article showing that since 1997.5 there had been no warming. Statistically, a 0.045 C/decade trend is not very impressive when the “projected” trend should have been ~0.2 C per decade.

        The hiatus, the slowdown and “the oceans ate my warming” are all related to the difference between the “projected” trends and the observational trends. The lack of anticipated warming was enough to justify a “peer reviewed” paper explaining that for any trend in the temperature records to be “significant” it had to span at least 17 years, not the previously acceptable 15 years.

        http://scholar.google.com/scholar?hl=en&q=warming+hiatus&btnG=&as_sdt=1%2C10&as_sdtp=

        That is a Google Scholar search for the term “warming hiatus”. It appears the “scientific” community recognizes there is less than anticipated warming. Are you one of those anti-science types :)
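        The “at least 17 years” point is essentially about how wide the uncertainty on a short trend is when the residuals are autocorrelated. A minimal sketch of that idea, using a lag-1 autocorrelation correction to the effective sample size – a common textbook shortcut, not the specific method of the paper being alluded to – on synthetic monthly data:

        ```python
        # OLS trend plus a rough AR(1)-adjusted confidence half-width, for windows of
        # different lengths. Synthetic data; illustrative parameters.
        import numpy as np

        rng = np.random.default_rng(4)

        def ar1_series(n, phi=0.6, sigma=0.1, trend=0.0015):
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = phi * x[i - 1] + rng.normal(0, sigma)
            return x + trend * np.arange(n)           # monthly anomalies with a small trend

        def trend_and_ci(y):
            t = np.arange(y.size)
            slope, intercept = np.polyfit(t, y, 1)
            resid = y - (slope * t + intercept)
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
            n_eff = y.size * (1 - r1) / (1 + r1)      # effective sample size under AR(1)
            se = resid.std(ddof=2) / (t.std() * np.sqrt(max(n_eff, 3.0)))
            return slope, 2.0 * se                    # slope and ~95% half-width

        for years in (10, 17, 30):
            slope, half = trend_and_ci(ar1_series(years * 12))
            print(f"{years:2d} yr: trend {slope * 120:+.3f} +/- {half * 120:.3f} C/decade")
        ```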

      • Climate Researcher 

        Go back to this comment.

      • ‘This paper provides an update to an earlier work that showed specific changes in the aggregate time evolution of major Northern Hemispheric atmospheric and oceanic modes of variability serve as a harbinger of climate shifts. Specifically, when the major modes of Northern Hemisphere climate variability are synchronized, or resonate, and the coupling between those modes simultaneously increases, the climate system appears to be thrown into a new state, marked by a break in the global mean temperature trend and in the character of El Niño/Southern Oscillation variability. Here, a new and improved means to quantify the coupling between climate modes confirms that another synchronization of these modes, followed by an increase in coupling occurred in 2001/02. This suggests that a break in the global mean temperature trend from the consistent warming over the 1976/77–2001/02 period may have occurred.’ http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/abstract

        So we have a bit of a theory – a new dynamical mechanism for major climate shifts. A change in climate states caused by abrupt changes in ocean and atmospheric circulation. It suggests you need to look at clouds.

        https://watertechbyrie.files.wordpress.com/2014/06/earthshine.jpg

        ‘Earthshine changes in albedo shown in blue, ISCCP-FD shown in black and CERES in red. A climatologically significant change before CERES followed by a long period of insignificant change.’

        It shows that there is an energy dynamic here – in addition to changes in partitioning of energy between the ocean and atmosphere.

        It suggests as well that a new cool regime is more likely than not to persist for 20 to 40 years from 2002. This is something we predicted more than a decade ago – and they still don’t quite get it.

        They insist that oceans are still warming – for instance – despite no real understanding it seems of uncertainties and discrepancies. Here’s my favourite ocean heat graph.

        http://curryja.files.wordpress.com/2014/01/presentation3.jpg

        ‘Time series of annual average global integrals of upper ocean heat content anomaly (1021 J, or ZJ) for (a) 0–100 m, (b) 0–300 m, (c) 0–700 m, and (d) 0–1800 m. Time series are shown using ZIF estimates relative to both ClimArgo (dashed grey lines) and Clim1950 (dashed black lines). Time series are also shown using REP estimate (black solid lines), which are not affected by shifts in the mean climatology (B11). Thin vertical lines denote when the coverage (Fig. 3) reaches 50% for (a) 0–100 m, (b) 100– 300 m, (c) 300–700 m, and (d) 900–1800 m.’

        The 1990’s spike was caused by less cloud – and the early 2000’s spike is a data splicing artifact.

      • … my brain has started to swell… Maxy

        My brain? That’s my second favourite organ.

        https://watertechbyrie.files.wordpress.com/2014/06/blue-rose3-e1418789192370.jpg

      • Max_OK, Citizen Scientist

        I look forward to Viagra for the brain.

      • Basil Newmerzhycky

        Unfortunately for the authors of that 5-year old paper, time has already proved them wrong.

        They stated:
        “break in the global mean temperature trend from the consistent warming over the 1976/77–2001/02 period may have occurred”

        They wrote that paper in 2009 (off of data ending in 2008), and since then 2009, 2010 and even 2012-2013 were warmer years, and 2014 is about to become the warmest of all.

        Try Again.

      • Basil Newmerzhycky

        “Statistically, a 0.045 trend is not very impressive when the “projected” trend should have been ~0.2C per decade.”

        Captdallas, the actual rate of change from 1999 thru 2013 was about 0.18 deg C…basically a decadal rate of +0.11 deg C/decade based on NASA/GISS data.

        http://www.woodfortrees.org/plot/gistemp/from:1980/mean:12/plot/gistemp/from:1999/trend

        The HADCRUT4 data was a little less, closer to +0.09 deg C/decade for the same time period.

        Both were less than the “smooth model projection” of +0.2 C/decade, but even at half that rate they are still well within model variance. The atmosphere, with all its ENSO and decadal noise, makes that smooth curve impossible, especially with the strongest El Nino/La Nina couplet in over 100 years. And with much of the past decade being in the cold phase of the Pacific Decadal Oscillation, there is certainly no surprise in a briefly slower upward climb.

        But a true pause??? C’mon…show me something statistically valid that proves a pause of over a decade?

        Basil, I don’t think anyone starts in 1999; Rose at the Mail used 1997.5 specifically for HADCRUT’s benefit. The Argo era or the 21st century is used quite a bit because it is “cleaner” – less volcanic and super El Nino influence – and it includes more of the state of the art satellite and ocean data. You can pick any spot you like, I guess, for a pub conversation, but the comparison with projections is the real tale of the tape.

        https://lh6.googleusercontent.com/-0Apkk3K8V6M/VJIY-EDzLAI/AAAAAAAAL7o/j9nGC6DMxfM/w775-h473-no/so%2Bfar%2Bthis%2Bcentury.png

        I just did that for eadler2, but you can pick any point you like. I used Climate Explorer for the data.

      • Is there a scientific reason that you and ( Dr Curry) chose to ignore this information and quote a much larger uncertainty?

        So here’s OHC from von Schuckmann.

        https://watertechbyrie.files.wordpress.com/2014/06/vonschuckmannampltroan2011-fig5pg_zpsee63b772.jpg

        Here’s CERES with trend lines. The axis is anomalies in W/m^2.

        https://watertechbyrie.files.wordpress.com/2014/06/vonschuckmannampltroan2011-fig5pg_zpsee63b772.jpg

        The reason the oceans were warming last decade was changes in cloud radiative forcing.

        https://watertechbyrie.files.wordpress.com/2014/06/ceres_modis-1.gif

        It is no more – here’s net CERES to date.

        https://watertechbyrie.files.wordpress.com/2014/06/ceres_modis-1.gif

        It hardly seems to matter to them what the data actually says so much as the narrative.

      • CERES with trend lines. I like Chrome for various reasons – cutting and pasting is not one.

        https://watertechbyrie.files.wordpress.com/2014/06/ceres-bams-2008-with-trend-lines1.gif

      • Basil Newmerzhycky

        Going from 2001 or 2002 cherry-picks a highpoint to highpoint trend.
        But per your request, taking out the ENSO cycle, lets start at 2000 (below).
        Still an undeniable upward trend the past 15 years.
        http://www.woodfortrees.org/plot/gistemp/from:1980/mean:12/plot/gistemp/from:2000/trend

        Only reason it is slower is because most of that period we’ve been in the Cold Phase of the PDO. (below)
        http://www.washingtonpost.com/blogs/capital-weather-gang/files/2013/11/pdo.jpg

        So despite the cool phase of the PDO, we continued to warm, albeit at a slower rate. Well that is about to change, because the PDO has begun to shift in the past few months, and in response 2014 has been warming rapidly. I won’t bore you with the “2014 will be warmest year of all” hype, but it is almost certain, and with a greater likelihood of an accelerated warming trend well into the next decade at twice the rate of the previous one, similar to the 0.2 deg C/decade projection.

        Now that would make for some good pub conversation.:)

      • Dear Basil
        Your PDO plot shows that warming dominated from 1976 to 1999 and then cooling, rather similar to the ENSO plot in my paper. If the temperature pause is explained by the cooling phase, does not the warming phase explain some of the earlier temperature rise? Since the CMIP5 models used by the IPCC do not include either the PDO or ENSO, and the tuning of parameters used data prior to 2000, it seems to me that the models will have a biased response to CO2 towards excessive temperatures because of the assumption that CO2 dominated.
        Don Morton

        Actually, if the influences of ENSO and volcanoes are removed from the global average temperature plot, a continuous increase in temperature appears. Here is an example:
        http://tamino.wordpress.com/2011/12/06/the-real-global-warming-signal/

        http://tamino.files.wordpress.com/2011/12/figure05.jpg?w=500&h=499
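        The adjustment behind that linked plot is, in essence, a multiple regression: fit the anomaly series to ENSO, volcanic and solar indices plus a linear term, then subtract the fitted natural contributions. A minimal sketch of the procedure on synthetic data; the indices, lags and coefficients below are placeholders, not the regressors used in the linked analysis:

        ```python
        # Regress a synthetic anomaly series on stand-in natural indices plus a linear
        # term, then remove the fitted natural contributions.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 420                                          # 35 years of monthly data
        t = np.arange(n)
        enso = rng.normal(0, 1, n)                       # stand-in ENSO index
        volc = np.zeros(n)
        volc[100:130] = 1.0                              # stand-in eruption pulse
        tsi = 0.5 * np.sin(2 * np.pi * t / 132.0)        # stand-in 11-yr solar cycle
        temp = 0.0015 * t + 0.08 * enso - 0.2 * volc + 0.05 * tsi + rng.normal(0, 0.1, n)

        X = np.column_stack([np.ones(n), t, enso, volc, tsi])
        coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
        adjusted = temp - X[:, 2:] @ coef[2:]            # subtract fitted natural terms

        print(f"fitted trend: {coef[1] * 120:+.3f} C/decade")
        raw_scatter = np.std(temp - coef[0] - coef[1] * t)
        adj_scatter = np.std(adjusted - coef[0] - coef[1] * t)
        print(f"scatter about the trend before/after adjustment: {raw_scatter:.3f} / {adj_scatter:.3f}")
        ```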

      • Basil Newmerzhycky

        Hello Don,
        Even considering those oscillations, there continues to be an uptrend, slower during teleconnections of the cold phases of the PDO and Atlantic Multi-Decadal Oscillation…

        http://www.woodfortrees.org/plot/gistemp/from:1970/mean:12/plot/gistemp/from:2000/trend

        and accelerated during teleconnected warm phases.

        http://www.woodfortrees.org/plot/gistemp/from:1970/mean:12/plot/gistemp/from:1976/to:1999/trend

        I might agree that the model currently has a slight warm bias, but the trend is true, and given 400 ppm CO2 and growing, the 0.2 deg/decade rate could very well be reached with this next upcoming warm cycle, as the PDO has just turned positive in the past year. Personally I hope not.

      • Matthew R Marler

        Basil Newmerzhycky: I won’t bore you with the “2014 will be warmest year of all” hype, but it is almost certain, and with a greater likelihood of an accelerated warming trend well into the next decade at twice the rate of the previous one, similar to the 0.2 deg C.

        Unambiguous predictions always clarify the discussion. We’ll be alert to see how that turns out.

      • About as “statistically valid” as this one:

        http://www.woodfortrees.org/plot/hadcrut4gl/from:1980/mean:12/plot/hadcrut4gl/from:2001/trend

        When are you guys going to stop applying OLS linear trends to anomaly time-series and then pretending that it means something?
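
        For what it’s worth, here is a minimal sketch of the fit under discussion, assuming the reader supplies a monthly anomaly series (e.g. downloaded from woodfortrees.org); the lag-1 autocorrelation of the residuals is what makes the nominal OLS uncertainty too optimistic:

        import numpy as np

        def ols_trend(anomaly):
            """Return the OLS trend (per decade) and the lag-1 autocorrelation of the residuals."""
            y = np.asarray(anomaly, dtype=float)
            t = np.arange(len(y)) / 120.0              # time in decades for monthly data
            slope, intercept = np.polyfit(t, y, 1)     # ordinary least squares straight line
            resid = y - (slope * t + intercept)
            r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
            return slope, r1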

      • sorry, that should be “linear fits”

      • About as “statistically valid” as this one:

        http://www.woodfortrees.org/plot/hadcrut4gl/from:1980/mean:12/plot/hadcrut4gl/from:2001/trend

        When are you guys going to stop applying OLS linear fits to time-series anomaly data and then pretending that it means something?

        (posted earlier but ended up in the wrong place)

      • …as did this one

      • Capt. Dallas,

        My post was in reply to PA’s claim.
        http://judithcurry.com/2014/12/15/will-a-return-of-rising-temperatures-validate-the-climate-models/#comment-656047
        “To people with a control system background – the oceans high heat capacity acts like integral feedback.

        The 20th century insolation looks like an impulse (step) change in forcing.

        The impulse response of a system with a lot of integral feedback is a ramp.

        The high thermal inertia of the oceans means there is a slow climb in temperature to the new equilibrium point.

        That explains most of the 20th century climate.”

        On top of the immediate reaction to solar irradiation, there may be an 8.5 year response as ocean currents redistribute energy.
        However, that is not the same thing as claiming that the increase in temperature due to an increase in solar energy between 1912 and 1940 was mainly responsible for the increase in temperature between 1970 and the present. If there is a component of the response to a change in solar energy that lasts 8.5 years, it doesn’t support a ramp of over 30 years.
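
        For readers who want to see the shape of the claim being disputed, here is a minimal numerical sketch of a step forcing applied to a single well-mixed ocean layer; the depth, feedback parameter and forcing value are illustrative assumptions, not fitted quantities:

        import numpy as np

        lam = 1.2          # climate feedback parameter, W/m^2 per K (assumed)
        depth = 500.0      # effective ocean depth in metres (assumed)
        C = 4.2e6 * depth  # heat capacity per unit area, J/(m^2 K)
        F = 1.0            # step forcing in W/m^2, switched on at t = 0 (assumed)

        dt = 3.156e7       # one year in seconds
        years = np.arange(0, 101)
        T = np.zeros(len(years))
        for i in range(1, len(years)):
            # dT/dt = (F - lam*T)/C: an integrator-like response with time constant C/lam
            T[i] = T[i - 1] + dt * (F - lam * T[i - 1]) / C

        print("equilibrium dT ~", round(F / lam, 2), "K; dT after 30 years ~", round(T[30], 2), "K")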

      • eadler2, “If there is a component of the response to a change in solar energy that lasts 8.5 years, it doesn’t support a ramp of over 30 years.”

        The ramp over 30 years or more is a bit complex.

        http://www.clim-past.net/10/921/2014/cp-10-921-2014.html

        The 8.5- to 10-year response has a fair amount of peer-reviewed backing. The longer-term 30–60 year response more likely involves some combination of volcanic, solar and inconsistent feedback from sea and fast ice, which is not very easy to model. Wyatt and Curry’s Stadium Wave paper makes a little headway in the NH, but when there is the possibility of events a hundred years or more in the past being involved, the data available just ain’t up to the task. Solar data especially has plenty of issues, and volcanic data by hemisphere is still coming of age. As far as there being an integral or cumulative response, that is very likely, but you need some concept of “normal” to start working that out. As it is now, temperature response to volcanic forcing leads events in some cases, which is obviously wrong, and the direct and indirect effects of volcanic (aerosol) forcing are the second largest source of model uncertainty.

        This, “pause”, hiatus, slow down or whatever has lots of folks reviewing “known” physics that lead to a lot of model assumptions.

      • Capt. Dallas,
        I don’t see anything in the link you provided that claims that global temperatures react with a long delay of 30 to 60 years. I am going with Judith Lean’s analysis which looks at recent data and constructs the impact of AGW, Solar, Volcanic and ENSO based on regression, including delays.
        http://www.agci.org/docs/lean.pdf
        “Global (and regional) surface temperature fluctuations in the past 120 years reflect, as in the space era, a combination of solar, volcanic, ENSO, and anthropogenic influences, with relative contributions shown in Figure 6. The adopted solar brightness changes in this scenario are based on a solar surface flux transport model; although long-term changes are ~50% larger than the 11-year irradiance cycle, they are significantly smaller than the original estimates based on variations in Sun-like stars and geomagnetic activity. Cycles and trends in the natural and anthropogenic influences, also shown in Figure 6, collectively impose equivalent (and at times competing) cycles and trends on climate. A 2–3 year quasi-cycle is attributed primarily to ENSO fluctuations. Decadal variability is present in volcanic aerosols (whose peak influences of El Chichon and Pinatubo are separated by 11 years) and solar variability, and possibly modulates ENSO. Only in the anthropogenic influence is there a sufficiently large upward trend over the past 120 years to explain global surface temperature variations self-consistently with the space-era components. Accordingly, trends in solar irradiance in the past century contribute global warming of 10% or less.”

        Take a look at figure 6. It is very convincing.

      • eadler2, “I don’t see anything in the link you provided that claims that global temperatures react with a long delay of 30 to 60 years. I am going with Judith Lean’s analysis which looks at recent data and constructs the impact of AGW, Solar, Volcanic and ENSO based on regression, including delays.”

        The problem with all the curve fits is that they have to start somewhere. From their start, they can all look impressive. The Lean fit starts in 1900, and I showed before that around 1885 there was a modeled volcanic impact that lagged the event by ~25 years. That delayed impact would be assumed to be something else.

        The Anet et al. papers indicate “half-lives” of ~15 years, meaning there would be a lingering effect. They explain a great deal of the difficulties, including “super-compensation”, which would be ocean inertia offsetting impacts more rapidly. When the inertia is in phase with the forcing there would be a corresponding super-amplification.

        https://lh5.googleusercontent.com/-6Tf2glKTcu0/VG330klOFZI/AAAAAAAALxM/M60OPJJ6ITs/w858-h528-no/curry%2Btalk.png

        This plot uses the Oppo et al. Indo-Pacific Warm Pool and what limited ocean heat content data (vertical temperature anomaly) we have to compare the rate of warming required for full recovery from the LIA. The IPWP is a good point for extrapolating global ocean heat content. That would provide a reasonable estimate for when to start a curve fit.

        Lean et al. has an impressive curve fit “explaining” things from their starting point, “assuming” no lingering effects. Anet et al. provide another model curve fit from their starting point, “assuming” no lingering effects. And I provide another curve fit with yet another start date. You can judge how impressed you are; I can’t – I am just providing information.
        https://lh4.googleusercontent.com/-QixN6RvJjP4/VGio57R499I/AAAAAAAALvI/SFCi1UgvHWc/w669-h459-no/oppo%2Bnormal.png

        Here is the rest of the Oppo et al. reconstruction with a number of reference trend lines to help you figure out what you think “normal” should be. The answer depends on what you assume should be “normal”. Lean’s “normal” is 1900; my “normal” is today.

      • I find your syntax so garbled, and your terms so opaque that I can’t figure out what your point is.

        One thing is clear to me. It seems stupid to attempt to fit a straight line to sea surface temperature data that spans a 700-year or a 2000-year period.
        No one with any sense would consider such a thing.

      • Matthew R Marler

        eadler2: One thing is clear to me. It seems stupid to attempt to fit a straight line to sea surface temperature data that spans a 700-year or a 2000-year period.

        There is no duration of records for which you can show that a linear, polynomial, or harmonic regression can produce an unbiased estimate of anything. The best test of all this modeling will come from comparisons of the next 2 decades of measurements with the diverse predictions.

        eadler2, “One thing is clear to me. It seems stupid to attempt to fit a straight line to sea surface temperature data that spans a 700-year or a 2000-year period.
        No one with any sense would consider such a thing.”

        I agree. But if you assume that global surface temperature was “normal” in 1900, that is what you have done. Tamino “assumes” he is removing ENSO, solar and volcanic effects so that his trend is “normal”.

        There is no “normal”, just a range of temperatures that are more likely. Pre-industrial, the tail end of the Little Ice Age, was picked as “normal”.

        https://lh5.googleusercontent.com/-6Tf2glKTcu0/VG330klOFZI/AAAAAAAALxM/M60OPJJ6ITs/w858-h528-no/curry%2Btalk.png

        That chart simply compares the current rate of deeper ocean warming with a paleo reconstruction that “could” indicate how long the oceans have been warming.

        https://lh4.googleusercontent.com/-QixN6RvJjP4/VGio57R499I/AAAAAAAALvI/SFCi1UgvHWc/w669-h459-no/oppo%2Bnormal.png

        That is the whole reconstruction. It has an average, a long term trend and a dip where the LIA belongs. If you assume the LIA was abnormal, the red trend line just gives you a ballpark estimate of where you should be after removing ENSO, Solar and Volcanic.

        You think that is stupid but you suck up to Tamino’s removal?

        Fine with me. Now tell me, how many years do you think it would take the oceans to regain ~1 °C worth of heat loss? 10 years?

      • http://stateoftheocean.osmc.noaa.gov/atm/images/pdo_long.gif

        I think it’s pretty clear that the PDO is not exclusively red or blue in warm or cool regimes – but that regimes last 20 to 40 years. It is clutching at straws to the point of insanity to decide that the PDO has switched mode just yet.

        https://watertechbyrie.files.wordpress.com/2014/06/blue-rose3-e1418789192370.jpg

      • This study uses proxy climate records derived from paleoclimate data to investigate the long-term behaviour of the Pacific Decadal Oscillation (PDO) and the El Niño Southern Oscillation (ENSO). During the past 400 years, climate shifts associated with changes in the PDO are shown to have occurred with a similar frequency to those documented in the 20th Century. Importantly, phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.

        http://onlinelibrary.wiley.com/store/10.1029/2005GL025052/asset/image_n/grl20961-fig-0003.png?v=1&t=i3v1i79e&s=0a20e523f16b7a8a98d4479d7d813b8ed110cc87

        http://onlinelibrary.wiley.com/doi/10.1029/2005GL025052/abstract

        Sorry – I forgot that eyeballing it in and making cr@p up is considered sane.

      • Delta Dawn (Great Name)

        Are you familiar with the stadium wave theory? Do these proxy climate records support the theory?

        Richard

      • I never used the term “normal”. You have used this without defining what this means. I have no clue what you mean by this.
        The Oppo graph that you reproduced is for a region in the Pacific Ocean, which is different from global average temperatures in the 20th century.

        Other long-term proxy reconstructions which include more of the globe have shown that the MWP was not as warm as today. There are about a dozen different ones, using different proxies and different statistical methods. These are all very noisy and none of them are definitive. Despite the noise, most of them show that recent global average temperatures are warming at a rate not seen in the last 1000 years, and it is probably warmer today than it was during the MWP.

        What Tamino did was multi variable regression to account for natural effects on the global average temperature to find what he called the real global warming signal. He said he put in parameters to account for delayed effects. It isn’t clear whether the results are going to be sensitive to the point in time where he starts the analysis. He certainly reduced the noise in the trend by a lot, and gets a pretty good steady trend, which is what would be expected from a steady warming effect due to increasing GHG’s in the atmosphere.
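
        As a rough illustration only (not Tamino’s actual code), the sketch below does the kind of lagged multiple regression being described: the arrays temp, mei, aod and tsi are hypothetical monthly series the reader would supply, and the lags in months are illustrative assumptions. Whether the residual deserves to be called the “real” signal is exactly what is argued about below.

        import numpy as np

        def lagged(x, lag):
            """Shift a series by `lag` months so earlier index values line up with later temperatures."""
            out = np.full(len(x), np.mean(x), dtype=float)
            if lag == 0:
                return np.asarray(x, dtype=float)
            out[lag:] = np.asarray(x, dtype=float)[:-lag]
            return out

        def remove_natural(temp, mei, aod, tsi, lags=(3, 6, 1)):
            """Regress temperature on lagged ENSO, volcanic and solar indices plus a linear
            trend, and return the series with the fitted 'natural' part subtracted."""
            n = len(temp)
            t = np.arange(n)
            X = np.column_stack([np.ones(n), t,
                                 lagged(mei, lags[0]), lagged(aod, lags[1]), lagged(tsi, lags[2])])
            beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
            natural = X[:, 2:] @ beta[2:]      # fitted ENSO + volcanic + solar contribution
            return np.asarray(temp, dtype=float) - natural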

      • eadler2, “What Tamino did was multi variable regression to account for natural effects on the global average temperature to find what he called the real global warming signal.”

        No, what he did was attempt to account for natural effects. In order to do that he assumed that what he was removing would leave behind a “normal”, as in a condition unperturbed by ENSO, volcanic and solar influences. What he did was fine, but not definitive. In fact there was quite a discussion on it at the Blackboard. Normal is what should be a normal climate. If you assume normal is the climate of 1900, then you would get a different answer than if you assume the climate of the satellite era is “normal”. It isn’t that hard, eadler2; try to keep up.

        All ocean warming prior to your “normal” would be “natural”. That warming would produce a “natural” water vapor feedback, land-based ice reduction, albedo changes, etc. So when Tamino removes ENSO, it is normalized to a period assumed to be “normal”. So whatever natural portion there is would be considered anthropogenic because of his choice of “normal”. Every analysis requires assumptions; there is no getting around that.

        Now if you aren’t happy with the use of Oppo, consider that ENSO is a very small area of the equatorial Pacific known to have a high correlation with “global” weather. It isn’t that hard to do a correlation of the IPWP region and find that it is as good as or better than ENSO.

        Here is a link to Oppo et al. http://www.ncdc.noaa.gov/paleo/pubs/oppo2009/oppo2009.html

        You can get an Indian Ocean mask at Climate Explorer, and openoffice dot org has free spreadsheet software. Pretty simple stuff. If you don’t trust my work, you can do it yourself. I encourage it. In fact I encourage reviewing everyone’s work, since there are lots of assumptions made by lots of folks. You know what happens when you ASS U ME, right?
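
        A minimal sketch of that correlation exercise, assuming the reader has already extracted an IPWP-region SST anomaly series and a global anomaly series on the same monthly time base (a spreadsheet would do the same job):

        import numpy as np

        def correlation(ipwp_sst, global_temp):
            """Pearson correlation between two anomaly series of equal length."""
            x = np.array(ipwp_sst, dtype=float) - np.mean(ipwp_sst)
            y = np.array(global_temp, dtype=float) - np.mean(global_temp)
            return float(np.sum(x * y) / np.sqrt(np.sum(x * x) * np.sum(y * y)))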

      • “It isn’t that hard to do a correlation of the IPWP region and find that it is as good as or better than ENSO.”
        _____
        Indeed, heat content of the IPWP is one of the best long-term indicators of general climate trends and the general direction of energy accumulation/dissipation from the system. In short, it is the single best and most thermodynamically stable climate proxy we have.

      • Capt’nDallas

        I read the Oppo abstract and the “remote control” statement at its end puzzled me.

        Do you have the paper available, by hook or by crook?

        Thank you

      • I’m in the cozy company residing in moderation. I would like to get out though.

      • Cozy, but nowhere near the cachet of the borehole.
        ===========

      • capt Dallas wrote,
        http://judithcurry.com/2014/12/15/will-a-return-of-rising-temperatures-validate-the-climate-models/#comment-656658
        “What he did was fine, but not definitive. In fact there was quite a discussion on it at the Blackboard. Normal is what should be a normal climate. If you assume normal is the climate of 1900, then you would get a different answer than if you assume the climate of the satellite era is “normal”. It isn’t that hard, eadler2; try to keep up.”

        There was no assumption about what was normal. Tamino’s analysis is a multi-variable regression to find the best fit for coefficients and delays associated with the TSI, ENSO and volcanic activity during the period of his analysis. He actually varies the starting year of the analysis to examine the effect it has on the residual rate of temperature increase that results from his analysis. He finds no statistically significant change in the extracted residual rate of temperature increase. He finds an almost identical rate of change for 5 different measures of temperature change. No assumption seems to have been made about what constitutes a normal condition of the climate.

        I am not impressed by your reference to some discussions on the Blackboard about something called normal climate. The Oppo paper you are fond of quoting is irrelevant to Tamino’s analysis.

      • eadler2, “There was no assumption about what was normal.”

        There was most definitely an assumption. By not considering prior conditions, he assumed his analysis period was “normal”. Crowley and Unterman recently published the most comprehensive volcanic forcing reconstruction by hemisphere. During the past 1200 years there have been quite a few volcanoes. That would provide an estimate of “average” volcanic aerosol forcing. The satellite era may be close to average or not. If the satellite era forcing is less than “normal” then Tamino got his sign wrong. Anytime anyone attempts to remove part of the climate signals they are making assumptions. If you can’t understand that you have a lot of reading ahead of you.

        As it is, the data Tamino used are outdated, he picked lag periods out of thin air, and he ignored initial conditions that would impact his trend. Other than that he did a jam-up job, typical of the average run-of-the-mill “Climate Scientist”.

      • eadler2, you have links to 2 Tamino posts.

        Post 1, the “real” global warming signal. Listen closely: you cannot remove parts of climate signals to determine a “real” climate signal. All the signals are interrelated and there are various internal lags, meaning there are initial conditions that cannot be ignored. You will get “a” signal that isn’t the “real” signal, and that doesn’t “prove” anything.

        Post 2, “Is Earth’s Temperature About to Soar?”, attempts to blame the hiatus on Chinese aerosols. Tamino used limited, outdated data and assumed his butt off.

        If you look around you should find 50-plus reasons that the “hiatus” either doesn’t exist or is caused by something the models missed. They all pretty much acknowledge there is a hiatus :) The simplest reason for the “hiatus” is the models underestimated natural variability and/or overestimated anthropogenic forcing. To that end I provided the paleo data that is required for climate-scale analysis. Since you don’t like Oppo but like ENSO, try this.

        https://lh6.googleusercontent.com/-kqR-D0TcojI/VJDKGQ0qaCI/AAAAAAAAL7A/KzRDNGz97t4/w637-h441-no/ipwp%2Bnino.png

        There is a long-term trend in Nino – imagine that! Now if Tamino “removes” the NINO “oscillation”, meaning it is detrended and normalized, he didn’t remove all of the ENSO signal, did he? He removed “something”, got the answer he was looking for and stopped. You, eadler2, lapped it up like a good follower. The heat capacity of the oceans is ~1000 times that of the atmosphere, and the oceans are stratified due to temperature/density physics. You have a PhD; try some original work if you cannot at least review blog “science” without bias, and give us an estimate of how long it would take the oceans to recover 1 °C worth of energy loss.

      • The simplest reason for the “hiatus” is the models underestimated natural variability and/or overestimated anthropogenic forcing.

        That would only explain the hiatus if it came with a convincing argument for the amplitude and phase of those variations and overestimates.

        Figure 5 of Loehle and Scafetta 2011 gives a very simple reason for the hiatus: the 20-year cycle in their model peaks in 2000 and troughs in 2010, see the thicker dashed line.

        To convince yourself that their 20-year cycle is real, simply subtract 20-year climate (meaning 20-year moving average) from 10-year climate. (WoodForTrees can produce them separately but can’t subtract them so you’ll have to download the resulting raw data and subtract them yourself.) The 20-year cycle will leap out at you, with correct phase though somewhat attenuated amplitude (since a 10-year moving average attenuates a 20-year cycle to 0.637 of its amplitude while the 20-year one kills it altogether).
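
        A minimal sketch of that subtraction, assuming `monthly` is the downloaded anomaly series; the 0.637 factor quoted above is just 2/pi, the attenuation of a 20-year sinusoid by a 10-year boxcar average:

        import numpy as np

        def centred_mean(x, years):
            """Centred moving average of monthly data over `years` (window of 12*years points)."""
            w = 12 * years
            return np.convolve(x, np.ones(w) / w, mode="same")

        def quasi_20yr_cycle(monthly):
            """10-year climate minus 20-year climate: a ~20-year cycle survives at about
            0.637 (= 2/pi) of its amplitude, while the 20-year average removes it entirely."""
            x = np.asarray(monthly, dtype=float)
            return centred_mean(x, 10) - centred_mean(x, 20)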

        The other three components of their model are harder to justify, particularly the piecewise-linear part.

      • Vaughan Pratt, “That would only explain the hiatus if it came with a convincing argument for the amplitude and phase of those variations and overestimates.”

        That isn’t hard. In the NH the ratio of land to ocean amplifies variability more than the models predicted.

        http://static.berkeleyearth.org/graphics/warming-in-the-mid-northern-hemisphere-since-1950.png

        Volcanic forcing is more likely the driver of the pseudo-oscillations than solar alone, and since the two have similar frequencies they may be connected, so you can argue that their longer-term impact is underestimated and is not really natural variability, but there is an “oscillatory” settling pattern.

        More simple models are including that “oscillation”; there just isn’t agreement on the real cause of the oscillation. That doesn’t mean it doesn’t exist.

      • captdallas2 0.8 +/- 0.2 | December 21, 2014 at 7:53 am |

        That graph of yours is atrocious. It offends me. Also, there is no such thing as volcanic forcing. Read “What Warming?” and educate yourself.

      • cd, I can see what the red asterisk denotes: the line through it and the origin is the line of equal amplification ratios for the Arctic and mid-Northern latitudes. But what do the 40 or so blue dots signify? Are they recent volcanoes, each of whose x and y coordinates are determined by notches in global climate in those respective latitudes, or some such thing?

      • Pinky and the Brain

        You have stirred up the local nut cases – and it starts to become clear that this is like attracting like.

        The graph I showed was from a peer reviewed study removing – ostensibly – decadal variation.

        There are a number of ways of doing it. Here’s one from realclimate that excludes climate shifts at 1976/1977 and 1998/2001 – to come up with what they presume is the background rate between 1979 and 1997.

        https://watertechbyrie.files.wordpress.com/2014/06/swanson-realclimate1.png

        But the simplest way is to take a period that includes both a complete cool and a complete warm multi-decadal regime – 1944 to 1998 – and divide the temperature rise by it. And we still get the prospect of no warming for decades yet.

        Your numbers are all over the place and wrong, and your narrative is utter nonsense – but really it is rather pointless to discuss such simple misconceptions with space cadets. Do yourself a favour and go back to the basics, but I can’t be bothered imagining I can educate you.

      • Pinky and the Brain

        That was a reply to this nonsense – http://judithcurry.com/2014/12/15/will-a-return-of-rising-temperatures-validate-the-climate-models/#comment-656040

        Time to give up on this again obviously.

      • Generalissimo Skippy

        Total Solar Irradiance – btw – explains almost nothing. The change in average radiant flux at the surface is too little to be more than a small part of the puzzle.

        Although it is correct that it peaked in the middle of the 20th century and stayed high for much of the remainder.

      • Noblesse Oblige

        The radiation in the absorption spectrum of GHG’s finally escapes into outer space directly from higher altitudes where the gases are cooler and the rate of radiation is reduced. If the radiation into space is at a lower rate than the arrival of radiant energy from the sun, the planet heats up.

        See, this is nuts. More CO2 and the planet will warm until you get a conditional equilibrium. That’s just how it works. Forget all the nonsense about levels. Radiation is emitted from all levels in the atmosphere, including the surface.

        What really happens is that you get huge changes in TOA radiant flux all the time.

        https://watertechbyrie.files.wordpress.com/2014/06/loeb2012-fig1.png

        https://watertechbyrie.files.wordpress.com/2014/06/cloud_palleandlaken2013_zps3c92a9fc.png

      • Noblesse:
        “See this is nuts. More CO2 and the planet will warm until you get a conditional equilibrium. That’s just how it works. ”

        I don’t see why you claim what I wrote is nuts. My description of what happens is correct and agrees with what you say. I was focussing on how increases in GHG’s heat up the planet. I didn’t mention the obvious fact that you stated, that the heating will cease when the upper atmosphere warms enough to restore the equilibrium between radiation leaving the earth and arriving from the sun.

        Your graphs seem to be an attempt to confuse the issue. It is easy to see that the spectrum of outgoing IR at CO2 and water vapor absorption wavelengths is lower than what would be expected from the surface temperature of the earth.

        http://www.giss.nasa.gov/research/briefs/schmidt_05/

      • eadler2 commented

        It is easy to see that the spectrum of outgoing IR at CO2 and water vapor absorption wavelengths is lower than what would be expected from the surface temperature of the earth.

        But it’s not easy to see a change in the CO2 and water absorption lines.

      • Michael Crichton nailed the logic being used by Trenberth (and, eadler2’s reference to the 97% consensus) in the lecture about aliens causing global warming –i.e., the burden of proof is on the deniers, and they have not been able to meet it. The Taliban burned teachers alive in front of their students yesterday because the terrorists believe little girls shouldn’t go to school. It’s tough being a denier in Afghanistan.

      • but a change has been detected

        A couple degrees at TOA I presume?
        Even assuming it’s an accurate measurement, you realize how big the forcing is at the surface? Plus that doesn’t include the fact that surface forcing from clouds is 10 or 20 times CO2 (remember they block the path to space more than half the time, so during the blocked period CO2 forcing is irrelevant)?

      • I was a lot more impressed before I saw it’s all from the central Pacific; since we don’t know what the surface temperature of the same area was at both times, we don’t know if that is why they are different.

        My other reply ended up at the bottom, in the event this ends up under your post here

        eadler2 | December 17, 2014 at 2:53 pm | Reply

        It may not be easy to see, but a change has been detected.

      • SkepticGoneWild

        Eadler2,

        The IPCC even indicates that net anthropogenic forcings for the period from 1940 to the 1970’s were positive. Global temperatures should have risen, but they fell. Callendar was conservative. If the current hypothesis of AGW had been presented in 1937, the projections would have been for even more warming than Callendar proposed.

        The Callendar “effect” was falsified, as the current AGW hypothesis would have been. Over 40 years of cooling. Bummer, man.

      • Maybe you used too exaggerated a vertical scale. Visually it is very clear, but it only represents 0.6 °C for the whole century. Knowing there have been centennial oscillations, how can we infer that most of those 0.6 °C are not a natural trend, the recovery from the cold past centuries when glaciers advanced in the Alps?

      •  
        Definition: talking about the stochastic nature of climate is another way of saying that what will happen probably won’t happen in the way we think we want.

      • The conclusion of the Swanson and Tsonis paper is:

        “Finally, it is vital to note that there is no comfort to be gained by having a climate with a significant degree of internal variability, even if it results in a near-term cessation of global warming. It is straightforward to argue that a climate with significant internal variability is a climate that is very sensitive to applied anthropogenic radiative anomalies [cf. Roe, 2009]. If the role of internal variability in the climate system is as large as this analysis would seem to suggest, warming over the 21st century may well be larger than that predicted by the current generation of models, given the propensity of those models to underestimate climate internal variability [Kravtsov and Spannagle, 2008].”

      • Pinky and the Brain

        ‘If as suggested here, a dynamically driven climate shift has occurred, the duration of similar shifts during the 20th century suggests the new global mean temperature trend may persist for several decades. Of course, it is purely speculative to presume that the global mean temperature will remain near current levels for such an extended period of time. Moreover, we caution that the shifts described here are presumably superimposed upon a long term warming trend due to anthropogenic forcing. However, the nature of these past shifts in climate state suggests the possibility of near constant temperature lasting a decade or more into the future must at least be entertained. The apparent lack of a proximate cause behind the halt in warming post 2001/02 challenges our understanding of the climate system, specifically the physical reasoning and causal links between longer time-scale modes of internal climate variability and the impact of such modes upon global temperature. Fortunately, climate science is rapidly developing the tools to meet this challenge, as in the near future it will be possible to attribute cause and effect in decadal-scale climate variability within the context of a seamless climate forecast system [Palmer et al., 2008]. Doing so is vital, as the future evolution of the global mean temperature may hold surprises on both the warm and cold ends of the spectrum due entirely to internal variability that lie well outside the envelope of a steadily increasing global mean temperature.’

        Although the penultimate paragraph suggests that climate may yet surprise us.

      • Pinky and the Brain

        Damn the threading – this was a response to eardler’s selective quoting of a paper he obviously fails to understand.

      • Yes, indeed!

      • Not sure where this comment will fall, but it is a response to eadler2 – when all your arguments are shown to be absurd, appeal to authority… but, but, 97%. Comedy at its finest.

      • None of my arguments have been shown to be absurd. You haven’t made a substantive reply to a single one of them.
        What is comical are the illogical and incorrect arguments made by the AGW deniers on this thread. Pathetic.

      • eadler2, which particular group of deniers are you referring to? The “sun done it” bunch or the crew that points out how out of date your choice of references is?

      • Matthew R Marler

        Will a return of rising temperatures validate the climate models?

        alarmists have repeatedly cited short-term “trends” and isolated extreme events as evidence of catastrophic warming. A few years ago the Arctic summer ice was in a “death spiral” before returning to near average. Surely a few years of warmer mean temperature will be cited as evidence that we face catastrophes unless we reduce fossil fuel consumption immediately. The last “deadline” was 2012. What will the next be — 2016?

      • Matthew R Marler

        eadler: What is comical are the illogical and incorrect arguments made by the AGW deniers on this thread. Pathetic.

        It is best to quote specific propositions exactly, along with source, so we can tell what you are writing about.

      • Matthew R Marler

        eadler2: eadler: What is comical are the illogical and incorrect arguments made by the AGW deniers on this thread. Pathetic.

        Would you like to answer the question that I posed above?

      • eadler2, The second link you provided is to a Sky Dragon. I think he has a 10 degree black belt in pseudo-science.

        The first, though, is perfectly logical. Because of the massive inertia of the oceans you would have ramps, both up and down, in response to larger perturbations.

        https://lh5.googleusercontent.com/-d48q2Rqc3Hg/VG_F-lCAhHI/AAAAAAAALxg/u113_LIqXiI/w729-h431-no/giss%2Bv%2Bmodel.png

        There is a good example in this model-to-observation comparison from ~1885 to 1910. Pretty basic stuff really.

      • Your graphs seem to be an attempt to confuse the issue. It is easy to see that the spectrum of outgoing IR at CO2 and water vapor absorption wavelengths is lower than what would be expected from the surface temperature of the earth.

        ‘Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.’ http://meteora.ucsd.edu/~jnorris/reprints/Loeb_et_al_ISSI_Surv_Geophys_2012.pdf

        *My* graphs complicate rather than confuse.

        Fairly precisely – in accordance with the 1st law of thermodynamics –

        d(W&H)/dt = energy in (J/s) – energy out (J/s)

        W&H is work and heat – and is mostly heat in the oceans. The oceans obviously warm and cool alternately.

        https://watertechbyrie.files.wordpress.com/2014/12/argo-700m.png

        This shows that energy in – at the emitting frequencies of the Sun – is alternately higher and lower than energy out – in the IR frequencies.
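
        To put a number on the terms, a minimal worked example, assuming a purely illustrative global-mean imbalance of 0.6 W/m^2 rather than any particular measured value:

        SECONDS_PER_YEAR = 3.156e7
        EARTH_AREA_M2 = 5.1e14

        imbalance_w_m2 = 0.6   # energy in minus energy out, illustrative assumption
        heat_gain_per_year = imbalance_w_m2 * EARTH_AREA_M2 * SECONDS_PER_YEAR
        print(f"{heat_gain_per_year:.1e} J per year")   # roughly 1e22 J/yr, most of it ending up in the oceans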

        Gavin’s explanation you link to is both simplistic and misleading. Increased scattering of IR photons in the atmosphere can be measured through an aperture.

        e.g. http://www.atmos.washington.edu/~dennis/321/Harries_Spectrum_2001.pdf

        Reduced IR emissions are not discernible against large background variability in radiant flux and don’t exist when d(W&H)/dt = 0. That is when the planet equilibrates – temperatures increase – to higher CO2. Theoretically less than a decade in some views. Thus there may be a small ongoing reduction in IR out – but it is complicated by large changes in the background state.

      • Noblesse

        Your mention of “aperture” gives me the opportunity to ask a question. I have some knowledge of electromagnetic energy (EME) but none of molecular absorption. EME has polarity and is affected by the dimensions of an aperture. Do these characteristics impact the climate?

        Thank you,

        Richard

      • ‘IRIS, flown on Nimbus 4, is a Fourier transform spectrometer (FTS) with apodized spectral resolution of 2.8 cm-1 and a nadir field of view that corresponds to a ground footprint of 95 km in diameter. The instrument was launched in April 1970 and recorded data until January 1971 [Hanel and Conrath, 1970; Hanel, et al., 1972]. IRIS recorded spectra between 400 and 1600 cm-1 but wavenumbers above about 1400 cm-1 suffer from high noise. TES is an FTS instrument on the AURA satellite launched in 2004 [Beer, et al., 2001]. Global surveys are collected every 2 days, and take roughly 24 hours to collect. These global surveys collect four bands of spectra discontinuously over the wavenumber range of 650 – 2260 cm-1; we use three bands over 650 – 1350 cm-1 for this analysis. The data over the 16 pixel detector is averaged, corresponding to a footprint of 5.3×8.5 km.’ https://www.eumetsat.int/cs/idcplg?IdcService=GET_FILE&dDocName=pdf_conf_p50_s9_01_harries_v&allowInterrupt=1&noSaveAs=1&RevisionSelectionMethod=LatestReleased

        The effect – while real and observational proof of changing activity in IR bands in the atmosphere – is seen in very narrow footprints at specific times. Less energy emitted directly into space from the surface and more emission in the atmosphere in all directions as seen by specialist instrumentation.

        Total IR emissions increased in the 1980’s and 1990’s – although this graph seems to confuse eardley.

        https://watertechbyrie.files.wordpress.com/2014/06/loeb2011-fig1.png

      • … it is generally agreed by climate scientists, and shown in the latest IPCC AR5 report, that GHG’s are responsible for 100% of global warming since 1970. eardley

        It is generally considered that stuff like this from eardley is utter nonsense. Yet they keep repeating it. Madness.

        Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

        Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” http://earthobservatory.nasa.gov/IOTD/view.php?id=8703

        Guess in which periods warming was accentuated and when it was hidden.

      • The reference you linked to shows that what I said in my previous post is correct. The GHG absorption lines showed a dip in outgoing radiation from the earth’s atmosphere between 1990 and 1997.

        That might be 1970. Outgoing IR increased between the 1980’s and 1990’s. eardley’s tendentious interpretation is not correct.

        See above.

      • Noblesse,

        The reference you linked to shows that what I said in my previous post is correct. The GHG absorption lines showed a dip in outgoing radiation from the earth’s atmosphere between 1990 and 1997.
        You argue that reduced emissions would not be discernible and don’t exist if the earth’s energy is not changing. Since the radiation is reduced, and the heat in the oceans was increasing at the time, we know that the earth was retaining heat. The fact that the reduction occurs in the absorption lines of CO2 and other GHG’s indicates that they are absorbing heat that would otherwise leave the planet. In addition their concentration increased in the atmosphere during that period. All the evidence points to the validity of the GHG theory of warming.
        Where is the evidence to the contrary, and who says that the planet will equilibrate in less than a decade? Ocean heat is increasing and shows no sign of leveling off.

      • “It is best to quote specific propositions exactly, along with source, so we can tell what you are writing about.”

        he’s writing about the standard cranks.. dragon types.

      • Mosher

        Counter-arguments would be more useful than ad hominem attacks. Eadler2 is sounding hysterical. And frankly it appears he is confused about the physics of radiation and heat. SW-LW radiation is electromagnetic energy that can become heat energy only when converted to kinetic energy under conditions of resistance: mass colliding with mass. When this occurs, the result is warmer, evenly distributed molecules in the atmosphere, only a few of which can result in increased T. And those few molecules tend to rise because of their warmth.

        Greenhouse gases do not “block” radiation.

        You can choose to ridicule me or counter the arguments.

        Thank you,

        Richard

      • Matthew R Marler

        Steven Mosher: he’s writing about the standard cranks.. dragon types.

        That’s your guess. My guess was that he was writing about everyone who disagreed and does disagree with him.

      • Pinky,
        I refer you to Swanson’s explanation of the significance of the paper on climate regimes that he wrote with Tsonis.

        http://www.realclimate.org/index.php/archives/2009/07/warminginterrupted-much-ado-about-natural-variability/

        It is worth reading in full. Here is what he says about the implications of his research for the future:

        “What’s our perspective on how the climate will behave in the near future? The HadCRUT3 global mean temperature to the right shows the post-1980 warming, along with the “plateau” in global mean temperature post-1998. Also shown is a linear trend using temperatures over the period 1979-1997 (no cherry picking here; pick any trend that doesn’t include the period 1998-2008). We hypothesize that the established pre-1998 trend is the true forced warming signal, and that the climate system effectively overshot this signal in response to the 1997/98 El Niño. This overshoot is in the process of radiatively dissipating, and the climate will return to its earlier defined, greenhouse gas-forced warming signal. If this hypothesis is correct, the era of consistent record-breaking global mean temperatures will not resume until roughly 2020. “

  25. “How do we know that the models representing global or regional climate are sufficiently reliable for predictions of future conditions? First they must reproduce existing observations, a test current models are failing as the global temperatures remain nearly constant.”

    Wrong.

    1. No model reproduces existing observations.
    2. Models produce predictions which will be within some margin of being correct.
    3. The allowable margin of error is decision specific.

    you dont validate models against REALITY.
    you validate models against the SPEC.

    The spec would say “match the trends within 10%”

    The problem is nobody has set the spec.

    Here are two FAILS.

    A) a modeler who says “the best we can do” is the spec
    B) a skeptic who says “match reality” is the spec.
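
    As a sketch only – the 10% figure is the hypothetical spec quoted above, not an agreed number – a trend-matching test might look like this:

    def meets_trend_spec(model_trend, observed_trend, tolerance=0.10):
        """True if the modelled trend matches the observed trend within a fractional tolerance."""
        if observed_trend == 0:
            return abs(model_trend) <= tolerance
        return abs(model_trend - observed_trend) / abs(observed_trend) <= tolerance

    # e.g. meets_trend_spec(0.21, 0.12) is False at the 10% tolerance; a looser spec would pass it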

    • What about this model? :)

      http://www.sciencemag.org/content/317/5839/796

      “We present a new modeling system that predicts both internal variability and externally forced changes and hence forecasts surface temperature with substantially improved skill throughout a decade, both globally and in many regions. Our system predicts that internal variability will partially offset the anthropogenic global warming signal for the next few years. However, climate will continue to warm, with at least half of the years after 2009 predicted to exceed the warmest year currently on record.”

      • One of the very first of its kind, and it’s attempting to do what is widely thought to be impossible.

        Look at it like this. They said the initial years would see natural variation offset AGW (it offset it by more than their model indicated). They said 2014 would be a warmest year (it will not be as warm as their model indicated).

        In general, not horrible for a first try at the impossible.

      • Pinky and The Brain

        It was in fact a fairly spectacular fail.

        https://watertechbyrie.files.wordpress.com/2014/06/swanson-realclimate1.png

        Some people did a better job of it.

        ‘Using this method, and by considering both internal natural climate variations and projected future anthropogenic forcing, we make the following forecast: over the next decade, the current Atlantic meridional overturning circulation will weaken to its long-term mean; moreover, North Atlantic SST and European and North American surface temperatures will cool slightly, whereas tropical Pacific SST will remain almost unchanged. Our results suggest that global surface temperature may not increase over the next decade, as natural climate variations in the North Atlantic and tropical Pacific temporarily offset the projected anthropogenic warming.’ http://www.nature.com/nature/journal/v453/n7191/abs/nature06921.html

        “The winds change the ocean currents which in turn affect the climate. In our study, we were able to identify and realistically reproduce the key processes for the two abrupt climate shifts,” says Prof. Latif. “We have taken a major step forward in terms of short-term climate forecasting, especially with regard to the development of global warming. However, we are still miles away from any reliable answers to the question whether the coming winter in Germany will be rather warm or cold.” Prof. Latif cautions against too much optimism regarding short-term regional climate predictions: “Since the reliability of those predictions is still at about 50%, you might as well flip a coin.” http://www.sciencedaily.com/releases/2013/08/130822105042.htm

        The particular coin is weighted towards non-warming for 20 to 40 years.

      • Some people did not predict 2014 would be a warmest year. They just drew a flat red line. It has not been flat. And it’s getting less flattish by the day. The current warming rate is over 0.6 °C per decade.

        Why will it stop? Because of proxies. LMAO.

      • I love that realclimate graph. Don’t agree? You must be a denier.

        It is difficult to imagine how you take yourself seriously let alone why anyone else should.

    • They have yet to make predictions that do significantly better than a null of no change. That shows no skill.

    • I suppose I should clarify that the “they” I refer to are the predictions/projections of the IPCC reports. There are certainly individual models that have done better than a no-change null and that is evidence of skill, but whether they actually have skill or were just lucky is another question.

    • Mr Mosher,

      “The problem is nobody has set the spec.”

      Wrong.

      The spec the models have been tested against is the set of graphs of projected temperatures from various IPCC reports, which were derived from model outputs. When tested against reality, the projections have all been wildly hot.

    • “you dont validate models against REALITY.
      you validate models against the SPEC.”

      Well, gee, this calls into question why GCM’s are allowed to be a talking point for policy.

      GCM’s – million-plus-line “climate programs” – don’t represent reality and can’t be validated against reality. They don’t reproduce natural cycles, don’t get the temperature distribution right, don’t get the precipitation right, etc.

      About the only thing they do is generate an upward accelerating temperature curve when fed increasing atmospheric CO2 data.

      GCMs are basically very, very overcomplicated, compute-intensive plotting programs. Plotting the upward curve due to the input-file forcing data should take at most a couple of dozen lines of code.
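
      To illustrate the “couple of dozen lines” point (and not as a stand-in for any actual GCM), a toy calculation using the standard 5.35·ln(C/C0) forcing approximation and an assumed equilibrium sensitivity produces exactly that kind of upward curve:

      import numpy as np

      def toy_temperature_response(co2_ppm, co2_ref=280.0, sensitivity_per_doubling=3.0):
          """Equilibrium warming (K) for a CO2 series, ignoring lags and everything else."""
          forcing = 5.35 * np.log(np.asarray(co2_ppm, dtype=float) / co2_ref)   # W/m^2
          lam = sensitivity_per_doubling / (5.35 * np.log(2.0))                 # K per W/m^2
          return lam * forcing

      # e.g. toy_temperature_response([280, 400, 560]) gives roughly [0.0, 1.5, 3.0] K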

  26. Max_OK, Citizen Scientist

    Dr. Morton, I am impressed by your longevity. While I wouldn’t want to be your age (81 years) right now, I do hope to eventually reach it and still be as active and prolific as you. I wish you many more years to come.

    I found your piece well organized and your writing clear and concise. I completely agree with what you said about ethanol. I’m sorry to say I disagree with you on just about everything else. Nevertheless, I thank you for expressing your thoughts on some interesting issues and inviting discussion.

    • Seems ter serfs that the model projections or predictions have
      been falsified by observation and the multitude of post hoc
      explanations for the temperature plateau are attempts at
      theory inoculation like attempts to ‘explain the missing hotspot’.

      • Max_OK, Citizen Scientist

        beth, your comment seems unrelated to mine. Did you put yours in the wrong place?

      • You say , Max_Okay_ Citizen_er_Scientist, ‘I’m sorry
        I disagree with you on just about everything else.’
        You make no reference to the models’ discrepancies with
        observations or to the number of post hoc_err -ies that
        are used to protect the way out models from criticism.
        Have you nothing to say regarding why yr okay about the
        discrepancies and/or the past hoc_err_ies?

      • Max_OK, Citizen Scientist

        Sure, I have some things to say.

        Only a sap would expect precise accuracy from climate model projections or predictions.

        Pinpoint projection accuracy isn’t necessary for a climate model to be useful.

        Skeptics have nothing to offer that would replace climate models.

        If skeptics think climate models don’t work right, they could be useful by offering to help fix the models.

      • Max

        When reconstructing historic temperatures from lots of unrelated data I always have Hubert Lamb’s maxim uppermost in my mind that ‘you can understand the (temperature) tendency but not the precision.’

        I think the same maxim can be applied to models. Do they meet the maxim or not? Do people – including the modellers – read too much into their results?

        tonyb

      • Pin point accuracy ? What a straw man ! In any case correlation
        with observations is waaaaay out. But never the less guvuhmints squander $$$$$$ on pseudo – science hockey stick models. Not
        ‘evidence – based – policy’ but’ policy – based – evidence.’

      • Models have at their core a set of non-linear equations, much as Lorenz’s convection model did. From there it is a modest step to the idea of sensitive dependence – perfectly deterministic but seemingly random regime shifts, in the words of Julia Slingo and Tim Palmer.

        Or indeed James McWilliams.

        ‘Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation.’

        http://www.pnas.org/content/104/21/8709.full

        http://www.pnas.org/content/104/21/8709/F1.large.jpg

        So these models are chaotic in the sense of complexity theory – and unless we get beyond mere definitional issues to the widespread understanding that it is so, there is nothing left to say.

        Climate is chaotic – because it shifts abruptly. ‘What defines a climate change as abrupt? Technically, an abrupt climate change occurs when the climate system is forced to cross some threshold, triggering a transition to a new state at a rate determined by the climate system itself and faster than the cause. Chaotic processes in the climate system may allow the cause of such an abrupt climate change to be undetectably small.’ http://www.nap.edu/openbook.php?record_id=10136&page=14

        Getting to an idea of what that means for the real system – is the problem.

        ‘Prediction of weather and climate are necessarily uncertain: our observations of weather and climate are uncertain, the models into which we assimilate this data and predict the future are uncertain, and external effects such as volcanoes and anthropogenic greenhouse emissions are also uncertain. Fundamentally, therefore, we should think of weather and climate predictions in terms of equations whose basic prognostic variables are probability densities ρ(X,t) where X denotes some climatic variable and t denotes time. In this way, ρ(X,t)dV represents the probability that, at time t, the true value of X lies in some small volume dV of state space.’ (Predicting Weather and Climate – Palmer and Hagedorn eds – 2006)

        Fundamentally – a probability density function of a family of solutions of a systematically perturbed model – rather than an ensemble of opportunity.
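
        A minimal sketch of the distinction, using a deliberately trivial stand-in model: perturb one parameter systematically, run a family of solutions, and summarise the outcomes as an estimated probability density rather than as any single trajectory:

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_run(feedback):
            """One 100-step run of dT = (1 - feedback*T)*0.02 - a stand-in for a real model."""
            T = 0.0
            for _ in range(100):
                T += (1.0 - feedback * T) * 0.02
            return T

        # Systematically perturb one parameter and collect the family of outcomes.
        feedbacks = rng.normal(1.2, 0.3, size=2000)
        outcomes = np.array([toy_run(f) for f in feedbacks if f > 0.3])

        # Empirical probability density of the final state X - an estimate of rho(X, t).
        density, edges = np.histogram(outcomes, bins=30, density=True)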

        ‘In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread be explained by analysis of model differences? How much is irreducible imprecision in an AOS?

        Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’ http://www.pnas.org/content/104/21/8709.full

        Of course if we just listened to Maxy’s cr@pola – we wouldn’t need to understand actual science.

      • Pinky and The Brain, must say that I concur.

      • Don’t be troubled by provocateurs – their words are banderillas and their posts have little content.

      • Pinky – “models have at their core…”

        Great post, well worth scrolling past the provocateur in order to read. Thanks for the links. James C. McWilliams is a credible source, the kind of specialist I have been looking for to shed some light on the AOS models.

        See his background in applied mathematics here:
        http://en.m.wikipedia.org/wiki/James_C._McWilliams

        Thanks again.

    • Max_OK, Citizen Scientist

      Tony,

      I suspect there is a tendency for people to read too much into the results of models. Also, there may be a tendency to expect too much.

      • Max

        I think we agree that people read too much into the precision of temperature reconstructions and the accuracy of models.

        However, on the basis of that output we are spending many billions on trying to change the world and as many billions in payments to third-world countries to follow our lead.

        I can think of very many better things to do with that money, although one of those better things would be to create a more robust and secure alternative energy system that meant we did not have to rely on suppliers that hated us or wanted to directly influence us or manipulate us, i.e. Russia and many of the Arab States.

        Tonyb

      • Max_OK, Citizen Scientist

        But you have to have some basis for planning, and though the future can’t be predicted with absolute certainty, a range of possibilities can help steer you in the right direction.

      • Max

        I would plan against the possibility of warming AND the much greater threat to civilisation of cooling.

        I would also have as my priority other unrelated matters which warrant much greater attention such as the threat from a natural Carrington event or the possibility of concerted cyber terrorist attacks. Both of these could knock out our highly vulnerable electronic infrastructure on which we totally rely, which in turn would cause collapse in a few days.

        tonyb

      • ‘In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.’ IPCC TAR 14.2.2.2

        Fundamentally – a probability density function of a family of solutions of a systematically perturbed model. Where we are instead is opportunistic ensembles with a range of single solutions chosen from amongst many feasible and divergent solutions of many different models. There is no rational basis for choosing one feasible solution over another.

        ‘In each of these model–ensemble comparison studies, there are important but difficult questions: How well selected are the models for their plausibility? How much of the ensemble spread is reducible by further model improvements? How well can the spread be explained by analysis of model differences? How much is irreducible imprecision in an AOS?

        Simplistically, despite the opportunistic assemblage of the various AOS model ensembles, we can view the spreads in their results as upper bounds on their irreducible imprecision. Optimistically, we might think this upper bound is a substantial overestimate because AOS models are evolving and improving. Pessimistically, we can worry that the ensembles contain insufficient samples of possible plausible models, so the spreads may underestimate the true level of irreducible imprecision (cf., ref. 23). Realistically, we do not yet know how to make this assessment with confidence.’ http://www.pnas.org/content/104/21/8709.full

        Of course if we just listened to Maxy’s cr@pola – we wouldn’t need to understand actual science.

      • Max_OK, Citizen Scientist

        Re post by Pinky and The Brain | December 16, 2014 at 4:13 am

        If grandma had read Pinky’s post she would have said “that boy needs his mouth washed out with soap.”

        No, granny, those aren’t Pinky’s words. He’s just quoting scientists who are paid to use big words which may sound obscene but aren’t really. Pinky sometimes forgets to quote everything. For example, he forgot the following from the link he gave us:

        “For many purposes that are well demonstrated with present practices, AOS models are very useful even without the necessity of carefully determining their precision compared with nature. These models are structurally unstable in various ways that are not yet well explored, and this implies a level of irreducible imprecision in their answers that is not yet well estimated. Their value as scientific tools is undeniable, and the theoretical limitations in their precision can become better understood even as their plausibility and practical utility continue to improve. Whether or not the irreducible imprecision proves to be a substantial fraction of present AOS discrepancies with nature, it seems imperative to determine what the magnitude of this type of imprecision is.”

        http://www.pnas.org/content/104/21/8709.full

        OK, what do we know after reading all that? We know the following:

        Models are very useful.

        Their value as scientific tools is undeniable.

        Their plausibility and practical utility continue to improve.

      • Models are very useful: “even without the necessity of carefully determining their precision compared with nature.”

        Their value as scientific tools is undeniable: “These models are structurally unstable in various ways that are not yet well explored, and this implies a level of irreducible imprecision in their answers that is not yet well estimated”, “it seems imperative to determine what the magnitude of this type of imprecision is.”

        “Their plausibility and practical utility continue to improve.”

        My boss says I’m improving too, but for some reason they’re not letting me run the place without him just yet.

      • Curious George

        There is a mantra of an “irreducible imprecision” of models chanted again and again. What does it mean?

        First, let’s assume that the climate really is a chaotic system. That does not necessarily mean that it exhibits jumps; it merely means that an infinitesimally small perturbation – a falling leaf – MIGHT get amplified dramatically and change the future behavior of the system.

        Next, let’s assume that we have a good model – a system of equations that describes faithfully the dynamics of the climate. Then an infinitesimal imprecision in model parameters – let’s say a thermometer reading that is off by 0.00001 degrees – will dramatically influence future results. So far so good.

        Modelers seem to think that that irreducible imprecision gives them a license to get away with murder. If a parameter is off by 2%, who cares? Would a model’s irreducible imprecision force wrong results anyway?

        NO, no, no. With a built-in error the bad model no longer faithfully represents the climate. It will have its own behavior, only remotely resembling that of the real climate. A study of the model’s results tells us a lot about the model’s results, but nothing about the climate. More mathematically, the model’s chaotic attractor will have very little or nothing in common with the real climate’s chaotic attractor.
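
        A minimal numerical sketch of that attractor argument, using the Lorenz 1963 system as an illustrative stand-in for a chaotic climate (the parameter values, step size and the 2% error below are arbitrary choices, not anything from a real climate model): two runs that start from the same state but differ by 2% in one parameter soon bear little resemblance to each other.

        ```python
        # Toy illustration: a 2% parameter error in a chaotic system (Lorenz 1963)
        # soon produces a trajectory with little resemblance to the "true" one.
        import numpy as np

        def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            """One 4th-order Runge-Kutta step of the Lorenz equations."""
            def f(s):
                x, y, z = s
                return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
            k1 = f(state)
            k2 = f(state + 0.5 * dt * k1)
            k3 = f(state + 0.5 * dt * k2)
            k4 = f(state + dt * k3)
            return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        dt, n_steps = 0.01, 3000
        a = np.array([1.0, 1.0, 1.0])            # "true" system
        b = np.array([1.0, 1.0, 1.0])            # model with a 2% error in rho
        for i in range(1, n_steps + 1):
            a = lorenz_step(a, dt)
            b = lorenz_step(b, dt, rho=28.0 * 1.02)
            if i % 1000 == 0:
                print(f"t = {i * dt:5.1f}   separation = {np.linalg.norm(a - b):8.3f}")
        ```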

      • Lots of precision in the pause, yet not so much in the models.

        Actually I find little precision in the pause and recognize a similar lack of precision in the models when judged on short intervals, which is a mistake we could well avoid.

        Heisenberg’s uncertainty principle applies: you cannot know the temperature to infinite precision.

        Too much uncertainty in the pause.

      • Bob Droege, Heisenberg’s uncertainty principle does not apply to temperature.

      • Can you explain why not, Phatboy?

      • Do you know what Heisenberg’s uncertainty principle is?

    • Max_OK, Citizen Scientist | December 16, 2014 at 3:17 am

      Tony,

      I suspect there is a tendency for people to read too much into the results of models. Also, there may be a tendency to expect too much.

      Sorry Max but I do not find yer conclusions okay regardin’ the
      logic of the situation. $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

      A serf.

      • Max_OK, Citizen Scientist

        beththeserf, it’s hard to explain to a sap that only a sap expects a model to predict with 100% accuracy. I am sorry I failed.

        I’m thinking “beththesap” would be more descriptive than “beththeserf.”

      • Max

        I assume that Beth will be asleep at present so you have around 6 hours to hide. Any caves local to you?

        Tonyb

      • Max_OK, Citizen Scientist

        Tony, in another post you said:

        “I would also have as my priority other unrelated matters which warrant much greater attention such as the threat from a natural Carrington event or the possibility of concerted cyber terrorist attacks. Both of these could knock out our highly vulnerable electronic infrastructure on which we totally rely, which in turn would cause collapse in a few days.”

        Holy cow, what a mess that would be !

        Speaking of hiding in caves, if I don’t start shopping for Christmas gifts instead of spending time here at ClimateEtc, I’m going to be in serious trouble. I’m leaving right now.

      • Max

        Speaking of Christmas gifts, as a greenie can I recommend this organisation?

        http://www.coolearth.org/general-news/half-a-million-acres

        I ‘own’ a couple of acres of rain forest to help the environment and secure the livelihood of natives.

        Tonyb

      • There yer go again, as if anyone’s expectin’ 100% accuracy, Max_okay_et_al, we’re talkin’ unprecedented model
        discrepancy_okay?
        bts ‘s’ fer anythin’ yer like’s, okay with me. )
        https://climateaudit.files.wordpress.com/2014/12/ci_glb_tas_1920_twopanel1.png

      • Hey Beth,
        Don’t mix anomalies with percentages or you will get epic fail.

      • Max_OK, Citizen Scientist

        Tony, thanks for the link to the organization for the indigenous communities
        in South America.

        The indigenous peoples of Oklahoma, the Plains tribes, are long gone. Their lifestyle was not compatible with that of the Caucasians who helped themselves to their land.

      • Max_OK, Citizen Scientist

        Beth, your insistence on passing judgement on a prediction long before its target date shows an irrational lack of patience. Perhaps an analogy would help you understand what I mean. If you bet on a race horse, and your horse was trailing after the first furlong, would you tear up your ticket and go home?

  27. Dr. Morton, why did you not even mention the adjustments to the temperature records?

    These are still growing even today and still being applied. Most of the published reports show the cumulative adjustment to be at least 0.75°C since WWII, and it keeps pushing older records downward (colder) while assuming today’s records are correct, even though there are no negative adjustments of today’s records for the well-known urban heat island effects around the most populous areas, where temperature stations are concentrated. These adjustments are nearly the same magnitude as the warming we are speaking of; how can any real scientist not acknowledge this fact?

    So Dr. Morton, you ask:
    “What if the global climate began to warm again?”

    Is this ‘further warming’ simply from even more and higher adjustments, or is it actual warming sensed in rural, thermally unpolluted records?

    I don’t believe you can answer this question either way without first addressing the adjustments’ validity, since to date they have never ceased growing, month by month and year by year.

    One such reference, some others are even higher:
    http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

    (BTW: other than that topic I found your article very informative)

    • Wayne
      There are many questions to ask about the temperature data, including the adjustments you mention. In this essay I wanted to emphasize the unreliability of the models whether or not they agree with any temperature curve.
      Don

      The strangest thing to me, and something I’ve raised in reply to Steve Mosher’s comments in a number of places and to which he has yet to answer, is the fact that the largest warming adjustments occur later in the game, when automated weather stations were being systematically deployed. As ASOS and AWOS became more common in the ’80s, the adjustments increased apace. Why is there a need to adjust upwards stations that in theory do not suffer from TOBS, or the drawbacks of Stevenson screens? Based on NOAA’s graph, you might even think the pause, in the US anyway, was due to the fact that someone stopped jacking up the data.

      • You are wrong. Use raw data. The answer doesn’t change.

      • @ Steve Mosher
        I refer you to the chart above, titled “Difference Between Raw and Final USHCN Data Sets”. Are you saying that the data in this chart are wrong or misleading in some way? Are you saying that later temperatures have not been corrected upwards? Analysis via the Mark I eyeball suggests that the mid-to-late ’70s to ’90s warming trend might otherwise be entirely accounted for by the adjustments to the data.

    • Dr. Morton:
      Thank you, that’s plenty said, I see your emphasis.

      D.J.:
      That is what I thought, that these adjustments would just stop once made, but instead they are even accelerating a bit up to today. So, if the adjusted temperatures now show a plateau, are we not actually cooling? That is exactly what it feels like — I remember well what the ’70s and early ’80s were like, being outside every day, and it seems we are now reverting. So I roughly removed that slope of adjustments to see what they are doing and I get this:
      http://i39.tinypic.com/1118rnl.png

      Makes me take pause. As you can tell, I don’t place much trust in these one-sided data adjustments.

    • Why do you post a chart that isn’t even relevant any more?

    • If you remove all urban stations, the answer doesn’t change.

      • My concern is that the answers do seem to change as you add and subtract stations; HADCRUT is showing some significant differences between new and previous versions

      • Now why do you think they made those particular changes Judy?
        What criteria do they have for removing and adding stations?

    • It seems obvious to everyone but a small subset of climate scientists that historic temperatures cannot be a moving target and continuously decline, rationalization or no rationalization.

      Class 1 stations in unchanged rural areas particularly ones upwind from urban heat plumes and not upgraded (continuous stations) should show the same trend as the general population.

      It is contended that class 1 and 2 stations show significantly less trend than the general population of stations. If this cannot be refuted it is problematic – because adding bad data to good data does not make the bad data better; it makes the good data worse.

      • The only thing we know about the future is that past temperatures are going to drop.

      • Doc,
        Your comment above is brilliant. Up there with Kim’s best, and that’s saying quite a lot.

      The temperature adjustments are done by computers, so don’t blame climate scientists, and those computers adjust the outliers (well, those older warm outliers – the cold outliers not so much). All adjustments create additional outliers, so the cascading process continues until, eventually, the little ice age ends at the onset of catastrophic global warming, probably on May 14 of 1943.

  28. Climate Researcher

    There won’t be a “return of rising temperatures” until the end of the current 30-year downturn in the 60-year cycle, which is superimposed on a long-term cycle of about 1000 years. The latter should reach a maximum within 50 years and then start cooling for nearly 500 years.

    Carbon dioxide has nothing to do with Earth’s climate. The surface is indeed warmer than the mean radiating temperature, but the higher temperatures are maintained by the effect of gravity which establishes the so-called “lapse rate” as an equilibrium state. This then means that there can be convective heat transfer downwards when solar radiation is absorbed in the upper troposphere or in the clouds. This apparent downward heat transfer is really just establishing a new state of thermodynamic equilibrium with a higher mean temperature due to the new energy arriving when the Sun shines. The reverse happens at night.

    There is no need for James Hansen’s explanation, in which he realises that extra energy is needed to explain the warm surface temperature but quite incorrectly assumes it is supplied by radiation from the colder atmosphere, including some from carbon dioxide molecules. All that is of course utter nonsense, but sadly it is all that politicians, with their weak understanding of physics, are gullible enough to believe.

  29. “These wide uncertainties show that we do not yet know how effective CO2 will be in raising global temperatures. If ECS and TCR really are close to the lower end of the quoted ranges, temperatures will continue to increase with CO2 but at a rate we could adapt to without serious economic damage. The recession of 2008 did not have a noticeable effect on the CO2 curve in Fig. 1.”

    So in addition to astrophysics you are an expert in analysing the economic effects of climate change and comparing them to the cost of mitigation? Where is your curve of total CO2 emissions versus the economic damage, showing the uncertainty due to TCR and ECS values? How did you come to your conclusion regarding the economic tradeoffs?

    • Dear eadler2
      Thanks for the comment. I cannot claim any economic expertise so I should have written “a rate we might have been able to adapt to”. Nevertheless I was surprised not to see any effect of the 2008 recession on the CO2 curve.
      Don

    • Richard Tol has projected net economic benefit for temperature increases up to 2°C (the peak of the curve), above which the planet will begin to see net economic loss.

      (Peer reviewed and IPCC certified)

  30. If we ignore chaos completely, and simply ask whether temperature is bound by the central limit theorem, there are huge implications for how we analyze the data statistically.

    Has anyone ever demonstrated convincingly that global temperature is bound by the central limit theorem on time scales of interest? If not, why assume that climate will try to converge to an average temperature?

    To the Mark I eyeball, global average temperature looks like a fractal, which suggests that average and variance will change as the scale increases. In other words, “climate change” may simply be a statistical result of changing the length of the sample.
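
    A quick synthetic check of that scaling point (both series below are artificial, so this only illustrates the statistics, not the actual temperature record): for i.i.d. noise the central limit theorem applies and the sample mean settles down as the record lengthens, while for a random walk – a crude stand-in for a fractal-looking series – the “average” and variance keep drifting with the length of the sample.

    ```python
    # Does the sample mean settle down as the record lengthens?
    import numpy as np

    rng = np.random.default_rng(0)
    white = rng.normal(size=100_000)               # i.i.d. noise: the CLT applies
    walk = np.cumsum(rng.normal(size=100_000))     # random walk: no fixed mean or variance

    for n in (1_000, 10_000, 100_000):
        print(f"n = {n:>7}   white: mean {white[:n].mean():+6.3f}, var {white[:n].var():6.3f}"
              f"   walk: mean {walk[:n].mean():+9.1f}, var {walk[:n].var():11.1f}")
    # The white-noise statistics converge; the random-walk "average" and variance
    # keep changing with the length of the sample.
    ```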

    • Temperature does not try to converge to an average. Temperature is in a cycle that does go from warm, like a Medieval Warm Period, to cold, like a Little Ice Age, and this cold, warm, cold, warm cycle does repeat on the order of every 800 to 1000 years. This is what the data show us.

      Try to understand that cycle and you will start to understand climate.
      http://popesclimatetheory.com/page75.html

      • Climate Researcher 

        Yes, popesclimatetheory, long-term cycles with superimposed 60-year cycles which explain the current slight net cooling. The long-term cycle should pass its maximum within the next 50 years. The cycles correlate compellingly well with 934-year and 60-year cycles in the inverted plot of the scalar sum of the angular momentum of the Sun and all the planets. This implies that planetary magnetic fields (which reach to the Sun) probably affect solar intensity and/or cosmic ray levels. Cosmic rays are thought to affect cloud formation on Earth.

  31. As I’ve written before – the climate models may well be suffering from a situation similar to that of analysis models being transformed into machine learning/Big Data analysis: that is, single-kernel, linear models which have to be transformed into highly parallelizable yet also high-I/O code.
    This conversion process is fraught with danger – and the lack of both post-dated data-set reproduction and predictive accuracy, combined with increasing complexity not yielding improvements in accuracy, would seem to point to the entire process being a failure.

  32. If you could eject a million particles of dust from a common point in a perfectly still room two feet over a table, no model could tell you the exact path that any single one of those dust particles would follow over any time frame, but a model could fairly accurately predict a scatter plot of the dust on the table below (a toy simulation of this is sketched after this comment). The real variables are the average mass of the dust particles, the force of gravity, and the height above the table. This is a very simple dynamical system, but how much more complicated is the climate? A million times more?

    Some say the models are “running hot”, but the time frame and the poor proxy of atmospheric heat versus full climate-system energy accumulation must be taken into account. The models are “running hot” for tropospheric sensible heat over the past 17 years, but in regards to full-system energy accumulation, the models have it just about right.
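
    A minimal sketch of the dust-cloud analogy in this comment (the particle count, number of steps and jitter size are invented purely for illustration): each particle’s path is unpredictable in detail, yet the spread of the landing points is statistically predictable.

    ```python
    # Toy dust cloud: individual paths are unpredictable, the scatter is not.
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles, n_steps, jitter = 100_000, 200, 0.01   # arbitrary illustrative values

    # Each particle drifts by a small random horizontal kick at every step while falling.
    landing_x = np.zeros(n_particles)
    for _ in range(n_steps):
        landing_x += rng.normal(scale=jitter, size=n_particles)

    print("one particle's landing point :", landing_x[0])               # unpredictable in detail
    print("predicted spread sqrt(n)*j   :", np.sqrt(n_steps) * jitter)  # ~0.141
    print("observed spread of the cloud :", landing_x.std())            # close to the prediction
    ```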

  33. Climate depends on a multitude of non-linear processes such as the transfer of carbon from the atmosphere to the oceans, the earth and plants, but the models used by the IPCC depend on many simplifying assumptions of linearity between causes and effects in order to make the computation feasible. Rial et al. (2004) have discussed the evidence for nonlinear behavior in the paleoclimate proxies for temperature and in the powerful ocean-atmosphere interactions of the North Atlantic Oscillation, the Pacific Decadal Oscillation and the El Niño Southern Oscillation in Fig. 3.

    but the models used by the IPCC depend on many simplifying assumptions of linearity between causes and effects in order to make the computation feasible.

    Duh, and they did not get the right assumptions or model output would look more like real data.

  34. There will be no “return” of any phony “global” warming temperature; start facing reality: exactly the same overall temperature will hold for many millions of years, as it is today!!! Global warming is phony, and always was, all proven:

    https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

    https://globalwarmingdenier.wordpress.com/2014/12/06/genesis-of-the-warmist-cult/

  35. Max_OK, Citizen Scientist

    Quoting Dr. Morton:
    “Skeptics have used this continuing plateau to question whether CO2 is the primary driver of climate, so if temperatures begin to rise again, we can expect many claims of vindication by those who have concluded human activity dominates.”
    ________________

    That won’t cut it with skeptics, who will go back to claiming the temperature records overstate the warming, as so many of them did before the “plateau.”

    • as so many of them did before the “plateau.”

      “So many” was what? Twenty people? Thirty?

      Since the plateau is now claimed to be 18 years, “before the plateau” means “before 1996”. Who before 1996 was claiming that “global warming is nothing but a hoax and a scare tactic”?

      • ‘Using a new measure of coupling strength, this update shows that these climate modes have recently synchronized, with synchronization peaking in the year 2001/02. This synchronization has been followed by an increase in coupling. This suggests that the climate system may well have shifted again, with a consequent break in the global mean temperature trend from the post 1976/77 warming to a new period (indeterminate length) of roughly constant global mean temperature.’ http://onlinelibrary.wiley.com/doi/10.1029/2008GL037022/full

        Unless pulling it out of your arse is now a recognised part of the scientific method – not that Maxy would recognise the scientific method if it hit him in the face – then some sort of hypothesis followed by analysis and synthesis is usually required.

        The new regime started in 2002. I talked to a hydrologist in 2003 – who understood the temperature implications and the links to Pacific states that I had just grasped. In 2007 Tsonis and colleagues posited a new dynamical mechanism for major climate shifts. By now it is pretty definitive: a break in ocean states likely to persist for 2 to 3 decades.

  36. Thanks for the article. I have an interest in modeling and wondered how the averaging of models for a chaotic system might be ascertained to be somehow representative. Just didn’t find the time to investigate further how this is scientifically justified. Seems it isn’t.

    • You can take an average of all models, but it is probably not a valid approach, as models fall into groups that handle dynamics differently. Probably some of those groups will be more accurate or complete, so taking an average of such a group is going to yield a better result. Only continued research will tell us which groups have the most accurate dynamics.

      • I am with gatesy on this one. Averaging is usually always the way to go. I coach basketball and I make sure that my players with average skill sets get the most playing time.

        Carry this foolishness to its logical conclusion and you can do away with the models and save a lot of money. Just have each group draw their little charts reflecting where they think the climate is headed and take the average.

        See comments of Dr. Brown on wattsup for an elegant explanation on why averaging models is goofy.

      • Not surprisingly – it is not at all like that. There are 55 models at last count, I believe, in the Coupled Model Intercomparison Project.

        Each of these models has many thousands of feasible solutions – chaotically diverging from small differences in starting points and boundary conditions. Schematically it results in the following.

        http://rsta.royalsocietypublishing.org/content/roypta/369/1956/4751/F2.large.jpg

        Pick a solution – arbitrarily, based on what looks good – and email it to the IPCC, where they graph it as an ensemble with a range of solutions.

        https://curryja.files.wordpress.com/2014/12/slide2.png

        Wait – what’s this arbitrary selection you say?

        ‘AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior.‘ James McWilliams

        That’s right – they pull it out of their arses.

        https://watertechbyrie.files.wordpress.com/2014/12/roses-19-e1418721660193.jpg
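
        The point about thousands of feasible solutions diverging from tiny differences is easy to reproduce with any chaotic toy model. A minimal sketch, using a logistic map as an illustrative stand-in (nothing here is drawn from CMIP or any AOS model): members whose initial conditions differ by one part in a million disagree completely after 100 steps, so quoting any single member is an arbitrary choice and only the ensemble statistics carry information.

        ```python
        # Initial-condition ensemble for a toy chaotic model (logistic map, r = 3.9).
        import numpy as np

        def run(x0, n=100, r=3.9):
            x = x0
            for _ in range(n):
                x = r * x * (1.0 - x)
            return x

        members = [run(0.4 + i * 1e-6) for i in range(30)]   # 30 members, 1e-6 apart at the start
        print("first few members after 100 steps:", [round(m, 3) for m in members[:5]])
        print("ensemble mean  :", round(float(np.mean(members)), 3))
        print("ensemble spread:", round(float(np.std(members)), 3))   # comparable to the signal itself
        ```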

  37. Oh come on now, what’s this all about? Everyone knows that all models are wrong. ;)

    If you’re looking for an alternative to models and will be attending the AGU Fall Meeting in San Francisco this week, do stop by my poster this morning, Tuesday from 8 am to 12:20 pm, in the Global Environmental Change section in Moscone West. You can find me at GC21C-0566 with a poster titled “An Ekman Transport Mechanism for the Atlantic Multidecadal Oscillation”. Or simply google GC21C-0566 2014 Ekman.

    • What, did you give up on your last poster which tried to quantify humanity’s degree of evilness?

    • Is this poster another pile of motivated reasoning?

    • Mais oui, Descartes, the models are très, très wrong.
      Un serf.

    • So what does the current LOD indicate for the future of the AMO?

      • That it will continue to follow while others lead!

      • JCH, maybe. If you look at the year of emergence as derived by Ed Hawkins, it appears the warming starts in the Indian Ocean and then goes into the Atlantic.

        http://www.met.reading.ac.uk/~ed/bloguploads/obs_yrem_sn1-2.png

        This matches up fairly well with the model study by Lee et al 2011

        http://www.aoml.noaa.gov/phod/docs/Lee_etal_2011_grl_amoc.pdf

      • Steven’s point about warming starting in the Indian Ocean and moving into the Atlantic is a good one, and has some partial truth when considering the warming of the IPWP that has occurred for 60+ years. Eddies of warm water leak out of the Indian Ocean around the southern tip of Africa, and do add warm water to the Atlantic.

      • @steven: So what does the current LOD indicate for the future of the AMO?

        Excellent question. Based on its history since 1700 I’d say that after a very quiet century it went through an increasingly rough period from 1790 to 1920 and then damped down gradually. For those who find the correlation between the LOD and the AMO compelling, if the damping continues the only significant multidecadal fluctuations in the North Atlantic are likely to be radiatively forced. (In part 1 of my talk at last year’s AGU meeting I argued that the global warmings of 1860-1880 and 1900-1945 can’t be attributed to radiative forcing, which would have required the land to warm faster than the ocean when it was the other way round.)

        Geomagnetic secular variation (GSV) could give an additional decade or so of heads-up in forecasting the LOD, in this account anyway, thanks to Hide et al 2000, “Angular momentum fluctuations within the Earth’s liquid core and torsional oscillations of the core–mantle system”.

        @steven: Too bad, I sort of like reading your arguments even if our very first interaction involved you accusing me of being a denier without you being able to identify what I was denying. That was years ago now. I have yet to return the favor by calling you anything.

        You called me condescending (not unfairly) in this 12/2/2010 comment, and then asked me to define “denier” without however suggesting that I had called you a denier or anything else (which as far as I can see I hadn’t).

        The context was your requests for explanations of the extent to which the 1860-1880 and 1900-1945 warmings were different from the 1970-2005 warming, and why observed forcing was less than estimated. My AGU Fall Meeting talk last year (google Pratt GC53C-06) addressed them (belatedly) in parts 1 (as above) and 3 respectively while my poster on Tuesday (google Pratt GC23C-0566 Ekman) expanded on part 1. Sorry I didn’t have those perspectives to hand in 2010.

      • Vaughan, that is the conversation I was thinking of and you are right that you didn’t call me a denier so I am sorry for the comment I made that you had.

      • My argument was that the smaller you made the transient/equilibrium ratio then the higher the ratio of equilibrium/transient warming would be in the future. For instance, separating transient and equilibrium into 50 year periods for simplicity, an 80% transient on a 0.5C total warming would cause 0.4C warming in the first 50 years and 0.1C in the next 50. If you argue for much smaller transient/equilibrium ratios say for instance 40% then you would get 0.2C in the first 50 and 0.3C in the next 50. Now in the latter 50 years if you had 0.5 warming only 0.2C of that would be transient so only 0.12 further warming to equilibrium would be expected. Perhaps you covered all this in your presentations I haven’t had chance to look yet.

      • @steven: My argument was that the smaller you made the transient/equilibrium ratio then the higher the ratio of equilibrium/transient warming would be in the future. For instance, separating transient and equilibrium into 50 year periods for simplicity, an 80% transient on a 0.5C total warming would cause 0.4C warming in the first 50 years and 0.1C in the next 50.

        Sorry, not following. Are you saying that equilibrium climate sensitivity could be determined within 100 years? Normally it’s understood to take many hundreds of years for climate to reach equilibrium after a doubling of CO2.

        I also don’t understand the 80% and 40% numbers in your examples. Transient climate sensitivity is not defined as some percentage of climate sensitivity but as the increase in 20-year climate over 70 years when CO2 is steadily increasing at 1%/yr (presumably over 90 years since a 70-year-long 20-year running mean requires 90 years of data).

        If a given model estimates ECS at s and TCR at s’, s’/s will indeed be some percentage. However if that percentage turns out to be 80% no one is going to ask what the simulations would have looked like had the percentage been 40%. It’s a meaningless question because the percentage is not a control knob in a simulation.
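
        As a quick arithmetic check on the 70-year window in that definition (the 280 ppmv starting value is just the usual pre-industrial figure, used here for illustration): CO2 rising at 1%/yr for 70 years comes out almost exactly at a doubling.

        ```python
        # 1%/yr compounded over 70 years is nearly an exact doubling,
        # which is why the TCR definition uses a 70-year window.
        growth = 1.01 ** 70
        print(growth)          # ~2.0068
        print(280 * growth)    # a 280 ppmv start would reach ~562 ppmv
        ```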

      • Vaughan, the 100 years was just a simple example to get my point across. I see it worked like a charm. Let me take this from our previous argument:

        This is what the paper you promoted says

        “Ocean-caused delay is estimated in Fig. (S7) using a coupled atmosphere-ocean model. One-third of the response
        occurs in the first few years, in part because of rapid response over land, one-half in ~25 years, three-quarters in 250 years, and nearly full response in a millennium.”

        The trend from 1910 to 1960 was about 0.1C/decade.
        The trend from 1960 to 2010 was about 0.1C/decade.

        They both include what appears to me, without doing any calculations, to be fair shares of positive and negative AMO phases.

        The difference is the second trend includes warming to equilibrium from the same forcings that caused the first trend.

        The warming is decelerating.

      • Just the paper reference was from our earlier argument BTW.

      • This is what the paper you promoted says

        Since I hadn’t promoted any paper in that thread from 2010 it took me a while to figure out what paper you were referring to. Turned out it was a paper Andy Lacis was promoting, namely Hansen et al 2008. Andy quoted a figure of 1500 years from the paper, all I said was to suggest taking model-generated numbers like that with a grain of salt.

        The trend from 1910 to 1960 was about 0.1C/decade.
        The trend from 1960 to 2010 was about 0.1C/decade.

        According to WoodForTrees the respective trends are 0.089 °C/decade and 0.135 °C/decade. On the basis of those two numbers alone, which show a trend increase between those two half-centuries of 0.046 °C/decade, I would have said that the warming is accelerating. Furthermore based on those two numbers alone and nothing else, the best estimate for warming for 2010-2060 should be another increment by 0.046 °C like the previous one, bringing the warming to 0.182 °C/decade. (Which is a lot lower than the last three years which trended up by 0.537 °C/decade.)

        They both include what appears to me, without doing any calculations, to be fair shares of positive and negative AMO phases.

        On that basis the best estimate for 2010-2060 would presumably be more of the same, hence no impact either way on the 0.182 °C/decade estimate.

        The warming is decelerating.

        The numbers seem to be saying the opposite. What is it about them that persuade you that the warming is decelerating?

      • Do you disagree with the paper? By how much and in what way?

        I didn’t use Wood for Trees. I looked at the chart of HadCRUT4 and determined that for the running mean the warming was about 0.5 °C for each 50-year period.

        http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4.png

        Clearly the same within the uncertainty range anyway:

        http://www.metoffice.gov.uk/hadobs/hadcrut4/diagnostics.html

        Now the paper that you may or may not agree with states 50% of the warming in the first 25 years. That would be the average number of years for the first data set from 1910 to 1960. The next 25% come in the following 250 years. I doubt that is evenly spaced. So for the next 50 years what would you say? 10%? 15%? So roughly between 0.04 and 0.07C? Take that off the latter half of the century and add it to the former half since that was when the forcing occurred and you have a warming deceleration.

        Feel free to come up with your own numbers. You may not like my choice of 1960 for instance. What do you think the percentage of warming should be from a forcing after 25 years in the following 50 years? If you wish to stick with what you have done already then how much of that 0.046C/decade would you like to attribute to the forcing from the first half of the century?

      • Do you disagree with the paper? By how much and in what way?

        I’m not sure why you think I would have any position on that paper as it was Andy Lacis who was promoting it, not me. In any event I don’t understand the paper’s relevance to your claim that the warming is decelerating, since you’re basing that claim on what’s happened up to now, for which we have billions of data points, whereas the three or four numbers you quoted from the paper were produced by model runs projecting hundreds of years into the future. What new information could you infer about the performance of IBM stock in the 20th century based on forecasts of IBM for this century and beyond, or about past earthquakes based on projections of future earthquakes?

        Clearly the same within the uncertainty range anyway:

        Excellent point, it’s definitely worth taking the uncertainty into account. Matthew Marler could do a better job of that than me. The question is, how should one convert those 95% error bars at each of the smoothed annual averages into 95% error bars on the two slopes of the least-squares trend lines? Matthew, do you know of any quick back-of-the-envelope methods that would give a reasonable estimate?

        Trend lines tend to be pretty robust and I would therefore be very surprised if the error bars for each of the 0.089 and 0.135 trend slopes came anywhere near overlapping. Moreover even if they touched, the probability that the two trendlines have the same slope is 0.05 squared or 0.0025, since you have to be at the 0.05 end of each; and if they are not independent (the assumption justifying multiplying the probabilities), they are more likely to be anticorrelated than correlated, making that probability even less.

        But even in the unlikely event that the two trend lines have the same slopes, I still don’t see how you were able to infer that warming is decelerating.

      • Vaughan, that’s why I told you to do your own numbers. It isn’t important if it is decelerating or accelerating more slowly for the purposes of this argument. It is only important that when climate scientists decide to claim more warming per molecule for the future that they start by calculating and adjusting down how much each molecule has done recently.

      • If the two trend lines had the same slope and one contained feedback and the other didn’t, the one with feedback would indicate a slower warming rate because the feedback would produce less feedback than an initial pulse. If that wasn’t the case I suspect we would be quite warm or cold all the time. Now you could argue that the first half of the 20th century also contains feedback. It could get very messy depending on how accurate you wanted to get.

      • Vaughan, I used the running mean. Wouldn’t you consider that a superior method to using point to point, where the individual years would matter more? We can’t avoid involving weather, but there is no reason to search it out.

      • that’s why I told you to do your own numbers.

        And I did, but you seemed not to like them.

        It isn’t important if it is decelerating or accelerating more slowly for the purposes of this argument.

        Whether it was decelerating or accelerating was important to me to the extent that at least I understood the claim and could therefore say something about it. You began with “My argument was that the smaller you made the transient/equilibrium ratio then the higher the ratio of equilibrium/transient warming would be in the future” which I didn’t understand and so grasped at the nearest straw that I did understand.

        It is only important that when climate scientists decide to claim more warming per molecule for the future that they start by calculating and adjusting down how much each molecule has done recently.

        I don’t understand that either. What are the units you’re using for how much a molecule does? Joules? Degrees?

        Back in 2010 (which this discussion appears to be a continuation of) you wrote “I have been saying this for almost 2 years to climate scientists and have yet to recieve a response that explains where I am making my error.” Could this have been because you were accusing them of “adjusting down”? It’s a patient teacher who’s willing to carry on in the face of disruptive accusations of manipulation of data.

        If the two trend lines had the same slope and one contained feedback and the other didn’t, the one with feedback would indicate a slower warming rate because the feedback would produce less feedback than an initial pulse. If that wasn’t the case I suspect we would be quite warm or cold all the time. Now you could argue that the first half of the 20th century also contains feedback. It could get very messy depending on how accurate you wanted to get.

        Nor do I understand that. What does it mean for either a trend line or a period of time to contain feedback?

        I used the running mean. Wouldn’t you consider that a superior method to using point to point, where the individual years would matter more?

        How did you get a slope from a running mean without connecting two points? A running mean merely smooths; it doesn’t give a trend line, unlike linear regression, meaning a least-squares fit of a straight line. I would consider linear regression superior to point-to-point, though I’m not sure it makes much difference whether linear regression is applied to smoothed or unsmoothed data.

      • “Could this have been because you were accusing them of “adjusting down”? It’s a patient teacher who’s willing to carry on in the face of disruptive accusations of manipulation of data.”

        You should read what I said again. I am complaining because I don’t think they are doing a step I would consider important. That isn’t exactly manipulation of data.

        “And I did, but you seemed not to like them.”

        I don’t mind them but since they aren’t my numbers don’t expect me to answer for why they don’t match my results. If you wish to do a point to point analysis of the running mean go ahead.

        What produces a warming trend if not the forcing and the feedbacks?

        I don’t think my argument is all that complicated. If you make the transient response to be less of the total response you will get more warming in the future. That means the response to forcings in the past also had more warming in the future and that future is currently the present.

      • You should read what I said again. I am complaining because I don’t think they are doing a step I would consider important. That isn’t exactly manipulation of data.

        Ah, I see. I read it again and realized I’d managed to completely misinterpret what you said, very sorry about that. Let me see if I understand it now. You wrote

        It is only important that when climate scientists decide to claim more warming per molecule for the future that they start by calculating and adjusting down how much each molecule has done recently.

        I’m now reading this as saying that, on the reasonable assumption that forecasts should take into account recent history, any forecast of future warming must first “adjust down” recent history. Did I understand you correctly? And if so, what did you mean by “adjust down”, why is it needed (e.g. what methodological principle is it based on), and how should it be done?

        I don’t mind them but since they aren’t my numbers don’t expect me to answer for why they don’t match my results.

        My numbers (0.089 and 0.135 °C/decade for respectively 1910-1960 and 1960-2010) don’t match yours (0.1 and 0.1) because your method of obtaining them was to eyeball the 95% error bars and claim to be able to picture trend lines of equal slope lying within them. I don’t see how that approach can yield remotely as plausible a fit as obtained with the least-squares best fit. Just squinting and saying that 0.089 looks close enough to 0.135 to call them both equal to 0.1 is not a terribly compelling argument.

        If you wish to do a point to point analysis of the running mean go ahead.

        Why would I wish to do that when it’s more robust to fit a trend line L to the data D that minimizes the standard deviation of the residual, D minus L? That is the exact technical definition of linear regression (there are other wordings). What I wanted to know was how you obtained a trend slope from a running mean. A running mean does not produce a trend slope, it converts time series A into time series B each of whose points is the mean of its neighborhood in A. How do you get a slope from time series B?

        What produces a warming trend if not the forcing and the feedbacks?

        Well, there’s convection and conduction. But let’s go with the IPCC, which only defines radiative forcing.

        I don’t think my argument is all that complicated. If you make the transient response to be less of the total response you will get more warming in the future. That means the response to forcings in the past also had more warming in the future and that future is currently the present.

        That part of your argument is not as complicated because it contains no numbers. But even from that 30,000 foot level, if by “total response” you mean ECS then you’re comparing apples and oranges. ECS is the eventual temperature rise following a doubling of CO2. TCR is the rise in 20-year climate (meaning 20-year running mean of temperature) during 70 years of CO2 rising at 1% per year. (In this context, by temperature I mean annual global mean surface temperature, 1-year climate for short.)

        I don’t believe there’s any way of defining “transient response” and “total response” consistently with how modelers interpret ECS and TCR that allows them to be compared the way you want to. But let’s at least take a stab at it.

        How about defining “total response” so that the doubling in the ECS definition matches that in the TCR definition? The scenario is then that the climate system starts out in equilibrium with CO2 and everything else steady, then the CO2 abruptly starts rising at 1% per year for each of 70 years, and then just as abruptly stops (and has therefore essentially doubled, $100 compounded at 1% over 70 years accruing to $200.68). Transient response is the rise in 20-year climate during the 70 years while CO2 is changing, while total response is that plus the eventual further rise in temperature thereafter, namely when equilibrium is once again reached, with no further changes to CO2 (since ECS is defined only for a doubling).

        (One could complain that starting TCR from an equilibrium state, while technically not ruled out by the IPCC definition, is not as natural as spinning the model up well in advance so as to “acclimatize” the climate system to a steady 1%/yr rise in CO2, but let’s ignore that nicety for now.)

        If those definitions are not what you had in mind, please define “transient response” and “total response” in a way that allows them to be compared the way you’ve been doing. This presumably entails defining both in a single scenario, otherwise I don’t see how to make sense of a comparison.

        If however they’re consistent with how you’re looking at things we can move on to what you asked earlier:

        Now the paper that you may or may not agree with states 50% of the warming in the first 25 years. That would be the average number of years for the first data set from 1910 to 1960. The next 25% come in the following 250 years. I doubt that is evenly spaced. So for the next 50 years what would you say? 10%? 15%? So roughly between 0.04 and 0.07C? Take that off the latter half of the century and add it to the former half since that was when the forcing occurred and you have a warming deceleration.

        We can then try to come up with numbers for your scenario that we can both agree to. As things stand they look wrong to me, but perhaps I’m just not understanding your argument.

      • Ok, I see why I can’t do what I was attempting to do regarding trends. If I had said the amount of warming before and after 1960 I would have had more solid ground to stand on and then you could have said 1960 was cherry picked and I couldn’t have argued against that very well. I’ll try to get to the rest of your response in a reasonable amount of time. It’s going to be hectic around here for the next few days.

      • Actually let me do a quick example so that you can see where I’m going. Unlike climate models the earth didn’t have a spin-up time. It began warming. The times and amounts may be fairly disputed, but everyone, except maybe a very few people, agrees with the basic concept that we were cooling and we switched to warming. So I’m going to take some numbers I don’t claim to be accurate, run through 2 evenly warming time periods, and then make a conclusion on what might happen in a 3rd. I’m going to use 50% for the amount of warming from a forcing in the first time period, 22% for the second time period, and 8% for the 3rd. I’ll just use an arbitrary number of 2 for the total change from the forcing, although the total number isn’t important for the argument, just the percentages over a reasonable amount of time.

        So for the first time period you have 1 or 50% of 2.

        The second time period you still get 1 warming but now 0.44 is from feedbacks from the forcings of the first time period so only 0.56 is from additional forcing.

        For the 3rd time period we will hold the warming from additional forcing at 0.56. We get 0.16 from the feedbacks of the forcing of the first time period and 0.12 from the second period forcing as feedbacks.

        The conclusion is that the warming was decelerating during the second time period but that could only be identified as a change in trend in the 3rd.

      • I shouldn’t do things in such a hurry.

        The 3rd time period should be 0.56 0.16 and 0.25. Still showing a deceleration in the 3rd time period but not by much. A 4th time period if you held the warming from additional forcing steady would show a much larger drop.
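
        Steven’s bookkeeping can be written out explicitly. A minimal sketch of the scheme as he describes it – each forcing realises 50%, 22% and 8% of its eventual response in the period it is applied and the two that follow, with the contribution of each new forcing held at 0.56 after the first period – reproduces his corrected sequence of 1, 1 and roughly 0.97.

        ```python
        # Three-period bookkeeping with response fractions 50%, 22%, 8%.
        fractions = [0.50, 0.22, 0.08]
        new_contribution = 0.56                       # warming from each new forcing in periods 2 and 3
        # Eventual response of the forcing introduced in each period:
        totals = [2.0, new_contribution / fractions[0], new_contribution / fractions[0]]

        warming = [0.0, 0.0, 0.0]
        for start, total in enumerate(totals):        # forcing introduced in period `start`
            for lag, frac in enumerate(fractions):
                period = start + lag
                if period < 3:
                    warming[period] += frac * total

        print([round(w, 2) for w in warming])         # [1.0, 1.0, 0.97] -- a slight deceleration in period 3
        ```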

      • @steven: Clearly the same within the uncertainty range anyway:

        This was your argument for why it was statistically sound to approximate the rates of warming for HadCRUT4 from 1910-1960 and 1960-2010 as both being the same 0.1 °C/decade. Since Matthew Marler hasn’t stepped in to judge its soundness I decided to test it myself with the following Monte Carlo simulation.

        My procedure was to take the 50 annual temperatures for 1910-1960 and randomly perturb each of them with a normal distribution having a (generous) standard deviation of 0.1 °C. This gives 95% error bars that are 0.4 °C wide, somewhat wider than those shown at

        http://www.metoffice.gov.uk/hadobs/hadcrut4/diagnostics.html

        I then fitted a trend line to this perturbed data to determine the slope.

        I repeated this 10,000 times to give 10,000 slopes, and took their standard deviation, which turned out to be 0.00976 °C/decade.

        I then repeated all of the above for 1960-2010 and obtained a standard deviation of 0.00974, essentially the same.

        This makes the 95% error bars about 0.04 °C/decade wide. These two error bars are therefore

        1910-1960: 0.068-0.108 °C/decade
        1960-2010: 0.116-0.156 °C/decade

        While a slope of 0.1 °C/decade lies comfortably within the 1910-1960 error bar it is well outside the error bar for 1960-2010.

        Conclusions:

        1. The most likely rates of warming are 0.088 and 0.136 °C/decade for the respective half-centuries, and with 95% confidence these could each be low or high by at most 0.02 °C/decade.

        2. 1960-2010 certainly warmed faster than 1910-1960. Furthermore the probability of 0.025 that the earlier period was above its high limit of 0.108, multiplied by the equal probability that the later period was below its low limit of 0.116, gives a probability of 0.000625 that the trends are closer together than 0.008 °C/decade. That’s a chance of 1 in 1600.

        So I don’t accept your premise that the two half-centuries warmed at the same rate, on the ground that it’s extremely unlikely.

        Your quantitative argument about a 50-year-delayed feedback from the first half-century showing up in the second half starts from this premise. I therefore can’t accept it either.
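
        The Monte Carlo recipe described above is straightforward to reproduce in outline: perturb each annual value with N(0, 0.1 °C) noise, refit the trend line, repeat 10,000 times, and take the standard deviation of the fitted slopes. The sketch below uses a synthetic 50-year series with an arbitrary trend rather than reading the actual HadCRUT4 file, so the central slope is only illustrative, but the spread of the slopes comes out close to the 0.00976 °C/decade quoted, because the added noise, not the underlying series, is what sets that spread.

        ```python
        # Monte Carlo estimate of the uncertainty in a 50-year trend slope.
        import numpy as np

        rng = np.random.default_rng(42)
        years = np.arange(1910, 1960)                  # a 50-year window
        base = 0.009 * (years - years[0])              # synthetic series, arbitrary 0.09 C/decade trend

        slopes = []
        for _ in range(10_000):
            perturbed = base + rng.normal(scale=0.1, size=years.size)   # N(0, 0.1 C) per year
            slope_per_year = np.polyfit(years, perturbed, 1)[0]
            slopes.append(slope_per_year * 10.0)       # convert to C per decade

        sd = float(np.std(slopes))
        print("sd of fitted slopes:", round(sd, 5), "C/decade")         # ~0.0098
        print("95% bar width      :", round(3.92 * sd, 3), "C/decade")  # ~0.04
        ```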

      • So I don’t accept your premise that the two half-centuries warmed at the same rate, on the ground that it’s extremely unlikely.

        The two half centuries have different time histories.

        The former has a significant volcanic memory (relaxation period) lasting to around mid-century.

        http://onlinelibrary.wiley.com/doi/10.1029/2008JD011673/abstract

        The latter has a different solar history, the aerosol age and hfc.
        The impacts on each hemisphere differ (the SH having significant circulation changes from solar and hfc); trying to ascertain global metrics is problematic.

      • You can also have an accelerating trend while having a decelerating forcing:

        Period:      1      2      3
        Forcing 1:   1      0.44   0.16
        Forcing 2:          0.9    0.39
        Forcing 3:                 0.8
        Total:       1      1.34   1.35

        It really just depends on the feedbacks and when they occur and at what magnitude. I don’t have an argument for a certain amount of feedback at any particular time. I just have an argument that it matters.

      • To spell that out: the first forcing, using my same system as before, produces 1 in the first time period, 0.44 in the 2nd and 0.16 in the 3rd.

        The 2nd forcing starts in the 2nd time period, producing 0.9 there and 0.39 in the 3rd.

        The 3rd forcing starts in the third time period and produces 0.8.

        Each time period has a deceleration of warming from new forcings yet an increasing warming trend.

        You could also come up with a series of forcings and feedback where this would have negligible effect. I’m not a scientist so I ask those that are what does a particular forcing feedback scheme mean for the amount of warming in the second half of the 20th century for forcings from the first half. Do you have a model? If so what does your model say it should be?

      • Yes, I was wondering why you were so certain I was wrong about the trend, so I started looking more closely at what I was doing and saw some rather glaring errors in my thinking, which I appreciate you making me think about.

      • I don’t have an argument for a certain amount of feedback at any particular time. I just have an argument that it matters.

        I agree that feedbacks (or any other source of delays in impact) make it harder to tell what’s going on, which in turn makes it harder to forecast future temperature.

        In the meantime I modified my Monte Carlo program to use the actual 95% error bars shown in

        http://www.metoffice.gov.uk/hadobs/hadcrut4/diagnostics.html

        in place of a fixed standard deviation of 0.1 °C.

        There are actually several sets of error bars depending on which uncertainties to include. I took the ones that incorporated measurement and sampling uncertainties, bias uncertainties, and coverage uncertainties, namely columns 11 and 12 of

        http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.3.0.0.annual_ns_avg.txt

        which give respectively the lower and upper limits of the error bar for that year. I converted each such pair to a standard deviation by dividing their difference by 3.92. This turned out to narrow the error bars for the two slopes in question, namely to

        1910-1960: 0.077—0.099
        1960-2010: 0.128—0.144

        The first one is 0.022 wide and the second 0.016, reflecting lower uncertainty in the more recent data, presumably due to improvements in coverage, measurement, etc.
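
        A minimal sketch of the error-bar conversion just described, assuming the lower and upper 95% bounds from columns 11 and 12 have already been read into arrays (the three values shown are invented placeholders, not the actual HadCRUT4 numbers):

        ```python
        # Per-year 95% bounds -> per-year standard deviations (width / 3.92, i.e. 2 x 1.96),
        # then one Monte Carlo realisation perturbing each year with its own sigma.
        import numpy as np

        rng = np.random.default_rng(0)
        lower = np.array([-0.55, -0.48, -0.50])   # placeholder lower bounds for three years
        upper = np.array([-0.35, -0.30, -0.34])   # placeholder upper bounds

        sigma = (upper - lower) / 3.92            # per-year standard deviation
        central = (upper + lower) / 2.0           # central estimate for each year

        perturbed = central + rng.normal(scale=sigma)   # one realisation
        print("per-year sigma :", sigma.round(4))
        print("one realisation:", perturbed.round(3))
        ```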

      • You could also come up with a series of forcings and feedback where this would have negligible effect. I’m not a scientist so I ask those that are what does a particular forcing feedback scheme mean for the amount of warming in the second half of the 20th century for forcings from the first half. Do you have a model?

        Only an extremely simple one, nothing like the dozens of sophisticated General Circulation Models currently participating in CMIP5.

        If so what does your model say it should be?

        My model is naive, speculative, and based on observations reconciled by the following five premises.

        1. A climate sensitivity of 3 °C/doubling of CO2.

        2. Atmospheric CO2 as observed at Law Dome before 1960 and Mauna Loa thereafter.

        3. The assumption of an ocean-induced delay of 25 years, of the kind Hansen first wrote about in 1985 but more quantitative. (This is somewhat less than the 50 years you’re asking about but in the same vein. It takes into account that excess CO2 above 280 ppmv continues to increase exponentially, which you seem to be ignoring but which is important if the figure of 25 years is to remain at all constant from one decade to the next.)

        4. The odd-numbered solar cycles, which for reasons unknown (to me anyway) have a stronger effect on global climate than the even ones.

        5. The geomagnetic secular variation (GSV) as a diagnostic of erratic motions of the Earth’s core, which seem to have a time constant on the order of 60 years, likely determined principally by the core’s moment of inertia and the lower mantle’s Young’s modulus.

        I claim that these five factors are sufficient to explain all the interesting features of the 165 years of HadCRUT4 smoothed to a 10-year running mean.

        I’ve not tried to model faster-moving changes such as those related to ENSO, mainly because I’m not thus far persuaded that those have much bearing on the mean global temperature for say the decade 2100-2110.
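
        Premises 1–3 are concrete enough to sketch. The CO2 history below is a crude exponential approximation (280 ppmv background plus a growing excess), used only to keep the sketch self-contained; it is not the Law Dome or Mauna Loa data, and premises 4 and 5 (the solar cycles and the GSV) are left out entirely.

        ```python
        # Premises 1-3 only: warming = 3 C per doubling of the CO2 seen 25 years earlier.
        import math

        def co2(year):
            """Very rough CO2 history in ppmv: 280 background plus exponentially growing excess."""
            return 280.0 + 37.0 * math.exp(0.0218 * (year - 1960))

        def warming_since(year, reference=1850, sensitivity=3.0, delay=25):
            """Temperature change attributed to CO2 alone, with a fixed ocean-induced delay."""
            return sensitivity * math.log2(co2(year - delay) / co2(reference - delay))

        for y in (1910, 1960, 2010):
            print(y, round(warming_since(y), 2), "C relative to 1850")
        ```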

      • Vaughan,
        Here’s my “concern” with your 5 points: they are truly global effects that should for the most part affect the entire globe more or less equally. Yet the global temperature series you’re comparing them to is mostly made from regional surface temperature events that do not have a global impact, until it’s all smeared around the world during the production of the temperature series you’re using.

        I see this all of time with ENSO events.

      • I remember 4 and 5 from discussions here. I hope your hypotheses work out for you.

        I’m not ignoring the increase in CO2. I have no reason to disbelieve that CO2 is a GHG. What I’m saying is that calculating how much warming you would expect today from past forcings, using the numbers your beliefs produce, would be a good check on those beliefs. It could change what you believe about the future. It may not, I don’t know. It just seems like a logical process to go through.

      • @Mi Cro: Here’s my “concern” with your 5 points: they are truly global effects that should, for the most part, affect the entire globe more or less equally. Yet the global temperature series you’re comparing them to is mostly made from regional surface temperature events that do not have a global impact until they are smeared around the world during the production of the temperature series you’re using.

        What’s your measure of “mostly”?

        If your measure is the variance of global temperature, then 70-80% of it is attributable to a single “event”, namely global warming over the past 70 or so years.

        Variance is an additive measure in the case of independent events, meaning that var(T1 + T2) = var(T1) + var(T2). This makes it a suitable metric for breaking down global climate into its independent parts.

        I see this all the time with ENSO events.

        If you have an example of a global multidecadal fluctuation that you can tie to regional shorter-term events like ENSO I’m all ears.
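
        A quick numerical illustration of the variance additivity mentioned above, using synthetic series (purely to show the bookkeeping, not any real temperature data):

        import random

        random.seed(0)
        n = 100000
        t1 = [random.gauss(0.0, 0.20) for _ in range(n)]   # slow, trend-like part
        t2 = [random.gauss(0.0, 0.10) for _ in range(n)]   # faster fluctuation

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        total = [a + b for a, b in zip(t1, t2)]
        print(round(var(t1) + var(t2), 4), round(var(total), 4))  # the two agree closely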

      • “What’s your measure of “mostly”?
        http://www.science20.com/virtual_worlds/blog/is_global_warming_really_a_recovery_from_regional_cooling-121820
        Down at the bottom are the annual station daily rate of change averages for all of the stations on the various continents, for min/average/max temps.
        “If your measure is the variance of global temperature, then 70-80% of it is attributable to a single “event”, namely global warming over the past 70 or so years.”
        See the charts referenced; most of the changes in temp that end up in global averages are regional.
        “Variance is an additive measure in the case of independent events, meaning that var(T1 + T2) = var(T1) + var(T2). This makes it a suitable metric for breaking down global climate into its independent parts.”
        What if they are dependent? The way I process stations is to calculate a difference based on yesterday’s temp.
        I see this all the time with ENSO events.

        “If you have an example of a global multidecadal fluctuation that you can tie to regional shorter-term events like ENSO I’m all ears.”
        No, but the big swings in GAT trace back to big regional swings in min temp.

    • blueice2hotsea

      GC21C-0566 Details

      V.Pratt, hope you can share the poster as a head post at CE.

    • Is your poster available online? Link?

    • The meeting is keeping me busy this week. I’ll upload my poster to the AGU website by Friday.

      As far as a post is concerned, if CE were moderated to the standards Judy promised when she started it, that would make sense. However, it’s become clear that her other obligations will continue to make this impossible as long as she keeps operating it all by herself. It’s hard to argue that CE is advancing the understanding of climate etc. when half its comments are ad hominem arguments and other Latin-named fallacies. Which is a pity since Judy’s original premise of a blog that would give an equal hearing to all sides is a really great one. I made this point privately to Judy a while ago, but I expect it’s going to take more than just one voice crying in the wilderness.

      • Matthew R Marler

        Vaughan Pratt: It’s hard to argue that CE is advancing the understanding of climate etc. when half its comments are ad hominem arguments and other Latin-named fallacies.

        It’s easy to skip the writers who are never sensible or on point. Anyone interested in learning can indeed learn by following links and following some of the most interesting interchanges. I am sure that you, or I, or Prof Curry, or any number of people could moderate this in such a way as to ensure that it is more sensible or well informed, but it is better not to have too much moderation.

      • It’s easy to skip the writers who are never sensible or on point.

        Speak for yourself, Matthew (“easy for you, difficult for me”).

        It’s hardly surprising that the few serious scientists left standing here after four years are those with exceptionally effective phaser shields.

        It’s all too easy to interpret the “Etc.” in the name of this blog as referring to the climate created by the disproportionate number of Latin-named fallacies.

      • I would suggest arbitrary and wholesale deletions of anything that is judged to be cr@p, especially repetition of the same cr@p for the 1000th time.
        There should be a bottom line in quality control. Contentless twaddle should be less than acceptable.

        This would result in most of Pratt’s comments disappearing – but sacrifices have to be made.

      • Too bad, I sort of like reading your arguments even if our very first interaction involved you accusing me of being a denier without you being able to identify what I was denying. That was years ago now. I have yet to return the favor by calling you anything.

      • Matthew R Marler

        Vaughan Pratt: Speak for yourself, Matthew

        Yes. I do speak only for myself.

        I put up another comment on Romps over at RealClimate. So far no one has responded to it. 161
        Matthew R Marler says:
        16 Dec 2014 at 8:57 PM

        To continue my questioning from 146: Suppose that for each parcel of warm air that rises, a constant fraction of total energy is converted or transformed to CAPE (which is proportional to the integral, with respect to log pressure, of the difference between the parcel temperature and the surrounding air temperature, the integration being across the height that the air parcel rises). And suppose, as the authors do, that a constant fraction of CAPE is converted to lightning. In that case, the rate at which CAPE is being formed is approximately proportional to the rate at which moist warm air is rising, which in turn is approximately proportional to the rainfall rate, since water is conserved. Consequently, the lightning rate is a constant proportion of CAPE*P, where P is the precipitation rate.

        I would appreciate it if someone has another analysis.

        Now if a 1C increase produces a 2% increase in rainfall rate (as in a paper by Held and Soden, 2005, Journal of Climate), and a 12% increase in CAPE*P, then the increase in CAPE following a 1C temperature increase must be about 10%. The figures are aggregates, so they do not approximate any particular lightning storms particularly well.

        If I am wrong, I would like to find out exactly how and why before I write something foolish, or write something new and foolish.
        – See more at: http://www.realclimate.org/index.php/archives/2014/12/unforced-variations-dec-2014/comment-page-4/#comment-621047

        What do you think of it? I think that their analysis has implications for how the evapotranspirative cooling of the Earth surface will change in response to CO2. I think that the implications come from the assumptions that they made in their analysis, one explicit, one implicit that I wrote out explicitly.

      • @MM: Now if a 1C increase produces a 2% increase in rainfall rate (as in a paper by Held and Soden, 2005, Journal of Climate), and a 12% increase in CAPE*P, then increase in CAPE following a 1C temperature increase must be about 10%.
        […]

        What do you think of it?

        Your arithmetic is fine: 1.12/1.02 = 1.098 or about 10%. Furthermore both numbers (2% for ΔPrecipitation and 10% for ΔCAPE) are nicely bracketed by the respective values in the 11 CMIP5 GCMs listed in Romps’ Table 1. Given the considerable variation between models, and that you’re in the middle, you should be in great shape with those numbers. But what will you do with them?
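
        (For readers following along, the step being checked is just a ratio of fractional increases; a worked restatement, using only the numbers quoted above:)

        % Lightning rate taken proportional to CAPE x P, so for a 1 C warming:
        \[
          1 + \delta_{\mathrm{CAPE}}
          = \frac{1 + \delta_{\mathrm{CAPE}\times P}}{1 + \delta_{P}}
          = \frac{1.12}{1.02} \approx 1.098,
        \]
        % i.e. roughly a 10% increase in CAPE for a 2% increase in precipitation
        % and a 12% increase in CAPE x P.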

        @MM: See more at: http://www.realclimate.org/index.php/archives/2014/12/unforced-variations-dec-2014/comment-page-4/#comment-621047

        Comment 621047 is by Hank Roberts on methane as 70% of all coal-related emissions. Did it usurp a different comment with that number?

      • Matthew R Marler

        Vaughan Pratt, that number was stuck in when I copied and pasted from RealClimate. My second post appears to be this one:

        http://www.realclimate.org/index.php/archives/2014/12/unforced-variations-dec-2014/comment-page-4/#comment-620955

        I think that the Romps et al modeling of lightning rate has implications for how to model the effect of CO2 and warming on the rate of evapotranspirative cooling of the Earth surface; specifically, I think that there is an implication of multiplying CAPE by rainfall rate. I outlined this a while ago and Pat Cassen recommended some readings that apparently contradict my conclusion (I say “apparently”, because they did not specifically address the rates of the cooling processes, but some thermodynamic considerations and the change in rainfall rate). Net cooling by dry thermals is about 17 W/m^2; by evapotranspiration about 80 W/m^2; and by LWIR about 63 W/m^2 (from Trenberth et al., reproduced in Randall’s book and elsewhere). Granted that those are globally averaged figures, and Romps et al studied and modeled the Eastern US. Most of the alarmism is based on the change in the rate of radiative cooling, but an increase in temp or DWLWIR ought to change the other two rates as well.

        Romps et al calculate a 12% increase in lightning strike rate with a 1C increase in temp; Held and Soden calculate a 2% max increase in rainfall rate with a 1C increase in temp. It looks to me like Romps et al’s calculation entails a 12% increase in the rate of evapotranspirative cooling of the Earth surface, if the proportionalities that I refer to do in fact remain constant, as I think their analysis implicitly assumes. It looks to me like there is a disparity between these published papers in the implications for the change in the rate of global cooling after a 1C increase in temp.

        To clarify something, I am not claiming that either paper is wrong.

      • @MM: It looks to me like Romps et al’s calculation entails a 12% increase in the rate of evapotranspirative cooling of the Earth surface, if the proportionalities that I refer to do in fact remain constant, as I think their analysis implicitly assumes.

        The 12% refers to an increase in lightning. How does more lightning cool the Earth faster by any means, let alone evapotranspirative cooling?

      • @Pratt

        How’s the attempt to duplicate the Woods experiment coming along?

        Some experimental science sure would be nice instead of narrative science with no falsifiable predictions.

        Which kind are you taking to AGU this year?

      • Vaughan Pratt | December 20, 2014 at 2:12 am |

        @MM: It looks to me like Romps et al’s calculation entails a 12% increase in the rate of evapotranspirative cooling of the Earth surface, if the proportionalities that I refer to do in fact remain constant, as I think their analysis implicitly assumes.

        The 12% refers to an increase in lightning. How does more lightning cool the Earth faster by any means, let alone evapotranspirative cooling?

        How did you manage to ignore the phrase “if the proportionalities that I refer to do in fact remain constant”?

        One might easily guess that more lightning probably means more rain, and more rain means more evapotranspiration, if the proportion of lightning to rain remains constant.

        Read harder.

      • Except that when you look at the reported precipitation from surface stations, there is no real trend; it looks more like a drunken walk. I will note that reported precipitation is less rigorous than temps, and we all know how poor that is over longer time periods.

  38. Donald Morton, thanks for sharing your knowledge in a well-thought-out article that clarifies many contentious issues. In particular you took great pains to avoid bias and emotional language. Congratulations.

    I’m sure that you took your largely amateur audience into account. You will have simplified, and used less technical language than you would have in writing for your scientific peers. However, there is nothing about having a Ph.D. that makes it easier to understand long, complex and loosely-constructed sentences. I would have enjoyed the article far more, and probably learnt more too, if you had chosen a clearer, less academic style.

    Thanks again for the many hours of preparation you put into this.

  39. Don’t be silly. The climate models create ~66% more than real lower atmosphere warming by the fake ‘back radiation’ idea, taught in US Atmospheric Science for ~50 years, coupled with the fake single -18 deg C OLR emitter idea, which provides an imaginary negative Down flux in the bowdlerised two-stream approximation (blame Sagan for this).

    They then exaggerate upper atmosphere cooling by ~36%. The final part of the fraud is that the GISS-origin models use ~35% more low level cloud albedo than reality as a fitting parameter in hind-casting. By purporting that sunlit areas of ocean are much hotter than under clouds, exponential evaporation kinetics purports imaginary high future humidity – really decreasing as the atmosphere adapts to higher [CO2].

    This is because the real warming from well mixed GHGs is exactly offset by the water cycle. There has been AGW from increased aerosols (Sagan got that wrong too, along with lapse rate physics).

    BOTTOM LINE: we are entering the new Little Ice Age and it will be quite severe by ~2040.

  40. –Whether global temperatures rise or fall during the next decade or two, we will have no confidence in the predictions of climate models. Should we wait, doing nothing to constrain our production of CO2 and similar gases while risking serious future consequences? —

    It needs to be pointed out that we aren’t doing nothing.

    Governments *are doing* nothing that could possibly affect total CO2 emissions, true, but the public has already spent, over many years, trillions of dollars paying for all kinds of idiotic activity which is **said** by corrupt/stupid/evil/greedy/fraudulent government policy makers to be done for the purpose of reducing CO2 emissions.

    It’s not a matter of doing nothing. Rather, we should reverse all the government policies which amount to an oppressive regime of regressive taxation and are wasting trillions of dollars of wealth.

    It should be noted that if you are paying trillions of dollars for something and not producing what was promised, the trillions of dollars of activity paid for will also generate CO2. In other words, the massive pointless activity not only causes more hardship and poverty, it also generates a massive amount of CO2 emissions that would not have occurred had these policies not been the law.

    Now, what has actually lowered CO2 emissions is more efficient systems for getting what needs to be done, done. [Or the opposite of wasting trillions of dollars on corrupt, perverted, pointless, and destructive activity.]

    And in terms of broad direction, the most significant reductions in CO2 emissions have come from the long-term use of nuclear energy to make electrical energy and from using more natural gas rather than coal.

  41. … until there is a new rise in the temperature curve, we have time to pause and assess which projects can be useful and which may be ineffective or even harmful. Here are some proposals.

    –e.g., Stop listening to those in Leftist academia who refuse to admit that the abandonment of the scientific method was ideologically motivated to fool the people.

  42. A fan of *MORE* discourse

    Climate Etc asks  “Will a return of rising temperatures validate the climate models?”

    http://imgs.xkcd.com/comics/frequentists_vs_bayesians.png

    Bayesians answer  “Heck YES!”

    A hearty Bayesian “good on `yah!” goes out to James Hansen …

    https://www.youtube.com/watch?v=lXTPKGuQhzQ

    … for a sustained track record of reasoned, respectful, science-based, policy-oriented predictions!

    • Fan

      James who? Why have you never mentioned him before?

      Suggestion for Christmas present

      http://www.coolearth.org/general-news/half-a-million-acres

      Tonyb

      • Matthew R Marler

        a fan of *MORE* discourse: A good cartoon. How does the Bayesian, or anyone else for that matter, know the probabilities of the outcomes of the rolls of the dice? Who would make a measuring device that intentionally rolled the dice before reading out the measurement?

        For more thought along those lines, read “A Comparison of Bayesian and Frequentist Methods of Estimation” by Francisco J. Samaniego; and his considerations of when Bayesian estimates are less accurate than frequentist estimates (almost always, especially in multiparameter cases or cases without a lot of a priori frequentist information).

        … for a sustained track record of reasoned, respectful, science-based, policy-oriented predictions!

        And consistently wrong to date!

      • A computer scientist with some familiarity with probabilistic algorithms would say “ask it twice more”.

        If it said “yes” every time, rather than duck under a desk (if that’s the protocol when your Sun goes nova) I’d question the premises.

    • A fan of *MORE* discourse

      Politics is finite in relation to individuals …

      https://www.youtube.com/watch?v=W01rRJeCwKI

      … individuals is infinite in relation to politics.

      Conclusion  The history of politics is internal to the history of the mind.

      Conclusion  Allow your imagination free rein, TonyB!

    • Usually everyone including myself hates it when I explain a joke. But I’ll do so anyway for the nova detector.

      The frequentist and the Bayesian both estimate the probability of the hypothesis H that the Sun has gone nova based on the evidence E from the detector.

      The frequentist makes a completely unbiased judgment based on E alone.

      The Bayesian has a prior probability for H, namely P(H), and modifies it to P(H|E), the probability of H given the new evidence E from the detector, by multiplying P(H) by P(E|H)/P(E), namely the probability of a yes when the sun has exploded (namely 35/36) divided by the probability of a yes (not sure how to compute that but it has to be at least 1/36). So P(E|H)/P(E) can’t be more than 35, but P(H) is presumably insanely tiny whence so is P(H|E).
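
      To make the numbers concrete, a tiny sketch of that update (the prior is an arbitrary illustrative value, since all we know is that it is “insanely tiny”):

      prior_nova = 1e-12               # P(H): illustrative only, not from the comment
      p_yes_given_nova = 35.0 / 36.0   # detector tells the truth unless double sixes
      p_yes_given_quiet = 1.0 / 36.0   # detector lies only on double sixes

      # Total probability of the detector saying "yes".
      p_yes = p_yes_given_nova * prior_nova + p_yes_given_quiet * (1.0 - prior_nova)

      posterior_nova = p_yes_given_nova * prior_nova / p_yes
      print(posterior_nova)            # still astronomically small: don't take the $50 bet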

      • Of course a frequentist would get 100 results and make a rational decision – not merely assume that the Sun hadn’t exploded. So it’s not a realistic scenario – merely posited for some esoteric joke that nerds might find amusing because jocks wouldn’t get it.

        You’re such a nerd Pratt.

      • Of course a frequentist would get 100 results and make a rational decision

        Quite right, which is why I wrote, A computer scientist with some familiarity with probabilistic algorithms would say “ask it twice more”. And they’d have the correct answer long before you got your 100 results.

        But your shortening of Jass to Jas shows that at least you’re ahead in the whack-a-mole game you’re playing with Judith. You’ve been a bad little mole — mol — mo, Rob.

  43. John Smith (it's my real name)

    #2 “discuss what are optimum global temperatures and CO2 concentrations…”

    any opening bids?
    just curious

    crank back to 350 ppm?
    that process should be fun
    what do the models predict for global temps at 350?

    • The problem with this is that the details matter. Trying to pick an optimum temperature is not going to be easy, because the climate scientists have the resolution of one temperature for the whole earth. That resolution tells us practically nothing about what the temperature is in Kansas, Lima, or London. Warming at the poles is a very different thing than warming a desert. Selecting an optimum temperature is not feasible for this reason.

      Also, who says that is the real criterion anyway? In my opinion what we should be trying to control (if we can control anything) is the RATE of change. Is there a rate of warming that is net beneficial? Does the data suggest that the warming rate will be higher than that? These are the questions that need to be asked as far as I can tell.

      • I believe that the rate of change is pretty well controlled. There is much discussion about whether temperature has changed measurably in the last 14-16 years.

        That small a change would seem to be sufficiently controlled.

      • PA commented

        I believe that the rate of change is pretty well controlled. There is much discussion about whether temperature has changed measurably in the last 14-16 years.
        That small a change would seem to be sufficiently controlled.

        Here’s the day over day change by year, for stations worldwide.
        http://www.science20.com/sites/all/modules/author_gallery/uploads/1871094542-global.png
        You can see the annual average of the daily change in max temps averages out to near zero for the last 60 plus years, which leaves the changes in min temp as the cause of the changes in surface temp.
        But you can plot out the daily change for an area from the spring to the fall equinox (roughly the day with the highest warming rate to the day with the highest cooling rate), and from the fall to the following spring (the highest cooling day to the highest warming day). This is the rate of change in response to the changing length of day. You can get this warming and cooling slope by year and plot it out to see how it has changed.
        http://www.science20.com/sites/all/modules/author_gallery/uploads/543663916-global.png

      • I hit submit too soon.
        You can see the slope does change, but it looks like it’s in the process of changing direction. At a minimum, the rate of change in surface temps shows it hasn’t changed much in the last 15-20 years.
        Also, max temp is and has been pretty stable, and minimum temps have all of the activity. You can break the globe up into bands or boxes and see regionally where this activity occurs; it is scattered around at various times and places.
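
        (A minimal sketch of the day-over-day bookkeeping being described, for one hypothetical station record given as a list of (date, temperature) pairs; this is not the actual processing behind the linked charts:)

        from datetime import date
        from collections import defaultdict

        def annual_mean_daily_change(records):
            """records: list of (datetime.date, temperature) sorted by date.
            Returns {year: mean of (today - yesterday) differences}."""
            sums, counts = defaultdict(float), defaultdict(int)
            for (d0, t0), (d1, t1) in zip(records, records[1:]):
                if (d1 - d0).days != 1:      # skip gaps in the record
                    continue
                sums[d1.year] += t1 - t0
                counts[d1.year] += 1
            return {y: sums[y] / counts[y] for y in sums}

        # Toy usage with made-up numbers:
        toy = [(date(2000, 1, 1), 2.0), (date(2000, 1, 2), 2.5), (date(2000, 1, 3), 1.8)]
        print(annual_mean_daily_change(toy))   # roughly {2000: -0.1}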

        “Also, max temp is and has been pretty stable, and minimum temps have all of the activity.”

        You hit the core issue. A 1-2°C temperature rise that has only a small effect on daytime maximums is a benefit and not a problem. The best argument for harm from global warming is daytime heat stress, and the lack of daytime warming kills that argument.

        There doesn’t seem to be a viable argument for harm from 1-2°C of warming.

    • Paleontology has already outlined the colder limits of climate for a healthy biome, it has not yet tested the warmer limits.

      A warmer world sustains more total life and more diversity of life.

      It’s really that simple.
      =========================

      • Just assertion, or do you have any cites to back yourself up?

        What about all the artifacts found in Doggerland?

      • @kim: A warmer world sustains more total life and more diversity of life.

        This is why Venus is teeming with a biodiversity Earth won’t match until it has burned up all the carbon in the lithosphere.

    • Warmer?

    • @ John Smith (it’s my real name)

      #2 “discuss what are optimum global temperatures and CO2 concentrations…”

      Hi John,

      I think that a helpful first step would be for the experts to provide a ‘calibration curve’ for the Earth’s thermostat, i.e., a plot of atmospheric CO2 vs. temperature of the Earth.

      Then it would be a simple matter for the world’s politicians to get together, decide on the ‘Optimum Temperature of the Earth’, pick the corresponding CO2 percentage off the cal curve, issue the appropriate world wide ‘carbon’ regulations required to set the CO2 thermostat, and live happily ever after.

      Easy peasy!

      But let’s be sure we proceed in the logical order: regulations LAST.

      • John Smith (it's my real name)

        thanks Bob Ludwick
        good plan
        however, as a “denialist” prone to “conspiracy ideation”
        I’m just a little nervous about how who issues and enforces “worldwide” regulations
        perhaps the IPCC should have police powers
        and a SWAT team or two
        with vest that read CARBON ENFORCEMENT
        :)

      • They can go further than that and give temperature change according to emissions. It makes the choice more stark. More emissions means more temperature rise.
        http://www.nature.com/ngeo/journal/v7/n10/images/ngeo2254-f1.jpg

      • Ummm. JD …

        You don’t appear to have noticed that the models have been increasingly wrong the last 14 years and have just about soared into irrelevance.

        That would make those IPCC charts just a plotting exercise rather than useful information.

      • @ John Smith (it’s my real name)

        “….I’m just a little nervous about how who issues and enforces “worldwide” regulations….”

        John!!! You are jumping the gun again. We don’t have to worry about the enforcement angle until the world’s politicians have agreed upon the official procedure for measuring the temperature of the earth, the optimum temperature of the earth, the precise percentage of CO2 that would establish and maintain that ideal temperature, the number, location, and specifications for the CO2 monitoring stations, who would be in charge of the world CO2 monitoring system, who would pay for it, the thousands of pages of regulations and taxes that would require enforcement, who would pay for the regulatory bodies, who would collect the CO2 taxes, how the taxes would be distributed etc.

        No, what we have to worry about is every little tinhorn progressive dictator wannabe issuing local taxes and regulations to ‘save the planet’ while, as a totally unanticipated side effect, giving him the power to punish his enemies and reward himself and his friends.

        While examples of the ‘unintended consequences’ abound, examples of ‘planet saving’ will require extensive research and a willing suspension of disbelief.

  44. The paper by Swanson and Tsonis which is the subject of your post, is about how natural ocean cycles affect the evolution of global temperatures. The point they make may be summarized by the following quote:
    “While in the observations such breaks in temperature trend are clearly superimposed upon a century time-scale warming presumably due to anthropogenic forcing, those breaks result in significant departures from that warming over time periods spanning multiple decades.”

    So they are talking about natural variability superimposed on a long term trend due to anthropogenic forcing. Clearly statistical analysis does not indicate that the long term anthropogenic warming trend has been broken, despite the short term changes due to natural regime change.

    • Presumably… Neat word. Kinda rolls off the tongue. Pre..’sumably.

      https://lh3.googleusercontent.com/-Ol5d6_9lckQ/VI-p0n9qpyI/AAAAAAAAL6s/CYMl90cR4F4/w706-h449-no/21st%2Bmean.png

      So are they “pre..’suming” that the century-plus trend that appears to start around 1700 is anthropogenic? I reckon that is possible, but it would imply that something other than CO2-equivalent forcing would be involved. If the current less-than-anticipated rate of warming continues until temperatures intersect that longer term trend, would that be newsworthy?

      • Captain

        I have always said that the temperature has been rising since at least 1700. If the rise is of anthropogenic origin, caused by our CO2, it surely means that humanity is unable to live on this planet, as it is certain we shall never see 1700 CO2 levels again with 7 billion people all striving to improve their standard of living.

        Tonyb

      • tonyb, I agree completely. My biggest question would be what would be a reasonable estimate for a LIA bounce? If we knew that, we could get a rough estimate of how much impact anthropogenic influences really had. I still suspect that indirect land use effects like changing soil moisture, ash fallout and erosion would have some impact in the warming direction, but the somewhat haphazard paleo reconstruction approaches tend to smooth out what information there is.

      • Captain

        Here is extended cet

        http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/

        The warmest temperatures in the record are probably the first three years at the very start of this record. However it is not within the MWP, and I would say the warmest temperatures I have researched are in the middle part of the 14th century and much of the 11th and 12th centuries, which were some half to one degree centigrade warmer than today.

        So as you can see the current bounce up is less than the bounce down in the first place.

        Tonyb

      • tonyb, with an annual record you would find all sorts of variations. With most of the Paleo I am stuck with roughly 50 year averages at best except for a few high frequency corals.

        https://lh6.googleusercontent.com/-kqR-D0TcojI/VJDKGQ0qaCI/AAAAAAAAL7A/KzRDNGz97t4/w637-h441-no/ipwp%2Bnino.png

        That is a comparison of the Oppo IPWP with the Emile-Geay Nino region. They agree pretty well for that period, but the Nino region can get seriously out of phase with the IPWP thanks to the hemispheric seesaw. A lot of the southern hemisphere started warming 5000 years ago, which didn’t have much impact on global “surface” temperature but did on ocean heat content. The MWP was mainly an NH event because the SH marches to a different tune and doesn’t have as much land amplification. What is really needed for the NH are glacial area reconstructions.

      • “I have always said that the temperature has been rising since at least 1700 . If the rise is of anthropogenic origin caused by our co2….”
        —–
        Doubtful. The rise in CO2 as the LIA bottomed out and temps began to rise was initially from natural sources and then increasingly anthropogenic from about 1750 onward. Looking something like this:

        http://www.co2science.org/subject/other/figures/lawdome.jpg

      • ‘The record clearly demonstrates that i) [CO2] were significantly higher than usually reported for the Last Termination and ii) the overall pattern of CO2 evolution through the studied time period is fairly dynamic, with significant abrupt fluctuations in [CO2] when the climate moved from interstadial to stadial state and vice versa. A new loss-on-ignition chemical record (used here as a proxy for temperature) lends independent support to the Hässeldala Port [CO2] record. The large-amplitude fluctuations around the climate change transitions may indicate unstable climates and that “tipping-point” situations were involved in Last Termination climate evolution. The scenario presented here is in contrast to [CO2] records reconstructed from air bubbles trapped in ice, which indicate lower concentrations and a gradual, linear increase of [CO2] through time.’ http://www.academia.edu/2949675/Stomatal_proxy_record_of_CO2_concentrations_from_the_last_termination_suggests_an_important_role_for_CO2_at_climate_change_transitions

        https://watertechbyrie.files.wordpress.com/2014/06/steinthorsdottir-fig-9.png

        We know there are big increases in CO2 flux with relatively small temperature increase. If you are actually capable of it – that is not emotionally committed to being an idjit – you have to wonder.

      • Rgates

        We can see CO2 versus temperature in better context with extended CET (note CET has risen again this year).

        Also note that the first three years of the reconstruction are probably the three warmest consecutive years in the record.

        http://wattsupwiththat.files.wordpress.com/2013/05/clip_image0028.jpg

        tonyb

      • Tweedle Dee, Thanks for the stomatal proxy paper.

    • Matthew R Marler

      eadler2: Clearly statistical analysis does not indicate that the long term anthropogenic warming trend has been broken, despite the short term changes due to natural regime change..

      Lots of statistical analyses have been performed, and the indications that you get depend on the assumptions that you make about the natural variability (i.e., the variability independent of the CO2). That variability and the mechanisms that produce it are not known well. Based on modeling of natural variability, two of the ideas that can not be ruled out are: 100% of the warming since 1850 is independent of anthropogenic CO2; 100% of the warming since 1850 has been caused by anthropogenic CO2.

      An equally important idea that can’t be ruled out on current data: the near future is not the mirror image of the recent past: the effect of the next doubling of CO2 can not be predicted from the modeling results based on assuming some CO2-caused warming to date.

      These are among the reasons why the models can’t be relied upon before extensive testing with respect to out-of-sample data.

    • “Clearly statistical analysis does not indicate that the long term anthropogenic warming trend has been broken …”

      Well, it is very hard to break something that has never existed except as hypothesized in a conjecture.

  45. Small government advocates assure me climate changes all the time and thus all the hubbub is nothing to worry about. Proof:

  46. This is OT, but interesting WRT the Climategate emails. From the article:

    The other case is a U.S. Court of Appeals for the D.C. Circuit ruling from 1969 involving a lawsuit filed by then-U.S. Sen. Thomas J. Dodd against two investigative reporters, Drew Pearson and Jack Anderson, over articles they published based on leaked documents that ex-staffers had purloined from the Connecticut lawmaker’s office.

    “When information is on a matter of public concern, the court held, the fact that it was illegally leaked doesn’t make publishing it an invasion of privacy,” writes Mr. Volokh, summing up the decision.

    http://blogs.wsj.com/law/2014/12/15/sonys-legal-threats-in-hacking-scandal-face-first-amendment-hurdle/?mod=WSJ_hpp_MIDDLE_Video_Third

  47. Thanks so much for this very thorough summary, and assessment, of the evidence and the models. As should be clear, “the science” is very far from being “settled.”

    With respect to your point 7, “Cease claiming that rising temperatures are causing more occurrences of extreme weather because the evidence is not there,” I’ve been doing some digging of my own on this matter. To my surprise, in case after case, I’ve discovered that many of the earlier assessments were not only premature, but in many cases wrong. For a summary of what I discovered, see the following blog post: http://amoleintheground.blogspot.com/2014/12/are-we-headed-for-disaster.html

  48. The on/off nature of climate change has been ignored for far too long. It first became apparent in the 1940 singularity when the temperature fell at the rate of -0.15 °C/decade despite rising CO2 (see my theoretical model underlined above). In the 20th century there were two periods of rising temperature: 1910 to 1940 and 1970 to 1998, although the second period was a consequence of the first, owing to the ocean’s transport delay.

    To explain these differences in models you need to look inside the CO2 molecule and understand what happens to excess neutrons, because it is these heavy particles that vibrate and absorb excess IR photons and are the probable reason for the 1910 to 1940 temperature rise.

  49. Over on Climate Audit, Steve McIntyre is questioning the validity and accuracy of the IPCC AR5 CMIP5 climate model ensemble, saying that consideration should be given to constructing models which exhibit lower GHG sensitivity than do current models. Here on Climate Etc., the question is being asked, “Will a return of rising temperatures validate the climate models?”

    Much attention is now being focused on 2014 as the hottest year on record, with the implication that — pause or no pause in the central trend of observed global mean temperature — global warming continues apace per IPCC model projections.

    The trend in peak hottest years starting in 1998 and continuing on through 2005, 2010, and now 2014 is roughly 0.1C per decade, as is illustrated in the graphic shown below, which is an adaptation of the Ed Hawkins graphic referenced by David Apell several weeks ago in a comment he posted in response to the “Spinning the ‘warmest year’” article.
    [graphic omitted: extrapolated trend of peak hottest years overlaid on the IPCC AR5 CMIP5 projection range, adapted from the Ed Hawkins figure]
    As shown in the above graphic, if a trend of peak hottest years starts in 1998 and is then extrapolated at a rate of +0.1 C per decade out to the year 2035, the extrapolated trend just skirts the lower boundary of the model ensemble range interval described by IPCC AR5 RCP (all, 5-95% range). Furthermore, the extrapolated trend passes comfortably through the center of the “indicative AR5 assessed likely range for annual means” between 2015 and 2035.

    When a continuing series of peak hottest years occurs every four or five years, each peak a little hotter than the last, then there is an obvious temptation to view that particular metric as being a more useful indicator of the validity of the AR5 climate models than would be the central trend of the observations.

    Just for one prominent example, when NOAA and/or NASA cite 2014 as being the hottest year on record, in a context of stating their official positions concerning climate change — as were 1998, 2005, and 2010 similarly cited — then for purposes of verifying the AR5 model ensemble, what they are really saying is that the trend of peak hottest years is what matters most to them as climate scientists, not the central trend of observed temperatures.

    Regarding Steve McIntyre’s opinion that lower-sensitivity climate models ought to be constructed, it is not likely the mainstream climate science community will ever buy into that. If the current ensemble of AR5 CMIP5 climate models fits their basic needs through the use of a conveniently opaque interpretation of current trends in peak hottest years, then why would climate scientists have any real incentive to create low-sensitivity climate models which more faithfully track the observed central trend of global mean temperature?

  50. Basil Newmerzhycky

    Perhaps a better name for this thread would be “Will a New Set of All-Time Warming Years End the Pause Fallacy Once and For All”, starting with 2014.

    In “Spinning the Warmest Year” I issued a challenge for anyone to re-graph either the HadCrut4 or GISS global temp dataset without 1998. I see there were no takers. I suspect the “pause” in doing so is because many fear what will disappear with the removal of one tiny year.

    I’ve said it before: if someone’s long-term climate outlook, their take on a so-called pause, relies solely on one year, 16 years ago, then they really don’t have a case.

      Unlike El Niño and La Niña, which may occur every 3 to 7 years and last from 6 to 18 months, the PDO can remain in the same phase for 20 to 30 years. The shift in the PDO can have significant implications for global climate, affecting Pacific and Atlantic hurricane activity, droughts and flooding around the Pacific basin, the productivity of marine ecosystems, and global land temperature patterns. “This multi-year Pacific Decadal Oscillation ‘cool’ trend can intensify La Niña or diminish El Niño impacts around the Pacific basin,” said Bill Patzert, an oceanographer and climatologist at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. “The persistence of this large-scale pattern [in 2008] tells us there is much more than an isolated La Niña occurring in the Pacific Ocean.”

      Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.” http://earthobservatory.nasa.gov/IOTD/view.php?id=8703

      Basically – there’s science that suggests that climate shifted in 1998/2001 to a new regime starting around 2002 and likely to persist for 20 to 40 years. On the other hand there are twits suggesting that it is not statistically significant, based on pure silliness. What can you possibly tell them that they can comprehend?

      http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/plot/hadcrut4gl/from:1979/mean:12/plot/hadcrut4gl/from:2002/trend

      https://watertechbyrie.files.wordpress.com/2014/12/roses-19-e1418721660193.jpg

    • Basil Newmerzhycky: This is the trend-line chart that skeptics fear as much as Dracula fears holy water.

      Basil, the trend line in the HadCRUT4 plot indicates a warming of roughly 0.1C per decade between 1999 and 2014. However, current rates of increase in CO2 emissions are running at the high AR5 RCP8.5 scenario.

      At current rates of increase in CO2 emissions, isn’t it perfectly appropriate to be thinking that the HadCRUT4 trend line between 1999 and 2014 should be running at least at 0.2C per decade, or even higher, not at 0.1 C per decade? What explains the difference?

      • Basil Newmerzhycky

        Beta,
        Most of 1999 thru 2013 was in the cold phase of the PDO, and there were some other factors that slowed the trend as well. A slower trend would be expected for that short time interval, so why is that newsworthy? Climate models don’t try to time decadal or other oscillations.

        What is more significant is that there has been NO cooling trend, and even the coolest temperatures from 2000-2014 are WARMER than any previous year.

        When the warm phase of the PDO kicks in fully, the rate will likely rise again. Not looking forward to it; we’re already experiencing the effects of an altered early winter storm track in CA. Six of the past 8 years have been severe drought years, with only a moderate El Nino or La Nina bringing rainfall amounts anywhere near normal.

  51. Another suggestion for modelers: start with no premise, at least not with the premise that CO2 is THE control knob and our models will prove it!

    • Barnes, they don’t start with that premise. It is an emergent property from radiative transfer physics associated with the gas amounts observed in the atmosphere.

      • “It is an emergent property from radiative transfer physics associated with the gas amounts observed in the atmosphere.”

        The water vapor multiplier is a fiction not physics (they aren’t even spelled the same). The physics say that you will be lucky to get the nameplate amount of forcing from CO2 on the ocean. Radiative forcing only accounts for about 1/3 of ocean heat loss – when radiative heat loss is reduced the ocean simply loses more heat the way it loses the majority of its heat already.

        Global warming theory in a nutshell is if you plug a couple of holes in a sieve it stops up the sieve – what happens in reality is the sieve fills up an incrementally tiny amount and the extra head pressure forces the same amount of water out of the other holes.

      • Barnes, yes, as PA says, it ends up being the H2O that is important. H2O largely responds to the surface temperature via equilibrium thermodynamics, another area of basic physics. So first you get warming by CO2 then the H2O response, which gives you the total effect. Arrhenius understood this much around 1900.
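
        (To put a number on the “H2O responds to the surface temperature” step, here is a small sketch using the Bolton (1980) approximation to saturation vapour pressure; the choice of formula is mine, not anything specified above:)

        import math

        def saturation_vapour_pressure_hpa(t_celsius):
            """Bolton (1980) approximation, good to a few tenths of a percent
            for roughly -35 C to +35 C."""
            return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

        # Clausius-Clapeyron in practice: roughly 6-7.5% more water vapour
        # capacity per degree C over common surface temperatures.
        for t in (0.0, 15.0, 30.0):
            e0 = saturation_vapour_pressure_hpa(t)
            e1 = saturation_vapour_pressure_hpa(t + 1.0)
            print(t, round(100.0 * (e1 / e0 - 1.0), 1), "% per degree")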

      • Pretty much you get a change in CO2 and then climate does whatever it freaking likes.

      • Jim D | December 17, 2014 at 12:48 am |
        Barnes, yes, as PA says, it ends up being the H2O that is important. H2O largely responds to the surface temperature via equilibrium thermodynamics, another area of basic physics. So first you get warming by CO2 then the H2O response, which gives you the total effect. Arrhenius understood this much around 1900.

        Arrhenius took his lumps in the Gravito-Thermal discussion thread.

        The Tropical Tropopause Layer (TTL) is cooling, the stratosphere is drying, the TLT (temperature lower troposphere) trend seems warmer measured from the ground than the basically flat trend of the first couple of kilometers measured from space. Since balloons and satellites show the same trend – the bulk of the lower troposphere isn’t getting warmer.

        The kicker is cloud cover:
        [chart omitted: total cloud cover observations since 1983]

        Clouds alone would explain the vast majority of the last 30 years.

        Hard to say whether it’s CO2, the new temperature regime, natural ripples in global climate, all three, or something else driving this. Until we can sort out the effects and attribute cause, about all you can say is that more CO2 ought to make things very near the ground a little warmer, mostly at night.

      • PA commented on

        Until we can sort out the effects and attribute cause about all you can say is more CO2 ought to make things very near the ground a little warmer mostly at night.

        There isn’t much evidence this is even happening.
        http://content.science20.com/files/images/Global%20Annual%201940-2010.jpg
        The R and F lines are yesterday’s rising temp and last night’s falling temp; you can see they are almost an identical match, and haven’t really changed much over time.
        There is some evidence mornings are a little warmer, but I believe that’s matched by the afternoons being a little warmer as well.

  52. Pingback: Re: Will a return of rising temperatures validate the climate models? | Kirk M. Maxey: Blog and Website

  53. I think the contribution of Mr. Morton is a very good explanation of the “state of the art”, very useful for helping people understand that not everything is so settled.

  54. I know I’m repeating myself, but all this talk of chaos reminds me of those toys where some big balls (multidecadal oscillations, eruptions, etc.) can disrupt the movement. All these toys have a spiral and a battery inside, so they are “forced”; maybe some little changes in the forcing (because the battery loses charge, for example) will change the exact movements the balls make, but it will still be a chaotic system governed by those big balls and gravity. Only when someone is able to predict exactly the movement of one of those toys will I begin to think they are on the (long) way to predicting climate in the coming centuries.

  55. Max_OK, Citizen Scientist

    Dr. Morton raises some interesting questions in his proposal #2.

    “2) Discuss what are optimum global temperatures and CO2 concentrations before we set limits because cold weather causes more fatalities than hot weather and there is evidence that the present warm climate and enhanced CO2 are contributing to increased plant growth (Bigelow et al. 2014).”
    _______

    One could argue the temperatures of the 20th Century were optimal, because civilization thrived during that period, and future growth in CO2 concentrations that push temperatures higher is a gamble. Is this a gamble we need to take?

    I have seen evidence winters cause a rise in fatalities, but I’m not sure very cold winters cause any more fatalities than mild winters. Nor have I seen evidence areas of the globe with warm climates have lower fatality rates than areas with cool climates.

    Enhanced CO2 contributes to plant growth in greenhouses as does the control of fertilizer, watering, humidity, and temperature, which operators try to keep at optimum levels. Obviously if there is an optimum temperature for a plant, warmer would not be better. In the field, where growing conditions are not well controlled, more CO2 alone should still contribute to plant growth, although the extent of its contribution is not well known. Unfortunately, rising atmospheric CO2 could also result in temperatures rising beyond levels to which plants are accustomed, with negative net results.

    • Max_OK, Citizen Scientist

      I apologize for posting my comments on Dr. Morton’s proposal # 2 in the wrong place. My comments are unrelated to the posts immediately above.

    • Max,_OK, Cub Reporter writes:

      “One could argue the temperatures of the 20th Century were optimal, because civilization thrived during that period”

      One can argue lots of things, but it’s a lot more effective if your argument proceeds logically. Yours above is a classic: post hoc ergo propter hoc. Alarmist reasoning is rife with this particular logical fallacy. In fact, it’s based on one.

      You have no way of knowing how civilization would have done under a warmer climate regime.

      • Max_OK, Citizen Scientist

        That would be my point. Because we don’t know how civilization would do in a warmer world, pursuing policies that would cause a warmer world is a gamble.

        As I have always said, the climate contrarians who want to pursue such policies are mostly older people who won’t be around long enough to suffer the consequences of a bad bet. In effect they are betting with other people’s money, the future generations who do have something to lose. Don’t tell me the elderly put the interest of coming generations ahead of their own, because if they did, they wouldn’t be so opposed to reforms in Social Security.

        IMO, the selfish nature of climate contrarians makes them little better than thieves, an opinion reinforced when I recall that many condone hacking. On the plus side, contrarians are a valuable source of amusement.

  56. John Smith (it's my real name)

    Tonyb said way up thread

    “I have always said that the temperature has been rising since at least 1700. If the rise is of anthropogenic origin, caused by our CO2, it surely means that humanity is unable to live on this planet”

    being new to what’s really going on around this subject
    I had not realized how oddly, to me at least, anti-human secular humanism is, or has become
    never imagined meself uttering such, but there ya’ go

    the data arguments go back and forth
    this is really the core of the thing

  57. I would be more impressed with everything global warming alarmists believe in if I was still in Sunday school.

  58. Grid sizes won’t help. The Navier-Stokes equations in 3D do not converge on the true solution as the numerical grid size and time step go to zero.

  59. Basil Newmerzhycky

    I challenge ANYONE attempting to say there has been a “pause” in the warming since 1998: please explain that to me after looking at the true trendline from 1998 through 2013 below:

    http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/mean:12/plot/hadcrut4gl/from:1999/trend

    I am seeing some really bad, statistically flawed “connect the peaks only” graphs from cherrypicked time periods on the skeptic side in order to manufacture something that statistically is not true.

    • Basil

      If you visit this link

      http://www.metoffice.gov.uk/research/news/recent-pause-in-warming

      You will see three articles by the UK Met Office acknowledging the pause. I was in discussion at the Met Office with several of their scientists this time last year and they spoke freely about the pause. Surely you are not a denier?

      Tonyb

    • nottawa rafter

      Basil
      The reference to pause, hiatus or plateau is ubiquitous, not just in denier circles but in warmist publications. Too many people in positions of power have used the term for its existence to be denied.

      • Basil Newmerzhycky

        I’m not asking anyone to politicize “who says what”.
        I’m asking people to do their own thinking.
        For legit climate researchers a pause might refer to a slower rate of warming, compared to some climate models.

        For deniers, it seems their only foundation is bad statistics: not knowing how to draw a true trendline, starting at a cherry-picked year, a record warm El Nino that likely produced a 1-year warmth spurt 30 years ahead of its time.

        Again, can you draw a legitimate trendline that shows a pause?
        I don’t think so

      • Basil

        So the Met office are the deniers now?

        Read their three pdf’s in the link I provided. They seem to think there was a pause.

        Tonyb

      • Climate Researcher 

        Basil

        The world is still warming due to a natural long term (934 year) cycle which will reach a maximum in less than 50 years. After that it will cool, just as it did after the Roman warming period and the Medieval warming period. No valid physics can prove that carbon dioxide warms Earth’s surface. So what’s your problem?

    • no such thing as a true trend line.

      Trend lines are manufactured by making choices as an analyst. Those choices can be well defended or not.

      Here is one thing we know. Fitting a straight line to temperature data is non physical. That is, we know the underlying data generating process is not linear.

      It’s better if you accurately describe what fitting a model to data really is.

      • You commented on trend lines and what they are. When looking up trend line we find a lot about technical analysis. Day traders. Not so much the scientific meaning of them. However this one struck me:
        http://www.liberatedstocktrader.com/wp-content/uploads/how-to-draw-trend-lines.jpg
        Say it’s tracking average temperatures. The support lines are things that warm the planet and the resistance ones are those that cool it. Say there are at least two opposing forces that bound temperature and they do change over time. We have an idea which kinds of lines the warmists have a lot of faith in, same with the skeptics. The above plot seems consistent with the Tsonis 2007 temperature plot in a general way. The cooling lines represent whatever it was that gave us arguably flat temperatures. And I am not saying the CO2 effect ever collapses, but it’s possible the water vapor one does. I realize the price of the stock is somewhat determined by human psychology, but so are the trend lines anyone draws in most situations. While financial technical analysis of the kind used on the above plot may illustrate something, I don’t put much faith in it; I’d caution my clients not to hire or pay someone who said it will make them money.

    • If you start from a particularly low point – it is a particularly silly delusion.

      http://www.woodfortrees.org/plot/hadcrut4gl/from:1979/mean:12/plot/hadcrut4gl/from:2002/trend

    • Matthew R Marler

      Basil Newmerzhycky: the true trendline from 1998 thru 2013 below:

      On what do you base your claim that you have found the “true” trendline for forecasting, say, the next 20 years? Of all trendlines produced to date, does it just happen to be your favorite?

      • Basil Newmerzhycky

        A “true” trendline is one that is based on linear regression, averaging peaks, midpoints, and troughs in the dataset. There are various ways of doing this. One of the basic ones is shown here again, using 1998 (skeptics’ favorite year).

        http://www.woodfortrees.org/plot/gistemp/from:1980/mean:12/plot/gistemp/from:1998/trend

        Now you could expand wherever you want, but the worst trendline is one simply starting at a top (or bottom) of a specific point. That’s pure statistical garbage that would get one kicked out of a statistics 101 class.

        Here’s an example of a pure garbage trendline of the sort that so often appears in Rupert Murdoch tabloids, posing as science:

        http://papundits.files.wordpress.com/2014/09/pause10_thumb.jpg

      • Matthew R Marler

        Basil Newmerzhycky: A “true” trendline is one that is based on linear regression, averaging both peaks, midpoints, and troughs in the dataset.

        No it isn’t. If the data are adequate and the true line is a straight line, then linear regression via least squares will produce an unbiased and normally distributed estimate of the slope and intercept. With probability 1, they will not equal the true intercept and slope, though with enough data they will be close enough.

        In your case, you biased the effort from the start with an arbitrary subset of the data.
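
        As a minimal sketch of the point at issue (synthetic data, not GISTEMP or HadCRUT, and not anyone’s actual analysis), one can fit an ordinary-least-squares line to a noisy series with a small assumed trend and watch how much the estimated slope depends on the chosen start year:

        ```python
        # Synthetic example only: a small assumed trend plus smoothed noise, then
        # OLS trends computed from different start years.  All numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1979, 2015)
        true_slope = 0.015                    # assumed trend, deg C per year
        noise = np.convolve(rng.normal(0.0, 0.1, years.size + 4),
                            np.ones(5) / 5, mode="valid")   # mildly autocorrelated noise
        anomaly = true_slope * (years - years[0]) + noise

        def ols_trend(start_year):
            """Least-squares trend (deg C per decade) using data from start_year onward."""
            mask = years >= start_year
            slope, _intercept = np.polyfit(years[mask], anomaly[mask], 1)
            return 10.0 * slope

        for start in (1979, 1998, 2002):
            print(f"trend from {start}: {ols_trend(start):+.3f} C/decade")
        ```

        With a short window and autocorrelated noise, the post-1998 slope can differ substantially from the full-period slope; the regression itself does not settle which window is the appropriate one, which is really what is in dispute above.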

    • Basil Newmerzhycky:
      Gavin Schmidt’s plot: https://curryja.files.wordpress.com/2014/12/gavin_page_1.png
      It may be read as confirming the pause, I think. The brown peaks advance to the right. Those are the warming-regime ones. The red peaks hardly advance to the right. They are focused. So, looking at what’s different between the two sets, we have a healthy advance and then a lackadaisical one.

      • Basil Newmerzhycky

        Odd chart, but it seems to confirm that the “peak” warm years most recently have a greater anomaly magnitude (bottom axis) than peak years of decades past. Nothing else from that chart really jumps out at me, other than the statistically incorrect way of only looking at “peaks” in datasets, when in reality you’re supposed to look at both peaks and bottoms to get true trend averages. But thanks for showing it, Ragnaar.

      • Basil Newmerzhycky:
        The 8 brown data points show records being broken by greater margins and the 4 red data points show records being broken by small margins. Nearly a pause. It isn’t a good idea to be looking only at peaks or new records, unless we are looking for a rough boundary level. I am not looking at the curve profiles, just at the fact that they’ve almost stopped advancing.
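
        For what it’s worth, here is a minimal sketch of what “records broken by greater or smaller margins” means computationally (made-up numbers, not the data behind Gavin Schmidt’s figure):

        ```python
        # Walk through an annual anomaly series, flag record-warm years, and report
        # the margin by which each new record beat the previous one.  The example
        # series below is a placeholder, not real data.
        def record_margins(years, anomalies):
            records, best = [], float("-inf")
            for year, value in zip(years, anomalies):
                if value > best:
                    if best != float("-inf"):
                        records.append((year, round(value - best, 3)))  # (record year, margin)
                    best = value
            return records

        print(record_margins(range(2001, 2011),
                             [0.40, 0.55, 0.50, 0.56, 0.62, 0.60, 0.61, 0.63, 0.59, 0.64]))
        ```

        Whether shrinking margins count as “nearly a pause” or just the cool phase of an oscillation is exactly the disagreement here; the bookkeeping itself is neutral on that.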

      • Basil Newmerzhycky

        I see what you’re saying. Not sure why that’s significant, other than that, since we’ve been in a temporary cold phase of the Pacific Decadal Oscillation from about 2000-2013, the record warm years had a slightly lower magnitude. The much bigger fact that stands out is that there was no significant cooling trend during this time (when there should have been if we are in a natural cycle) and that we continued to have warmer years in 2005 and 2010 and soon to be 2014.

        Here’s a good one to ask Gavin.
        Ask him for a similar curve spread of all record breaking cold years in the past 100 years.

        Unfortunately it will be a blank map, as we’ve not had any since about 1910. That’s an incredibly strong fact supporting that the warming of the past century is not part of any natural cycle.

  60. Pingback: Climate Change Establishment Painted Into a Corner | evilincandescentbulb

  61. Climate Researcher 

    The world is still warming at a long-term rate of about half a degree per century, but it will reach a maximum within 50 years, then cool for nearly 500 years with superimposed 60 year natural cycles.

    Until you take comfort in the valid physics which explains all temperatures in tropospheres, surfaces, crusts, mantles and cores of planets and satellite moons throughout the Solar System and no doubt beyond, you will continue to argue about what is a fictional hypothesis and, in the process, you will be unduly concerned when the next 30 years of warming occurs between 2028 and 2058.

  62. Threading is broken. My above comment was addressed to Johnathan Abbot who said,
    “Yeah, anyone who buys the 97% line either isn’t reading some very basic stuff they really need to read, or doesn’t give a damn about the facts, only a ‘higher truth’.”

  63. Basil Newmerzhycky

    Climate Researcher | December 17, 2014 at 6:36 pm |

    “The world is still warming due to a natural long term (934 year) cycle which will reach a maximum in less than 50 years. After that it will cool, ”
    ———————————————————————————————

    Climate Researcher, where are you getting that from?
    Please show me a link to a PEER REVIEWED research paper that says the warming of +0.15 Deg C per decade we’ve been seeing the past 50 years is part of a 934 year cycle (lol not 933 or 935???) . I’ve seen some trash “tabloid science” articles refer to something like that but without any legit references nor any valid temperature charts.

    • Climate Researcher 

      How about you, Basil, show me a “peer-reviewed” paper that confirms your “+0.15 Degree C per decade” calculated over a time span of 60 years so as to eliminate the effect of the 60-year cycle clearly demonstrated in this peer-reviewed paper.

      Dr Roy Spencer’s temperature plots show November 2014 as +0.33 degrees above the 1980 mean. That’s less than 0.1 Deg C per decade. Go back 60 years and you get a lower rate still, such as the 0.05 Deg C per decade in the above-linked paper. The 934-year and 60-year cycles in the scalar sum of the angular momentum of the Sun and all the planets correlate compellingly well with temperatures over the last 2,000 years. You’ll find plots of such in this paper. Far better than any correlation with carbon dioxide levels, my friend.
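
      Setting aside the 934-year and 60-year claims themselves, the narrow statistical point about the trend window can be illustrated with a toy calculation (all numbers below are assumptions for the sketch, not estimates of any real cycle):

      ```python
      # Toy illustration: how much a superimposed 60-year cycle can contaminate a
      # fitted linear trend, depending on the window length.  Synthetic data only.
      import numpy as np

      years = np.arange(1850, 2015)
      trend = 0.005 * (years - years[0])                            # assumed 0.05 C/decade
      cycle = 0.15 * np.sin(2 * np.pi * (years - years[0]) / 60.0)  # assumed 60-year cycle
      series = trend + cycle

      def decadal_slope(start, length):
          mask = (years >= start) & (years < start + length)
          return 10 * np.polyfit(years[mask], series[mask], 1)[0]

      for length in (30, 60):
          slopes = [decadal_slope(s, length) for s in range(1850, 2015 - length)]
          print(f"{length}-year windows: fitted trend ranges from "
                f"{min(slopes):+.2f} to {max(slopes):+.2f} C/decade")
      ```

      The assumed underlying trend is +0.05 C/decade in both cases; the shorter windows scatter far more widely around it because they can sit entirely on the rising or falling half of the cycle, which is the sense in which a trend taken over a full 60-year span is less contaminated by a 60-year cycle.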

  64. Basil Newmerzhycky | December 17, 2014 at 7:37 pm |
    Unfortunately for the authors of that five-year-old paper, time has already proved them wrong.

    They stated:
    “break in the global mean temperature trend from the consistent warming over the 1976/77–2001/02 period may have occurred”

    They wrote that paper in 2009 (based on data ending in 2008), and since then 2009, 2010 and even 2012-2013 were warmer years, and 2014 is about to become the warmest of all.

    Try Again.

    I don’t know what you are talking about – and I suspect you don’t either.

    http://www.woodfortrees.org/plot/hadcrut4gl/from:1975/plot/hadcrut4gl/mean:12/from:1975/plot/hadcrut4gl/from:2002/trend

  65. Basil Newmerzhycky commented : “Statistically, a 0.045 trend is not very impressive”
    Basil, do you really believe that they can measure to a thousandth of a degree of precision, monitoring the WHOLE planet with 6,000 thermometers?!?!?! 0.045 C is a claim of thousandth-of-a-degree precision!
    Basil, do you believe that, by using only the hottest minute in 24h and ignoring all the other 1,439 minutes, which fluctuate differently, anybody knows last year’s global temp to within +/- 3-4 C?!
    Basil, do you believe that, given the uneven distribution of those few thousand thermometers, any honest person would compare one unknown with another unknown temp?!?! Wake up Basil, it’s a 100% con, a scam!!!

  66. Climate Researcher commented: ”Until you take comfort in the valid physics which explains all temperatures in tropospheres”

    Climate Researcher, obviously you have never used the “honest” laws of physics; otherwise you would have known that the “overall” global temp is ALWAYS the same! The Temperature Self Adjusting Mechanism (TSAM) is infallible! B] monitoring temp at only a few thousand places for the WHOLE planet is a sick joke!!! Using only the hottest minute in 24h and ignoring all the other 1,439 minutes is a scam, a con and more scam!

    2] Brazil and the Sahara have the SAME amount of CO2, but completely different climates, because H2O regulates the climate, not CO2!!! If you don’t know which is good or bad climate, ask the trees; trees don’t tell lies the way you people do. One oak tree knows more about climate than all the Warmists and most of the skeptics combined!!! Climate is in constant change, some places for better, others for worse!! “Global” warming, on the other hand, is 100% con, scam, crap! Cheers!

    • Climate Researcher 

      Yes stefanthedenier, it is indeed correct that increasing the percentage of water vapor in Earth’s atmosphere leads to lower mean daily maximum and minimum temperatures. My published study confirms this from 30 years of temperature data from three continents. The physics which explains it is also now well understood by a few physicists who have read and understood the explanation. Not one of them has been able to fault that physics.

      So I don’t know what you are talking about, given that we at least agree that the most prolific greenhouse gas, water vapor, cools rather than warms. Perhaps you have some other “explanation” for all planetary temperatures.

      But you’d be hard pressed to prove that the Earth has not warmed since the Little Ice Age when they skated on the River Thames.

      You’ll get nowhere arguing against the alarmists or the lukes until you understand kinetic theory, thermodynamic equilibrium, entropy and energy potentials. Until the valid physics is understood and widely promulgated they will “prove” anything they like from temperature data.

  67. First, I want to point out that speaking of just the “current plateau” is like driving a car with only one cylinder working. This “current” plateau is one of two plateaus that have existed since 1979. There was one in the eighties and nineties that was also as long as the one we are experiencing now. It is obvious that the author has not bothered to read my book “What Warming?” because it is shown there as Figure 15. Since they didn’t bother to do their homework, I end up having to teach them.

    In the satellite view the eighties and nineties are a wave train comprising five El Nino peaks with La Nina valleys in between. My graph had monthly resolution, which let me use a transparent magic marker to outline the temperature trend. This eliminates the distraction from the cloudiness variable. The global mean temperature of such a wave train is defined by placing a dot on the halfway mark between an El Nino peak and its neighboring La Nina valley. I marked all those points and found that they defined a horizontal straight line from 1979 to early 1997, approximately 18 years. That is the same length as the current plateau we are experiencing now. In the satellite view these two plateaus are separated by the super El Nino, and the two plateaus are not at the same level: the current one is a third of a degree Celsius higher than the earlier one on the left. It got that way because in 1999, immediately after the super El Nino left, there was a quick but intense step warming. In only three years it raised global temperature by a third of a degree Celsius and then stopped, having established the current level of the plateau. This is half as much warming in three years as what the IPCC assigns to a century. It is highly probable that its source was the mass of warm water carried across the ocean by the super El Nino. It is not clear, however, how it has been maintained at that level for 14 years.

    I could speculate further by imagining what could have happened if the super El Nino had not appeared. This is feasible because it is almost certain that the extra warm water it brought could not have originated from the regular ENSO water supply. It is likely that in the absence of the super El Nino the step warming would not have happened either, the right side would not have been uplifted, and we would now be looking at a 24-year-long wave train of El Nino peaks. That step warming, by the way, is the only warming we have experienced since 1979. Currently GISS, NCDC and HadCRUT are all showing the eighties and nineties as a warming period they call “late twentieth century warming.” That warming is totally phony, as is easy to prove by comparison with satellite temperatures. These ground-based data should be banned from use in any climate publications.

  68. Basil Newmerzhycky commented : ”This is the trend-line chart that skeptics fear as much as dracula fears holy water”

    Basil, why should skeptics fear a shameless lie? “Skeptics” invent their own lies, and a lie is a lie, isn’t it? Both camps are barking up the wrong tree!

    Basil, you should re-phrase it: “Warmists fear stefanthedenier’s REAL proofs and facts, as the devil fears the cross.” Tell your peers, if they are brave enough, to face the truth and reality, not to hide behind the sick, misleading propaganda, and that includes you, Basil; compare them with the real truth / proofs: https://globalwarmingdenier.wordpress.com/2014/07/12/cooling-earth/

  69. ‘Delta Dawn fails to mention Classical Liberals are market worshipers who believe it’s a sin to violate free-market principles, and will insist on putting the market first regardless of the damage this may do to people. Delta Dawn sees this as proof Classical Liberals are smart. I see it as proof they are nuttier than fruit cakes. Maxy’

    Classical liberals in fact have a commitment to democracy and the rule of law. These are the essential freedoms, long fought for and hard won. The role of government is the protection of the citizenry against the brutal, with police and armies. It includes civil defence against the ravages of nature. Laws evolve in the cut and thrust of democracy to make markets fair, to protect consumers and workers, and to protect natural environments, amongst other things. The optimum size of government to maximise economic growth is some 22% of GDP.

    In economics the role of governments is to enforce rules that maintain fair markets – and to set the pace of the market through management of interest rates.

    All this is a little bit subtle for Maxy – who is the epitome of the progressive intellect. Enough said.

  70. Pingback: Climate Change Establishment Painted Into a Corner | Gaia Gazette

  71. Basil Newmerzhycky, statistics or a trend line has never made, nor will make, anything warmer…. dream along. When the world does not warm for a decade or more, there is no ‘global warming’ occurring. Comprehend? It ended long ago, if it even occurred over and above the rebound from the colonial cold period, and for me this is still in question.

    Your trend line only indicates a possibility, nothing more.

  72. @DCM: In this essay I will refer to the present leveling of the global temperature as a plateau rather than a pause or hiatus because the latter two imply we know temperatures will rise again soon.

    Read this just now. Amusingly I made this exact point to a Rutgers climate scientist at the AGU meeting this afternoon, using “peak” in place of “plateau”. He agreed. Neither of us could understand why a committed skeptic would settle for “hiatus” in place of “peak”.

  73. Pingback: Will a return of rising temperatures validate the IPCC’s climate models? | Fabius Maximus

  74. the threading appears to be broken again

  75. Antonio (AKA "Un físico")

    At last, after one year posting on this blog, I have found someone who agrees with me. In docs.google.com/file/d/0B4r_7eooq1u2TWRnRVhwSnNLc0k, I explain why the current CMIP5 models are not reliable and why they have no predictive capabilities.
    All this issue is a huge shame for IPCC authors and authorities.

  76. The negative phase can last mere years. What is insane is to believe it lasts a few decades with ACO2 close to 400 ppm and rising.

  77. eadler2, Maybe this will help you out a little bit.

    https://lh5.googleusercontent.com/-um0MLLV1bHM/VJQmVEzG5OI/AAAAAAAAL80/GQcK0Hin9Xw/w767-h535-no/Equatorial%2BImbalance.png

    That is the tropical SST imbalance at the margins, 20-30 South minus 20-30 North. Earth has two hemispheres isolated by the Coriolis Effect. That makes timing a big deal. A volcano impact in December would be different than in June. These imbalance regimes can last 30 to 60 years, with longer century- to millennial-scale shifts related to orbital cycles, since it is the thermohaline circulation that will try to eliminate the imbalance. A very slow process.

    When someone tries to “remove” ENSO or Volcanic based on an ideal sphere with nearly instantaneous equalization, they are removing something, but not really demonstrating they understand what they are removing. It is pretty obvious there are climate regimes related to equatorial energy imbalances.

    J.R. Toggweiler with the GFDL has published on this shifting of the “Thermal Equator” in this paper. http://www.sciencemag.org/content/323/5920/1434

    This general hemispheric imbalance shows up in longer term paleo as the “seesaw”. That would be a “climate” reference. NINO and PDO and QBO are shorter term and would be “weather” references.

    Now you can elect not to “see” longer term climate impacts, but that doesn’t mean they don’t exist.

  78. eadler2 commented

    If what you say is true, you are confirming the theory that GHGs are the source of warming. When the sun is no longer shining, the surface of the earth will cool because upward radiation continues, and IR carries energy from the earth into outer space. GHGs hinder the progress of this radiation into outer space directly from the warm surface of the earth, absorbing and radiating 1/2 of the absorbed energy back toward the earth’s surface. More GHGs in the atmosphere would increase surface temperatures at night. This was understood in 1828 by Joseph Fourier, and verified by John Tyndall’s experiments with IR absorption in 1859.

    Well, what I’m saying is what surface measurements show to be the facts, but it has nothing to do with proof of AGW.
    http://www.science20.com/files/images/global_1.png
    Does the Min temp anomaly look anything like a slight rising trend?
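
    The “radiating 1/2 of the absorbed energy back” step in the quoted paragraph is essentially the textbook single-slab greenhouse model; a toy version with standard assumed values (nobody’s actual calculation) looks like this:

    ```python
    # Single-slab toy model: an atmospheric layer transparent to sunlight but opaque
    # to surface IR absorbs sigma*Ts**4 and re-emits half upward, half downward.
    # Balance at the top gives Ta = Te; balance at the surface gives Ts = 2**0.25 * Te.
    sigma = 5.67e-8               # Stefan-Boltzmann constant, W m-2 K-4
    S0, albedo = 1361.0, 0.30     # assumed solar constant and planetary albedo

    Te = ((S0 * (1 - albedo) / 4) / sigma) ** 0.25   # effective emission temperature
    Ts = 2 ** 0.25 * Te                              # surface temperature in the toy model
    print(f"Te = {Te:.0f} K, Ts = {Ts:.0f} K")       # roughly 255 K and 303 K
    ```

    Real radiative transfer is band-dependent and the atmosphere is not a single opaque slab, so this only illustrates the direction of the effect being described, not its magnitude or its attribution.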

  79. RiH008, “Do you have the paper available by some hook or by crook?”

    http://www.atmos.albany.edu/facstaff/blinsley/home_page/Oppo,Rosenthal%20&%20Linsley%202009.pdf

    R. Gates, Perhaps eadler2 will try digging into his skeptical side a bit more so he can see that there is a valid range of uncertainty that doesn’t preclude ~50% “natural”, 50% anthro, with a 30% margin of uncertainty :) He seems to have a very select “peer” group.

    • Capt’nDallas

      What a wonderful paper, easy reading (although I personally need to go slowly), lots of connectivity to papers previously mentioned by yourself and Rob Ellison, and with a timeline that has been in controversy since Mann decided that the MWP and LIA had to go.

      Thank you

  80. eadler2, In case you get over your approximation phobia, here are those silly trends from the Oppo IPWP with BEST land and the NH/SH ocean SSTs

    https://lh4.googleusercontent.com/-1NJDnTCNvUQ/VJRvAKjVNyI/AAAAAAAAL9I/ENnsfD-D7aA/w771-h507-no/oppo%2Band%2Bthe%2Bcrew.png

    I don’t have error bars since this is busy enough, but the 1900 temperatures for both land and sea are close to the 2000 year trend which includes the LIA influence. The 0 to 1200 trend would be the “normal” excluding the LIA which appears to have been colder due to a series of large volcanoes. If you think colder is better and lots of volcanoes is normal, then you would assume that “pre-industrial” would be somewhat LIAish. Now if Tamino is removing volcanic influence, which was needed for the LIA and included in the models, he would need to remove volcanic influence all the way back to around 1200 AD which would put us on the light blue 0-1200 trend line +/- a touch of course. If he just removes volcanic for the satellite period, then there is likely some residual effect from prior to his selected start date which would influence his “adjusted” trend. There is no particularly good way to “remove” parts of climate convincingly.

    The big problem is we have fantastic accuracy for the satellite era which degrades to something like +/- 1 C for “pre-industrial”, and we are basing CO2 impact on a “pre-industrial” baseline which is basically unknown to the degree of precision being bandied about. This is where the technical term SWAG (Scientific Wild A$$ Guess) should become part of every good skeptic’s vocabulary. Is it warming? Yes. Does CO2 have an impact? Yes. Is 110% of the warming since 1950, +/- 10%, anthropogenic? LOL, you gots to be kidding.

  81. Berényi Péter

    There is a much more serious and way deeper problem with the current computational climate modelling paradigm than failure to replicate observed temperature trends.

    As long as the solar constant is constant indeed, annual cumulative insolation of the two hemispheres matches exactly. That’s because of a peculiar geometric property of Keplerian orbits, which cancels variations along a tropical year.

    Now, physical properties of the two hemispheres are very different, because most land masses are located North of the equator. Water being much darker than land, clear-sky albedo of the Northern hemisphere is some 1.8% higher. In spite of this fact all-sky albedo of the two hemispheres matches within observational error (the difference is less than 0.03%, that is, almost two orders of magnitude lower).

    That’s because of higher abundance and/or reflectivity of clouds in the Southern hemisphere.

    The upshot is that energy input to the climate system is symmetric with respect to the equator on an annual scale.

    This symmetry is not replicated by computational climate models, which in itself is sufficient to falsify the underlying paradigm.

    Q.E.D.

    Journal of Climate, Volume 26, Issue 2 (January 2013)
    doi: 10.1175/JCLI-D-12-00132.1
    The Observed Hemispheric Symmetry in Reflected Shortwave Irradiance
    Aiko Voigt, Bjorn Stevens, Jürgen Bader and Thorsten Mauritsen
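
    A back-of-the-envelope check of the quoted numbers, treating “1.8%” and “0.03%” as absolute differences in hemispheric albedo (an assumption; the comment does not say which convention is meant) and taking a global-mean top-of-atmosphere insolation of roughly 340 W/m2:

    ```python
    # Rough scale of the hemispheric reflected-shortwave asymmetry implied by the
    # albedo differences quoted above, under the stated assumptions.
    mean_insolation = 340.0          # W/m2, global annual mean, assumed

    clear_sky_albedo_diff = 0.018    # NH minus SH, clear sky (as quoted)
    all_sky_albedo_diff = 0.0003     # NH minus SH, all sky (as quoted)

    print("clear-sky reflected SW asymmetry:", clear_sky_albedo_diff * mean_insolation, "W/m2")
    print("all-sky reflected SW asymmetry:  ", all_sky_albedo_diff * mean_insolation, "W/m2")
    # roughly 6 W/m2 versus roughly 0.1 W/m2
    ```

    On those assumptions, clouds would have to compensate a surface contrast of order 6 W/m2 down to order 0.1 W/m2 for the hemispheres to reflect essentially the same amount, which is the symmetry being claimed.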

    • “Now, physical properties of the two hemispheres are very different, because most land masses are located North of the equator. Water being much darker than land, clear-sky albedo of the Northern hemisphere is some 1.8% higher. In spite of this fact all-sky albedo of the two hemispheres matches within observational error (the difference is less than 0.03%, that is, almost two orders of magnitude lower).

      That’s because of higher abundance and/or reflectivity of clouds in the Southern hemisphere.

      The upshot is that energy input to the climate system is symmetric with respect to the equator on an annual scale.”

      Very interesting. What this tells me is that even with different energy input (the SH absorbs more energy due to lower clear-sky albedo), clouds regulate the albedo of both hemispheres to be equal, i.e. clouds actively control the surface temps; actually, I should say that water vapor actively regulates surface temps. Just as I noted it helps regulate nightly cooling, it’s also regulating max temps with clouds.

      Surface measurements show a very regulated max temp that has changed very little globally over the last 60+ years, and a minimum temp that strongly responds to changing regional ocean surface temps.
      http://www.science20.com/files/images/global_1.png

      Kim I hear your song again!

  82. I think this situation is much simpler than many people seem to think.

    1. We’ve always known that the climate models had questionable ways of dealing with the various types of feedbacks. Back in the early 90’s, I discussed this with some of the senior people in the field who were sure that there was a warm bias in the models.

    2. When I see figure 2 in this essay, there is a clear confirmation of that warm bias and very little reason to put much trust in those models.

    3. Then, looking at the temperature data in figure 1, I would consider more carefully the atmosphere’s powerful tendency to revert to the long-term mean. The pattern then strongly suggests that we’re at a peak for the global temperature anomaly and that we’ll likely have some cooling.

    • Did those senior people distinguish between ECS and TCR? Temperature data is closer to the latter, so judging a modeled ECS by temperature data will create the impression of a warm bias.

      • I’m just a simple forecaster, but, to me, those concepts sound like an excuse to continue believing the increasingly-suspect models while ignoring the people who have been correct all along. Len Snellman, the best forecaster I ever met, would have called this “meteorological cancer” at its worst.

    • Super, you wrote: Then, looking at the temperature data in figure 1, I would consider more carefully the atmosphere’s powerful tendency to revert to the long-term mean. The pattern then strongly suggests that we’re at a peak for the global temperature anomaly and that we’ll likely have some cooling.
      The long term temperature is a cycle that goes above the mean and then below the mean. Temperature never returns to the long term mean, it just crosses the mean to the other bound. The Polar Ice Cycle does not seek a mean. It drives temperature from above the mean to below the mean. Then the Polar oceans freeze and turn off snowfall and the sun drives temperature from below the mean to above the mean.

      Temperature varies in a cycle that goes warm, cold, warm, cold, warm, cold, etc. You can see this by just looking at actual official NOAA and NASA data. You can see this in the Ice Core Data.

      I agree with you that we are near a peak, but this peak, like the Roman and Medieval Peaks, will last a few hundred years and then will drop below the mean to a minimum similar to the Little Ice Age.

      Temperature does not try to maintain a mean. Earth’s temperature cycles above and below the thermostat setting, just like in my house. My house warms until the Air Conditioner comes on and then cools until the Air Conditioner turns off. The Thermostat, the Set Point for Earth, is the temperature at which Polar Sea Ice melts and freezes. Snowfall is turned on by thawed Polar Oceans and turned off by frozen Polar Oceans. The temperature is fixed by the temperature at which oceans freeze and thaw. That does not change. The temperature record shows that the temperature is bounded closely above and below this set point.

      If you disagree, offer a different theory for why the temperature has been so well bounded for ten thousand years.

      • Consensus Theory has nothing that produces a well bounded cycle for the past ten thousand years. Consensus Theory Models go out of bounds in just a few years while real temperature stays well inside the bounds of the past.

      • I don’t disagree. I suppose that another way of expressing all this is to say that the climate seems to be a remarkably stable system and that it is puzzling that so many people are so certain that it is unstable.

  83. Capt. Dallas:
    “That would provide an estimate of “average” volcanic aerosol forcing. The satellite era may be close to average or not. If the satellite era forcing is less than “normal” then Tamino got his sign wrong.”

    Tamino did not need to estimate anything about average volcanic aerosol forcing. He got satellite data on the actual aerosol optical depth.
    Here is his description of the inputs for the natural factors affecting temperature.

    “The impact of el Nino is characterized by the Multivariate el Nino index (MEI), that of volcanic aerosols by Aerosol Optical Depth (AOD), and solar output by Total Solar Irradiance (TSI).”

    There is no assumption about what is a normal El Nino, volcano or solar variation. In addition, as I pointed out, the inputs to his regression are ACTUAL MEASURED VALUES. Moreover, the results of his regression are not very dependent on the starting year, which shows that your objection about the starting point is not valid. The adjectives “average” and “normal” do not enter into what he has done. There is no logic to your claim.

    Go back and read his post:
    http://tamino.wordpress.com/2011/12/06/the-real-global-warming-signal/
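
    For readers following along, here is a sketch of the kind of regression being described (not Tamino’s actual code or data, and omitting details of his published analysis such as predictor lags): regress the anomaly on MEI, AOD, TSI and a linear time term, then subtract the fitted natural contributions to get an “adjusted” series.

    ```python
    # Multiple regression of a temperature anomaly on ENSO (MEI), volcanic aerosol
    # optical depth (AOD), solar irradiance (TSI) and a linear time term; the
    # input arrays are placeholders the user must supply.
    import numpy as np

    def adjusted_series(time, temp, mei, aod, tsi):
        """Return (temp with fitted MEI/AOD/TSI contributions removed, coefficients)."""
        X = np.column_stack([np.ones(len(time)), time, mei, aod, tsi])
        beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
        natural = X[:, 2:] @ beta[2:]          # MEI + AOD + TSI contributions only
        return temp - natural, beta

    # Usage, with real monthly series loaded elsewhere:
    # adjusted, coeffs = adjusted_series(t, anomaly, mei, aod, tsi)
    ```

    The disagreement above is not really about the arithmetic of such a fit; it is about whether the satellite-era AOD record it uses is representative of “normal” volcanic forcing over longer periods.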

  84. eadler2, Here is a link to a Crowley et al. paper, an aerosol optical depth reconstruction.

    http://www.geosciences.ed.ac.uk/homes/tcrowley/crowley_PAGESnote_volcanism.pdf

    https://lh6.googleusercontent.com/-gussuUInH4Y/VJTm3VY5f7I/AAAAAAAAL9c/CFwXhrxhFFc/w542-h593-no/crowley%2Baod.png

    that is Crowley’s

    http://tamino.files.wordpress.com/2010/08/sulf3.jpg

    that’s Tamino’s

    Notice a difference?

    Tamino assumed “normal” was next to nothing and that everything was due to human emissions. He found a hockey stick and started playing. Now if you want to blame emissions, there is stratospheric chlorine that can be blamed but that doesn’t fit all that well either.

    That Oppo paper you don’t like happens to fit the Crowley et al. aerosol optical depth / volcanic forcing reconstruction fairly well. It kinda looks like the LIA was caused by a series of volcanic events, and the ~60 year “oscillation” was inspired by more volcanic activity while the globe was recovering from the LIA. Considering the heat capacity of the oceans and the current measured rate of ocean heat uptake, there is another independent set of data that tends to agree with primarily natural forcing, i.e. volcanic forcing slowing down a bit, causing about half of the instrumental-era warming.

    Tamino has one convenient ice core, GISP2, and a bit of arm waving. Since human sulfate emissions tend to rain out (remember acid rain?), they don’t appear to have much impact on aerosol optical depth and climate. Indirect effects on clouds are another story that doesn’t have an ending just yet.

    Personally, I would love for human sulfates to be a big player because that would be an easy, pretty inexpensive fix. Most of the world has already dealt with them, btw. It wouldn’t hurt to inspire the ROW to do the same, but don’t count on a huge change in climate.

  85. I visit the SW of Western Australia every year about this time and there is one thing that I have strongly noticed.

    If one is to extract all the human-made sources of heat generation and retention – heat islands, warm machinery, traffic, central heating etc. (and this would include any supposedly produced by CO2) – then that leaves one looking at the weather/climate in the sparsely populated areas of the world, i.e. at just the natural effects on the climate. Of course one can go directly to these areas and examine what is happening.

    A good place to look at is the South West of Western Australia (Bunbury and below) – it is sparsely populated, it is away from the more populated Eastern States and it is far away from the heavily populated Northern Hemisphere. As such it provides a good simple measure of what is happening to the world’s climate:

    * average temperatures down ~1.8°C.
    * frequent cold Antarctic wind. It is most out of place – feels odd.
    * cool springs, late summers. Even Perth has experienced this.
    * the Antarctic ice shelf has increased massively. No urban heat islands to keep the ice down.

    So my conclusion is – there is no global warming, only a spot of global cooling going on. What is happening in the Northern Hemisphere is purely the urban heat island effect.

  86. In general, I would be interested in seeing the response among “skeptics” should 2014 turn out to be the warmest year on record, given that:

    2005-2014 will be the warmest 10 year period on record.
    2010-2014 will be the warmest 5 year period on record.
    2014 will be the warmest year on record.

    Odd way for a “hiatus” to be acting.