Macroweather, not climate, is what you expect

by Shaun Lovejoy

When the International Meteorological Organization defined the first climate normal from 1900 to 1930, the belief was that the climate was constant, and that the newly defined climate ‘normal’ would give a close approximation to the climate.

Today we still use a 30 year ‘normal’ period – now referring to 1960-1990 – but we acknowledge that it is not immutable: even without anthropogenic effects, the climate changes due to natural causes.  But what then is the climate, and what is the difference between the climate and the weather?  Mark Twain once quipped: “Climate lasts all the time and weather only a few days”.  This is close to the popular dictum: “the climate is what you expect, the weather is what you get” [Heinlein, 1973].  Since most people ‘expect’ averages, this concurs with the reigning idea that the climate is a kind of ‘average’ weather.  Similarly, the use of Global Climate Models (GCM’s) to predict the climate is based on the idea that for given boundary conditions (solar output, atmospheric composition, volcanic eruptions, land use), that “the climate is what you expect”.

While all this sounds reasonable, it turns out that it is not based on analysis of real world data.  In recent papers “What is climate?” and “The climate is not what you expect” ([Lovejoy, 2013], [Lovejoy and Schertzer, 2013]), we used a new kind of ‘fluctuation analysis’ to show that – at least up to 100,000 years – there are not two, but rather three atmospheric regimes, each with a different type of variability.  In between the weather (periods of less than about 10 days) and the climate (periods longer than ≈ 30 years), there is a new intermediate ‘macroweather’ regime.

What does it mean to define a regime by its type of variability?  An illustration makes it intuitive.  Consider Fig. 1, which shows examples from weather scales (1 hour resolution, bottom), macroweather (20 days, middle) and climate (1 century, top).

Fig. 1: Dynamics and types of scaling variability: representative temperature series from weather, macroweather and climate (bottom to top respectively). Each sample is 720 points long and was normalized by its overall range (bottom to top: 27.8°C, 16.84°C, 7.27°C; dashed lines indicate means). The resolutions are 1 hour, 20 days and 1 century; the data are from Lander, Wyoming, the 20th Century reanalysis, and Vostok (deuterium paleotemperatures, Antarctica). Adapted from [Lovejoy, 2013].

The daily and annual cycles were removed, and 720 consecutive points from each resolution are shown so that the differences in the character of each regime are visually obvious.  At the bottom, the weather curve ‘wanders’ up and down like a drunkard’s walk, with fluctuations typically increasing over longer and longer time periods.  In the middle, the macroweather curve has a totally different character: upward fluctuations are typically followed by nearly cancelling downward ones (and vice versa). Averages over longer and longer times tend to converge, apparently vindicating the “climate is what you expect” idea: we anticipate that at decadal, or at least centennial, scales the averages will be virtually constant, with only slow, small-amplitude variations.  However, the century-scale climate curve (top) again displays a weather-like (wandering) variability. Although this plot shows temperatures, other atmospheric fields (wind, humidity, precipitation, etc.) behave similarly.

There are thus three qualitatively different regimes – a fact that was first recognized in the 1980s and has been confirmed several times since, but whose significance has not been appreciated [Lovejoy and Schertzer, 1986], [Pelletier, 1998], [Huybers and Curry, 2006].  The older analyses were based on temperature differences and spectra, and these suffered from various technical limitations and from difficulties of interpretation.  The new technique works by defining the fluctuation over a given time interval as the difference between the averages over the first and the second halves of the interval; it is thus very easy to interpret.  In the ‘wandering’ weather and climate regimes, the averaging in the definition isn’t important: fluctuations are essentially differences.  In the cancelling macroweather regime, the differences aren’t important: fluctuations are essentially averages.
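
To make this concrete, here is a minimal sketch in Python (an illustration added for this post, not code from the papers; the synthetic series and the function name are purely for demonstration) of a fluctuation defined as the difference between the averages of the second and first halves of an interval:

```python
import numpy as np

def half_difference_fluctuation(series, lag):
    """Fluctuation over intervals of length `lag`: the mean of the second
    half of each interval minus the mean of the first half (essentially a
    Haar fluctuation, up to a normalization factor)."""
    half = lag // 2
    flucts = []
    for start in range(0, len(series) - lag + 1, half):
        first = series[start:start + half].mean()
        second = series[start + half:start + lag].mean()
        flucts.append(second - first)
    return np.array(flucts)

# Illustrative use on a synthetic 'wandering' (weather- or climate-like) series:
rng = np.random.default_rng(0)
wandering = np.cumsum(rng.normal(size=4096))
for lag in (8, 64, 512):
    dT = half_difference_fluctuation(wandering, lag)
    # The RMS fluctuation grows with scale for a wandering series;
    # for a cancelling (macroweather-like) series it shrinks instead.
    print(lag, np.sqrt(np.mean(dT ** 2)))
```

For a macroweather-like series (for example plain white noise), the same RMS fluctuation decreases as the interval is lengthened – the cancelling behaviour described above.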

Whereas in the weather and climate regimes fluctuations tend to increase with time scale, in the macroweather regime they tend to decrease.  For example, over GCM grid scales (a few degrees across), the fluctuations increase on average up to about 5°C (≈ 9°F) at around 10 days.  From there up to about 30 years they tend to decrease, to about 0.8°C (≈ 1.4°F), and then – in accord with the amplitude and time scale of the ice ages – they increase again, up to about 5°C (9°F) at ≈ 100,000 years.  In macroweather, averages converge: “macroweather is what you expect”.  In contrast, the ‘wandering’ climate regime is very much like the weather, so that – at least at scales of 30 years or more – the climate is “what you get”.   Conveniently, the choice of 30 year time period to define the climate normal can now be justified as the time period over which fluctuations are the smallest.
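
As a rough back-of-the-envelope illustration (derived only from the amplitudes quoted above, not a result taken from the papers), if fluctuations scale as power laws, ΔT ∝ Δt^H, then these numbers imply a negative exponent H in macroweather and a positive one in the climate regime:

```python
import numpy as np

day = 1.0 / 365.25  # one day, expressed in years

# implied scaling exponents H from the fluctuation amplitudes quoted above
H_macroweather = np.log(0.8 / 5.0) / np.log(30.0 / (10.0 * day))   # ≈ -0.26
H_climate      = np.log(5.0 / 0.8) / np.log(1.0e5 / 30.0)          # ≈ +0.23
print(H_macroweather, H_climate)
```

Negative H is the signature of the cancelling, converging macroweather regime; positive H is the wandering behaviour of the weather and climate regimes.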

In a nutshell, average weather turns out to be macroweather – not climate – and climate refers to the slow evolution of macroweather.  This evolution is the result of external forcings (solar, volcanic etc.) coupled with natural (internal) variability: i.e. forcings with feedbacks.  In the recent period, we may add anthropogenic forcings.

Why ‘macroweather’ and not ‘microclimate’? It turns out that when GCM’s – which are essentially weather models with extra couplings (to oceans, sea ice, etc.) – are run in their ‘control’ modes (i.e. with constant atmospheric composition, constant solar output, no volcanoes, etc.), fluctuation and other analyses show that they reproduce well both the wandering weather and the cancelling macroweather behaviours, so that macroweather is indeed long-term weather.  To obtain the climate they must at least include new climate forcings, but probably also new internal climate mechanisms; a recent fluctuation study shows that the multicentennial variability of GCM simulations over the period 1500-1900 is somewhat too weak [Lovejoy et al., 2012a].

But what about the current period with strongly increasing levels of greenhouse gases?  Although we’ve only started looking at this, initial results comparing fluctuation analyses of surface temperature measurements and historical GCM runs (1850-present) show that there is a good level of agreement in their multidecadal and centennial scale variabilities [Lovejoy et al., 2012b].  However, the reason for the agreement appears to be that over this period, most of the long term variability of both the data and the models are due to the anthropogenic forcings that are sufficiently strong that they dominate the natural variability.  It is worth recalling that fluctuation analysis only indicates the types and overall levels of variability at different time scales, it does not directly validate any specific model predictions.  However, if further research confirms our conclusions, it would mean that anthropogenic induced warming is the most important factor in explaining the variability since 1850.

References

Heinlein, R. A. (1973), Time Enough for Love, 605 pp., G. P. Putnam’s Sons, New York.

Huybers, P., and W. Curry (2006), Links between annual, Milankovitch and continuum temperature variability, Nature, 441, 329-332 doi: 10.1038/nature04745.

Lovejoy, S. (2013), What is climate?, EOS, 94 (1), 1 January, 1-2.

Lovejoy, S., and D. Schertzer (1986), Scale invariance in climatological temperatures and the spectral plateau, Annales Geophysicae, 4B, 401-410.

Lovejoy, S., and D. Schertzer (2013), The climate is not what you expect, Bull. Amer. Meteor. Soc., (in press).

Lovejoy, S., D. Schertzer, and D. Varon (2012a), Do GCM’s predict the climate… or macroweather?, Earth Syst. Dynam. Discuss., 3, 1259-1286, doi: 10.5194/esdd-3-1259-2012.

Lovejoy, S., D. Schertzer, and D. Varon (2012b), Response to R. Betts: Interactive comment on “Do GCM’s predict the climate… or macroweather?” Earth Syst. Dynam. Discuss., 3, C778–C783.

Pelletier, J. D. (1998), The power spectral density of atmospheric temperature from scales of 10^-2 to 10^6 yr, EPSL, 158, 157-164.

JC comment:  This post originated from a conversation with Shaun Lovejoy at the AGU meeting, where I invited him to do a guest post based on his recent two papers.  Since this is a guest post, comments will be moderated STRICTLY for relevance and civility.

198 responses to “Macroweather, not climate, is what you expect”

  1. In global terms the CET area is relatively small with more or less uniform climate, and yet correlates well with the N. Hemisphere’s and even global temperature trends.
    http://www.vukcevic.talktalk.net/CET-NV.htm

  2. Paul Matthews

    Most of the links need fixing.

  3. “Similarly, the use of Global Climate Models (GCM’s) to predict the climate is based on the idea that for given boundary conditions (solar output, atmospheric composition, volcanic eruptions, land use), that “the climate is what you expect”.

    I thought “GCM” stood for General Climate Models?

    • General Circulation Models is also a common usage, since they are used not only for climate but also for global circulation studies in a constant climate.

  4. Philip Richens

    Perhaps nice to have an explanation rooted in the data that makes sense; even if the conclusion is not quite what we all might have wanted.

  5. Last line:

    However, if further research confirms our conclusions, it would mean that anthropogenic induced warming is the most important factor in explaining the variability since 1850.

    And if it doesn’t?

    [Think “CERN”.]

    Max

  6. Shaun Lovejoy
    Thanks for showing this temperature variability over differing time periods and addressing our collective memories of macroweather.
    Burt Rutan further compares the magnitude of the projected century long anthropogenic global warming to daily and annual temperature extremes.

    Hurst Kolmogorov Dynamics aka Climate Persistence
    Is your “smallness” a measure of an arithmetic progression over recent human memory compared to a geometric measure over geologically significant periods?
    e.g., Please compare your results in light of climate persistence, technically Hurst Kolmogorov Dynamics, such as quantified by Koutsoyiannis et al. etc.

    How is your measure of “smallness” quantifiably different from Koutsoyiannis et al. showing similar Hurst Kolmogorov Dynamics from short term to the longest geological records? See:
    Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, doi:10.1007/s10712-012-9208-9, 2012.
    Note also that from ice core records, Koutsoyiannis et al found that the standard deviation in temperature measured by Hurst-Kolmogorov dynamics is about double that found by conventional statistics.
    Markonis, Y., and D. Koutsoyiannis, Hurst-Kolmogorov dynamics in paleoclimate reconstructions, European Geosciences Union General Assembly 2010, Geophysical Research Abstracts, Vol. 12, Vienna, EGU2010-14816, European Geosciences Union, 2010. See presentation Slide 10.
    Pacific Decadal Oscillation
    Re: “the choice of 30 year time period to define the climate normal can now be justified as the time period over which fluctuations are the smallest.”
    By what relative statistical measure do you define the “smallest” “fluctuations”?
    Where there is a major natural oscillation, averaging with that oscillation period shows much lower residual variations than averaging over half the oscillation period. Averaging over 24 hours removes diurnal variations, 12 months removes annual cycles, and 11 years removes the Schwabe solar cycle – compared to averaging over half periods of 12 hours, 6 months or 5.5 years. Similarly for the impact of the 22 year Hale Cycle on climate, as shown by WJR Alexander et al., Linkages between solar activity, climate predictability and water resource development, Journal South African Institution of Civil Engineering Vol 49 No 2, June 2007, Pages 32–44, Paper 659.
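
    As a minimal numerical illustration of this full-period versus half-period averaging point (an idealized sinusoid, not climate data; the 60 year period is just the example value discussed here):

```python
import numpy as np

dt = 0.1                                          # time step in 'years'
t = np.arange(0, 600, dt)
signal = np.sin(2 * np.pi * t / 60)               # idealized 60-year oscillation

def block_means(x, block_len):
    """Non-overlapping block averages of length `block_len` (in samples)."""
    n = (len(x) // block_len) * block_len
    return x[:n].reshape(-1, block_len).mean(axis=1)

full_period = block_means(signal, int(round(60 / dt)))  # 60-year averages
half_period = block_means(signal, int(round(30 / dt)))  # 30-year averages
# Residual variability: ~0 for full-period averaging, ~0.64 for half-period.
print(full_period.std(), half_period.std())
```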

    Please review your choice of 30 years in light of the ~60 year Pacific Decadal Oscillation. I expect that you will find using a 60 year period to average will show lower fluctuations over the longer warming trend from the Little Ice Age than averaging over the PDO half period length of 30 years. See:
    Syun-Ichi Akasofu On the recovery from the Little Ice Age, Natural Science, Vol.2, No.11, 1211-1224 (2010) doi:10.4236/ns.2010.211149 http://www.scirp.org/journal/NS/
    Similarly see publications by Don Easterbrook. e.g. Joseph D’Aleo and Don Easterbrook, Relationship of Multidecadal Global Temperatures to Multidecadal Oceanic Oscillations, Evidence-Based Climate Science. Ch. 5, 2012 DOI: 10.1016/B978-0-12-385956-3.10005-1

    Best wishes on clarifying your presentations.

    • 1) The 60 year period:
      A quick response to this: the claimed 30 year period is only a gross average, over a time scale that varies not only geographically but also from epoch to epoch. Some of this is indicated in my paper:
      http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/AGU.monograph.2011GM001087-SH-Lovejoy.pdf. Therefore I will not quibble about 60 years, although spectral analysis suggests that the variability continues to lower frequencies.

      2) Exponents

      Let me try to address the question of the Hurst exponent and the exponents estimated in some of Koutsoyiannis’s publications. The original Hurst exponent (rescaled range) combines the variance and range in a difficult to analyze (and interpret) manner. The problem is that if the series has statistics which are much more extreme than Gaussian, then one may get an exponent, but the interpretation is in general unknown (I’m thinking of the more general multifractal case, relevant in the weather and climate and to a lesser extent in the macroweather regime)!

      While invoking Hurst, Koutsoyiannis actually uses a different statistic, the Aggregated Standard Deviation (ASG; this includes your reference to Markonis, Y., and D. Koutsoyiannis, but also, for example, Anagnostopoulos, G. G., D. Koutsoyiannis, et al. 2010). As pointed out in my book (Lovejoy and Schertzer 2013) and in my response to Pielke (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/ESDD.comment.14.1.13.pdf):

      “[The ASG] is a special case of the tendency structure function with exponent HASG (their “H”, our subscript) given by HASG = 1+ psi(2)/2= 1+H-K(2)/2 (see the discussion paper for definitions of the structure function exponent psi(q) and the cascade exponent K(q)). However the key limitation of the ASG is that it only correctly estimates the fluctuations (and exponents) over the same restricted range as the tendency structure function i.e. -1<H<0 (for quasi-Gaussian processes, for 0< HASG <1). In other words, it is fine for macroweather but not for weather or climate applications. The Haar fluctuation (which is essentially the same but with a crucial extra differencing), is a vast improvement."

      I have been working in the area of scaling processes for over thirty years and part of the difficulty is the plethora of different definitions and exponents. Often it is claimed that methods such as the ASG are essentially the same as Hurst's, but this is only true in rather special cases and these cases are usually not stated nor understood! As long as the key point being made is the existence of long range statistical dependencies, then this problem is less serious, although if the range of validity is not respected (e.g. here, -1<H<0), then one gets spurious results.

      References
      Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis, and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrol. Sci. J. , 55, 1094–1110.

      • Thanks Shaun for your clarification of the issues and the link to your paper to read.

      • David, Shaun, thank you for referring to my works.

        Shaun, no need to worry about my estimations of the Hurst parameter or to doubt if they are fine for weather and climate. I am as careful as I can be. I try to delve into the very definitions of the concepts I use and I try to assess statistical biases and uncertainties. That is why I use the concept of standard deviation, for which these properties can be inferred theoretically. I avoid (and do not advise) using any statistical quantity as if it were a deterministic one. Some published studies use high order moments. But you may see, as an example, in slide 7 of a presentation at http://itia.ntua.gr/985/ that the 5th moment can be underestimated by two orders of magnitude (!) with respect to the expected value.

        Also, I try to use estimators that respect the mathematically necessary bounds of parameters. For example the Hurst parameter has an upper bound equal to 1 (perhaps you use a different sign convention–I use the original one as used by Hurst and Mandelbrot). This cannot be violated (like in a correlation coefficient which cannot be greater than 1). Many estimators neglect this and many papers publish values of the Hurst parameter greater than 1, which reflects a fault or inconsistency of the algorithm. You may see additional material about proper algorithms (and comparisons thereof) in a recent paper, http://itia.ntua.gr/983/

        In other words, I do not agree with your statement “In other words, [aggregated standard deviation] is fine for macroweather but not for weather or climate applications”. The standard deviation is good for everything. The standard deviation is in most cases (if not always) “aggregated” (think about monthly or annual mean temperatures, hourly or daily rainfall, etc.), not instantaneous, so “aggregated standard deviation” is nothing more than “standard deviation”.

      • “no need to worry”. Hm. Echoes of “full stop” on a previous thread. Criticisms among experts are best when the tone is dispassionate.

      • BillC, thanks for your comment and apologies to all if my tone was inappropriate. I agree, the tone should be dispassionate.

      • Demetris, I’m sure you are careful and it is true that the ASD is defined for any series (sorry about the typo in my comment!). You yourself mention that your exponent is valid in the range 0<HASD<1, corresponding to the fluctuation exponent H in the range -1<H<0 as claimed (using HASD=1+H, i.e. ignoring intermittency corrections K(2)/2 which are not too large: in the range 0.05 to 0.1 for weather and climate regimes). My point is simply that the weather and climate regimes involve exponents 0<H<1 (i.e. HASD>1) which you admit you could not observe. Everything depends on the nature of the scaling; to a first approximation it depends on the spectrum. Ignoring again intermittency, the ASD works for spectral exponents “beta” in the range -1<beta<1 (beta=1+2H-K(2), i.e. without intermittency, K(2)=0 so that beta=1+2H). In comparison, the Haar fluctuation correctly characterizes the fluctuations for -1<H<1, equivalently 0<HASD<2, or -1<beta<3. When H>0 (which would correspond to HASD>1), the estimates of the ASD at time scale deltat are no longer dominated by frequencies of the order 1/deltat; rather they are determined by the lowest frequencies present in the series. Hence one does obtain estimates, but they no longer reflect the scaling properties of the series; rather they reflect artefacts of the particular data set (its length, resolution etc.).

        See the appendix of Lovejoy, S., D. Schertzer, 2012: Haar wavelets, fluctuations and structure functions: convenient choices for geophysics, Nonlinear Proc. Geophys., 19, 513–527, doi:10.5194/npg-19-513-2012 (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/Haar.npg-19-513-2012.final.pdf).
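
        For readers who want to experiment, here is a minimal sketch (an illustration added here, not code from either author) of the two statistics under discussion – the Haar fluctuation and the aggregated standard deviation (climacogram) – applied to a synthetic series; the interpretation of the scaling exponents they yield is exactly what the exchange above is about:

```python
import numpy as np

def haar_rms(x, scale):
    """RMS Haar fluctuation at `scale` (in samples): the mean of the second
    half of each non-overlapping interval minus the mean of the first half."""
    half = scale // 2
    n = (len(x) // scale) * scale
    blocks = x[:n].reshape(-1, scale)
    fluct = blocks[:, half:].mean(axis=1) - blocks[:, :half].mean(axis=1)
    return np.sqrt(np.mean(fluct ** 2))

def aggregated_std(x, scale):
    """Aggregated standard deviation (climacogram): the standard deviation of
    the series averaged over non-overlapping blocks of length `scale`."""
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1).std()

rng = np.random.default_rng(1)
noise = rng.normal(size=2 ** 14)     # macroweather-like (fluctuations cancel)
walk = np.cumsum(noise)              # weather/climate-like (wandering)
for scale in (4, 16, 64, 256, 1024):
    print(scale, haar_rms(walk, scale), aggregated_std(walk, scale))
```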

      • Thanks Shaun. It is good that we clarify some misunderstandings between us. I feel I have to clarify one more thing. I did not “admit you could not observe” [by the aggregated standard deviation approach] Hurst coefficients out of the mathematical bounds. I said just the opposite. That there are algorithms (including variants of the aggregated standard deviation approach, or based on whatever concept—this is unimportant to me) which yield values out of bounds. So, you may “observe”, based on such algorithms, values out of bounds. Does this mean that the real value is out of bounds? Not in my view. It simply means that the algorithm is inconsistent, imperfect, erroneous, incorrect, mistaken, or whatever you wish. Assume that I give you an estimator of the linear correlation coefficient. You apply it for two data series x and y and it gives you a value of 1.5. Would you say that these data are extra correlated with a correlation greater than 1? (I would simply say that the algorithm is incorrect).

        I add a couple of notes to avoid further communication gap. The Hurst coefficient refers to the asymptotic scaling for large scales (tending to infinity) or low frequencies (tending to zero). The bounds apply to this. For small time scales or high frequencies there may be scaling areas (e.g. in the spectrum) with exponents which are different from H and do not necessarily obey the asymptotic restriction in bounds. These can well be captured/described by the aggregated standard deviation vs. time scale approach (I use the term climacogram for the latter), whatever their value is. After all, power spectrum, autocorrelation function and climacogram are one-to-one transformations of one another, so whatever properties are seen in one of them, they reflect in the others. The problem is that all, when estimated from data, are affected by biases and uncertainties. People often forget this.

        I think I can prove mathematically my above claims (as a start, you can see another comment I inserted here, which certainly shows that a blog is not the right place for mathematical proofs :-). I hope to be able to make a paper to contain proofs and further clarifications.

      • This is clearly not the place for technical debate, so I’ll see if this is clear.

        Consider a Gaussian process: it can clearly have any spectral (scaling) exponent beta in the range -Infinity<beta<Infinity (just take a white noise and give it a power law filter, any exponent you like). However all real space estimation techniques (such as the ASD, or Hurst's rescaled range – or the Haar fluctuations, or the Detrended Multifractal Fluctuation Analysis) can only yield exponents over a finite range. Outside these ranges (for the reasons I indicated earlier), they simply give spurious results in the sense that they do not reflect the underlying scaling exponent beta.
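
        Here is a minimal sketch (my own illustration, with arbitrary parameter choices) of the construction just described: take a white noise and give it a power law filter in Fourier space, yielding in principle any spectral exponent beta:

```python
import numpy as np

def power_law_noise(n, beta, rng=None):
    """Gaussian noise with power spectrum E(f) ~ f**(-beta), obtained by
    filtering white noise with |f|**(-beta/2) in Fourier space."""
    rng = rng or np.random.default_rng()
    white = rng.normal(size=n)
    freqs = np.fft.rfftfreq(n)
    filt = np.zeros_like(freqs)
    filt[1:] = freqs[1:] ** (-beta / 2.0)   # leave the zero (mean) frequency at 0
    return np.fft.irfft(np.fft.rfft(white) * filt, n)

# e.g. a macroweather-like series (beta ~ 0.4) and a wandering one (beta ~ 1.8)
cancelling = power_law_noise(4096, 0.4, np.random.default_rng(2))
wandering = power_law_noise(4096, 1.8, np.random.default_rng(3))
```

        Estimating beta back from such synthetic series with the different real-space techniques is then a direct way to check over which range each technique actually recovers the input exponent.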

        The problem of course is not the results themselves, but their interpretation!

      • Shaun, I think I disagree with both points that you write, (1) the mathematical consistency of unboundedness (the non-inequality you write) and (2) that boundedness reflects a limitation of the methods. The ASD can well give results out of the bounds, but what I try to emphasize is that such results are wrong because the unboundedness cannot hold—it is not consistent mathematically.

        But I agree, this is not the place to discuss it. What I propose is this. In a couple of months or so there is the EGU conference. I have submitted an abstract for a poster in one of the sessions you (co)organize. I will try to include the technical stuff related to this in the poster, so you may visit my poster to discuss it and see if we can resolve/bridge our disagreements.

        Meanwhile, if you can give me a rigorous example to include in my study, which you believe supports your claim, I will try to include it. I do not mean an algorithmic procedure (do this, do that). I mean, give me the analytical equation of a power spectrum (covering all frequency domain, from zero to infinity) or, if you prefer, the autocorrelation function (covering lags from zero to infinity) that supports your claim. To make the discussion more pragmatic, I propose to exclude runaway processes (e.g. those whose mean or variance tends to infinity with increasing time).

  7. Shaun, how about this: climate is the time evolution of the probability distribution of the state variable being examined. The analysis of the time series of the mean, variance, skewness, and kurtosis provides a means for classifying what we might call regimes of climate. We should make no a priori determination of what the relevant regimes are, as they should be derived from the observations. The 30-year ‘normal’ needs to be abolished as it imposes a constraint not consistent with the data.

    • I think that your comment is more or less answered in the second paragraph of this quote from “The climate is not what you expect” (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/climate.not.13.12.12.pdf)

      “There are two basic problems with the Twain/Heinlein dictum […”The climate is what you expect, the weather is what you get”..]…and its variants. The first is that they are based on an abstract weather – climate dichotomy, they are not informed by empirical evidence. The glaring question of how long is “long” is either decided subjectively or taken by fiat as the WMO’s “normal” 30 year period. The second problem – that will be evident momentarily – is that it assumes that the climate is nothing more than the long-term statistics of weather. If we accept the usual interpretation – that we “expect” averages – then the dictum means that averaging weather over long enough time periods converges to the climate. With regards to external forcings, one could argue that this notion could still implicitly include the atmospheric response i.e. with averages converging to slowly varying “responses”. However, it implausibly excludes the appearance of any new “slow”, internal climate processes leading to fluctuations growing rather than diminishing with scale.

      To overcome this objection, one might adopt a more abstract interpretation of what we “expect”. For example if the climate is defined as the probability distribution of weather states, then all we expect is a random sample. However even this works only inasmuch as it is possible to merge fast weather processes with slow climate processes into a single process with a well defined probability distribution. While this may satisfy the theoretician, it is unlikely to impress the layman. This is particularly true since to be realistic we will see that one of the regimes in this composite model must have the property that fluctuations converge whereas in the other, they diverge with scale. To use the single term “climate” to encompass the two opposed regimes therefore seems at best unhelpful and at worst misleading.”

      Does this answer your question?

  8. This recent NASA news story should be the undoing of the AGW story.

    http://science.nasa.gov/science-news/science-at-nasa/2013/08jan_sunclimate/

    • Oliver:
      Unfortunately, the global warming alarmists will find a way around the NASA article’s conclusions, perhaps by some further claim such as their statement that global warming is causing the rapid buildup of ice in the Antarctic (and yes, water freezes into ice when you heat it!). I don’t expect them to read Mr. Lovejoy’s article or the NASA item, because these are people that just won’t listen to opposing views or evidence that disproves their contentions. All they will do is try to suppress it and keep it from the general public. And the leftist-controlled news media will probably make sure that Mr. Lovejoy’s work or the NASA story don’t get out to the public

    • The article contains the names of several prominent climate scientists,

  9. Dr. Lovejoy, this is really interesting. I guess I should read the papers first, but thinking about your statement that “However, the reason for the agreement appears to be that over this period, most of the long term variability of both the data and the models are due to the anthropogenic forcings that are sufficiently strong that they dominate the natural variability”, I was wondering, how do you determine this for the data (measurement data, I suppose)? If this conclusion were based on agreement between measurements and model output, it could be false, since model development over the years has involved a lot of comparison to measurements.

    • That struck me, too. How do you determine this for the data?
      ==============

    • This is an excellent question; the answer has two parts. The first concerns the models. I’ve only looked at a few Last Millennium simulations (ECHO-G, EFS, Miroc, GISS-ER2), and mostly at the latter. There is no doubt that prior to 1900 (for example 1000-1900 or 1500-1900) the Last Millennium simulations give fairly low variabilities at multidecadal, but especially at multicentennial, scales. This is true for various volcanic and solar reconstructions and even more true of “control runs” (no “forcings”). However, over the period 1880-present they have on the contrary very high low frequency variability, and the only key change is the addition of anthropogenic Greenhouse gases (and aerosols). For the models the story is therefore fairly straightforward and agrees for example with the findings of van Oldenborgh et al 2012 that the main skill in the decadal scale GCM predictions is in their Greenhouse gas responses.

      The bigger problem is the data. Over the period 1880-present, the model and instrumental data variabilities are not too different (see for example the figure in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/esdd.Betts-3-C778-2012-3.pdf which is actually from GISS-ER2 last millennium simulations – not historical simulations). It is for the northern hemisphere, land only; hence its variability is a little higher than that of the global data curve (which includes the less variable ocean).

      However, to estimate the pre-1900 multidecadal and multicentennial variability we need proxies. Probably the best so far are the “post 2003” multiproxies (analysed in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/AGU.monograph.2011GM001087-SH-Lovejoy.pdf), which indicate that the pre-1900 natural variability is quite a bit lower (although still higher than that of the Last Millennium GCM’s; see http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/esd-2012-45-discussions.version3.SL.pdf).

      Certainly if we take the multiproxies at face value, then the variability over the period 1880-present is much higher than over the period 1500-1900. Of course this doesn’t prove that the cause is anthropogenic. However, there is nothing special about the variability of the solar or volcanic forcings over the period 1880-present compared to the period 1500-1900 (that is, if we accept the corresponding reconstructions, for this, see http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Sensitivity.2012GL051871.proofs.SL.pdf). Therefore, it is hard to see what else could be at work.

      References:

      van Oldenborgh, G. J., F. J. Doblas-Reyes, B. Wouters, and W. Hazeleger (2012), Decadal prediction skill in a multi-model ensemble, Clim. Dyn., doi: 10.1007/s00382-012-1313-4.

      • Willis Eschenbach

        Shaun, I fear that your argument fails basic logic. The question asked, which I also pointed to above, was in regard to your statement that:

        most of the long term variability of both the data and the models are due to the anthropogenic forcings that are sufficiently strong that they dominate the natural variability.

        In response, you say that 1) the models say that recent variability is greater than pre-1850 variability, and 2) solar and volcanic forcings haven’t changed, and therefore, 3) any increase in variability must be anthropogenic …

        Surely, as a scientist, you must see the huge logical holes in that chain of reasoning. But in case you don’t:

        1. The models have not been tested, verified, or validated. Their output is not evidence of anything but the prejudices, beliefs, and errors of the programmers. Your use of these, not as some kind of buttress to your argument but as your main and only argument, simply means you have a very inflated and incorrect idea of the value of climate model output.

        2. I don’t seem to find “it is hard [for Shaun Lovejoy] to see what else could be at work” in the list of logical supportive arguments. There are many things in the climate for which we have no explanation. Should we therefore assume that the cause of everything in the climate system for which Shaun Lovejoy lacks an explanation is “anthropological”?

        I thought I wouldn’t find anything odder in the way of logical claims than the folks who note that if you take a tuned climate model and pull out the anthropogenic forcings used to tune the model, it does poorly … which to them proves that humans done it. Unfortunately, the fact that the same model does just as poorly when you pull out the natural forcings doesn’t seem to have occurred to those folks … you can’t test a tuned model by pulling out part of the forcing used to tune it, duh.

        But that is less foolish than your argument, which is that if Shaun Lovejoy finds it hard to see what else might be at work, it must be humans …

        w.

      • Willis Eschenbach

        Oh, yeah, Shaun, regarding your statement that “it is hard to see what else could be at work” to increase recent variability in your results, I neglected to give you a partial list of what else could be at work.

        First, the computer models could be feeding you garbage. The odds are actually quite large on that one.

        Second, the forcings used in the pre- and post-1950 computer runs may be different.

        Third, there may be some simple (or complex) error in your mathematical calculations.

        Fourth, the data that you are using may be in error.

        Fifth, the “multiproxies” may be wrong. This one also is getting good odds from Jimmy the Greek.

        Sixth, you may just have happened to pick proxies or computer runs that support your claims, while others might falsify it.

        So no, it’s not hard to see what else could be at work. I’ve given you six things that could be at work just off of the top of my head. Give me a day and I’ll give you a dozen more.

        As a result, Shaun, I’d be extremely cautious of making your underlying claim, which was that if Shaun Lovejoy can’t figure out what else could be at work, it must be anthropogenic … you have demonstrated conclusively that that logic doesn’t work.

        w.

      • Willis Eschenbach

        Erratum: should have been “pre- and post-1850”, not 1950.

        w.

      • Science is not logic, it is inductive, not deductive. In science one must at least provisionally accept the simplest hypothesis consistent with the known facts (Occam’s razor). Refusing Occam’s razor is a failure of scientific logic.

        If we accept the data (including multiproxies) as roughly correct, then fluctuation analysis shows that the low frequency variability of the global temperatures from 1880-present is much larger than over the period 1500-1900. At the same time, the main natural forcings that have been proposed to account for the natural variability at these scales – solar output and volcanic eruptions – have variability (1880-present compared to 1500-1900) that has not changed by much. On the other hand, the observed changes in greenhouse gas concentrations and aerosols are large and could explain the change in the variability. Therefore one is obliged to accept – at least provisionally – the hypothesis that the latter is the cause.

      • Shaun Lovejoy, “Science is not logic, it is inductive, not deductive. In science one must at least provisionally accept the simplest hypothesis consistent with the known facts (Occam’s razor). Refusing Occam’s razor is a failure of scientific logic.”

        At what point do you sharpen the razor? The hypothesis is actually a lump of assumptions. The first is the easiest to verify, a doubling of CO2 or equivalent non-condensable WMGHGs will produce 1 to 1.5 C increase in the temperature of some radiant surface. Where is THAT surface and how much has THAT surface increased? Supposedly, that surface is below the tropopause at a temperature of roughly -25C degrees. Since we are dealing with radiant heat transfer, it would be nice if THAT surface was isothermal and encompassed the entire globe, but you can’t have everything since the energy to maintain that surface is supplied by a near ideal black body that only covers approximately 70% of the true surface and because of internal energy transfer “wall” transfer if you will, the closest you have is the “average” temperature and energy of the oceans to provide that energy in “night”, Tmin mode.

        Since the ocean average temperature is ~ 4C with an equivalent energy of about 334 Wm-2, which happens to be the “DWLR” estimated within a few Wm-2, and since that surface would provide the energy for the entire surface, producing an effective total energy of ~ 233.8 Wm-2, it seems like assumption number one could be tweaked a touch. Since there is no true isothermal surface to produce the “ideal” black body at the surface, and the minimum temperature and energy that meets that requirement on Earth is ~ 185K, ~67Wm-2 equivalent energy, it might be possible that the wrong thermodynamic frame of reference was selected.

        In the real world Occam’s Razor is “where did I screw up”? In thermodynamics, ASS U ME is Occam’s equivalent. Starting at the proper frame of reference can simplify most opportunities.

        BTW, “wall” energy transfer has the same impact as scattering on radiant transfer, remember it is not a purely radiant problem. When you have scattering or advective energy transfer above and below a poorly selected radiant frame of reference, things don’t go well. There is a check though with all the data available.

        http://berkeleyearth.lbl.gov/auto/Regional/TMIN/Figures/antarctica-TMIN-Trend.pdf

        The Antarctic Tmin, which happens to average below the -25C approximate ERL temperature, would respond inversely to radiant forcing at the true surface. Just like it should. You can also use Stratospheric temperatures which will decrease with radiant forcing change. After removing the low frequency internal variability you might find a less than anticipated impact. Quite likely because of the less than ideal black body that only covers ~ 70% of the planet. I think Solomon has a recent paper on that issue. Santers was also a touch mystified as I recall.

        Once you start considering thermal mass and heat capacity along with albedo, things might start looking up.

        This is just a blog comment though, take it with a grain of salt :)

      • :-) Forewarning. Comments/questions that follow are presented by one with limited formal education in physical science disciplines.

        Regarding “it is hard to see what else could be at work”, have you considered variables not included robustly in GCMs?

        In referring to 1880 – present, I’m inclined to infer that your reference to anthropogenic causes is a backhand reference to rising industrialization and CO2 emissions. Said rise in CO2 emissions is substantially correlated with rising human population, extensive land use change, and the emergence of seasonal coastal “dead zones” around the globe everywhere rivers outlet to oceans, etc. If I’m not mistaken, GCMs do not (or have not historically) model non-CO2 anthropogenic influences robustly (if at all).

        Do solar reconstruction studies and GCM modeling of solar influence adequately handle variations in the wavelength composition of TSI? All I have read about GCMs used for purposes of the IPCC’s 2007 AR4 report leads me to believe the magnitude of variation in TSI is likely greater than what has been observed in the satellite era and that effects of variation in radiation wavelength are not modeled at all. Since shorter wavelengths reaching earth’s lower atmosphere/surface appear likely to penetrate deeper and be absorbed by both soils and bodies of water, failure to model variation in the quantity of shorter-than-infrared radiation strikes me as committing omitted variable error.

        Every description of GCMs and every discussion of GCM outputs that I have read mentions one and only one heat source – the sun. Yet a nuclear furnace beneath our feet generates and releases “geothermal” heat into the oceans and, to a lesser extent, the atmosphere. Additionally, the release of that heat at mid-oceanic ridges drives “continental drift”, causing/contributing to earthquakes which also release either thermal or kinetic energy or both.

      • Thank you for your elaborate answer, dr. Lovejoy. It has not convinced me, however. The last sentence of my question was intended to mark the pitfall to avoid, but it seems that you stepped into it. Modelling has always been a cyclic process in which comparison to observations leads to hypotheses, model adjustment, etc. This appears to be largely ignored in most of the literature which treats observations in verification as if they were independent from the models. Wherever inverse modelling is applied, cyclic reasoning is never far away (in particular if you are not aware of it). Van Oldenborgh et al 2012 actually found very little skill in hindcasts beyond the linear temperature trends.

        About proxies etc.,surely you know a lot more about them than I do. But scanning your paper, comparing figs. 9 and 10 seems to show that the Haar fluctuation rms curve determined from recent data matches the rms curve determined over large time scales (Vostok) better if the recent trend is not removed? That would indicate rather that the 20th century temperature pattern is unexceptional.

        Again, I will definitely read your paper when I have some time; it seems well worth it.

      • Willis, the changeover from LIA to “recovery” is itself a mechanism which a) lacks explanation, and b) accounts for increase in both warming and variability. In other words, pre- and post-1850 are not the same natural regime.

  10. Philip Richens

    A question! Which 20th C solar reconstruction was used in the model on which the 1850-present comparison was based?

  11. If there’s a large difference in some “characteristic time constant” between the atmosphere and the oceans, you might expect to see a (temporally) intermediate regime with reduced variability.

    • Yes, the ocean, also being a fluid, has analogous “ocean weather” and “macro-ocean weather”, with the transition time scale being about 1 year rather than the ≈10 days of the weather – macroweather transition. This is straightforward to confirm empirically (see e.g. fig. 1 in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/qj.stolle.2012_1916_Rev1_EV_crx.pdf), and is easily explained by the ocean’s 100,000 fold lower energy fluxes. Fluctuations in macro-ocean weather decrease a bit faster than in macroweather, and out to scales of up to about 5 years there are deviations due to effects such as El Niño, but otherwise both atmosphere and ocean have qualitatively the same behaviours.

  12. Shaun, you write “It is worth recalling that fluctuation analysis only indicates the types and overall levels of variability at different time scales, it does not directly validate any specific model predictions.”

    I get the distinct impression that this report knew what the findings were supposed to be, BEFORE the study was done, and the report was written. Until any model has been properly validated, it cannot be used to make predictions. We know that no GCM has ever been properly validated. We know that when Smith et al used hindcasting to validate their model, and then used it to predict the future, the predictions turned out to be wrong.

    So any study that pretends to use GCMs to predict the future, is built on quicksand. We know that there is no empirical data that proves that when you add CO2 to the atmosphere from current levels, it causes global temperatures to rise. So there is no a priori reason to suppose that GHGs have an appreciable effect on global temperatures.

    I find the whole scientific argument to be unconvincing.

    • I agree, except, the argument is based on consensus science. Real science is always skeptical. So, I find the whole unscientific argument to be unconvincing.

    • I wrote “So there is no a priori reason to suppose that GHGs have an appreciable effect on global temperatures”

      This should be “So there is no a priori reason to suppose that ADDITIONAL GHGs have an appreciable effect on global temperatures. (Capitals to note the difference).

    • Philip Richens

      Jim: I don’t think the model runs are being used as projections/predictions here. Instead, I think what is being compared is the variation of fluctuation size with time scale. There is a discrepancy between fluctuation size in models and pre-20th century data (e.g. Ljungqvist) apparent after a few decades. This means that models do not reproduce the longer term variability in global temp very well. But it sounds as if this discrepancy hasn’t been found in a comparison with the instrumental record. Therefore 20th century variability may not be natural. I suppose one possible way out is if the variability seen in the model runs is caused by other factors, e.g. incorrect forcings. I’m not sure though.

      • Philip, you write “Jim: I don’t think the model runs are being used as projections/predictions here.”

        Maybe, but I can only go by what Shaun wrote. He wrote “predictions” as I quoted, so I can only suppose that he used predictions from the model. Maybe if he is following the discussion, he might be able to clarify.

      • Philip, keep in mind two points. First what you are calling the instrumental record is actually the output of a certain kind of statistical model, not a set of measurements. Satellite readings suggest these statistical models are wrong. Second the GCMs have been designed to match this output, so it is not surprising that they do and it may mean nothing physically, just one model matching another by design.

      • Steven Mosher

        david, satellite readings suggest no such thing. neither does reanalysis.
        neither do comparisons between CRN and the rest of the network.

    • Before responding, let me make my position clear.
      The outcome of the study was not known before it was done. I am not a GCM modeller and have no particular interest in vindicating these essentially classical deterministic models.

      Indeed, what I have found over the last few years in the course of preparing my book (for the preface: http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/Cambridge.Preface.16.1.13.pdf) was not at all black and white – and this is typical of real research. I have spent over thirty years working on natural atmospheric variability but mostly over weather time scales, trying to show that scaling laws – long range statistical dependencies – cover huge ranges of space and time scales (but in the weather domain, i.e. less than about 10 days). The types of models that I was validating were multifractal cascade models (stochastic, not deterministic, and still in my opinion the most fruitful approach) in which the variability builds up scale by scale over ranges perhaps as large as ten billion in space (roughly 1 cm to the size of the planet). Therefore for many years, my main criticism of weather and climate models was that they don’t take into account a nearly wide enough range of scales so as to have realistic variability.

      Starting in 2008 when I began analyzing the models – first weather forecasting models, and then GCM’s – I was therefore quite amazed that the models were able to generate anything close to what was required (for an “early” review: http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Atmos.Res.8.3.10.finalsdarticle.pdf). Although specific problems were identified, the models, data and multifractal cascades were actually close enough to make rather detailed comparisons possible. Finally, a slight change in the definition of fluctuations (to the Haar fluctuation, a kind of structure function) has, for many applications – especially climate – led to much greater clarity (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/Haar.npg-19-513-2012.final.pdf). As I mentioned, while key aspects of my earlier criticisms are valid – at a minimum the models need cascades for subgrid “stochastic parametrisations” – the statistical properties of the models are surprisingly realistic.

      Returning to your question: let me apologize, the comment that you quote was not clear enough. The issue here is not about predicting the future, the question is whether or not GCM’s – when combined with “realistic” forcings – have sufficient variability at different time scales to match the corresponding variability of the atmosphere (are they for example missing important “slow processes” such as land-ice?).

      The variability issue is pretty concrete. For example, the typical fluctuation in simulated (GCM) temperatures over the period 1500-1900 at 2 year time scales is about ±0.35, ±0.2, ±0.075 K when forced by, respectively, two different volcanic reconstructions and a solar only reconstruction. At 400 year scales, the figures are roughly ±0.125, ±0.075, ±0.075 K and decreasing with scale. In comparison, the multiproxies indicate variabilities of ±0.1 K, ±0.2 K (2 years, 400 years) and rapidly increasing (see fig. 7 in http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/esd-2012-45-discussions.version3.SL.pdf), so that there are significant – but not huge – mismatches. Actually the main problem is that analysis of the reconstructed forcings (volcanic, solar) indicates that the GCM variabilities will likely continue to drop at longer and longer time scales, so that the mismatch with the data will likely increase.

      The point is that if the models and data disagree about the variability then they can’t agree about the actual temperature record. Even when they do agree about the variability, they could still be totally uncorrelated (unrelated) to each other! In other words, the agreement of their variabilities is a necessary but not a sufficient condition for the GCM to agree with the data.

      I hope this is clearer.

  13. The normal climate is always changing from warm to cold to warm and this does repeat. The Roman Warm Time was followed by a cold time and that was followed by the Medieval Warm Time and that was followed by the Little Ice Age and that was followed by the Modern Warm Time and the snows that always fall in a warm time have started to build the ice necessary for the next cold time that will be similar to the Little Ice Age. Earth’s climate does work this way. Look at the Actual Data.

  14. If you look at the last ten thousand years and compare the data, what is happening now is much like what happened in every warming period before now. Why do they think that they can convince reasonable people that it would not have happened this time without people?

    • I’ve looked at a lot of the data, paleodata, model outputs, and from a different point of view than is usually done.

      The usual approach is to attempt to get agreement between different measurements (and often proxies) at specific places and times. For example, until now we have asked questions like: “when exactly was the medieval warming event in England and how warm did it get?”. We all know about the complexities involved in giving a precise answer to that!

      The approach based on fluctuations is rather different: it asks the question “do different sources of data (and/or paleodata and/or simulations, and/or “reconstructions”) agree about the magnitude of the variability at a given time scale?” The reason that this is a better question is that it is possible to be much more precise about the typical magnitude of temperature fluctuations at, say, decadal scales than about the actual temperature values! For example, a comparison of 4 different global surface data sets shows that they only agree with each other to about ±0.1°C (≈ ±0.2°F), and it turns out that this doesn’t get better by averaging over longer times! That means that we just don’t know the average temperature over the last decade – or the average over the last century – to better than ±0.1°C! However, the same 4 series agree about the amplitude of the average decadal scale fluctuation (i.e. the variability at decadal scales) to within ±0.02°C (this is work in progress, but the numbers are correct)!

      The problem is that the standard way of validating data, paleodata and models has been deterministic (i.e. “what is the temperature at a given time and place?”); it has been too demanding, too ambitious. Since the study of the amplitudes of the fluctuations is statistical, it has the disadvantage that it gives a necessary – but not a sufficient – condition for two series to be the same. However, given the problems with data and models, at the moment this is not much of a practical restriction!

  15. IMHO, these are just more statistical studies that don’t prove anything. Everyone here knows that since the Little Ice Age, the NH is warmer, and before the LIA, the NH was warmer. Why?

    Land use changes, i.e. more irrigation and more concrete, glass, asphalt, and heat generating devices will increase temperatures somewhat. I call that AGW, but 10 extra molecules of CO2 per 100k molecules, where is the proof?

    • I call that ALW (local), not good enough for global. Other than that, I agree. Those changes are associated with the variations in solar activity, by the way.

      It’s well known that climate changes (at multi-decadal time scales), it’s well known that for example the colder periods are associated with the prolonged minimums of solar activity, the minimums like the Oort, Wolf, Maunder etc.
      http://upload.wikimedia.org/wikipedia/commons/5/5c/Carbon14_with_activity_labels.svg
      http://upload.wikimedia.org/wikipedia/commons/2/28/Sunspot_Numbers.png

    • The macroweather and climate distinction makes this easier to see. In macroweather fluctuations tend to cancel so that their averages decrease. However, over long enough periods – climate scales – the fluctuations start to increase very much the way they do in the weather regime (see the fig. in the original blog). Therefore, events such as the LIA are quite literally analogous (in a precise mathematical way) to events such as storm systems – or more to the point – cold spells lasting many days.

  16. Willis Eschenbach

    Judith, you keep posting stuff from people who make ludicrous claims. This guy says (emphasis mine):

    But what about the current period with strongly increasing levels of greenhouse gases? Although we’ve only started looking at this, initial results comparing fluctuation analyses of surface temperature measurements and historical GCM runs (1850-present) show that there is a good level of agreement in their multidecadal and centennial scale variabilities [Lovejoy et al., 2012b]. However, the reason for the agreement appears to be that over this period, most of the long term variability of both the data and the models are due to the anthropogenic forcings that are sufficiently strong that they dominate the natural variability.

    Now I’m sorry, Judith, but the idea that since 1850 the anthro forcings are “sufficiently strong that they dominate the natural variability” is nothing but an uncited, uncorroborated claim. As far as I know, it is completely untrue, and I know of no reputable climate scientist who says that, for the first century of that period, the anthropogenic forcings outweighed the natural variability.

    In fact, that is the exact question that we have been struggling mightily to determine for a couple decades—do human effects outweigh natural variability … and your guy comes in and claims his method is right because (through some mysterious method) he has determined that anthro forcings outweighs natural variability for the period 1850-1950?

    Really?

    This one goes straight to the circular file. Anyone who believes that anthro forcings dominated from 1850-1950 has not done his homework. Too bad, it seems like an interesting idea. Perhaps an actual scientist will take it up, instead of this credulous fellow …

    w.

  17. The International Meteorological Organization did not “define the first climate normal from 1900 to 1930”, but rather defined climate “as the average weather from 1900 to 1930”. Neither IMO-1930, nor science ever since, has offered a reasonable, scientifically useful definition of “weather”. The American Meteorological Society (AMS) explanation of “weather” is: “The state of the atmosphere, mainly with respect to its effects upon life and human activities”. Subsequently the AMS says:
    ___The “present weather” table consists of 100 possible conditions,
    ___with 10 possibilities for “past weather”, while
    ___popularly, weather is thought of in terms of temperature, humidity, precipitation, cloudiness, visibility, and wind. (Archive at: http://www.whatisclimate.com/ ).

    The AMS even forgets to mention how many weather conditions make up “future climate”. Choose as you like, whichever fits you best. That is climate!?
    Since ancient times, “weather” and “climate” have been layman’s terms. Without reasonable definitions, they should be regarded as scientifically insufficient.

    • thisisnotgoodtogo

      ArndB,
      From my observations of how this all winds up, layman’s terms are more correct because they are less wrong, being by nature assumed to be less precise and accurate.

    • My point is that they chanced upon a duration which corresponds roughly to that at which temperature fluctuations are at a minimum. This is an empirically verifiable fact, although the value 30 years is only a rough average since it really varies quite a lot from one geographical location to another and from one epoch to another.

    • Willis Eschenbach

      Shaun, you say (emphasis mine):

      My point is that they chanced upon a duration which corresponds roughly to that at which temperature fluctuations are at a minimum. This is an empirically verifiable fact, although the value 30 years is only a rough average since it really varies quite a lot from one geographical location to another and from one epoch to another.

      I have pointed out and provided spreadsheets showing that I can’t find this claimed minimum in either the Armagh or the HadCRUT data.

      Perhaps the problem lies in what you call “temperature fluctuations”. I am using the standard deviation as a measure of the temperature fluctuations. Is that what you are using? Because I can’t find a significant local minimum in standard deviation, at 30 years or anywhere, for either dataset. Here are the results again for the Armagh dataset.

      http://i863.photobucket.com/albums/ab195/weschenbach/armaghstdevbysamplelength_zpsbb57a808.jpg

      Nor do I find an increase in standard deviation from the earliest times (1796 in the case of the Armagh record) to modern times. You say that anthropogenic influences are increasing the fluctuations, but I can’t find evidence of that anywhere.

      What am I missing?

      w.

  18. In this context, statistical variability equals ‘risk’ and nothing more. Where a developing science adopts the belief that variability can and should be controlled, there must also be a belief that ‘time’ is a factor. But in a science of ‘weather vs. climate’, the factor of ‘time’ is already controlled by the ordinary meaning we give to the language we are using.

  19. It all went wrong when they changed “global warming”, which is simple at least in concept, to “climate change”. Climate is regional, as in “The Mediterranean climate is good for growing citrus fruits and grapes.” [Cambridge Dictionary example]. A “global climate” is possible as a concept, but difficult in reality unless used to mean global average temperature.

    Cambridge Dictionary defines Climate Change as: “the way the world’s weather is changing”. Says it all, really.

    • Within the walls of Western academia there is a refusal to have a serious scientific debate about how much global warming is entirely natural. Since, however, maybe all of it is natural, the debate has morphed into speculation that human CO2 is causing the Earth to experience bizarre and unpredictable weather phenomena – so much so that early apple blossoms in Washington this spring are as worrying as the UK’s elderly burning books to keep warm throughout the coming winter. And the fix for all of this is so simple: UN representatives from around the world should fly to places like Cancun to talk about how best to stop Americans from driving SUVs.

  20. Willis Eschenbach

    As a simple test of Shaun’s hypothesis that “most of the long term variability of both the data and the models are due to the anthropogenic forcings that are sufficiently strong that they dominate the natural variability”, I looked to see if the variability of the climate had increased over the period of the instrumental record. Shaun’s hypothesis was that natural forcings were pretty stable, so any increase in variability must be due to humans.

    To test this hypothesis, I used the HadCRUT data, and I looked at the standard deviation over trailing 1, 5, 10, 50, and 100 year periods. In EVERY CASE variability decreased over the period of record.

    This is the exact opposite of what we would expect, given that the anthropogenic influence has undoubtedly increased over the period. By Shaun’s hypothesis, increasing anthropogenic effects should reveal themselves as increased variability in the modern parts of the temperature record … but there is no such signal in the HadCRUT data. So the HadCRUT instrumental data doesn’t support his hypothesis.

    Now I freely admit that this may be an artifact of the data processing used to create the dataset, or the choice of the dataset itself, or the scattered and fragmentary nature of the early temperature records … but if so, then Shaun’s conclusions must be subject to the same artifacts and problems, and if so, how can we place any credence in them?

    w.
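
    For concreteness, here is a minimal sketch of this sort of trailing-window test (the file name and column layout are hypothetical stand-ins, not the actual HadCRUT format):

    import pandas as pd

    # Hypothetical input: monthly global-mean anomalies with columns 'date' and 'anomaly'
    # (this is NOT the real HadCRUT file layout; adapt the loading step to whatever you use).
    df = pd.read_csv("monthly_anomalies.csv", parse_dates=["date"]).set_index("date")

    for years in (1, 5, 10, 50, 100):
        window = 12 * years
        trailing_sd = df["anomaly"].rolling(window).std().dropna()
        if len(trailing_sd) < 2 * window:
            continue  # record too short for this window length
        early = trailing_sd.iloc[:window].mean()   # typical SD early in the record
        late = trailing_sd.iloc[-window:].mean()   # typical SD at the end of the record
        print(f"{years:>3}-year windows: early SD ≈ {early:.3f} °C, late SD ≈ {late:.3f} °C")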

    • If one were to assume that anthropogenic influence affects climate/weather in some way (supposedly by increased temperatures and CO2 concentrations in the atmosphere), then in what way could variability (which works both ways on the data) be increased?

      • It has been impossible to tease out a statistically significant human signal amidst the error of the variance. The fix for that is simple: start blaming humans for the variance. It is no longer important who is right or wrong about whether this is legitimate science. It’s more important to first agree on what we know is right and then figure out how to arrive at that conclusion – it saves a lot of time.

      • Good question. I didn’t explain this carefully. If there is an overall increasing or decreasing trend in the data, then this can easily dominate the long time fluctuations, the low frequencies.

        It turns out that even if we remove linear trends (for example by using more sophisticated fluctuation definitions that are blind to linear trends), the variability is still higher in the 1880-present period than in the 1500-1900 period. This indicates that there are nonlinear trends at the longest time scales (in this case centennial: 1880 to present).

      • Thanks for your response Shaun. I was thinking that the anthropogenic influence could be construed as causing a shift in the underlying trend rather than just an increase in variability.

      • Peter:
        In as much as the term “trend” usually implies a systematic change over a substantial part of a record, by definition, it will affect the low frequency variability of the record.

    • If I understand your method, you calculated the Aggregated Standard Deviation (ASD) as a function of aggregation period (Koutsoyiannis’s jargon)? If so, then this can indeed only decrease; this is precisely why this is an inappropriate method of estimating fluctuations anywhere except the macroweather regime (they increase in the weather and climate regimes)!

      In fact, since your analyses showed a decrease, this pretty much proves that you used such a technique. My analyses (which include the HadCRUT3 data you used) use a fluctuation definition which allows for both decreases and increases in the amplitudes: the Haar fluctuation, which over a lag Δt is the difference between the average over the second half of the interval and the average over the first half. To make the interpretation simpler it should be “calibrated”; an extra factor of 2 is pretty accurate, so that it can be interpreted in terms of differences (when increasing) and in terms of averages (when decreasing).

      It is very close to the ASD but has an extra differencing in the definition which is crucial! (See my above comment to David Hagen and http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/Haar.npg-19-513-2012.final.pdf).
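
      For anyone who wants to experiment, a minimal sketch of that definition (illustrative only, not tied to any particular data set):

      import numpy as np

      def haar_fluctuations(x, lag, calibration=2.0):
          """Haar fluctuations of a regularly sampled series x at a lag of `lag` samples:
          for each window of length `lag`, the mean of the second half minus the mean of
          the first half, times a calibration factor (a factor of about 2 lets the result
          be read as a difference when fluctuations grow with scale, and as an average
          when they shrink)."""
          half = lag // 2
          out = []
          for start in range(0, len(x) - 2 * half + 1, half):   # windows shifted by half a lag
              first = np.mean(x[start:start + half])
              second = np.mean(x[start + half:start + 2 * half])
              out.append(calibration * (second - first))
          return np.array(out)

      def rms_fluctuation(x, lags):
          """Root-mean-square Haar fluctuation amplitude at each lag."""
          return {lag: float(np.sqrt(np.mean(haar_fluctuations(x, lag) ** 2))) for lag in lags}

      # A wandering series (random walk) has RMS fluctuations that grow with lag,
      # while a cancelling series (white noise) has RMS fluctuations that shrink with lag.
      rng = np.random.default_rng(1)
      print("walk :", rms_fluctuation(np.cumsum(rng.normal(size=4096)), [8, 64, 512]))
      print("noise:", rms_fluctuation(rng.normal(size=4096), [8, 64, 512]))

      The aggregated standard deviation can only fall with scale; the extra differencing here is what lets the amplitude rise again in the wandering regimes.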

      • Koutsoyiannis et al.: “Remarkably, during the observation period [which was 100-140 years,] the 30-year temperature at Vancouver and Albany decreased by about 1.5°C, while all models produce an increase of about 0.5°C … With regard to precipitation, the natural fluctuations are far beyond ranges of the modeled time series in the majority of cases … At the annual and the climatic (30-year) scales, GCM interpolated series are irrelevant to reality.”

      • My comments about the necessity of decreases refer to the actual standard deviation of the aggregated series as the length of the series is increased. You must be referring to something slightly different.

      • Willis Eschenbach

        Shaun, thanks for the reply.

        I looked at the standard deviation of the data over trailing 1, 5, 10, 50, and 100 year periods. I looked at all possible periods in the data, overlapping. I didn’t take anything “as a function of aggregation period”; I wasn’t trying to do anything special. I just took plain old garden-variety standard deviations as a measure of the variation of the data over each period, and I noted how each one of those changed over time.

        Now, my understanding of your theory was that the variation is increased by the anthropogenic influences. But I don’t find that, at any length of analysis from 1 to 100 years. Instead, I find the variation decreasing as (presumably) anthropogenic influences rise. The largest variance occurred during the last half of the 19th century, well before the late 20th century increase in anthropogenic GHGs.

        I am still at a loss to understand why this simple test wouldn’t show the increase in variation that you say comes from the putative anthropogenic signal. Any suggestions welcome.

        w.

        PS—If I have learned anything writing for the web, it is to avoid saying something like what you said to me:

        In fact, since your analyses showed a decrease, this pretty much proves that you used such a technique.

        In fact, I didn’t use the technique you referred to at all, or anything like it.

        So all your “proof” turns out to prove is that despite the fact that you didn’t understand the situation, you still had absolute certainty that you were right … which doesn’t bode well for the certainty with which you have stated the rest of your claims, like the certainty you seem to have that you have been able to split the temperature data into natural and anthropogenic portions.

        Like I said, I’ve grown much more cautious after making that mistake more than once … more than twice, in fact …

      • Willis and shaun. just post your code instead of trying to describe it in words. You’ll save yourselves some time and effort.

      • Willis Eschenbach

        Steven Mosher | January 21, 2013 at 10:33 pm |

        Willis and shaun. just post your code instead of trying to describe it in words. You’ll save yourselves some time and effort.

        Thanks, Steven. Yeah, you’re right … but then all I’m doing is calculating the standard deviation, hardly rocket science.

        w.

      • Allowing for increases and decreases certainly sounds good, but Koutsoyiannis et al. made the interesting finding that 95% of all GCMs predicted more global warming than actually occurred. More interesting, though, is the Vancouver/Albany example because, while the prediction was more than the actual change, we’re also looking at a change of sign. You are not picking that up. That is a major shortcoming. Depending on the circumstances, that may be the difference between experiencing a rough landing and pushing up daisies.

      • Perhaps I have missed something, but the issue is not the jargon or the algorithmic processing. The issue is to define the quantities we use. I think the variance (and the standard deviation; I prefer these concepts/terms to Delta-x etc.) cannot increase with increasing time scale.

        I am sorry that I have to use some mathematics to support my claim (you may feel free to skip this paragraph). Hopefully it is readable and correct :-) Let us assume a stochastic process x(t) in continuous time t. Let us denote C(s) its autocovariance function for lag s. Let X_T denote the time-averaged process at time scale T. Then the variance is (cf. Papoulis, Probability, Random Variables and Stochastic Processes, 1991, p. 299) Var[X_T] = (2/T^2) S_0^T (T – s) C(s) ds, where by “S_0^T” I denote the definite integral from 0 to T. Taking its derivative with respect to time scale T, call it A[T], we find (after manipulations) A[T] := d(Var[X_T])/dT = (4/T^3) S_0^T (s – T/2) C(s) ds. The first term within the integral, (s – T/2), is symmetric with respect to T/2 (negative for s < T/2 and positive for s > T/2). The second term is typically a decreasing function of lag s. Thus, its values for s < T/2 are greater than those for s > T/2. Clearly then the negative product (for s < T/2) prevails over the positive product (for s > T/2) and thus A[T] is negative. A negative derivative means that the quantity Var[X_T] is a decreasing function of T.

        Of course one can define whatever quantities one wishes, and depending on the definition there can be decreasing, increasing, fluctuating behaviour, etc. But (as I explained in another comment) I stick with the simplest possible statistical quantities, in this case with the variance, whose statistical properties, including the bias and uncertainty of estimation, are known or are feasible to infer.
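
        (A quick numerical check of the claim, using an AR(1) surrogate with positive autocorrelation as a stand-in for any positively correlated series; nothing here is climate data.)

        import numpy as np

        rng = np.random.default_rng(2)

        # AR(1) process with positive autocorrelation: x[t] = phi * x[t-1] + noise
        phi, n = 0.8, 200_000
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal()

        # Variance of the time-averaged process X_T for increasing aggregation scales T.
        for T in (1, 2, 5, 10, 50, 200):
            m = n // T
            block_means = x[: m * T].reshape(m, T).mean(axis=1)
            print(f"T = {T:>3}: Var[X_T] ≈ {block_means.var():.3f}")

        The variance of the aggregated process falls steadily as T grows, as the derivation above requires.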

      • It appears that Demetris has shown that Willis’ reducing standard deviation as a function of increasing time is mathematically true for all cases. This reflects the fact that the more data one has, the less variability there will be and the more confident one can be for prediction purposes.

      • Willis Eschenbach

        Peter Davies | January 22, 2013 at 8:24 am |

        It appears that Demetris has shown that Willis’ reducing standard deviation as a function of increasing time is mathematically true for all cases. This reflects the fact that the more data one has, the less variability there will be and the more confident one can be for prediction purposes.

        Thanks, Peter. I fear I don’t see that conclusion anywhere in what Demetris says. Why would there be less variability the more data one has? The standard deviation in 1,000 throws of a coin is the same (within measurement error) as the standard deviation in 10,000 throws.

        So no, there is no requirement that the standard deviation of a dataset goes down if the number of data points goes up. The null hypothesis, absent Hurst correlation, is that there would be no change in the SD with increasing N. You can test it yourself. Take a big dataset, and sub-sample it. You’ll see that for any reasonable N, you’ll get the same SD (within error) as the whole dataset has.

        Curiously, for natural phenomena with high Hurst coefficients, the reverse is true. The standard deviation generally goes up when you use longer and longer time periods. You can see that, for example, in the HadCRUT data, where the 100-year standard deviations (≈ 0.3°C) are higher than e.g. the 10-year standard deviations (≈ 0.15°C).

        Finally, however, none of this has anything to do with whether the standard deviation (at whatever timescale you pick) is increasing over time, as Shaun claims. My investigation shows that is not true, at least for the HadCRUT data.

        w.

      • Willis:

        At first glance, your variance calculations would seem to challenge the construct of “Global Climate Disruption”.

      • Willis, I generally agree with what you say. The standard deviation at a specified time scale, say annual, should not change over time. What I said is that it should decrease over time scale, i.e. if we move from the annual to the 2-year scale.

        The “curious” thing you write – that the standard deviation calculated from a record goes up with increased record length – applies to the sample estimate, not the true standard deviation, and has a simple explanation. That is, the standard statistical estimator of the standard deviation is negatively biased in the presence of positive autocorrelation. This applies even to a Markov (AR1) model, but the bias is even larger in a Hurst-Kolmogorov process. People usually neglect the bias, but it may be very high and its neglect may result in a large error. The bias is a decreasing function of the record length, so with a longer record our biased estimate becomes closer to the true value.

        Quantification of the bias and mathematical proofs can be found in some of my papers, e.g. http://itia.ntua.gr/511/ and http://itia.ntua.gr/781/
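
        (A small simulation of that bias, again with an AR(1) surrogate; the point is only the sign of the bias and its dependence on record length, not any particular numbers.)

        import numpy as np

        rng = np.random.default_rng(3)
        phi = 0.9
        true_sd = 1.0 / np.sqrt(1.0 - phi ** 2)   # stationary SD of x[t] = phi*x[t-1] + N(0,1)

        def ar1(n):
            x = np.empty(n)
            x[0] = rng.normal(0.0, true_sd)        # start in the stationary distribution
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.normal()
            return x

        for n in (30, 100, 300, 1000):
            estimates = [ar1(n).std(ddof=1) for _ in range(500)]
            print(f"record length {n:>4}: mean sample SD ≈ {np.mean(estimates):.3f} (true SD = {true_sd:.3f})")

        The sample SD comes out below the true value, and the shortfall shrinks as the record gets longer, which is the behaviour described above.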

      • Have you seen anything that would cause you to rethink your previous conclusion that, “observed weather [since 2001] falls well outside the 95% [actually better than 97.5%] confidence intervals for trends that might occur if the true trend is 2 C/century [per IPCC projections]”? (Demetris Koutsoyiannis et al., On the credibility of climate predictions)

        If not, the IPCC’s trends are rejected. There is a better than 39 out of 40 chance that an underlying trend of 2C/century is way too high, and accordingly the IPCC’s projections are falsified.

      • Steven Mosher

        Willis, if it’s hardly rocket science then you have no reason not to.
        The simple fact is you and Shaun failed to communicate. He thought you did x and you claim you did y. I doubt you did y, because I did y and got a different answer.

      • Willis Eschenbach

        Steven Mosher | January 22, 2013 at 3:46 pm

        Willis, if it’s hardly rocket science then you have no reason not to.
        The simple fact is you and Shaun failed to communicate. He thought you did x and you claim you did y. I doubt you did y, because I did y and got a different answer.

        Thanks, Steven. Of course you are right. My spreadsheet is here.

        w.

      • Thanks Willis and Shaun. I certainly agree with your position with respect to the SD. My wording was a bit loose (tends to happen in blogs); I really meant more data over a longer time frame, because one of my pet dislikes on this blog is the extent of the extrapolation being done by both sides of the AGW debate with small time-series data sets.

      • Steve Mosher:

        The code for the Haar fluctuation analysis (in MatLab and Mathematica) may be found on my web site:
        http://www.physics.mcgill.ca/~gang/Lovejoy.htm

        Willis:

        The point about the decrease in the standard deviation of the means of an increasingly long series is mathematics, that’s why it can be stated with confidence!

      • Shaun Lovejoy | January 23, 2013 at 7:20 am | wrote: “The point about the decrease in the standard deviation of the means of an increasingly long series is mathematics, that’s why it can be stated with confidence!”

        Unfortunately, you may find that an overwhelming number of climate discussion participants do not distinguish between the aggregate mathematical properties of data & statistics. This catastrophically limits the scope for sensible discussion. Thanks sincerely for bringing some aggregate enlightenment to the discussion table — quite refreshing.

  21. Robert I Ellison

    A minor point – these models are not strictly deterministic but chaotic given uncertainties in both initial and boundary conditions.

    ‘Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.’ http://rsta.royalsocietypublishing.org/content/369/1956/4751.full

    Weather and climate are deterministic – although dynamically complex enough to resist easy comprehension.
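
    As a toy illustration of that sensitive dependence, here is a minimal integration of the Lorenz-63 equations from two nearly identical starting points (standard textbook parameters; nothing here is tuned to the real atmosphere):

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One fourth-order Runge-Kutta step of the Lorenz-63 system."""
        def f(s):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        k1 = f(state)
        k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2)
        k4 = f(state + dt * k3)
        return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])      # a minute perturbation of the initial condition

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(f"t = {step * 0.01:5.1f}: separation = {np.linalg.norm(a - b):.3e}")

    Both trajectories are fully deterministic, yet the initially negligible separation grows until it is comparable to the size of the attractor, which is exactly the point of the quoted passage above.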

    I am a little pressed for time at the moment – and have not given your argument the attention it deserves. Nonetheless, my point concerns the accuracy of forcings as they are currently estimated. The following is a graph purloined from Norman Loeb. It shows a couple of things: substantial year-to-year variability of cloud cover, and the hint of a longer-term trend in SW cloud radiative forcing. It is, by the way, consistent with the warming shown in the ARGO data by Karina von Schuckmann over the period.

    http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=CERES_MODIS.gif

    There is more than a suggestion also of decadal changes in the earlier ERBS and ISCCP data. The following is quoted from Takmeng Wong and colleagues – http://www.image.ucar.edu/idag/Papers/Wong_ERBEreanalysis.pdf

    With this final correction, the ERBS Nonscanner-observed decadal changes in tropical mean LW, SW, and net radiation between the 1980s and the 1990s now stand at 0.7, -2.1, and 1.4 W m^-2, respectively, which are similar to the observed decadal changes in the High-Resolution Infrared Radiometer Sounder (HIRS) Pathfinder OLR and the International Satellite Cloud Climatology Project (ISCCP) version FD record but disagree with the Advanced Very High Resolution Radiometer (AVHRR) Pathfinder ERB record. Furthermore, the observed interannual variability of near-global ERBS WFOV Edition3_Rev1 net radiation is found to be remarkably consistent with the latest ocean heat storage record for the overlapping time period of 1993 to 1999. Both datasets show variations of roughly 1.5 W m^-2 in planetary net heat balance during the 1990s.

    I find the last 2 sentences especially telling – as the forcing exceeds the estimated greenhouse gas forcing in the period by some considerable margin. The positive LW anomaly shows a cooling trend. To help interpret the numbers – LW and SW are positive out (cooling) and Net is positive in (warming) by convention. These are changes in radiant flux anomalies at TOA and are the primary cause of planetary warming or cooling – and the simplest explanation is changes in low level cloud cover linked to variability in ocean and atmosphere circulation.

    In the instrumental record we have quite solid evidence for decadal variation in ocean and atmosphere patterns of standing waves in the climate system known as the PDO, ENSO, AMO, etc. These are best seen as nodes that capture aspects of variability in the underlying complex and dynamic climate system. They have impacts on surface and ocean temperature that are poorly modelled and little understood in any quantitative sense. But we know that they augment and counter any warming from greenhouse gas increases.

    Beyond that is centennial and millennial variability, the causes of which can be vaguely inferred from proxy records and comparison to the present day. The following graph is based on a high-resolution millennial ice core salt content ENSO proxy by Tessa Vance and colleagues. It is based on teleconnections between the Southern Annular Mode and ENSO. Vance suggests that ENSO influences the tracks of storms spinning off the Southern Ocean. Alternatively, there are suggestions that the SAM may be driven by UV/ozone interactions in the stratosphere – and may itself drive ENSO, with more or less cold water flowing north along the Peruvian coast. More salt equals more La Niña – and more rainfall for Australia.

    http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Vance2012-AntarticaLawDomeicecoresaltcontent.jpg

    It makes me wonder what more intense and frequent La Niña – seen as a node in the global system – over a few centuries implies for the energy budget of the planet. It suggests the possibility that we are on the threshold of Bond Event Zero and that centennial cooling is on the cards.

    On the timescale of the Holocene even more fundamental variability is seen. The following is from Anastasios Tsonis’ paper on the demise of the Minoan civilisation. The graph follows Moy et al (2002), who constructed an 11,000 year ENSO proxy based on red sediment in a South American lake. It shows extreme variability in ENSO – and therefore global hydrology and climate.

    http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif

    As well as the long drought that doomed the Minoans – it shows the shift from La Niña to El Niño dominance some 5000 years ago that initiated drying of the Sahel in a climate shift that changed the course of human history.

    The world is still not warming – and may not for a decade or three more – something that suggests that these decadal climate modes are at least as strong as greenhouse gas warming. There is no guarantee either that the modes will repeat the warmer-to-cooler-to-warmer pattern of the 20th century.

    ‘The richness of the El Nino behaviour, decade by decade and century by century, testifies to the fundamentally chaotic nature of the system that we are attempting to predict. It challenges the way in which we evaluate models and emphasizes the importance of continuing to focus on observing and understanding processes and phenomena in the climate system. It is also a classic demonstration of the need for ensemble prediction systems on all time scales in order to sample the range of possible outcomes that even the real world could produce. Nothing is certain.’ Slingo and Palmer (2012) op. cit.

    You will forgive me – but until and unless we get a much better handle on TOA radiant flux and the reasons for and limits of variability I will take your conclusion with a grain of salt.

    • Chief, if this response was done when pressed for time, I imagine you could write a book if you had a day off!

      :)

    • Robert I Ellison | January 21, 2013 at 7:18 pm | ..

      “On the timescale of the Holocene even more fundamental variability is seen. The following is from Anastasios Tsonis’ paper on the demise of the Minoan civilisation. The graph follows Moy et al (2002), who constructed an 11,000 year ENSO proxy based on red sediment in a South American lake. It shows extreme variability in ENSO – and therefore global hydrology and climate.

      http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=ENSO11000.gif

      As well as the long drought that doomed the Minoans – it shows the shift from La Niña to El Niño dominance some 5000 years ago that initiated drying of the Sahel in a climate shift that changed the course of human history.”

      In this BBC programme series “Orbit Earth’s Extraordinary Journey” http://www.bbc.co.uk/programmes/b00xztbr
      “Join Kate Humble and Dr Helen Czerski as they explore the relationship between the Earth’s orbit and the weather.”, they said that the Sahara would re-green in 15,000 years time.

  22. “When the International Meteorological Organization defined the first climate normal from 1900 to 1930,”

    In retrospect they could not have picked a worse period for defining a normal climate, because the global temperature was increasing at a rate of about 0.1C per decade during that period, and this rapid increase in temperature did not stop until 1940.

    “Today, we still use the 30 year ‘normal’ period – referring now to 1960-1990 –”

    Again a poor choice for a ‘normal’ period, as in 1960 the temperature was below the 1940 level while in 1990 it was well above it.
    But 30-year periods seem about right for establishing periods when the temperature was either in constant change or just constant: 1910 – 1940, 1940 – 1970, 1970 – 2000, 2000 – . If the system has a natural frequency it seems to be about 30 years. You would expect it to have some long period because of the time it takes for the oceans to reach some sort of equilibrium with the atmosphere.

  23. Arguments on climate hinge on uncertainty of data. Pooh, Dixie posted (16/01/13 @ 12.30 pm on the Hansen thread) a link to a new paper by P. Bjornbom that finds that global warming due to increased CO2 is much less than claimed by the IPCC, and that confirms the findings of Spencer et al. and Lindzen & Chou that a doubling of CO2 levels would lead to a global surface temperature rise of about 0.18C, compared to the 3.0C claimed by IPCC models.

    This led me to a rereading of a paper posted on the Air Vent in March 2010, on historic variations in CO2 measurements, posted by Tony Brown, which presents a strong case for reassessing the modern assumptions about CO2 data.

    http://noconsensus.wordpress.com/2010/03/06/historic-variations-in-co2-measurements/

    What is, perhaps, not always recognised is that CO2 was not first understood and measured by Charles Keeling but was preceded by a long history of accurate measurement since it was first identified in 1756 by Joseph Black, as documented in this paper. What emerges is that:
    *From Victorian times, readings of CO2 were widespread and frequent.
    *They were taken by established scientists using reliable methods.
    *Levels varied, consistently around 310 ppm, but values above 330 ppm were frequently attained (allowances for gas lamps identified).
    *Averaging disguises the range.
    *European CO2 levels immediately prior to Keeling’s data appeared to be around 30 ppm higher than he recorded.

    The IPCC seems to have dismissed the careful measurements of around 130 scientists and uncritically accepted the measurements of the amateur meteorologist Keeling, which were criticised as selective, based on poor methods (e.g. Slocum) and later rejected by Keeling himself (p. 13). The IPCC also seems, uncritically, to base its assumptions of unprecedented warming on unusually high CO2 levels on uncertain proxy ice core data that conflict with leaf stomata data that may be more valid.

  24. Apology, an edit in para 3 ‘What is, perhaps, *not* always recognised…’

  25. I’m not sure how to square your comments on model results with the results from this study

    http://www.ldeo.columbia.edu/~jsmerdon/papers/2012_jclim_karnauskasetal.pdf

    and their comment in the conclusion that, if this oscillation is real, the conclusion that the warming over the past 150 years is due to radiative forcing changes is tenuous and the warming could be due to internal variability. Of course, if the model results aren’t real and they are creating spurious oscillations under conditions of no external forcing changes, that makes their results rather worthless, I would think.

  26. Shaun you say:

    “However, there is nothing special about the variability of the solar or volcanic forcings over the period 1880-present compared to the period 1500-1900 (that is, if we accept the corresponding reconstructions …)”

    How do you know that? How can anyone know that at our present level of technology? Seventy percent of the earth’s surface is ocean, so it is reasonable to assume that 70% of volcanic activity occurs under the ocean, where it mostly remains undetected. We do know (from helium-3 studies) that around 3 terawatts of power are released by submarine tectonic activity – this is comparable with the power from winds, tides and the thermohaline circulation, and yet this forcing is either completely ignored by climate modellers or assumed to be uniform in time and space. Volcanic eruptions are certainly not uniformly distributed on land, so why should we assume that they are uniformly distributed beneath the ocean? This is surely a major stochastic forcing which is highly relevant to your analysis.

    The “what else can be causing it” argument falls over when all other credible forcings have not been taken into account. Even then, “we don’t know what we don’t know”. The naive belief that climate models are always completely comprehensive is one of the most annoying features of this debate.

    • I admit that I am basing my statement on the analysis of essentially all the published volcanic and solar “reconstructions”, so that if these are substantially wrong, then anything goes.

  27. We measure climate change by global temperature change. If you look at the temperature graph of the instrumental era you will notice first that the entire temperature curve is a concatenation of El Niño peaks and La Niña valleys. They are integral parts of the temperature curve, not noise, and must not be eliminated. Next you ought to notice that there exist distinct break-points separating temperature regions when physical conditions change. Climate behaves differently between different sets of break-points and computer processing must not be used to join these segments together.

    Starting with the twentieth century, the first ten years were a continuation of end-of-nineteenth-century cooling. I will come back to that. Then suddenly in 1910 warming begins. There was no parallel increase of atmospheric carbon dioxide and that rules out greenhouse warming as a cause. This period comes to an abrupt end in 1940 with the severe World War II cold spell. Unfortunately most temperature curves show a heat wave for the war years. No one seems to remember that the Finnish Winter War of 1939/40 was fought at minus 40 Celsius. Or that General Frost immobilized German tanks in front of Moscow. Or that the Battle of the Bulge was fought in the coldest winter west Europeans could remember. That is a definite break-point for climate change. And incidentally, if you want to claim that early-century warming for the greenhouse you must also explain how to turn greenhouse warming off instantaneously.

    There was some warming just after the war but in the 50s, 60s and early 70s the temperature stood still. Hansen has a theory that it was a period of warming except that aerosols from war production just blocked out the sun. Likely story, I say. That period is a climate regime on its own. It came to an end with the Great Pacific Climate shift of 1976. It is said to have been a step warming that raised global temperature by 0.2 degrees. It was over by 1980 which brings us into the satellite era. No step warming can be greenhouse warming and since it ushers in new conditions it is also a break-point.

    I have analyzed the satellite era (see my book “What Warming?”). For eighteen years in the eighties and the nineties global mean temperature stood still and the only thing that happened was ENSO oscillations. There were five El Niño peaks among them. The middle one was the 1988 El Niño, the one Hansen reported to the Senate as proof of global warming. Six months later that El Niño was finished and the La Niña that followed had already lowered global temperature by 0.4 degrees.

    This period came to an end with the Super El Niño of 1998. It was an outlier, a once in a century peak, that brought much warm water across the ocean. This caused a step warming that raised global temperature by a third of a degree in only four years and then stopped. That step warming is a very important break-point but it has been completely obliterated in ground-based temperature curves. It was the only warming during the entire satellite era. This, and not greenhouse warming, is the cause of the very warm first decade of our century. There has not been any warming since then but all the years that follow sit on top of that high temperature plateau created by the step warming. Hansen takes advantage of that and points out that the ten warmest years all belong to that decade. That is true but not because of any greenhouse warming involved.

    Worse yet, Hansen also shows the eighties and the nineties as a steady temperature rise so you don’t even see the temperature standstill of the eighties and nineties or the temperature step that follows in 1998. And if that is not bad enough Müller follows suit with his Berkeley temperature project. What must be done is to create a new baseline for the period following the step warming of 1998 so that we can compare apples with apples and not with oranges.
    Now a few observations about earlier centuries. Müller takes the temperature curve back to 1753. Then he notices some eighteenth and nineteenth century oscillations he does not understand. He suspects volcanoes and tries to fit a combination curve of volcanic sulfate and ln[CO2] to the data. That is complete nonsense because volcanism lacks periodicity and volcanic cooling is a myth anyway (read my book). He is good at curve fitting, though. He also tries to compare it with AMO but that does not work either. His oscillations are very obvious in his Figure 1 and there is nothing like that anywhere else. I measured them and they have a period of 25 years, start in 1753, and are finished by 1910. Their amplitude decays slowly over this 150 year period. This points to a possible, if unknown, starting event whose influence slowly disappeared over time.

  28. Or that we should apply scientific method and test the hypothesis. Hysteria doesn’t help.

  29. Philip Richens

    Shaun,

    Recent reconstructions of 20th century TSI have tended to suggest less variability than originally thought. I’m not sure which reconstruction(s) were used in the GCM runs used for your 20th century comparisons. Is this choice likely to modify any of the conclusions?

    Also, have you done a scaling analysis for the Christiansen and Ljungqvist reconstruction (http://www.clim-past.net/8/765/2012/cp-8-765-2012.pdf, figure 5)? If so, how does the result compare to the other post-2003 reconstructions?

    • Yes, analysing the solar reconstructions with fluctuation analysis revealed something very interesting: the earlier, sunspot-based reconstructions (Lean 2000, Wang and Lean 2005, Krivova 2007) all have fluctuations that fundamentally increase with scale, whereas the more recent 10Be-based reconstructions (Steinhilber et al 2009 and Shapiro et al 2011) decrease rapidly with scale! Therefore the question of the correct solar reconstruction is pretty open.

      See Lovejoy, S., D. Schertzer, 2012: Stochastic and scaling climate sensitivities: solar, volcanic and orbital forcings, Geophys. Res. Lett. 39, L11702, doi:10.1029/2012GL051871. http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Sensitivity.2012GL051871.proofs.SL.pdf

      A forcing that decreases rapidly with increasing time lag is unlikely to account for temperature variability which increases at multicentennial and multimillennial scales.

      I used 3 post-2003 reconstructions: Huang 2004 (boreholes), Moberg et al (2005) and Ljungqvist 2011. They all had similar low-frequency variabilities, with quite a bit more multicentennial variability than the 5 pre-2003 reconstructions I looked at.

  30. Shaun Lovejoy

    However, the reason for the agreement appears to be that over this period, most of the long term variability of both the data and the models are due to the anthropogenic forcings that are sufficiently strong that they dominate the natural variability.

    Does not the data (HadCRUT & GISS) since records began show a single pattern:

    http://www.woodfortrees.org/plot/hadcrut3vgl/compress:12/plot/hadcrut4gl/compress:12/detrend:0.01/offset:-0.03/plot/gistemp/compress:12/offset:-0.1/plot/hadcrut3vgl/from:1884/to:2004/trend/plot/hadcrut3vgl/scale:0.00001/detrend:-0.96/offset:-0.71/plot/hadcrut3vgl/scale:0.00001/detrend:-0.96/offset:-0.46/plot/hadcrut3vgl/scale:0.00001/detrend:-0.96/offset:-0.96/plot/hadcrut4gl/from:1884/to:2004/trend/offset:-0.03/detrend:0.01/plot/gistemp/from:1884/to:2004/trend/offset:-0.1/plot/hadcrut3vgl/scale:0.00001/offset:1.5

  31. Paul Matthews

    It’s disappointing that you keep on claiming that variability over 1880-present is greater than at earlier times. Obviously this later period corresponds to instrumental data while the earlier times rely on proxies. It seems therefore more likely that the difference you observe is due to different forms of data that are not comparable, the usual apples and oranges problem.

    See also Willis’s comment that variability has not increased over the instrumental period. No fancy statistical tests are required to see that Willis is right – just look at the graphs.

    • Paul Matthews

      That’s the most succinct and lucid counterargument I’ve seen to the premise that variability has increased compared to earlier periods where GHGs were lower by a few ppm.

      Unfortunately, we have several examples in climate science where such “apples and oranges” comparisons are used to show changes allegedly resulting from human GHG emissions.

      Max

    • I’ve looked at 4 different instrumental surface series back to 1880 and 9 different multiproxy reconstructions of temperature and criticized the pre 2003 multiproxies on the basis of their suspiciously low multicentennial variabilities (hence my use of post 2003 multiproxies which were much improved). If you reject all proxies, then anything much before about 1850 – 1880 is unknowable so no progress is possible!

    • It turns out that this increased variability in the recent period is due to the exceptional warming, which dwarfs the variability in previous 130-year periods (0.8°C compared to a variability of about ±0.1°C in the previous 130-year periods). Removing this overall trend leaves a variability very close to that of the previous 130-year periods…

      • Paul Matthews

        Sorry, but this doesn’t make sense and doesn’t address the point. You should either work with proxies or with instrumental data, but not use both as they are not directly comparable.
        And what is this ‘exceptional warming’?
        And where does the idea that previous variability was only 0.1C come from?

  32. May I politely ask if you read any of the other comments before posting yours? If so, maybe you would like to give us your analysis of them. I’m thinking of Arno’s, immediately above yours, as it contains as much reasoned argument for natural cycles in the climate as anything I’ve seen from the CO2-only camp.

  33. tempterrainn

    Your whole paragraph about climate always changing made sense, until you got to the sentence:

    And so it doesn’t matter in the slightest that human activity is likely to cause the climate/macroweather to warm by several degrees in the foreseeable future?

    Unfortunately for you (but fortunately for the rest of the world) there is no empirical scientific evidence which validates the premise “that human activity is likely to cause the climate/macroweather to warm by several degrees in the foreseeable future”.

    It’s an imaginary model-created hobgoblin, TT.

    Forget the bad dream – you can pull the covers back over your head and go to sleep.

    Max

    • Heh, Jim, that evidence thingy? We’ll know it when we see it.
      =======

    • Jim, I said it was trivially true but uninformative (spoken in the context of hypothesis testing). I guess the modern, improved way to look at this would be using Bayesian inference instead of null and alternate hypotheses.

      My objection hasn’t been to your requests for evidence, but to your hand-waving claim that “the climate sensitivity for CO2 added to the atmosphere form current levels, has been proven to be indistinguishable from zero, by observed data.”.

      I’ve asked you how you’ve ruled out a low climate sensitivity to CO2 (e.g. of 0.1-0.3 K/(W m^-2)), given the uncertainties in the forcings, and the unknowns in the processes resulting from the radiative changes in the LW (and the SW), but you haven’t answered.

    • oneuniverse, you write ” but you haven’t answered.”

      I have answered many, many times. I can find no CO2 signal. I deduce that there is an indication that climate sensitivity is indistinguishable from zero. That is all I have done. And no matter how many times you ask me, I will give the same answer

      • For the 100th time climate sensitivity cannot be indistinguishable from zero.

        Climate sensitivity is the temperature response in °C for a change in forcing in W/m².
        If the forcing increases by 3 W/m² and the earth responds by warming by 3C, the sensitivity is 1. If the forcing increases by 3 W/m² and the temperature increases by 6C, the sensitivity is 2. If the forcing increases by 3 W/m²
        and the temperature goes up by 1.5C, the sensitivity is 0.5. Sensitivity cannot be zero or indistinguishable from zero.

        The related question is what is the sensitivity to a doubling of CO2.

        1. Doubling CO2 gives you 3.71 W/m²

        If the climate sensitivity is 1, then the warming from a doubling is 3.7 * 1;
        if the climate sensitivity is 0.8, then the warming from a doubling is 3.7 * 0.8;
        if the climate sensitivity is ZERO, then the sun doesn’t warm the planet.

        So, you need to argue that doubling CO2 gives you fewer watts than 3.7.
        Instead you are saying that there is no relation between changes in watts and changes in temperature.
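
        (Stated as a one-line calculation; all the sensitivity values below are purely illustrative.)

        # Warming for a doubling of CO2 = sensitivity (°C per W/m²) × forcing (W/m²).
        forcing_2xco2 = 3.71                       # W/m² for doubled CO2, as quoted above
        for sensitivity in (1.0, 0.8, 0.5, 0.1):   # °C per W/m², illustrative values only
            warming = sensitivity * forcing_2xco2
            print(f"sensitivity {sensitivity:.1f} °C/(W/m²)  ->  {warming:.2f} °C per doubling")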

      • Steven Mosher

        If “equilibrium climate sensitivity” is defined as “the temperature response at equilibrium resulting from a doubling of atmospheric CO2”, then it could be “indistinguishable from zero” as Jim Cripwell writes.

        It all depends on the definition of ECS.

        Max

        PS I have personally concluded from all the available data out there that (2xCO2) ECS is likely to be somewhere between 1C and 2C. IPCC previously had it at 3.2°C ± 0.7°C and will likely reduce this estimate based on the results of the latest studies (but who knows what they’ll do – see the earlier JC post).

      • Steven Mosher

        Lemme give you that IPCC quote (AR4 WGI Ch.8, p.633):

        “The mean and standard deviation of climate sensitivity estimates derived from current GCMs are larger (3.2°C±0.7°C) essentially because the GCMs all predict a positive cloud feedback but strongly disagree on its magnitude.”

      • This ended up in the wrong spot, so am re-posting:

        manacker | January 29, 2013 at 11:33 am | Reply

        Speaking of (2xCO2) climate sensitivity estimates by the IPCC GCMs, this appears to be “getting legs” in the general media.
        http://www.foxnews.com/science/2013/01/28/un-climate-report-models-overestimated-global-warming/

      • Steven Mosher

        “So, you need to argue that doubling CO2 gives you fewer watts than 3.7”

        No.

        I can simply state that I have concluded, based on the data that are out there, that (2xCO2) ECS is likely to be somewhere between 1°C and 2°C, rather than 3.2°C±0.7°C as previously predicted by the GCMs cited by IPCC in AR4.

        If someone else wants to convert that to the concept of radiative forcing and put a W/m^2 estimate on it, that’s fine.

        Max

      • “So, you need to argue that doubling CO2 gives you fewer watts than 3.7”

        An alternative argument would be that it matters whether the watts were LW or SW.

    • Jim, I’ve replied on the open thread.

        The zeitgeist of global warming model-making is an ever-growing bureaucratic army of facilitators who are better paid than the productive people who pay taxes: a secular, socialist government lording over a society in economic ruin and moral decline. The Left-Right divide has become a choice between fear of weather weirding vs. love of freedom, individual liberty and faith in the power of personal responsibility.

  34. Willis Eschenbach

    Robert I Ellison | January 21, 2013 at 7:18 pm

    … Weather and climate are deterministic – although dynamically complex enough to resist easy comprehension.

    And you know this how? Bernard Mandelbrot would certainly disagree … in fact, Bernard also disagrees with Shaun.

    But more formal definitions need not be meaningful. That is, in order to be considered really distinct, macrometeorology and climatology should be shown by experiment to be ruled by clearly separated processes. In particular there should exist at least one time span on the order of one lifetime that is both long enough for micrometeorological fluctuations to be averaged out and short enough to avoid climate fluctuations…

    It is therefore useful to discuss a more intuitive example of the difficulty that is encountered when two fields gradually merge into each other. We shall summarize the discussion in M1967s of the concept of the length of a seacoast or riverbank. Measure a coast with increasing precision starting with a very rough scale and dividing increasingly finer detail. For example walk a pair of dividers along a map and count the number of equal sides of length G of an open polygon whose vertices lie on the coast. When G is very large the length is obviously underestimated. When G is very small, the map is extremely precise, the approximate length L(G) accounts for a wealth of high-frequency details that are surely outside the realm of geography. As G is made very small, L(G) becomes meaninglessly large. Now consider the sequence of approximate length that correspond to a sequence of decreasing values of G. It may happen that L(G) increases steadily as G decreases, but it may happen that the zones in which L(G) increases are separated by one or more “shelves” in which L(G) is essentially constant. To define clearly the realm of geography, we think that it is necessary that a shelf exists for values of G near λ. where features of interest to the geographer satisfy G>=λ and geographically irrelevant features satisfy G much less than λ. If a shelf exists, we call G(λ) a coast length”

    After this preliminary, let us return to the distinction between macrometeorology and climatology. It can be shown that to make these fields distinct, the spectral density of the fluctuations must have a clear-cut “dip” in the region of wavelengths near λ with large amounts of energy located on both sides. But in fact no clear-cut dip is ever observed.

    When one wishes to determine whether or not such distinct regimes are in fact observed, short hydrological records of 50 or 100 years are of little use. Much longer records are needed; thus we followed Hurst in looking for very long records among the fossil weather data exemplified by varve thickness and tree ring indices. However even when the R/s diagrams are so extended, they still do not exhibit the kinds of breaks that identifies two distinct fields.

    In summary the distinctions between macrometeorology and climatology or between climatology and Paleoclimatology are unquestionably useful in ordinary discourse. But they are not intrinsic to the underlying phenomena.

    In other words, Mandelbrot analyzed the nature of weather/climate. He found it to be chaotic, with a corresponding characteristic fractal dimension. However, in total disagreement with Shaun, he found there was no “break” between climate and weather to indicate that they are distinct regimes.

    Further discussion of the issues here

    w.
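
    For anyone who wants to play with the divider construction described in the quote above, here is a minimal sketch (the “coastline” is a synthetic random walk, purely illustrative):

    import numpy as np

    def divider_length(points, G):
        """Walk dividers of opening G along a polyline: from the current anchor, jump to
        the next vertex at distance >= G and count one side of length G each time.
        (Jumping vertex-to-vertex is an approximation of the exact geometric construction.)"""
        count, anchor = 0, points[0]
        for p in points[1:]:
            if np.linalg.norm(p - anchor) >= G:
                count += 1
                anchor = p
        return count * G

    rng = np.random.default_rng(4)
    coast = np.cumsum(rng.normal(size=(20_000, 2)), axis=0)   # a jagged, wandering curve

    for G in (64, 32, 16, 8, 4):
        print(f"G = {G:>2}: approximate length L(G) ≈ {divider_length(coast, G):,.0f}")

    For this wandering curve L(G) just keeps growing as G shrinks, with no “shelf”; Mandelbrot’s argument is that the spectra of long weather records behave analogously, with no clear-cut dip to separate the regimes.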

    • Robert I Ellison

      The global coupled atmosphere–ocean–land–cryosphere system exhibits a wide range of physical and dynamical phenomena with associated physical, biological, and chemical feedbacks that collectively result in a continuum of temporal and spatial variability. The traditional boundaries between weather and climate are, therefore, somewhat artificial. http://www.cgd.ucar.edu/cas/jhurrell/Docs/hurrell.modeling_approach.bams10.pdf

      The ‘dynamically complex’ system is merely another term for chaotic. Climate is characterised by control variables and multiple negative and positive feedbacks. All of these are physical processes that are in principle fully deterministic but are nonetheless incalculable with the precision required to arrive at a single deterministic solution. Models are chaotic, and the results, as Slingo and Palmer say, are probabilistic forecasts at best – and the limits of imprecision can only be explored with systematically designed model families. Something we have yet to see.

      Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable. They echo other famous limitations on scientist’s expectations, namely the undecidability of some propositions within axiomatic mathematical systems (Gödel’s theorem) and the uncomputability of some algorithms due to excessive size of the calculation. http://www.pnas.org/content/104/21/8709.full

      So while weather and climate are both chaotic – and indistinguishable across the vastly different scales of time and space on which the physical processes occur – the processes themselves have real-world cause and effect.

    • It’s funny that you refer to Mandelbrot (Benoit, not Bernard) since I worked with him 30 years ago and published a paper with him on fractal models of rain (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/Tellus.1985.small.pdf). Actually Mandelbrot and I were pretty much in agreement about weather and climate. And actually your quote

      “In particular there should exist at least one time span on the order of one lifetime that is both long enough for micrometeorological fluctuations to be averaged out and short enough to avoid climate fluctuations…”

      is right on target; that is exactly what macroweather is all about: it is the range of a factor of ≈ 1000 in scale which does indeed separate the weather and climate regimes. Ironically, the research for the first paper published on the wide-range scaling (fractal, multifractal) properties of the atmosphere (http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/Annales.Geophys.all.pdf) was done when I was working with Mandelbrot on fractal models of rain, and we did indeed discuss these topics (macroweather was then called “the spectral plateau”, but it was the same intermediate regime).

  35. Willis Eschenbach

    I’ve been thinking some more about this question, and I don’t see the distinctions that Shaun is talking about. He seems to be conflating spatial and temporal averages.

    For example, look at his Figure 1 above. The bottom trace is actual instrumental measurements from a single spot. The middle trace, on the other hand, averages temperatures over time, 20 days. But what Shaun has also done is to average the temperatures over space as well…

    Of course, this averaging over both time and space gives different variability than the variability of the individual actual measurements … but is the variability change solely due to the increasing TIME of the measurements (20 day averages vs instantaneous readings), or is the variability change also due to the increasing SPACE covered by the averages (an entire gridcell up in the Ellesmere Islands)?

    Next, he is showing three different kinds of data in Figure 1. The top trace is the Vostok data, which is both poorly sampled, and is integrated over time and space by the physical setup.

    The middle trace is the output of a computer program giving us its estimate of the temperatures.

    The bottom trace is of observations.

    Is he serious? Talk about comparing apples and oranges, he’s tossed bananas into the mix. One trace shows highly integrated temperatures in Antarctica, the next one shows a computer’s guess about temperatures near the North Pole, and the final one is a single station in Wyoming …

    I’m sorry, guys, but looking at those three datasets doesn’t show a damn thing about macroweather, a lovely concept but one I fear is not demonstrated in Figure 1 …

    Nor is there any reason that he could not have used instantaneous, 20 day, and fifty year averages from a single station. If that had shown his “macroweather”, then we’d have something to talk about.

    Now, it may be that I’m just a suspicious bugger. And it may be that I’ve been around climate science too long. But when a man’s Figure 1 shows absolutely nothing about what he claims, I get very nervous … if his phenomenon is real, he should be able to show it using one single station. The fact that he has not done so speaks volumes to me …

    w.

  36. Could the increased variability of 1850-present vs. 1500-1900 simply be an artifact of better record keeping?

    • The problem isn’t bookkeeping since over the period 1500-1900 we mostly have information from proxies.
      But actually all proxies of temperature show the same story: much more variability in the last 100 years or so, and this is almost all due to the rapidly increasing overall temperature trend.

      • Willis Eschenbach

        Actually, the problem right now is your overblown claims without citations.

        You say that “all proxies of temperature show the same story” … really? Every single last one of them? Don’t make me laugh. I’ve seen proxies of all types, they don’t all “show the same story”, that’s a joke.

        Some proxies might tell the same story, but all we have is your word on that … and since you’ve been absolutely sure of things that were 100% wrong, twice in this thread alone, I fear your unsupported word at this point is worthless. You may be right … but you have forfeited all claims to being believable.

        Next, what proxies are you using that cover the last hundred years? I’ve seen very few that come up to the present day. Which ones are you speaking of?

        Next, why are you using proxies for the last hundred years, when there are perfectly good instrumental records?

        Finally, where are the citations? Where are the studies to back up your claims that “all proxies of temperature show the same story”? Presumably, you are basing that on some study that analyzed every single proxy, and determined they were all members of the same choir, singing the same song … but where is that study that covers every single proxy?

        I gotta tell you, Shaun … so far, I’m not very impressed. I’m not saying you’re wrong in your fundamental claims, but the surrounding claims are doing very poorly. Your habit of overclaiming things is hard to deal with, because some of what you say is clearly fantasy, but it’s hard to tell which parts …

        w.

  37. Hi Shaun
    You wrote
    “Although we’ve only started looking at this, initial results comparing fluctuation analyses of surface temperature measurements and historical GCM runs (1850-present)………if further research confirms our conclusions, it would mean that anthropogenic induced warming is the most important factor in explaining the variability since 1850.”

    However, in my view, you are comparing the wrong metrics. For global warming, the accumulation of Joules is the much more robust metric. This can be most accurately done by comparing model and observed changes in the global annual-average upper and total ocean heat content in Joules; e.g. see

    Pielke Sr., R.A., 2003: Heat storage within the Earth system. Bull. Amer. Meteor. Soc., 84, 331-335. http://pielkeclimatesci.wordpress.com/files/2009/10/r-247.pdf

    The latest upper ocean data shows muting of a warming signal; i.e. see
    http://oceans.pmel.noaa.gov/

    If the heat is being transferred deeper into the ocean, it is certainly not being sampled by the surface temperature trends.

    Moreover, while the record is not as long as the surface temperature data, it does not suffer from the warm bias that we have identified in our papers, e.g.

    McNider, R.T., G.J. Steeneveld, B. Holtslag, R. Pielke Sr, S. Mackaro, A. Pour Biazar, J.T. Walters, U.S. Nair, and J.R. Christy, 2012: Response and sensitivity of the nocturnal boundary layer over land to added longwave radiative forcing. J. Geophys. Res., doi:10.1029/2012JD017578. http://pielkeclimatesci.files.wordpress.com/2012/07/r-371.pdf

    Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841. http://pielkeclimatesci.wordpress.com/files/2009/11/r-345.pdf

    Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to: “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”, J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655. http://pielkeclimatesci.wordpress.com/files/2010/03/r-345a.pdf

    Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229. http://pielkeclimatesci.wordpress.com/files/2009/10/r-321.pdf

    In terms of this recent hiatus in warming, if this is just part of macroweather, we should expect a well above average warming in the coming years. Is this the type of prediction you would make?

    Best Regards

    Roger Sr.

    • Thanks Roger for your comment. Clearly, I should not have mentioned anthropogenic anything in this blog since for the moment that’s off-subject as far as macroweather is concerned!

      On the larger issue you raised – what is the best way to measure the warming – you are quite right that ocean heating is more sensitive than atmospheric temperature, and thanks for the references.

      I guess my unfortunate phrase – where I ventured into speculation on attribution – was really an attempt to illustrate the power of the scaling fluctuation analysis method. One can check that the variability as a function of scale is very much the same from, say, century to century over 1500-1900, indicating that the internal dynamics, external forcings and feedbacks were roughly the same (statistically stationary). However, from around 1900 to the present, the variability is far higher due to the overall warming trend. At the same time one can easily check that the main external forcings – solar and volcanic – were also pretty stationary from century to century over 1500-1900, and that the period 1900-2000 was nothing special (more or less the same). It turns out that even using low-end temperature sensitivities to CO2 doubling (about 2.3 °C for doubled CO2), the 1900-present variability can easily be explained when combined with natural variability about the rising trend (assumed to be of the same scaling type as pre-1900).

      I’m writing a paper on this now so I won’t post more details at the moment.

      -regards,
      Shaun

      • Shaun, I don’t think the rub is AGW as much as CO2 caused AGW. CO2 makes a fine tracer gas since it would amplify other causes of warming.

        https://lh3.googleusercontent.com/-wJ5yIF7l6nc/UQCHlJLmiAI/AAAAAAAAG38/XInClIN5tVo/s912/macroweather%2520cru4.png

        Just comparing regional data, the NH does seem to amplify all possible changes to the system and hint at some longer term factor than ACO2.

      • Mythology errs; Apollo and the Vulcans still contend for dominance.
        ===========

      • The Skeptical Warmist

        Captdallas said, “Just comparing regional data, the NH does seem to amplify all possible changes to the system and hint at some longer term factor than ACO2.”

        —-
        Capt, have you ever looked at the skewed relationship between the amount of energy that is naturally advected to the NH from the tropics versus how much goes to the SH? The ratio is something around 2.5 to 1. This doesn’t even take into account the huge heat sink that is the southern ocean. So with the already uneven nature of the advection of heat on this planet, if there was extra energy being put into the system, the NH would get warmer faster, which is of course exactly what every data set shows.

      • Gates, “Capt, have you ever looked at the skewed relationship between the amount of energy that is naturally advected to the NH from the tropics versus how much goes to the SH? The ratio is something around 2.5 to 1.”

        Yes I have, actually. It’s around 2:1. But if the SH leads the warming due to a longer-term cycle – likely land use, in my opinion, since we are good at removing all that buffering NH snow cover – then the attribution to CO2 would be overestimated, kinda like the more recent paleo (post-2003, with data that extends past 1960) seems to indicate.

        Now there is a battle of scientific opinion on the possible impact of natural longer-term variability: Toggweiler et al. suggest up to 4 °C of possible abrupt change (3 °C NH warming at the expense of 3 °C SH cooling, with a ~4 °C global impact due to the Drake Passage opening), while Hansen says “It is not possible for natural variability to create this magnitude of change,” or words to that effect. Since a 10% shift in the ACC can produce in the ballpark of 10^22 Joules per year change in uptake, I tend to lean toward the dark horse, Toggweiler et al :)

  38. Shaun Lovejoy – If I understand correctly what you are doing, it seems unsurprising that 20-day intervals would show a different pattern. Weather tends to move in surges of a week or two in duration, as high-pressure systems move across, followed by lows, etc. 20-day intervals would therefore naturally have very high fluctuations, compared with the overall 40-year trend which is likely to be no more than half a degree or so. I don’t think you found what you thought you found. Correct me if I am wrong.

    • At a more basic level, larger weather systems (“structures”, “eddies”) live longer, the larger they are. When they get to be planetary-sized, they live about 10 days (give or take a factor of 2). For longer times, we’re looking at the averages over many structures and these tend to cancel, hence the macroweather.

      You’re quite right that there is nothing surprising here – except that it took so long to clarify something so simple!

  39. I have been following the discussion of macroweather (with thanks to Philip Richens for drawing my attention to it at Andrew Montford’s blog) but sadly have not been able to invest the time to do a detailed analysis to try and understand it in more depth. I would like to at some stage though, perhaps when I have more time.

    That said, while I may have the author’s ear: I have noted some similar behaviour when looking at high latitude sites, namely that we do see more high-frequency variation than, say, in the tropics, or even globally averaged. This appears to correspond with what Dr Lovejoy refers to as “macroweather”, especially given the high latitude (75N) of the example in the centre graph of the figure above.

    There is something that makes me uncomfortable with the analysis though. I am not saying it is necessarily wrong, just something that doesn’t make sense to me at the moment. For high latitude sites, we see a marked difference in the series variance during winter (higher variance) and summer (lower variance). The trouble is, the anomaly method removes the mean level which leaves a time series with the dominant orbital forcing removed, and so notionally a fixed expected value, but not a constant variance.

    I wonder what the consequence of this annual inhomogeneity in the variance is for the statistical analysis. I need to read the paper more carefully to see if this is dealt with in any way. Or is this inhomogeneity what is being referred to as “macroweather”?

    Interesting discussion, anyway.

    • ___“For high latitude sites, we see a marked difference in the series variance during winter (higher variance) and summer (lower variance).”

      This is an important aspect, which gets far too little attention. The differences mentioned should have been a major subject of climate research long ago. North of latitude 50°N – and, with regard to Europe, north of the English Channel – the North Sea and the Baltic have a decisive role in winter conditions. I remember an article in an English newspaper from about 1989 (the clipping is lost) explaining that if one were to erect a high wall along the UK’s west coast (as high as the mountain range in Scandinavia), Asian cold waves would frequently be seen in the Kingdom. For example, the significance of the North and Baltic Seas in this winter is discussed here: http://climate-ocean.com/2013/12_8.html , titled: “Northern Europe’s bulwark against Asian cold from 19-31 January 2013”.

      Winter conditions in Europe, when the influence of the sun is low, are an excellent setting in which to study how climate works, and the role that the regional seas (and the oceans) have north of latitude 50°N.

    • To clearly see the macroweather you need to remove the annual periods and subharmonics (mostly 6 months). Similarly, to see the weather behaviour with surface data, you need to remove the diurnal cycle and its subharmonics at 12 and 8 hours. When this is done the inhomogeneities you discuss are pretty small (although there will be some left-overs from the even higher order subharmonics, and these could be removed if needed).
      Does this answer your question?
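
      For concreteness, one common way to strip a cycle and its subharmonics is to regress the series on sines and cosines at the relevant frequencies and subtract the fit. A minimal sketch follows (illustrative Python only, not the MATLAB routines mentioned later in the thread; the function name and default parameters are assumptions):

      import numpy as np

      def remove_cycle(t_days, temp, period=365.25, n_harmonics=3):
          # Least-squares fit and removal of the mean plus the cycle at `period`
          # (days) and its higher-frequency terms up to order `n_harmonics`
          # (k = 1 is the cycle itself, k = 2 the 6-month term, etc.; with hourly
          # data use period=1.0 for the diurnal cycle and its 12 h and 8 h terms).
          t = np.asarray(t_days, dtype=float)
          y = np.asarray(temp, dtype=float)
          cols = [np.ones_like(t)]
          for k in range(1, n_harmonics + 1):
              w = 2.0 * np.pi * k / period
              cols.append(np.cos(w * t))
              cols.append(np.sin(w * t))
          X = np.column_stack(cols)
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return y - X @ coeffs          # the de-cycled anomaly series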

      • Thanks for your explanation, but it seems to me that we have rather different aspects in mind. My example was meant to indicate that semi-enclosed seas (e.g. in Northern Europe) are an excellent setting in which to study maritime versus continental weather/climate issues.
        Best regards

      • Dr Lovejoy, many thanks for taking the time to respond.

        I am not so concerned about the removal of the cycles, i.e. the annual periods and subharmonics. That removal is designed to yield a series with a local sample mean of zero (i.e., the temperature anomaly). It cannot, however, remove changes of variance, which are implicitly spread-spectrum and cannot be removed in the same way.

        I have been trying to develop my thinking on this further, and to make analogies in frequency space, the variance change is somewhat like an amplitude modulation in communications. However, it is not an amplitude modulation of a known signal, but an amplitude modulation of broadband fluctuations by a narrowband cyclical pattern (the orbit of the earth). By mixing these two in frequency space, you end up with two components; the broadband fluctuations shifted up in frequency, plus the broadband fluctuations shifted down in frequency. Because they are broadband, they cannot be distinguished from the fluctuations you are trying to analyse. I do not yet know if this affects your analysis, but it seems to me such shifts in frequency have the potential to cause issues. It may be possible to predict the magnitude of the error this could introduce.

        I downloaded daily station data (tmin, tmax) from Resolute airfield (75N, 95W) and have confirmed this behaviour. I do not think the effect is small; the change in variance between winter and summer is of the order of a factor of three for the data I looked at. I have also downloaded your MATLAB code so may try running your analysis at some point. (I also have the wavelet toolbox which includes the Haar wavelet but I will use yours in preference). It may take me a while to get around to running it though, I have too many other projects on the go just at the minute.

        If the change in variance is a factor, then I do not think it is isolated to the limited range of scales shown, because the change in variance is also visible in proxies such as the Vostok ice core. This would not show up in the analysis in the same way though, as the ice core is not converted into temperature anomaly in the same way.

      • I think I understand your point… except that the (nonlinear) effect of modulating the variance at an annual frequency is precisely the reason why one has not only a strong annual spike in the spectrum but also the subharmonics! Hence removing the subharmonics removes the effect that you refer to.

        A short comment on Haar: while it’s true that it is a wavelet, it is so simple that it really isn’t worth getting bogged down in wavelet software and the wavelet formalism. The fundamental point is to combine averaging and differencing in the definition of fluctuations.
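
        To make that concrete, a minimal numerical sketch of the Haar fluctuation – the difference of the averages over the two halves of each interval – might look like the following (illustrative Python, not the MATLAB code linked elsewhere in this thread; the function name is made up). The sign convention is immaterial for the statistics, since one normally looks at absolute or RMS values.

        import numpy as np

        def haar_fluctuations(x, lag):
            # For every interval of length `lag` (samples, even), return the
            # average of its second half minus the average of its first half.
            x = np.asarray(x, dtype=float)
            half = lag // 2
            return np.array([x[i + half:i + lag].mean() - x[i:i + half].mean()
                             for i in range(len(x) - lag + 1)])

        # e.g. the RMS fluctuation at a 64-sample scale:
        # rms = np.sqrt(np.mean(haar_fluctuations(series, 64) ** 2))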

      • Shaun Lovejoy | January 24, 2013 at 11:22 pm | wrote: “A short comment on Haar: while it’s true that it is a wavelet, it is so simple that it really isn’t worth getting bogged down in wavelet software and the wavelet formalism. The fundamental point is to combine averaging and differencing in the definition of fluctuations.”

        You cannot believe the headaches I’ve encountered trying to get people in climate blog discussions to realize this simplicity. Both Lucia & Tamino (both climate bloggers) intransigently refused to admit this a few years back when a paper used an equivalent approach. I note that more recently the lights came on for Tamino. He’s now using the approach and starting to develop better intuition.

        Have you considered the merits of using Gaussian averaging instead of boxcar? And have you explored the utility of second order central differencing? I started out applying multiscale boxcars on backward differences, but I later became aware that multiscale Gaussian-averaged central differences are superior in most (but not all) climate contexts.
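
        For concreteness, one plausible reading of that recipe – Gaussian smoothing followed by differencing the smoothed values about each midpoint – is sketched below in illustrative Python; the kernel width and the differencing lag are assumptions, not a prescription:

        import numpy as np

        def gaussian_diff(x, scale):
            # Smooth with a Gaussian kernel of standard deviation `scale`
            # (samples), then difference the smoothed series across a separation
            # of 2*scale, i.e. a central difference about each midpoint.
            # Contrast with a boxcar average plus backward difference.
            x = np.asarray(x, dtype=float)
            scale = int(scale)
            k = np.arange(-3 * scale, 3 * scale + 1)
            kernel = np.exp(-0.5 * (k / scale) ** 2)
            kernel /= kernel.sum()
            smoothed = np.convolve(x, kernel, mode="same")
            return smoothed[2 * scale:] - smoothed[:-2 * scale]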

        (This may not be the place to discuss this in detail.)

        Regards.

  40. “For the models the story is therefore fairly straightforward and agrees for example with the findings of van Oldenborgh et al 2012 that the main skill in the decadal scale GCM predictions is in their Greenhouse gas responses.”

    Modellers so far shrug off temperature standstills as mere variability. But are those models capable of producing a significant declining temperature trend in the face of ever-increasing CO2?
    I suspect they are not. What period of standstill, or even a period of temperature decline, would force a rethink?

    “However, if further research confirms our conclusions, it would mean that anthropogenic induced warming is the most important factor in explaining the variability since 1850.”

    If further research takes a few years then I fear Shaun will be disappointed should long term cooling arrive as an increasing number of people expect.

    I am in little doubt that over the coming ten to twenty years (perhaps a much longer period) Mother Nature will deal a hard blow to CO2 modellers. We have had a 16 to 20 year temperature standstill and may well be entering a similar period of temperature decline. Will Shaun et al shrug off both periods as variability?

  41. In terms of predictions, if I write a climate model based on what I expect, then I shouldn’t be too surprised if that is what I get.

    In reality, you can’t always want what you get.

    • But if you die sometime, you might find; you need what you get.
      =============

    • Robert I Ellison

      AOS models are therefore to be judged by their degree of plausibility, not whether they are correct or best. This perspective extends to the component discrete algorithms, parameterizations, and coupling breadth: There are better or worse choices (some seemingly satisfactory for their purpose or others needing repair) but not correct or best ones. The bases for judging are a priori formulation, representing the relevant natural processes and choosing the discrete algorithms, and a posteriori solution behavior. http://www.pnas.org/content/104/21/8709.full

      There is no single deterministic solution. A ‘plausible’ solution is chosen from a family of solutions on the basis of what seems reasonable at the time.

  42. Willis Eschenbach

    Shaun, thanks for sticking in there (if you still are).

    Here’s your claim:

    Whereas in the weather and climate regimes, fluctuations tend to increase with time scale, in the macroweather regime, they tend to decrease. For example, over GCM grid scales (a few degrees across), the average fluctuations increase on average up to about 5°C (≈9° F) until about 10 days. Up until about 30 years they tend to decrease to about 0.8° C (≈ 1.4° F), and then – in accord with the amplitude and time scale of the ice ages – they increase again up to about 5° C (9° F) at ≈100,000 years. In macroweather, averages converge: “macroweather is what you expect”. In contrast, the ‘wandering’ climate regime is very much like the weather so that – at least at scales of 30 years or more – the climate is “what you get”. Conveniently, we see that the choice of 30 year time period to define the climate normal can now be justified as the time period over which fluctuations are the smallest.

    In a nutshell, average weather turns out to be macroweather – not climate – and climate refers to the slow evolution of macroweather.

    The part I don’t understand is why you haven’t demonstrated that. You certainly don’t show that in Figure 1. There, you show three different datasets, not one dataset averaged over varying time periods. You claim it is true in the quote … for climate models, and it appears that you seriously believe that that is meaningful.

    What I haven’t seen (and I might have missed it) is something like Figure 1, but where you aren’t changing datasets, and that shows actual data rather than proxies. Show me, e.g., Lander, Wyoming averaged over different time scales. Because I don’t see the phenomenon you are describing. When I look at the HadCRUT data, I don’t see that change in the variance. There’s no decrease in standard deviation up to 30 years and an increase after that.

    I also looked at the temperature data from Armagh Observatory. This is an excellent dataset, well maintained for over 200 years. I find nothing like what you describe. The standard deviation is quite similar across a range from 1 year to 150 years. (I detrended the data to avoid getting the trend involved with the standard deviation.) The graph of the results is here

    http://i863.photobucket.com/albums/ab195/weschenbach/armaghstdevbysamplelength_zpsbb57a808.jpg

    and the spreadsheet is here:

    https://dl.dropbox.com/u/96723180/mesoclimate%20claims.xlsx

    I also looked (not shown, use the spreadsheet) at the change in variability over time (at various sample lengths) at Armagh Observatory. There is no significant change as we go from early industrial to modern times, at any sample length. That is to say, there is no observable increase in variability of the Armagh record in modern times, whether you use 5, 10, 30, or 50 year samples. It’s just not there.
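
    For anyone who wants to replicate this kind of calculation – standard deviation at various sample lengths, with the data detrended first – a minimal sketch follows. This is illustrative Python rather than the spreadsheet calculation linked above, and the use of non-overlapping block means is an assumption about the exact procedure:

    import numpy as np

    def block_std(annual_temps, block_years):
        # Standard deviation of non-overlapping `block_years`-long means of a
        # linearly detrended annual series (detrending keeps the overall trend
        # out of the standard deviation, as described above).
        x = np.asarray(annual_temps, dtype=float)
        t = np.arange(len(x))
        x = x - np.polyval(np.polyfit(t, x, 1), t)
        n_blocks = len(x) // block_years
        means = x[:n_blocks * block_years].reshape(n_blocks, block_years).mean(axis=1)
        return means.std(ddof=1)

    # e.g. for L in (5, 10, 30, 50): print(L, block_std(series, L))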

    To summarize, I see no such “mesoclimate” when I look at either the globe or single stations (I repeated my analysis for my home town of Santa Rosa … same kind of results). I find what Mandelbrot found, that there is no difference between weather and climate.

    I also can’t find the change in variability that you claim occurs as a result of anthropogenic influences. This should show up as increasing variability in long-term temperature datasets. I don’t see such a phenomenon anywhere, not in HadCRUT, not in Santa Rosa, not at Armagh.

    So … what am I missing here?

    All the best,

    w.

  43. Charlatans may mimic the more obvious elements of science but the legitimacy of science requires that we honor the facts. Without respect for truth the ends will always justify the means and give shelter to liars and tools. Global warming alarmism has been used by the Left as a means to achieve political desires; and, this is not something new in the history of humanity. The Left wants to be humanity’s new god so badly and in that has succeeded badly: lazy journalism has given the Left unlimited power to make us do whatever they want and to change the meaning of ‘whatever’ whenever they wish, even if it means turning English into a liar’s language–e.g., snow now equals global warming in the UK.

  44. I wonder about the underlying expectation, namely, that we be able to distinguish between human-induced climate change and natural variability.

    While it is increasingly of interest to some stakeholders e.g. those who wish to offer financial support for adaptation related only to human-caused changes, I’m not sure the distinction nets a huge benefit to decision-making or policy frameworks. Of course, we want to continue to evolve our knowledge of these important processes, and to continue to tune models to known natural variability; but as a goal, what exactly are the benefits, in pragmatic terms, to better quantifying natural variability, and better distinguishing between natural and anthropogenic influences?

    Even if we can make this distinction, do we need to make it? Viewing causation as inter-related probably lets us consider what we need to consider, short and long-term, as a society.

    • Robert I Ellison

      It would be funny if it weren’t so tragically absurd. The system is nonlinear with all that implies for risk. The system responds disproportionately to the impetus – whatever it is.

      But the world is still not warming for a decade or three – at least – as a result of low frequency ocean and atmosphere patterns and the warmist zeitgeist remains resistant to whole realms of the physical and biological sciences.

      ‘Natural, large-scale climate patterns like the PDO and El Niño-La Niña are superimposed on global warming caused by increasing concentrations of greenhouse gases and landscape changes like deforestation. According to Josh Willis, JPL oceanographer and climate scientist, “These natural climate phenomena can sometimes hide global warming caused by human activities. Or they can have the opposite effect of accentuating it.”
      http://earthobservatory.nasa.gov/IOTD/view.php?id=8703

      Odd – don’t you think? But the question that is resolutely ignored is what these natural variations imply for carbon politics. I am just a trifle discombobulated by the lack of either scientific or political insight – it comes with a certain smug and moralising attitude.

  45. Willis Eschenbach

    Shaun Lovejoy | January 23, 2013 at 7:59 pm | Reply

    Thanks Roger for your comment. Clearly, I should not have mentioned anthropogenic anything in this blog since for the moment that’s off-subject as far as macroweather is concerned!

    The problem is not that you mentioned it, Shaun. The problem is that you haven’t given us a single reason to believe you. No citations, no data, no pack drill. As a result, I see no reason to think that your claims about anthropogenic anything are any better than the other claims of yours on this thread that have already been shown to be 100% wrong.

    w.

  46. “When the International Meteorological Organization defined the first climate normal from 1900 to 1930, the belief was that the climate was constant”
    It appears to me that many scientists still think there should be a narrow band of “normal” climate that only varies by a degree or two.
    Yet it is obvious that climate naturally varies by many degrees, and a many-degree climate swing is normal.
    wattsupwiththat.com/2013/01/24/first-complete-ice-core-record-of-last-interglacial-period-shows-the-climate-of-greenland-to-be-significantly-warmer-than-today/

  47. Willis Eschenbach

    Shaun Lovejoy | January 23, 2013 at 8:26 pm

    I didn’t want to make this too technical but what makes all this so convincing is the use of a new definition of fluctuation, technically called the “Haar fluctuation”. However, it’s simple: the Haar fluctuation over an interval is the average of the first half minus the average of the second half. It’s the combination of averaging and differencing that changes everything …

    Thanks, Shaun, that helps a lot. Unfortunately, I still can’t get to any results like yours. I did a Haar decomposition of the Armagh data at filter lengths from two to 150 years. For each length, I used the standard deviation of the results of the Haar filter as the measure of the variability. Here are the results, first of the Armagh monthly data

    http://i863.photobucket.com/albums/ab195/weschenbach/armaghhaardecomposition_zps773d8ffd.jpg

    then of the Armagh daily data

    http://i863.photobucket.com/albums/ab195/weschenbach/armaghDailyhaardecomposition_zps6de1155e.jpg

    I fear that I find nothing like what you find. My code (in R) and the data I used are here as a zipped file:

    https://dl.dropbox.com/u/96723180/Armagh%20r.zip

    I certainly don’t find what you claim to find, either in the daily data or in the monthly data. Not sure why … any assistance gladly accepted.

    w.

    • I didn’t look at your code, but it could be correct. If you plot the annual data on a log-log plot you will probably find a slope about -0.2; you’ve got macroweather all the way. The transition scale 30 years hides a lot of geographical variability and northern latitudes have particularly long transition scales, see fig. 3 in

      Lovejoy, S., D. Schertzer, 2012: Low frequency weather and the emergence of the Climate. Extreme Events and Natural Hazards: The Complexity Perspective, Eds. A. S. Sharma, A. Bunde, D. Baker, V. P. Dimri, AGU monograph, pp 231-254, doi:10.1029/2011GM001087. http://www.physics.mcgill.ca/~gang/eprints/eprintLovejoy/neweprint/AGU.monograph.2011GM001087-SH-Lovejoy.pdf

      However, your daily data is badly polluted by the annual cycle – that should be removed first. Making a log-log plot of the result should show you the weather-macroweather transition.

      A final word on the estimates. In order to make the Haar fluctuations agree as closely as possible with usual differences (in the increasing regime) and averages (in the decreasing regime), they should be “calibrated”. I found that multiplying by a factor of 2 works quite well.
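
      For readers following along at home, the whole log-log check can be coded in a few lines. The sketch below is illustrative Python rather than the MATLAB routines mentioned earlier in the thread, and the set of lags is just an example:

      import numpy as np

      def haar_rms(x, lag, calibration=2.0):
          # RMS Haar fluctuation at scale `lag` (samples, even): difference of
          # the averages over the two halves of each lag-long interval, times
          # the calibration factor (~2, as suggested above) so that it tracks
          # ordinary differences / averages in the two regimes.
          x = np.asarray(x, dtype=float)
          half = lag // 2
          flucts = [x[i + half:i + lag].mean() - x[i:i + half].mean()
                    for i in range(len(x) - lag + 1)]
          return calibration * np.sqrt(np.mean(np.square(flucts)))

      def loglog_slope(x, lags):
          # Slope of log(RMS fluctuation) versus log(scale): about -0.2 would
          # indicate macroweather, a positive slope the ‘wandering’
          # weather/climate-type variability.
          flucts = [haar_rms(x, lag) for lag in lags]
          return np.polyfit(np.log(lags), np.log(flucts), 1)[0]

      # e.g. loglog_slope(annual_anomalies, [2, 4, 8, 16, 32, 64])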

  48. Robert I Ellison

    ‘Although several studies have suggested the recent change in trends of global [e.g., Merrifield et al., 2009] or regional [e.g., Sallenger et al., 2012] sea level rise reflects an acceleration, this must be re-examined in light of a possible 60-year fluctuation. While technically correct that the sea level is accelerating in the sense that recent rates are higher than the long-term rate, there have been previous periods where the rate was decelerating, and the rates along the Northeast U.S. coast have what appears to be a 60-year period [Sallenger et al., 2012, Figure 4], which is consistent with our observations of sea level variability at New York City and Baltimore. Until we understand whether the multi-decadal variations in sea level reflect distinct inflexion points or a 60-year oscillation and whether there is a GMSL signature, one should be cautious about computations of acceleration in sea level records unless they are longer than two cycles of the oscillation or at least account for the possibility of a 60-year oscillation in their model. This especially applies to interpretation of acceleration in GMSL using only the 20-year record from satellite altimetry and to evaluations of short records of mean sea level from individual gauges.’ http://www.nc-20.com/pdf/2012GL052885.pdf

    The usual explanation – sans quantification – is that the evolution of anthropogenic atmospheric forcing creates pseudo-cyclic episodes in the instrumental record, although the proxy records suggest they are quite natural excursions in non-stationary time series.

  49. Brandon Shollenberger

    Looking at Figure 1 raises a question for me. How do you compare time periods when the underlying data is from different sources? You have data from an ice core, modeled reanalysis data and data from a weather station. All three come from different processes. How do you ensure what you’re observing isn’t caused by that difference rather than a difference in time scale?

    You could generate an effect like I describe with any time series. You can increase the resolution of any time series by interpolating data. You can decrease the resolution by smoothing data. Either process will alter the fluctuation patterns present in the data regardless of what the fluctuation patterns were in the underlying data.

    • You’re right that changing spatial resolution will change the statistical characteristics of the time series, but not in arbitrary ways. Basically there is a space-time relationship so that averaging in space is the same as averaging in time. This is the physical basis of the weather/macroweather transition scale: structures the size of the planet (20,000km) live about 10 days, hence averaging over 20 days (the middle) averages over about two lifetimes, and so is roughly the same as averaging over the entire earth. There is still a residual effect of spatial averaging due to the (multifractal) spatial intermittency but this will not be too big either. In any case it will simply change the overall amplitude of the variability but not the character shown in the figure. As for the top paleo curve, I agree that one has to trust the basic correlation with temperature, but the actual calibration is not important since this only changes the amplitude of the variation, not the character of the variations.
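
      (As a rough check on the ~10 day figure: the standard eddy-turnover estimate from turbulence phenomenology, τ ≈ ε^(-1/3) L^(2/3), gives about the right lifetime for a planetary-scale structure if one assumes a turbulent energy flux ε of order 10^-3 W/kg. The flux value is an assumption for illustration, not a number taken from the post; illustrative Python below.)

      L = 2.0e7                                      # planetary scale, metres (20,000 km)
      eps = 1.0e-3                                   # assumed turbulent energy flux, W/kg
      tau = eps ** (-1.0 / 3.0) * L ** (2.0 / 3.0)   # eddy turnover time, seconds
      print(tau / 86400.0)                           # ~8.5 days, i.e. about 10 days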

      • Brandon Shollenberger

        Shaun Lovejoy, my comment wasn’t limited to spatial issues. In fact, it was focused on temporal issues. I discussed how the autocorrelation of a series (which is effectively the character of the variation) is affected by things done on the temporal scale.

        For example, my understanding is the reanalysis data you used is generated by a computer model that generates results with a different temporal resolution than its inputs. That means it has interpolation, smoothing or both – on the temporal scale. How do you ensure the strong negative autocorrelation in it isn’t an artifact of the model?

        I don’t see anything in your post, papers or comment that addresses the possibility of temporal artifacts in the variation patterns. Are you claiming such is impossible?

      • “For example, my understanding is the reanalysis data you used is generated by a computer model that generates results with a different temporal resolution than its inputs. ”

        Did you review all the inputs used in re-analysis codes? If so, please point to the documentation you reviewed. Simple question, easy answer.

      • Brandon Shollenberger

        Steven Mosher, no I did not “review all inputs used” in the NOAA 20CR.

  50. Microweather is what Makarieva can you get.
    ==========================

  51. Speaking of (2xCO2) climate sensitivity estimates by the IPCC GCMs, this appears to be “getting legs” in the general media.
    http://www.foxnews.com/science/2013/01/28/un-climate-report-models-overestimated-global-warming/

  52. Willis Eschenbach

    Steven Mosher | January 29, 2013 at 11:04 am | Reply

    For the 100th time climate sensitivity cannot be indistinguishable from zero.

    Climate sensitivity is the response in °C for a change in forcing in watts. Such that if the sun increases by 3 watts and the earth responds by warming by 3 °C, the sensitivity is 1. If the watts increase by 3 and the temperature increases by 6 °C, the sensitivity is 2. If the watts increase by 3 and the temperature goes up by 1.5 °C, the sensitivity is 0.5. Sensitivity cannot be zero or indistinguishable from zero.

    Thanks, Steven. I fear that you have failed to distinguish between

    1) the behavior of a system far from equilibrium and a system at equilibrium, and

    2) the behavior of a system without emergent phenomena, and a system wherein phenomena spontaneously emerge to reduce the temperature.

    For example, the temperature of the ocean never gets much above 30°C. You can pump more energy into it when it is cold, and it will warm. It is sensitive to the climate as you point out … but that sensitivity decreases with temperature, to the point where, once it has reached ~30°C, the sensitivity has dropped to zero.

    So I can understand why you have had to repeat your claim 100 times … people don’t believe it no matter how many times you make it, because it isn’t true.

    The related question is what is the sensitivity to a doubling of CO2.

    1. Doubling CO2 gives you 3.71 watts

    If climate sensitivity is 1, then the sensitivity to doubling is 3.71 * 1
    if the climate sensitivity is .8 then the sensitivity to doubling is 3.71 * .8
    if the climate sensitivity is ZERO then the sun doesn’t warm the planet.

    Again you fail to distinguish between the situation at equilibrium, and the situation approaching equilibrium. I think that you would agree that as any part of the planet gets warmer, it takes more and more energy to push its temperature up another degree.

    This is for several reasons. One is that radiation increases as the fourth power of temperature. Another is that parasitic losses (sensible and latent heat) increase as some power of the temperature. Another is that albedo increases cut down the incoming energy. Finally, the natural emergent heat regulating mechanisms (thunderstorms, dust devils, El Nino, clouds) all show a huge increase with temperature.

    Of course, an obvious corollary of it requiring the addition of more and more energy for each additional degree of warming is that climate sensitivity decreases with increasing temperature. Hard to argue with that, it’s just math.

    You have this strange idea that climate sensitivity is a constant. But simple logic shows us it cannot be a constant. It has to decrease with increasing temperature; this is the real world, Murphy lives. Each additional degree costs more than the last one.

    And since the climate sensitivity decreases with increasing temperature … well, Steve, that would imply that there is a temperature at which the sensitivity is zero … just as we see happening in the ocean …

    This is because all of the feedbacks and all of the parasitic losses and all of the temperature governing mechanisms increase as some function of temperature, with the result that (as we see in the ocean) they provide an upper limit to the temperature. Regardless of how hot the sun gets, the ocean doesn’t get any warmer than 30°C … where is your climate sensitivity then?

    Finally, you might want to take a look at my post “The Details Are In The Devil“. You still don’t seem to have absorbed the difference between a situation with and without emergent phenomena.

    You, and the climate modelers, have this bizarre idea that you can model emergent systems the same way that you model a system without emergent phenomena. I can’t begin to tell you how foolish I think this is, but read “The Details Are In The Devil” to see why your claims about climate sensitivity are not only wrong, but are part of a conceptual framework which is totally inappropriate for analyzing the kind of emergent system that is the climate.

    If you think that complex emergent systems can be dealt with by simplistic concepts like “climate sensitivity”, then perhaps you can tell me the climate sensitivity of the human body … and since the climate sensitivity of the human body is approximately zero, perhaps you might consider why concepts like “climate sensitivity” are totally inappropriate for analyzing complex emergent systems like the climate.

    All the best,

    w.

    • … and since the climate sensitivity of the human body is approximately zero,

      Not sure what that means (seems like a nonsensical concept, really, as the human body is obviously completely different than the Earth’s atmosphere in myriad ways), but let’s run with it anyway, long enough for me to ask you: What is the measure of the “climate sensitivity of the human body” if you spend much time exposed to the elements in an extremely hot or extremely cold environment? Seems to me that there are limits to the human body’s ability to remain homeostatic. It can do so given certain conditions and stimuli, but with others, the internal temperature increases or decreases to the point where it can no longer sustain life. At a certain point, it is no longer able to maintain equilibrium by adjusting internal processes: Kind of like what might happen when too much CO2 builds up in the Earth’s atmosphere, eh?

    • Steven Mosher

      Willis I think you may have missed my point.

      Climate sensitivity is defined as the change in C for a change in forcing Watts.

      I have a simple question.

      Do you think it is the case that if one decreases the watts into the system (say, turn the sun off or have it set) the response will be no change in temperature?

      Forget CO2, because we are talking about the system response to any change of forcing. Would a change in watts input – say, tripling the output of the sun – lead to ZERO change in °C or a non-zero change in °C?

      • from IPCC

        the ‘equilibrium climate sensitivity’, T2x, is the temperature change after the system has reached a new equilibrium for doubled CO2

        http://www.ipcc.ch/ipccreports/tar/wg1/345.htm

      • steven mosher

        You ask Willis to “forget CO2” in discussing (2xCO2) “equilibrium climate sensitivity”, but since IPCC tells us that ECS is the “temperature change after the system has reached a new equilibrium for doubled CO2” that appears rather hard to do.

        When Jim Cripwell states that, in his opinion, (2xCO2) ECS is “indistinguishable from zero” he is telling you that, in his opinion, “the temperature change after the system has reached a new equilibrium for doubled CO2 is indistinguishable from zero”.

        You can argue whether or not this statement is correct, but you cannot claim it is impossible.

        That was the whole point here.

        Max

      • Steven Mosher

        yes manacker, but I am talking about climate sensitivity NOT the sensitivity to a doubling of CO2 (an increase of 3.71 watts)

        ‘Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing.

        Although climate sensitivity is usually used in the context of radiative forcing by carbon dioxide (CO2), it is thought of as a general property of the climate system: the change in surface air temperature (ΔTs) following a unit change in radiative forcing (RF), and thus is expressed in units of °C/(W/m2). For this to be useful, the measure must be independent of the nature of the forcing (e.g. from greenhouse gases or solar variation); to first order this is indeed found to be so[citation needed].

        The climate sensitivity specifically due to CO2 is often expressed as the temperature change in °C associated with a doubling of the concentration of carbon dioxide in Earth’s atmosphere.

        For a coupled atmosphere-ocean global climate model the climate sensitivity is an emergent property: it is not a model parameter, but rather a result of a combination of model physics and parameters. By contrast, simpler energy-balance models may have climate sensitivity as an explicit parameter.

        #######

        For the last time I am talking about the GENERAL PROPERTY OF THE SYSTEM. That is, if the INPUT increases by 1 watt, what is the response in °C?

        That property cannot be zero everywhere, which seems to be what Willis and Cripwell are arguing. I assume Willis agrees with Cripwell since he hasn’t corrected him for the past year or so.
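
        To pin down the arithmetic under discussion: with a general sensitivity in °C per W/m², the warming is simply that sensitivity times the forcing, and the forcing from doubled CO2 is the 3.71 W/m² quoted above. A minimal sketch in illustrative Python; the logarithmic forcing approximation is the standard simplified formula, used here as an assumption rather than something quoted in the thread:

        import math

        def co2_forcing(c, c0=280.0):
            # assumed simplified logarithmic forcing formula, W/m2
            return 5.35 * math.log(c / c0)

        def warming(sensitivity, delta_forcing):
            # sensitivity in degC per W/m2, forcing in W/m2 -> warming in degC
            return sensitivity * delta_forcing

        dF = co2_forcing(560.0)                      # doubling from 280 ppm: ~3.71 W/m2
        for lam in (0.5, 0.8, 1.0):                  # degC per W/m2
            print(lam, round(warming(lam, dF), 2))   # ~1.85, ~2.97, ~3.71 degC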

      • Steve Mosher, quoting Wiki :

        Although climate sensitivity is usually used in the context of radiative forcing by carbon dioxide (CO2), it is thought of as a general property of the climate system: the change in surface air temperature (ΔTs) following a unit change in radiative forcing (RF), and thus is expressed in units of °C/(W/m2). For this to be useful, the measure must be independent of the nature of the forcing (e.g. from greenhouse gases or solar variation); to first order this is indeed found to be so[citation needed].

        Yep, the citation or evidence in support of this assumption would be appreciated (although there’s no question that the assumption simplifies the analysis).

        The joules “gained” from adding CO2 to the atmosphere are “added” mostly in the troposphere. The joules “gained” from a lowered albedo to SW are “added” mostly on the land surface and top few metres of the ocean in the SW. Will these two scenarios result in the same feedbacks? If they don’t, it will lead to divergent changes to the Earth’s energy balance (unless by chance the different feedbacks happen to have congruent effects).

        Steve:

        But what cripwell is arguing is that sensitivity is zero.

        To be fair, I think Jim Cripwell is arguing that climate sensitivity to additional CO2 (from pre-industrial levels) is zero.

      • Steven, you write “For the last time I am talking about the GENERAL PROPERTY OF THE SYSTEM. That is, if the INPUT increases by 1 watt, what is the response in °C?”

        I suggest you are not quite right. When the sun increases the amount of joules it radiates to the earth, the total number of joules increases. When we add CO2 to the atmosphere, there is no increase in the total number of joules in the system. So there are fundamentally two different types of forcings: those where the number of joules increases, and those where the number of joules stays constant. Increasing the amount of CO2 in the atmosphere is the latter type of forcing.

        I admit I am not always pedantic when I write. But I hope I made it clear I was talking about what I call “total climate sensitivity”; that is the amount by which global temperatures rise as a result of adding CO2 to the atmosphere. I still claim that since no-one has measured a CO2 signal in any modern temperature/time graph, there is a strong indication that the total climate sensitivity of CO2 is indistinguishable from zero.

      • Steven Mosher

        You now agree that Cripwell’s statement “the (2xCO2) temperature response at equilibrium (or ECS) is indistinguishable from zero” can be argued, but is not “impossible”.

        whereas

        You state that if you put energy into the system (with radiative forcing), it is impossible that this energy will have zero impact on temperature.

        The two statements appear to make sense to me.

        Max

      • Jim Cripwell: When we add CO2 to the atmosphere, there is no increase in the total number of joules in the system.

        Microseconds after the CO2 has been added, on net some radiation that would otherwise have escaped to space is absorbed and thermalised or re-emitted into the system; therefore at this point (before feedback processes are in effect), the total number of joules is higher than it would have been if the CO2 hadn’t been added. (At least, that’s my understanding of it.)

      • oneuniverse, you write “(At least, that’s my understanding of it.)”

        Your understanding is correct, but your interpretation is not. Imagine a body with a source of heat in thermal equilibrium with the outside. Its temperature can be increased by adding more heat or more insulation. The effect of CO2, no matter how it is described, is to add more insulation. Going further, if the body has no insulation, the effect of adding one unit would be considerable. If the body already had 20 units of insulation, the effect of adding one more, while in theory it would increase the temperature, might not be measurable. This is what the situation is with the earth’s atmosphere. There is so much water and CO2 already there that adding a bit more CO2 does not make a measurable difference.

      • Jim Cripwell

        This is what the situation is with the earth’s atmosphere. There is so much water and CO2 already there that adding a bit more CO2 does not make a measurable difference.

        The measurements do not support your speculation:

        “Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997” (Harries et al 2001)
        (Published erratum: “In Fig. 1a of this paper, the labels for the two curves were inadvertently switched.”)

        “Radiative forcing – measured at Earth’s surface – corroborate the increasing greenhouse effect” (Philipona et al 2004)

        “Spectral signatures of climate change in the Earth’s infrared spectrum between 1970 and 2006” (Chen et al 2006)

        “Measurements of the Radiative Surface Forcing of Climate” (Evans & Puckrin 2006)

      • oneuniverse, you write “The measurements do not support your speculation:”

        You are quoting changes in radiative forcing. I am discussing changes in temperature. Chalk and cheese. No-one knows how to go from change of radiative forcing to change of global temperature, except by meaningless, hypothetical speculation.

      • Jim, the references were provided in support of my comment at 9:58 pm , which you seemed to dispute. They provide evidence in support of the idea that the near-instantaneous (pre-feedback) effect of adding CO2 is to increase the number of joules in the system.

      • To make it clear :

        I’ve provided evidence supporting my claim (that adding CO2 will, initially, pre-feedbacks, lead to an increase of energy in the system).

        You’ve provided no evidence to support your claim (that adding CO2 will not raise temperatures).

      • oneuniverse, Fair enough. I just don’t think your evidence supports your position, so I misinterpreted what you were saying.

      • The cited evidence shows increased clear-sky opacity and less escaping radiation at the wavelengths predicted from increasing CO2. That seems to support my position – you’re welcome to explain why you think it doesn’t.

      • oneuniverse, suppose that everything stays constant, except that we increase the amount of CO2 in the atmosphere. No more energy is added to the system, and the earth must go on radiating exactly the same amount of energy into space; otherwise the temperature would rise indefinitely. So any changes in one form of radiation leaving the earth must be compensated for by energy at other wavelengths being radiated into space.

        What you describe is exactly the effect that one would expect if CO2 is merely an insulating blanket: it prevents radiation from leaving at one wavelength, but the same amount of radiation leaves at some other wavelength, when the earth warms up to compensate for the increased insulation of the CO2.

    • Steven Mosher

      One more thing Willis.
      Perhaps you can comment on Jim Cripwell’s claim that climate sensitivity is effectively ZERO.

      Now, of course, one can argue that sensitivity is not simply independent of temperature. In fact we have evidence that sensitivity can be dependent on temperature. But what Cripwell is arguing is that sensitivity is zero.
      So, please tell us why you agree with Cripwell?

      • Steven:

        Do you think climate sensitivity is the same for all globally averaged delta Watts, no matter the magnitude, source or where the forcing is applied?

  53. Willis Eschenbach

    Shaun Lovejoy | January 21, 2013 at 6:51 pm |

    Science is not logic, it is inductive, not deductive. In science one must at least provisionally accept the simplest hypothesis consistent with the known facts (Occam’s razor). Refusing Occam’s razor is a failure of scientific logic.

    Say. What?

    No place for logic in science? No place for deduction in science? What kind of science are you talking about? That makes no sense at all. Science uses all forms of logic, both inductive and deductive. Heck, science uses Gödelian logic, and non-Aristotelian logic, and the logic of Schrödinger’s cat; there’s no kind of logic scientists aren’t using somewhere for something.

    And we don’t “have to accept” the simplest hypothesis consistent with the facts, that’s nonsense – it may be both the simplest and the wrongest. Occam’s Razor is a guide, not a law, and a damn vague guide at that. It says

    Don’t multiply causes unnecessarily.

    Sounds great, but one man’s “totally unnecessary cause” is another man’s “it’s the main cause”, and the Razor provides no way to decide which is right. The Razor is a good weathervane, but nothing more.

    So your idea that “refusing Occam’s razor is a failure of scientific logic” just means that you are still in mystery about how Occam shaves … but nonetheless, you are willing to lecture us about it. Not a good combination.

    If we accept the data (including multiproxies) as roughly correct, then fluctuation analysis shows that the low frequency variability of the global temperatures from 1880-present is much larger than over the period 1500-1900.

    First, anyone who accepts the multi-proxies as “roughly correct” has not done their homework. See my post “Kill It With Fire” for some of the huge holes in e.g. Mann’s 2008 proxy farrago.

    Second, you seem to be surprised that the variability of the proxies is different than the variability of the instrumental record. I would have thought that would be obvious, but since it’s not, let me make it plain:

    1. There are valid physical and mathematical reasons to expect the variability of temperature proxies to be different, perhaps very different, from the variability of instrumental records of temperature.

    2. There are valid physical and mathematical reasons to expect the variability of temperature proxy A to be different, perhaps very different, from the variability of temperature proxy B.

    3. There are valid physical and mathematical reasons to expect the variability of a multi-proxy temperature reconstruction to be different, perhaps very different, from the variability of proxies that make up the multi-proxy.

    You point at the variability and claim to find meaning in it. You don’t seem to have noticed that the variability is expected, predicted, and the reasons for it are understood, and thus it means nothing.

    At the same time, the variability of the main natural forcings that have been proposed to account for the natural variability at these scales – solar output and volcanic eruptions – show that the variability of these forcings (1880-present compared to 1500-1900) has not changed by much. On the other hand the observed changes in greenhouse gas concentrations and aerosols is large and could explain the change in the variability. Therefore one is obliged to accept – at least provisionally – the hypothesis that the latter is the cause.

    No, we don’t “have to” accept the idea that man is the cause. I guess when you said logic and deduction weren’t a part of science, you were being self-referential, as shown by the Swiss-cheese quality of your logic. To show what you want to show, you have to show first that the variability has changed, and second that there is no other conceivable explanation other than humans. Unfortunately …

    1. You have not shown that the variability of the climate, measured by whatever you’d like to measure it by, the Haar fluctuations or whatever, has changed. All you’ve shown is that proxies have different mathematical characteristics (including variance) from observations, and different characteristics from other proxies, and different from multi-proxy reconstructions as well.

    2. Even if you had shown that, we don’t have the data to determine what are or are not significant variables in the climate system. As a result, a change in variability could easily be from some combination of natural causes we haven’t considered. Maybe when the PDO interacts with a long string of La Ninas, every once in a century things get more variable. You gonna guarantee that’s not the case? Maybe the continents are shifting microscopically and as a result over the last century undersea rift vulcanism has doubled … you gonna claim we would even notice that, when most of the undersea volcanoes are unknown, un-named, unobserved, and unmeasured?

    In other words, your claim that “man done it” is the only possible explanation lacks imagination … and that, in science, is a capital crime.

    So no, Shaun, we don’t “have to” accept your hypothesis that man is the cause, neither provisionally nor any other way. It’s a possibility, sure … but far from the only one. Broaden your horizons, unleash your imagination …

    Regards,

    w.

    • Brandon Shollenberger

      Willis Eschenbach:

      1. There are valid physical and mathematical reasons to expect the variability of temperature proxies to be different, perhaps very different, from the variability of instrumental records of temperature.

      I tried to bring up a similar point in an earlier comment when I said:

      You have data from an ice core, modeled reanalysis data and data from a weather station. All three come from different processes. How do you ensure what you’re observing isn’t caused by that difference rather than a difference in time scale?

      I didn’t get an explanation then (he seemed to completely misunderstand my point), and I suspect he doesn’t have one. If I’m right, he may have found an interesting aspect of Earth’s climate, but we can’t know based on the work he’s done.

      That said, we don’t need proxies to compare two of his three time scales. Macroweather and microweather can be compared without resorting to proxies.

      • Willis Eschenbach

        Thanks, Brandon. Where can I see his method applied to macro/microweather data, actual observations?

        All the best,

        w.

      • Brandon Shollenberger

        Willis Eschenbach, I don’t know if anyone has done that without resorting to model reanalysis data (which presents its own set of problems for fluctuation analysis). I was just pointing out it would be possible.

  54. Willis Eschenbach

    Brian H | January 31, 2013 at 1:58 pm |

    Willis, the changeover from LIA to “recovery” is itself a mechanism which a) lacks explanation, and b) accounts for increase in both warming and variability. In other words, pre- and post-1850 are not the same natural regime.

    Thanks, Brian. It is not at all clear what you are referring to by the term “natural regime”. Other than a minuscule warming (0.5°C/century or so), what is all that different about the recovery period from a previous century which had no warming?

    Yes, we don’t know why we went into the LIA. We don’t know why we stayed there as long as we did. We don’t know why we came out again. Yet people believe that the science is settled on the climate …

    However, saying that those are different “natural regimes” only names the differences without explaining them, so it does little to advance our understanding.

    All the best,

    w.

  55. “Today, we still use the 30 year ‘normal’ period”
    I suggest we also need to re-define or newly define a “climate” as what we should expect during the service life of roads, dams, buildings / houses, breakwalls, levees, etc.
    This could include averages, “standard” deviations, and extremes. To do so, we could use a 70 or 75 year “period” to accommodate long term swings in solar, PDO (index), ENSO and NAO cycles.
    Another benefit could be to exclude intra-cycle upswings as justification of “climate change” as a political tool.

  56. Thanks for sharing interesting facts about Macroweather. Is this a published theory?

  57. Pingback: ‘Noticeable’ climate change | Climate Etc.