Scenarios: 2010-2030: Part II

Part I introduced the challenges of climate prediction on decadal scales, specifically in the context of global climate model simulations. On the Part I thread, Paul_K writes:

While a small effort on upgrading numerical methods in GCMs might well be justified, the sort of large-scale attack on regional-scale decadal prediction using dynamic modeling which you describe here seems to be far too early, given the level of understanding of the climate system and the maturity of the GCMs – even if one were to accept the argument for the utility of such predictions. It is worth examining the history of development of dynamic simulators in other disciplines. In every instance that I am familiar with, the application of improved numerical methods – dynamic local grid refinement, flexible gridding, local options on type of solution routine and boundary condition characterizations – were all introduced to allow improved local characterization and reduced numerical error only AFTER certainty had been gained in the completeness and aptness of the governing equations to be applied.  The [current] models cannot obtain even a coarse match of key observational data in hindcast, and therefore cannot be expected to found sensible boundary conditions to support a local grid refinement scheme in any useful way.

The anticipated 0.2C per decade increase in surface temperature in the coming decades may be only a small part of the climate picture relative to natural variability for the period 2010-2030. So given the current challenges for climate models, how can we make more useful projections of natural climate variability, both global and regional? Empirical, statistical modeling is one way to approach this.


Let's assume that the anthropogenic forcing from long-lived greenhouse gases and aerosols can be specified reasonably well out to 2030. That leaves us to consider solar variability, volcanic variability, the multidecadal ocean oscillations, and possibly other terrestrial and extraterrestrial factors.

Multidecadal ocean oscillations

One of the key skeptical talking points has been that climate change over the past century can mostly be explained by the multidecadal ocean oscillations, particularly the PDO and AMO.  This essay by Roy Spencer typifies this argument.

A new paper by DelSole et al. (h/t Craig Loehle) identifies a significant component of unforced multidecadal variability in the recent acceleration of global warming, which they attribute to the AMO. This is interesting because it comes from a mainstream climate modeling group. The mainstream is increasingly recognizing the unforced multidecadal ocean oscillations as key elements in 20th century attribution and 21st century projections.

The two main ocean oscillations that get discussed are the AMO and PDO, although the NAO and NPGO also get mentioned and deserve consideration in this context. If you are unfamiliar with the multidecadal ocean oscillations, I just spotted a really interesting online textbook, "Oceans and 21st Century Climate"; see this section.

We are currently in the warm phase of the AMO (since 1995) and the cool phase of the PDO (since about 2008), with an expectation of remaining in this regime for the next 1-2 decades. The last time we saw this particular AMO-PDO combination was 1946-1964, a period that was characterized in the U.S. by abundant landfalling major hurricanes and drought in the southwest.

The issue for future projection then becomes estimating the duration of the current regimes (e.g. warm AMO, cool PDO), and particularly the timing of the changepoints. The changepoints can be estimated statistically (a minimal sketch of one such approach is given below), or decadal climate simulations could be used.
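As an illustration only, here is a minimal sketch of a purely statistical changepoint estimate. Everything here is an assumption for the sake of the example: the file name, the column names, the 11-year smoothing window, and the crude sign-change criterion are my own choices, not a recommended method.

    import numpy as np
    import pandas as pd

    # Hypothetical input: an annual AMO index with columns "year" and "amo".
    amo = pd.read_csv("amo_annual.csv")  # placeholder file name

    # Smooth with an 11-year centered running mean to suppress interannual noise.
    amo["smooth"] = amo["amo"].rolling(window=11, center=True).mean()

    # Crude changepoint estimate: years where the smoothed index changes sign,
    # i.e. candidate transitions between warm and cool phases.
    sign = np.sign(amo["smooth"])
    flips = amo["year"][(sign != sign.shift()) & sign.notna() & sign.shift().notna()]
    print("Candidate AMO phase changepoints:", flips.tolist())

More formal tools (Bayesian changepoint models, regime-shift tests) could be substituted for the sign-change rule; the hard part is not the code but deciding whether the most recent flip marks a durable regime change or just noise.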

What’s going on with the sun?

I am a novice in this area and have no idea which are the key references on this topic, but here are some current papers and web posts that caught my eye; I would appreciate some clarification/synthesis of all this from those of you who follow it more closely. Note that on establishment climate blogs like RealClimate there is not much mention of the sun; what appears to be their main article on the subject was written in 2005. Also, I just spotted the blog Heliogenic Climate Change, which is about the sun (but also climate change politics).

The CMIP5 simulations recommend the following for solar forcing. Given the uncertainties and problems with forecasting the current solar cycle, I'm surprised they aren't exploring the impact of solar uncertainties on future climate, especially given the controversy regarding the PMOD vs. ACRIM total solar irradiance composites (see section 7 of Scafetta's paper).

David Archibald has a guest post on WUWT that argues another Dalton minimum is shaping up. See also this article by Livingston and Penn: http://www.leif.org/EOS/2009EO300001.pdf

Akasofu has a recent paper entitled "On the recovery from the Little Ice Age." He argues that "It is suggested . . . that the Earth is still in the process of recovery from the LIA; there is no sign to indicate the end of the recovery before 1900. Cosmic-ray intensity data show that solar activity was related to both the LIA and its recovery. The multi-decadal oscillation of a period of 50 to 60 years was superposed on the linear change; it peaked in 1940 and 2000, causing the halting of warming temporarily after 2000."

A recent paper by Scafetta examines the solar contribution to recent climate change.  Future climate projections are actually provided in Scafetta’s longer document.

Climate Etc. Denizen Rocket Scientist has a long essay on this topic that extends Scafetta's arguments.

I would appreciate other good references on this topic. I note here that the University of Colorado LASP has established, with NASA, a Sun-Climate Research Center, which I view as a very positive thing.

Other terrestrial and extraterrestrial factors

Scafetta has a recent paper entitled “Empirical evidence for a celestial origin of the climate oscillations and its implications.”  Text from the abstract:

We investigate whether or not the decadal and multi-decadal climate oscillations have an astronomical origin. Several global surface temperature records since 1850 and records deduced from the orbits of the planets present very similar power spectra. Eleven frequencies with period between 5 and 100 years closely correspond in the two records. Among them, large climate oscillations with peak-to-trough amplitude of about 0.1 and 0.25 °C, and periods of about 20 and 60 years, respectively, are synchronized to the orbital periods of Jupiter and Saturn. Schwabe and Hale solar cycles are also visible in the temperature records. A 9.1-year cycle is synchronized to the Moon's orbital cycles. A phenomenological model based on these astronomical cycles can be used to well reconstruct the temperature oscillations since 1850 and to make partial forecasts for the 21st century. It is found that at least 60% of the global warming observed since 1970 has been induced by the combined effect of the above natural climate oscillations. The partial forecast indicates that climate may stabilize or cool until 2030–2040. Possible physical mechanisms are qualitatively discussed with an emphasis on the phenomenon of collective synchronization of coupled oscillators.
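This is not Scafetta's actual code, but a minimal sketch of the kind of harmonic regression such phenomenological models rest on, assuming an annual global temperature anomaly series and using the quoted periods (9.1, 20, and 60 years) purely as illustrative inputs; the file and column names are placeholders.

    import numpy as np
    import pandas as pd

    # Hypothetical input: annual global-mean temperature anomalies since 1850,
    # with columns "year" and "anom".
    df = pd.read_csv("gmst_annual.csv")  # placeholder file name
    t = df["year"].to_numpy(dtype=float)
    y = df["anom"].to_numpy(dtype=float)

    periods = [9.1, 20.0, 60.0]  # illustrative cycle lengths in years

    # Design matrix: constant, linear trend, and a cosine/sine pair per period.
    cols = [np.ones_like(t), t - t.mean()]
    for P in periods:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    X = np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    fit = X @ coef
    print("Variance explained by trend + cycles:",
          round(1 - np.var(y - fit) / np.var(y), 2))

The obvious caveat, raised repeatedly in the comments below, is that a good in-sample fit of a few sinusoids says nothing by itself about physical mechanism or out-of-sample forecast skill.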

Climate Etc. Denizen Vukcevic argues that the Atlantic Multidecadal Oscillation is ultimately driven by the Geomagnetic Field.

Denizen Richard Holle writes of the influence of the Saros cycle (sun, inner planets and moon), with a period of about 18 years, on the climate.

So what to make of these ideas?  They are interesting, and if correct, they would certainly be useful for decadal-scale predictions.  But they are at the frontier border with ignorance.  How can we test these ideas?

Volcanic eruptions

Oops, I almost forgot volcanoes. It seems like there is roughly one big eruption per decade?

Regional climate predictions of rainfall

Once you have made your decadal projection in terms of decadal regimes, you can then use historical data to estimate the distribution of El Nino, La Nina, and Modoki events, and hence the distributions of hurricanes, floods, droughts, and heat waves (a simple conditional-sampling sketch follows below). I suspect that these decadal oscillations have more of an impact on severe weather on a timescale out to 2030 than solar variability or greenhouse warming.
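As a sketch only, the conditional-sampling idea might look like the following; the file, the column names, and the crude sign-based regime definitions are assumptions for illustration, not a recommended dataset or classification.

    import pandas as pd

    # Hypothetical input: one row per year with annual-mean AMO and PDO index
    # values and an ENSO label ("El Nino", "La Nina", "Modoki", "Neutral").
    df = pd.read_csv("regimes_enso.csv")  # placeholder file name

    # Define the current regime crudely by index sign: warm AMO, cool PDO.
    analog = df[(df["amo"] > 0) & (df["pdo"] < 0)]

    # Frequencies of ENSO states in the historical analog years; these can then
    # be mapped onto hurricane, drought, and flood statistics for those years.
    print(analog["enso"].value_counts(normalize=True))

The main limitation is the small number of historical analog years (essentially 1946-1964 plus the last few years), so the resulting distributions carry large sampling uncertainty.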

The impact of the PDO/AMO on U.S. drought is described in this paper. The combination of warm AMO and cool PDO is bad news for the southwest U.S. What has changed since the previous such regime in the 1950s? Warmer overall temperatures and a decreasing incidence of drought, apparently associated with global warming, are cited in the paper. Another interesting aspect is the La Ninas. In a cool PDO phase, you expect a predominance of La Ninas, which means drought in the southwest U.S. How to explain the recent deluge in California during La Nina? Well, this La Nina seems to be associated with central Pacific cooling (rather than eastern Pacific cooling), the so-called Modoki pattern. The impacts of the changing nature of El Nino/La Nina are just beginning to be studied; whether the increase in the Modoki pattern is related to global warming or to something like the NPGO remains uncertain.

Another region that I have been looking at lately is the Arctic. An article by a Climate Etc. denizen describes the dominant role of the AMO in Arctic temperatures. This graph of Alaska temperatures shows a marked jump ca. 1976, the time of the switch to the PDO warm phase.

Clearly these ocean oscillations can have a large influence on regional climates.

JC note: I've run out of time that I can spend on this post; but with over 60 comments before the post is completed, I can rest assured that my posts are becoming increasingly irrelevant on the blog :)   I know that these issues have been discussed by a number of you on other threads, but I haven't been able to keep track of them. If you would like to provide a pointer to one of your posts, please do so and I will consider adding the links to the main post.


380 responses to "Scenarios: 2010-2030: Part II"

  1. Skimming the Scafetta piece, I’ve seen a number of issues that are highly questionable – Loehle08’s analysis of upper 700m OHC, which is now known to be incorrect; Beck’s analysis of historical CO2 concentrations (also quite doubtful given the need for truly spectacular carbon sources/sinks that magically stopped working in 1958); various claims of pseudo-cyclicity that are merely curve-fitting exercises; silly references to Watts’ Big Picture Book of Weather Stations (“Higher-altitude, higher-latitude, and rural locations, all of which had a tendency to be cooler, have been tendentiously removed;” [!]), and so on.

    In other words, the usual set of “skeptic” talking-points all trained together, from a known “skeptic” and published by a known “skeptic” “institute” inhabited by a number of well-known “skeptics” of dubious credibility.

    • Stomata proxies in some cases indicate a high CO2 level in the past, just as high as now in fact. It certainly appears that the ice core data is heavily smoothed by some natural mechanism. This is yet another weak point in the AGW hypothesis.

      • Stomata proxies are highly unreliable; see the comments by Dave Springer on that WUWT thread.

      • It appears stomata are no worse a proxy than tree rings, according to Dave. Even if not definitive, the stomata proxies do cast doubt on the ice core data.

    • OHC is shown to be incorrect by exactly who?
      Links please.

      • This is Derecho “I prefer the peer reviewed literature to blogs” 64 speaking.

        Now quoting John Cook's blog, which has a warm bias. To say the least. Four months ago I asked John Cook to correct an egregious y-axis error in the graph in his cloud measurement article. Nothing doing.

        Anyway, Craig Loehle isn't alone in finding a drop in the heat-energy content of the top 700m. So did Josh Willis, the principal investigator.

      • Here’s some peer-reviewed literature on the subject:

        http://www.nature.com/nature/journal/v465/n7296/pdf/nature09043.pdf

        http://www.agu.org/pubs/crossref/2009/2008JC005237.shtml

        http://www.pmel.noaa.gov/people/gjohnson/Recent_AABW_Warming_v3.pdf

        I merely use blogs as starting points for the literature – of course, one should always go to the literature.

      • Derecho 64, stop with the peer review nonsense. Have you forgotten the peer review on MBH 98 and 99? We are quite aware of what peer review in climate science means—who's doing the peering and who's doing the reviewing. Wake up, everyone has moved past your opinions.

      • If peer review is “nonsense”, how come McIntyre was so giddy to have O’Donnell 2010 pass peer review?

        Oh, and those papers are OLD. Get past the 1990s, it’s the 2010s now…

      • You are incorrigible. Have you read what Ross and O'Donnell had to go through to get published?

      • I’m sure the “skeptics” would rather have deFreitas back on board so utterly lousy papers could get published with no substantive review at all.

      • Bruce Cunningham

        Oh, it's the past peer review process that gave us the junk science of such papers as MBH 98 and 99, the Santer 2008 paper that ignored the last 10 years of data to say that the models are doing just great (because he knew that using it would show that they are not), and let us not forget the laughable, upside-down Mann 2008 paper. Please don't make me go on. Your comment to another poster that all he wanted was reviewers that agreed with his opinion is the most tragically funny comment I think I have ever heard. Total made-up junk has had a clear sail in the past as long as the paper pushed the CAGW doctrine. As Bob said, you are incorrigible, and don't have an honest bone in your body. It is high time more scientists like Hal Lewis stand up against what has been the norm in the past.

      • Wow, Bruce. You really put me in my place, I must say.

        “Total made up junk”, eh? Got any evidence for that, or are you just repeating what you hear on Fox “News”?

        Lay your cards on the table and let’s see what you got.

      • D64
        Your NOAA link is broken.

        The von Schuckmann paper has been dealt with by Pielke Sr., and you and he still haven't found a viable mechanism for getting heat down to the deep ocean.

        The Nature article is behind a paywall, and ‘Nature’ has proved itself to be hopelessly warm biased anyway.

        Loehle I trust. No-one has proved him to be in error in his handling of the ARGO data so far as I know.

      • Here’s the NOAA link.

        At high latitudes, vertical ocean mixing is quite vigorous.

        Dismissing “Nature” out of hand? Really.

        Of course you trust Loehle. He says what you want to hear.

      • "At high latitudes, vertical ocean mixing is quite vigorous."

        Indeed it is, but the water that descends is colder than the water it descends through.

        “Dismissing “Nature” out of hand? Really.”

        They stopped publishing solar papers for 5 years. And remember that half baked piece of nonsense co-authored by Mann and Steig they featured on the cover? The one successfully rebutted by “Skeptics” recently?

        “Of course you trust Loehle. He says what you want to hear.”

        You judge others by your own appallingly low standards.

        This doesn’t surprise me.

      • “Appallingly low”? Someone who bites on Loehle08 and dismisses Nature offhand has pretty darned low standards, IMNSHO.

      • I'm not so sure those are on point WRT 0-700m.

        The last one I couldn't access, and the first two seemed to cover 0-300m and 0-500m.

        It would be interesting (really) to see how they countered Craig's work. But paywall, crap.

      • I concur, tallbloke. SkS has made several other very serious graphing mistakes before and nothing has been corrected even when pointed out. One shouldn't use it as a reference at all, since it has nothing to do with truth seeking.

        Just look at this super hockey stick:
        http://www.skepticalscience.com/new-remperature-reconstruction-vindicates.html
        And the original reconstruction (from the paper):
        http://www.kolumbus.fi/boris.winterhalter/FIG/LjungqvistNHtemp2000a.gif

        If SkS has no warm bias then what does?

        P.S. Sorry about the off-topic digression, but I just had to.

      • Interestingly, from the Ljungqvist2010 abstract:

        “The temperature of the last two decades, however, is possibly higher than during any previous time in the past two millennia, although this is only seen in the instrumental temperature data and not in the multi-proxy reconstruction itself. Our temperature reconstruction agrees well with the reconstructions by Moberg et al. (2005) and Mann et al. (2008) with regard to the amplitude of the variability as well as the timing of warm and cold periods, except for the period c. AD 300-800, despite significant differences in both data coverage and methodology.”

        (Emphasis added).

      • So you kick off an OHC discussion and then, when challenged, you skate off onto surface temperature.

        OK.

      • I’ve already pointed you to the three references on OHC mentioned in the Skeptical Science post; I also put up the abstract to the Ljungqvist 2010 paper to correct juakola’s claim that Skeptical Science cannot be trusted as a source.

        Like I recommended, go to the original literature, even when it’s inconvenient to one’s beliefs. juakola should have done so.

      • I just checked your three references. See my reply above.

      • Yes, it agrees with the _reconstructions_ but not with the apples-to-oranges thermometer comparison. Actually the paper says that very cautious interpretation is strongly suggested when comparing with thermometer data. The biggest problem with the HS is the thermometer comparison, not the proxy data itself.

        Is Sks’s comparison “very cautious”? Just look at the super hockey stick again and compare it to Ljungqvist and think.

      • Derecho64, btw you forgot to put emphasis on the rest of the phrase. It says it agrees with the variability of past warm and cold periods but doesn't say anything about how the 1900s shape up in context.

        Again, I suggest you open the Ljungqvist original reconstruction and consider whether it is a hockey stick or not. It isn't.

      • You guys like proxies when they show what you want them to show, but you hate them otherwise.

        Make up your minds about proxies.

      • So you cite an opinion.
        Next.

      • Can people please stop engaging with D64? This thread has only just started and already it’s bogged down in Der(anged)’s narcissistic chirruping. juakola seems to be a newcomer, and can be excused for not knowing that engaging with a Believer-troll – particularly this one – leads nowhere.

        There’s no excuse for denizens.

      • Coming from you, Tom, I’ll take your comments with a grain of salt.

        From the various denizens here, there’s one thing abundantly clear to me – “skeptics” very much dislike skepticism toward their views. Double standards shading into hypocrisy? Most assuredly…

      • A grain of salt! As a Believer, I’m surprised you know the expression.

      • TomFP – I think on another thread Judith mentioned that she welcomed D64’s contributions because he/she represented a minority view on this board.

        Do all skeptics prefer to have discussion free sites (such as WUWT)? I thought that one of the reasons that this board was popular was because all ‘sides’ participated.

        D64 does at least cite original research papers, if you don’t like his/her particular phrasing of sentences, why not try discussing the science raised instead?

      • Louise – JC has made a number of observations about D64, to all of which I defer.

    • Agreed. Much of what JC points to here is speculative nonsense or simple curve-fitting numerology. That said, in the absence of physics to predict it one often resorts to "empirical" or statistical estimation. But it should not be mistaken for understanding. It's voodoo.

      • Steve, my point in bringing up the papers "at the frontier border with ignorance" is that there is much that we do not understand. We can wave it off as "natural variability" but the challenge I am putting out there is to try to understand the natural variability so that we can perhaps develop some predictive capability. Yes, Scafetta's magnum opus is published by SPPI, which in itself gives several strikes against the credibility of the document. But this document is based upon several peer-reviewed articles in reputable journals. And Scafetta is a member of the NASA ACRIM science team. I agree that Scafetta does his arguments no favors by his rhetoric and association with SPPI. But for those of us who are motivated by scientific curiosity, these are surely interesting possibilities to ponder. For those who are motivated by policy solutions, no, I would not base any decisions on Scafetta's projections (those who don't like Scafetta's projections should do some work to point out the flaws in his analysis or provide a critique). Even if Scafetta, Akasofu, etc. are incorrect, their ideas are at least interesting and are getting published in reputable journals. As for Vukcevic's ideas, I still don't know what to make of them, but coming from a citizen scientist who is an active participant in this blog who has done a lot of interesting analysis, I think we should at least take a look. All three of these are associated with speculative physical mechanisms, they are not merely curve fitting voodoo. Prima facie rejection of new ideas at the knowledge frontier doesn't help scientific progress.

      • Judith says "I agree that Scafetta does his arguments no favors by his rhetoric and association with SPPI." Judith, I do believe that you are a scientist with impeccable integrity. I am curious, though, as to why you would make the above statement. Yes, we all know where the policy part of SPPI stands, but what would your reaction be if the identical article were published in the science section of the NY Times? Data are data; does it really matter where it happens to be printed? Not naive, just curious about your answer.

      • I agree that science is science. In this highly politicized environment, if a scientist publishes scientific documents through an advocacy group, then people that object to that particular advocacy group will tend to dismiss the arguments sight unseen. Hence my statement that Scafetta did himself no favors by publishing this with SPPI.

      • Thanks Judith, but I am still curious if someone with your stature would have made the same statement if the article was published in the NY Times. This would go a long way in assessing how deep the mistrust goes. It is a small point, but depending on your answer, it would be very revealing.

      • Well, people don’t publish scientific treatises in the NYT. If the paper was published by greenpeace, then I would expect an analogous response to publishing in SPPI.

      • I know I distrust the NYT. If a scientist is published by them, I am immediately more skeptical than usual. The NYT almost always expresses its political agenda.

      • As for Vukcevic’s ideas, I still don’t know what to make of them, ……they are not merely curve fitting voodoo.
        Hi Dr. Curry
        Perhaps more appropriate would be ‘they are merely curve fitting voodoo’, but still far more generous than Mr. Mosher’s past opinion.
        That you 'don't know what to make of them' is not your fault; the fact is that I started looking into climate events only about 18 months ago. Not every correlation is causation, but even if one happens to be true then all my efforts (not ignoring the time you spent) would be worthwhile. In my approach I employ 'negative logic': processes that I am not certain about I tend to write a bit about (e.g. Arctic/magnetic field); the ones based on good solid physics I do not give any clues about, e.g. NP Gateways and the NA precursor, definitely no magnetics. The data here is as absolute as you can get, far superior to the temperatures or indices to which it is correlated. It needs to be well written and documented, and that requires expertise, sadly not my forte.
        Not wishing to irritate Mr. Mosher, even less to waste your time, but you could take a quick look at this NASA link, which could throw some light on why the magnetic fields may matter.
        http://earthobservatory.nasa.gov/images/imagerecords/36000/36972/npole_gmao_200901-02.mov
        If you find that of interest then a more detailed explanation is here:
        http://www.vukcevic.talktalk.net/MF.htm (warning: very speculative!)
        Thanks anyway.

      • Vukcevic, this link explains what you are doing more than anything else I've seen on your website. Again, I find it interesting, but have no idea what to make of it.

      • Dr. Curry
        I suggest we leave this one for a while, perhaps until sometime in the new year, since there is more stuff in various files on my PC, not on the web. I do not wish to distract you from your planned schedule.
        I did say in your thread ‘skeptics-make-your-best-case’:
        Contribution by a part-time sceptic
        ARCTIC CONUNDRUM – M.A.Vukcevic

        Now you know what I meant.
        Have a Happy New Year.

      • Agreed there is much we don’t understand.

        “I think we should at least take a look. All three of these are associated with speculative physical mechanisms, they are not merely curve fitting voodoo. ”

        My issue is this. Science proceeds by a process of explanatory subsumption. That is, theories of more and more generality subsume theories of smaller scope. As a theory, a GCM subsumes a great many other theories. And it consequently produces a large number of predictions: predictions of temperature, ice, OHC, sea level, precipitation, oceanic cycles, etc.

        The papers in question focus on one metric and offer speculative "theories" about the perturbations in that one metric. They are not subsuming. One can't even begin to detail how they would fit into a scheme that already subsumes a great deal of known physics.

        I think of it in a brutally simplistic way: how would their model "interface" with the existing theory? The variables are missing. So, that's not to say that the work isn't "interesting"; my concern is more of a practical bent.
        (I'm a pragmatist or operationalist, if you like.) How would such work interface with the established work? It cannot at present overturn the existing work because its explanatory power is one-dimensional.
        I put it this way: "What does Scafetta's work predict for rainfall?"

      • Steve, IMO in climate modeling we aren't yet at the stage of explanatory subsumption. There are serious issues with the attribution of 20th century warming from climate models (I did a four-part series on that a few months ago). If the climate models don't have the attribution correct, then I don't have much confidence in their future projections.

        BTW, Richard Holle has detailed predictions for everything out 18 years in advance, if that is your litmus test for something being useful.

        Much of what JC points to here is speculative nonsense or simple curve-fitting numerology. That said, in the absence of physics to predict it one often resorts to "empirical" or statistical estimation. But it should not be mistaken for understanding. It's voodoo.

        You miss the point here. One of the constraints (prior to Climategate) has been the absence of robust debate, i.e. in the literature and in the fast-changing world of the blogosphere.

        In the former we can clearly see predator/prey relationships in some of the so-called "debunking strategies" of a select few, which tend to have more information in their soundbites than in the papers.

        A good example of widespread open debate has been Makarieva et al, which has highlighted the importance of blogs, i.e. getting referenced in an open review process. This, I would suggest, will be the new paradigm, where the more 'commercial' journals that keep information (and analysis) behind paywalls will lose both status and significance in the future, i.e. a decreasing readership.

        That said, it is legitimate to enquire into the level of understanding of natural variability and the associated energy inequalities present in the historical record, for, say, singularities such as volcanics (for which there are problematic issues) or solar, where the level of understanding in the climate community and in models is relatively poor, in terms of both the metrics used and oversimplification.

        At present the inability to reduce the uncertainty in both sensitivity and interdecadal phase modes over the last 30 years poses a considerable and legitimate question.

        Is the irreducibility of uncertainty evidence of the certainty of irreducibility, which, as Ghil more colourfully put it, comes "with all its random consequences"?

  2. Yes, the call for shorter-term “projections” is reasonable on the face of it if it is part of an effort to find something verified and verifiable. But it would be nice to have something besides 23 chaotically criss-crossing WAG generators to work from.

    • David L. Hagen

      When I was last on the North Shore (Lake Superior) I overheard a local say he had just shoveled “12 inches of partly cloudy.”
      Today New York is shoveling 20 inches of “global warming” in its sixth worst blizzard on record.

      For climate modeling, a key query is what constitutes testable distinguishing features between catastrophic anthropogenic global warming (CAGW) and natural cycles (NC)?

      In 2000, global warming advocates predicted:
      Snowfalls are now just a thing of the past.

      " . . . According to Dr David Viner, a senior research scientist at the climatic research unit (CRU) of the University of East Anglia, within a few years winter snowfall will become "a very rare and exciting event" . . ."

      Instead BBC now reports (16 December 2010):

      Arctic air has made a return to the UK, causing temperatures to plummet across the country.
      Last month, Britain had the most widespread snowfall since 1965. Temperatures reached -21C in the Scottish Highlands.

      While the Met Office predicted warmth,
      Global Warming Skeptic Predicts Brutal Winter, Warns “You Ain’t Seen Nothing Yet”
      Astrophysicist Piers Corbyn:

      Predicting in November that winter in Europe would be “exceptionally cold and snowy, like Hell frozen over at times,” Corbyn suggested we should sooner prepare for another Ice Age than worry about global warming. Corbyn believed global warming “is complete nonsense, it’s fiction, it comes from a cult ideology. There’s no science in there, no facts to back [it] up.”

      Furthermore, Piers Corbyn's 100 Year Forecast (of June 2010) says "World temperatures will carry on falling" till 2030.

      Now heavy snow is explained as a natural result of “climate change” (aka “global warming”).
      That snow outside is what global warming looks like

      “Unusually cold winters may make you think scientists have got it all wrong. But the data reveal a chilling truth” . . .As global temperatures have warmed and as Arctic sea ice has melted over the past two and a half decades, more moisture has become available to fall as snow over the continents. So the snow cover across Siberia in the fall has steadily increased.

      Judah Cohen warns: Bundle Up, It's Global Warming
      “Blizzards Becoming More Frequent Because of Global Warming”

      But one phenomenon that may be significant is the way in which seasonal snow cover has continued to increase even as other frozen areas are shrinking. In the past two decades, snow cover has expanded across the high latitudes of the Northern Hemisphere, especially in Siberia,

      However, an unusually low solar cycle is also being predicted:
      A Dalton Minimum Repeat is Shaping Up
      Such events could have caused the Little Ice Age.
      Furthermore, doesn't more snow cover increase albedo, causing cooling?

      Where is the analysis of "Another IPCC Error: Antarctic Sea Ice Increase Underestimated by 50%"?

      While all the press is about the observed declines in Arctic sea ice extent in recent decades, little attention at all is paid to the fact that the sea ice extent in the Antarctic has been on the increase. . . . A recent 2009 paper by Turner et al. (on which Comiso was a co-author) concluded that: Based on a new analysis of passive microwave satellite data, we demonstrate that the annual mean extent of Antarctic sea ice has increased at a statistically significant rate of 0.97% dec-1 since the late 1970s. This rate of increase is nearly twice as great as the value given in the AR4 (from its non-peer-reviewed source).

      Can anything be used to disprove CAGW?

      Karl Popper, the late, great philosopher of science, noted that for something to be called scientific, it must be, as he put it, “falsifiable.” That is, for something to be scientifically true, you must be able to test it to see if it’s false. That’s what scientific experimentation and observation do. That’s the essence of the scientific method.
      Unfortunately, the prophets of climate doom violate this idea. No matter what happens, it always confirms their basic premise that the world is getting hotter. The weather turns cold and wet? It’s global warming, they say. Weather turns hot? Global warming. No change? Global warming. More hurricanes? Global warming. No hurricanes? You guessed it.

      See “Tucker Carlson Debates The “Religion” Of Global Warming”

      Daily Caller’s Tucker Carlson debates progressive Betsy Rosenberg on global warming. “If it’s warmer outside that’s because of global warming. If it’s colder outside, global warming still the culprit,” Carlson said on “Hannity” last night.

      Which of these diametrically opposing predictions are we to believe? And why? How are we to take diametrically opposed “projections”/ explanations?
      What quantitative tests can be used to distinguish between them?

  3. It is remarkably clear that ocean dynamics dominate climate patterns on a decadal scale. It might even be multicentennial; we just don't know (no such data exists to prove it or to disprove it?). The Earth's heat capacity is huge, and weather phenomena like the AMO and PDO can cause "short"-term fluctuations in one direction or another and ALSO differences in the Earth's radiation budget. Climate sensitivity cannot be measured: one must analyze the frequencies instead.

    This clearly is a strong indication that the climate sensitivity for a doubling of CO2 is somewhere around 1 +/- 0.5 deg C, likely less rather than more. The climate oscillations in the past (i.e. during ice ages) are also probably due to the oceans; since CO2 follows the temperature, it cannot be the main cause controlling it.

    • More like 0.5 +/- 1, more likely less than more. The negative feedbacks are real killers!

      • If climate sensitivity is as low as you state (0.5 +/- 1, or even lower) how do you explain ice ages and ice-free periods? At that low sensitivity, the climate system is very heavily damped.

      • In some regions there might not have been any ice ages at all, and the answer may very well be in the ocean dynamics as well (including the Gulf Stream).

        Proxies are heavily local and they DO NOT represent the effective radiative physical temperature (or total heat in the system), as used in calculations.

        If CO2 were the main contributor then why does it follow temperature, and why does the cooling (or flatlining) always start while CO2 is still increasing (in both deglaciations and normal multicentennial oscillations)? Tell me, how do you explain THAT?

      • The faint sun paradox argues for negative climate forcings, i.e. the oceans were liquid when the sun was 25% dimmer. A simple 'all other things being equal' calculation would be that the Earth should have been around -50 degrees C during the 'faint sun' period. Since the oceans aren't believed to have been frozen, a significant damping effect must have existed.

        Various studies have concluded that during the last glacial maximum the temperature in the tropics was only a couple of degrees C cooler than it is now.

        Measured in kelvins, the Earth's temperature has been relatively stable, within 5% of its current value.

      • Your red herring of tossing up ice ages only underscores your inability to deal with the issue of the failed theory of dangerous global warming.
        There is no particular reason to believe that ice ages are caused by what is driving our climate currently.
        There is certainly no reason either to conflate the promotion of global warming hysteria with ice ages.

      • Perhaps CO2 isn’t the only game in town.

        What, for example, were the effects of fundamental changes in ocean currents? Brought on by events of the magnitude of the blocking of the Bering Straits, the closing of the Panama Isthmus, or a multitude of other events which we might not even be aware of?

        When the only tool you have is a hammer, every problem looks like a nail. And how do you explain screw-threads?

  4. Re Scafetta paper:
    Although a '60 year climate cycle' is favoured by many, there is no obvious presence of it in the longest temperature record available (CET – Central England Temperature, Met Office). However, there are 40-50-year-long undulations, which appear to have some similarity with the simple orbital resonance cycle of the Jovian planets (a quick spectral check of this kind of claim is sketched after the link below).
    http://www.vukcevic.talktalk.net/CETpr.htm
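    One quick, purely illustrative way to check for such periodicities is a periodogram of the annual-mean CET series. The sketch below assumes an annual CET file with columns "year" and "temp"; both the file name and the raw-periodogram approach are placeholders, not a recommended analysis.

        import numpy as np
        import pandas as pd

        # Hypothetical input: annual-mean Central England Temperature.
        cet = pd.read_csv("cet_annual.csv")  # placeholder file name
        y = cet["temp"].to_numpy(dtype=float)
        y = y - y.mean()

        # Simple periodogram: spectral power at each Fourier frequency,
        # reported as a period in years; print the five strongest peaks.
        power = np.abs(np.fft.rfft(y)) ** 2
        freq = np.fft.rfftfreq(len(y), d=1.0)  # cycles per year
        mask = freq > 0
        peaks = sorted(zip(1.0 / freq[mask], power[mask]), key=lambda x: -x[1])[:5]
        for period, p in peaks:
            print(f"period ~{period:5.1f} yr, power {p:.1f}")

    With only ~350 annual values, any multidecadal peak rests on a handful of cycles, so distinguishing a 40-50 year undulation from a 60 year one should be done cautiously.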

    • Hi Vuk,
      you told me that curve was similar to your "earth wobble curve", which, if I remember correctly, you said was derived from the inner planetary motions.
      Please could you visit my post on this and clarify for us.

      Thanks

      • tallbloke to vukcevic via email:
        Apologies for my bad memory, I thought the earth wobble involved the inner planets from your previous hints.
        Apology accepted.

      • Vukcevic,
        I am finding rotational curving is quite varied with materials and speed. The slower the rotation, the more open the curve. But still it depends on the density of material and other intervening factors such as gravity.
        In the case of the moon, it is not slowing the planet, just disrupting the atmosphere with its size and density.

      • Thanks, and interesting comment on WUWT replying to Ulric Lyons! Is it all tying together?

  5. Firstly, in general I seem to recall a number of posts here chastising scientists for getting involved in public policy and advocacy, and yet now I'm seeing a paper referenced favorably which is dripping in policy advocacy and published by an institute that's oriented around an unchangeable conclusion both in terms of science and policy. This very much creates the impression that the problem is less to do with activism or policy involvement and more to do with which answer they're promoting.

    Secondly, with regard to this:

    “This finding strongly contradicts the IPCC’s claim that 100% of the warming observed since 1970 can only be explained with anthropogenic emissions.”

    To say that it strongly contradicts anthropogenic warming would suggest it provides strong evidence for an alternative mechanism; however, the paper itself says:

    "The physical mechanisms involved in the process are likely numerous. The gravitational forces of the planets can partially modulate the solar activity. For example, it was noted that the alignment of Venus, Earth and Jupiter presents cycles of approximately 11 years that are in phase with the 11-year solar cycles [21] and multi secular reconstructions of solar activity reveal 60-year cycles associated with the combined orbit of Jupiter and Saturn and other longer cycles [22]. Solar changes could modulate climate change through various physical and chemical processes as explained in Section 6, which are currently not included in the models, as explained in Section 6."

    i.e. the author has no idea what would be the cause of a 60 year cycle and suggests it might be Jupiter or other planets. Even taking the paper as "correct" at face value, the claim that it strongly contradicts or supports much of anything seems highly erogenous.

    Thirdly
    “No need for a PhD or for powerful computers and GCM’s models to issue an accurate climate forecast. It is even probable that Scafetta’s small exercise already provides a much more accurate forecast than any of here above discussed scenarios.”

    The operative word there is "accurate". How is Scafetta's accuracy being judged? The paper was published in March 2010 and in December 2010 you're claiming it has made accurate climate forecasts?

    • Sharper00,

      I assume you’re referring to Scafetta’s paper when you say “I’m seeing a paper referenced favorably which is dripping in policy advocacy”. Could you provide some citations? I read through the entire thing and didn’t see any recommendations regarding policy.

      • Gene,

        I think it's reasonable to say that in particular instances where scientists are considered to be advocating policy they're generally not doing it in the form of "I hereby approve policy X" but rather do so in the form of lending undue weight to the conclusions those policies are based on and possibly picking convenient moments (with regard to the media cycle) to begin pushing particular conclusions. So a particular scientist might not explicitly endorse cap-and-trade but might well get involved in the policy debate at a politically sensitive time for it.

        So taking the Scafetta paper we have:

        "More than 30,000 scientists in America (including 9,029 PhDs) have recently signed a petition stating that those claims are extreme"

        i.e. the Oregon petition, something too flawed and frankly ridiculous (you can find similar efforts disputing evolution too) to appear in a paper purporting to be doing science.

        “Indeed, the Hockey Stick temperature graph does not have any historical credibility because between 1000 and 1400, the Vikings had farms and villages on the coast of Greenland,”

        I facepalm (figuratively) when I see this as a comment on a blog; imagine my reaction to seeing it in a paper. The entire hockey stick section is frankly an odd addition to the paper, since A) we know the conclusions of the hockey stick (and subsequent papers) are correct and B) even if it weren't, it doesn't do much for claims about what's causing warming within the modern instrumental record anyway.

        Already mentioned above, but the section on the reliability of surface stations is referenced to "Surface Temperature Records: Policy Driven Deception?", also published by SPPI. If you're familiar with the topic you'll know we're still waiting for Mr Watts to publish his actual analysis showing that bad station siting introduces a warming bias.

        Then in the conclusion you have stuff like

        "All this AGWT apocalypticism is extensively rebutted in Climate Change Reconsidered [3] by using scientific research based on actual data. Al Gore's movie has been elegantly rebutted by Christopher Monckton of Brenchley in "35 Inconvenient Truths: The errors in Al Gores movie" SPPI (2007), http://scienceandpublicpolicy.org/monckton/goreerrors.html

        So as I say, dripping in policy advocacy (or maybe "activism" would be better, the distinction is often not clear to me).

      • Ah so we do know the HS results are right? How did you come to that conclusion, by reading Mann? Here is a recent reconstruction I posted on this thread before:
        http://www.kolumbus.fi/boris.winterhalter/FIG/LjungqvistNHtemp2000a.gif
        Looks nothing like a hockey stick.
        And I also recommend you to read the McShane & Wyner paper and just forget about the hockey team reconstructions for a while..

      • “Ah so we do know the HS results are right? How did you come to that conclusion, by reading Mann?”

        By reading the NAS report on the original paper and subsequent papers by Mann and other authors all of which reach similar conclusions.

        “And I also recommend you to read the McShane & Wyner paper “

        You just presented a reconstruction and then recommended a paper which says reconstructions are impossible. I assume this is the usual shtick of “Temperature reconstructions can’t be done but if they could they’d definitely show there’s nothing unusual about modern temperatures”

      • Sharper00,

        I think it’s reasonable to say that in particular instances where scientists are considered to be advocating policy they’re generally not doing it in the form of “I hereby approve policy X” but rather do so in the form of lending undue weight to the conclusions those policies are based on

        I don’t doubt that there are those who advocate in a more subtle manner, but there are plenty of instances where the advocacy is outright. These range from the “must take steps to reduce emissions now” to the “death trains” of Hansen. Subtle advocacy is harder to pick out in that it requires determining “undue weight” (not my area of expertise and potentially debatable for those with the requisite expertise) and inferring motive (tricky at best).

        So as I say, dripping in policy advocacy (or maybe "activism" would be better, the distinction is often not clear to me).

        I’ll suggest “partisan” as a good adjective for the last example in particular and the tone of the others in general. I can think of multiple motives for that type of tone, of which policy advocacy is only one. I can agree (strongly) that a partisan tone is both offputting and detrimental to credibility, regardless of the “side” of the speaker.

    • Great comment – I have to say though, I’m assuming you meant either “erroneous” or “egregious” rather than “erogenous”… :D

    • How come defenders of apocalyptic global warming always fall back on demanding that skeptics and critics provide a replacement mechanism?
      That red herring of a false argument is simply distraction.
      Skeptics have no need to replace the consensus.
      All that is required is to show that it fails.
      Perhaps together we can learn enough about climate to make meaningful predictions. As of now, we are faced with a powerful consensus selling bad data, bad predictions and demanding really bad policies. Stopping the damage does not mean having to find a cure.
      There is no evidence that there is anything to even cure, much less a problem.

    • “To say that it strongly contradicts anthropogenic warming would suggest it provides strong evidence for an alternative mechanism” – why so?

      To “strongly contradict” something, you don’t have to have an alternative – just evidence that disconfirms anthropogenic warming. Standard Believer error.

  6. Regardless of whatever Scafetta has to say.

    It would appear that the current GCMs don't have any decadal skill at emulating Hadley cells.
    http://www.atmos.washington.edu/~qfu/Publications/jc.johanson.2009.pdf

  7. If someone is bothered about the paper 'dripping in policy advocacy' then I recommend reading the paper which has gone through the standard peer-review process at the Journal of Atmospheric and Solar-Terrestrial Physics:
    http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf
    Although the SPPI paper is reasonable as well.

  8. Hum? The AMO has a period of roughly 70 years, the PDO has a period that varies roughly between 20 and 30 years, the IPO (Inter-decadal Pacific Oscillation) has a period of roughly 15 to 30 years, the quasi-decadal Pacific oscillation has a period of roughly 8 to 12 years, and the southern oscillation and Indian Ocean Dipole have periods of roughly ? There may be a Mongolia Oscillation. The Arctic Oscillation is aperiodic (and a PITA).

    Piece of cake!

    • Dallas,
      I agree that there are different periods in different oceans, and that is why no simple oscillatory period for the world ocean as a whole manifests itself for long (on centennial scales). However, it does seem to be the case that the positive phases of several oceanic oscillations coincided to accentuate the warming in the late C20th.

      The never-answered question is how much they contributed to the warming, and how much the effect attributed to CO2 needs to be reduced, along with the sensitivity estimate.

      Given that the phase reversal of the PDO and AMO seems to be causing a cooling effect about equal to or a bit greater than that claimed for CO2 since 2003, it's a pertinent question IMO.

      • The AMO and global or sea surface temperatures are not well correlated, and during many decades over the past century they appear to be anti-correlated –
        AMO

        Correlations with the PDO are better, but the positive and negative phases tend to cancel each other out over the past century –
        PDO

        Based on the above, I interpret the data as indicating that these oscillations are more significant for explaining climate temperature variability than the centennial trends (a simple way to check such correlations is sketched at the end of this comment).

        Presumably, this is “natural” variability, although an anthropogenic role in PDO, AMO, and ENSO has been conjectured.
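        A back-of-the-envelope way to check such correlations, assuming annual AMO index and global temperature anomaly series are available (the file and column names below are placeholders, and the detrending choice matters a lot here):

            import numpy as np
            import pandas as pd

            # Hypothetical inputs: annual AMO index and global-mean temperature anomalies.
            amo = pd.read_csv("amo_annual.csv")    # columns: year, amo
            gmst = pd.read_csv("gmst_annual.csv")  # columns: year, anom
            df = amo.merge(gmst, on="year")

            # Remove the linear century-scale trend from the temperature series,
            # since the AMO index is itself usually defined from detrended N. Atlantic SST.
            trend = np.polyfit(df["year"], df["anom"], 1)
            df["anom_detr"] = df["anom"] - np.polyval(trend, df["year"])

            print("corr(AMO, raw GMST):      ", round(df["amo"].corr(df["anom"]), 2))
            print("corr(AMO, detrended GMST):", round(df["amo"].corr(df["anom_detr"]), 2))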

      • It would be interesting to see how the PDO, AMO etc correlate with cloud cover and snow cover – both of which affect the albedo and hence have a direct effect on the radiative balance.

    • David L. Hagen

      Any evidence for a 20-30 year PDO? I have seen a 50-60 year PDO cycle with half-cycle cooling/warming trends of ~25-30 years. See Don Easterbrook, EVIDENCE OF THE CAUSE OF GLOBAL WARMING AND COOLING, Figs. 22, 23, 31, 41.

      • David L. Hagen

        Judith
        Re: “We are currently in the warm phase of the AMO (since 1995) and the cool phase of the PDO (since about 2008),”
        Don Easterbrook comments:

        Four distinct periods of climate, two warming and two cooling, occurred: a cool period from 1880 to about 1915; a warm period from about 1915 to about 1945; a cool period from about 1945 to 1977; and a warm period from 1977 to 1998. Global cooling has occurred since 1999. These warm/cool periods correspond almost exactly to sea surface temperature changes (the Pacific Decadal Oscillation) and glacier advance and retreat. . . .
        The warm PDO of 1977–1998 turned cool in 1999, and satellite thermal imagery shows that it has become entrenched, assuring at least three decades of global cooling.

      • “Global cooling has occurred since 1999.”

        Wrong. A lie. False. Utter baloney.

        Moderation note: such appellations do not further scientific discourse, and the words "lie" and "baloney" when referring to a specific person's statements or arguments are out of bounds. Present an argument refuting the statement if you disagree with it.

      • David L. Hagen

        Derecho64
        See Lucia's statistical analysis of the HadCRUT trend from 2000-2010:
        Temperature anomaly, OLS, and MEI-corrected, over the most recent 10-year period.
        Trend (temperature): y = -0.0011x + 2.5007 (x = year).

        What part of "-" do you not understand?
        A negative trend since 2000 appears "cooling" (or ~flat) to me in that data set. Clearly a cooler trend than the IPCC's 0.2 C/decade (a sketch of this kind of trend calculation, with its uncertainty attached, follows at the end of this comment).

        Judith gives a professional response on the PDO, noting the interim variations. She properly notes the "noise" during the transition and that finalizing the dividing intercept will likely take some more time.

        Furthermore, you make a direct libelous ad hominem accusation of “a lie” while citing no evidence at all.

        Grow up and raise your comments to professional scientific discourse!

        http://rankexploits.com/musings/2010/hadcrut-nh-sh-temperature-rose-in-november/
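        For reference, a minimal sketch of this kind of OLS trend calculation with a rough error bar attached (the file name is a placeholder, and the naive standard error below ignores the autocorrelation corrections Lucia applies, so it understates the real uncertainty):

            import numpy as np
            import pandas as pd

            # Hypothetical input: monthly global temperature anomalies, columns "date", "anom".
            df = pd.read_csv("hadcrut_monthly.csv", parse_dates=["date"])
            df = df[(df["date"] >= "2000-01-01") & (df["date"] <= "2010-12-31")]

            t = (df["date"].dt.year + (df["date"].dt.month - 0.5) / 12.0).to_numpy()
            y = df["anom"].to_numpy(dtype=float)

            # Ordinary least squares: slope in deg C per year, with a naive standard error
            # (no correction for month-to-month autocorrelation).
            X = np.column_stack([np.ones_like(t), t])
            coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            sigma2 = res[0] / (len(y) - 2)
            cov = sigma2 * np.linalg.inv(X.T @ X)
            slope, se = coef[1], np.sqrt(cov[1, 1])
            print(f"trend = {slope*10:+.3f} +/- {2*se*10:.3f} C per decade (2-sigma, naive)")

        Whether such a confidence interval excludes the projected ~0.2 C/decade warming rate is what determines whether a short-window negative trend is statistically meaningful.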

      • 10-11 years isn’t enough to determine a trend.

        Cherry-pick all you want, but that’s a form of lying. It’s not “libelous” to point out deception.

        MODERATION NOTE: “lying” is an inappropriate accusation, please desist. One makes weaker or stronger scientific arguments, that can be refuted or not. There is no lying involved.

      • Two points are sufficient to determine a trend; it's when you've only got one point that you have a problem (if you see what I mean).

      • Oh I don’t know. A trend is just a trend. Statistical significance can be demonstrated for much shorter periods. It is now about 10:40pm here and we have seen a statistically significant cooling trend since about 2pm.

        Raise your game Derecho64 :)

      • David L. Hagen

        Derecho64
        “10-11 years isn’t enough to determine a trend.”

        You now make the logical fallacy of equivocation or doublespeak.
        http://www.fallacyfiles.org/equivoqu.html
        See: Equivocating:

        to use equivocal terms in order to deceive, mislead, hedge, etc.; be deliberately ambiguous

        As ivpo points out, there are clear afternoon-evening trends – a portion of the day/night cycles.
        The temperature trend over November-December is clear, though it is a fraction of the 12-month cycle.
        You are falsely appealing to the need for a long period of evidence (e.g., a 30 year or longer trend) to support a long term (e.g., century long) climate change argument.

        Yet Easterbrook is clearly referring to the 60-year PDO cycle, approximated by sawtooth or square-wave 30-year warming/30-year cooling trends. A 10-year run of cooler temperatures or a lower temperature trend is clearly a significant fraction of 30 years, compared to warmer temperatures or a higher warming trend.

        See: On the recovery from the Little Ice Age Syun-Ichi Akasofu
        This shows the PDO cycle superimposed on the long term warming trend. To advocate a catastrophic anthropogenic global warming, you have to show a deviation from Akasofu’s “null hypothesis” of long term temperature rise since the LIA, with superimposed PDO trend, that statistically correlates with anthropogenic (not natural) CO2 change.
        See also his 55 page 2009 paper:
        On the recovery from the Little Ice Age
        (warning 55 MB)
        http://people.iarc.uaf.edu/~sakasofu/pdf/two_natural_components_recent_climate_change.pdf

        2) “Cherry-pick all you want”
        You have no evidence of “cherry-picking”. I just pulled the first graph I found off Lucia’s Blackboard. To prove “cherry-picking” you have to show that Lucia’s graph is a statistical outlier compared to all other known global temperature trends for that 2000 to 2010 period. You further have to show that I knew of other trends and picked this outlier to prove the point. You provided no evidence at all to support your case.

        3) “but that’s a form of lying.”
        "Lying: p. pr. & vb. n. of Lie, to tell a falsehood."
        Noun 1. lying – the deliberate act of deviating from the truth. Synonyms: prevarication, fabrication

        You make a formal moral accusation without evidence. To support this, you must show that what I stated is contrary to the facts, and that I knew they were contrary to the facts.
        The context is Easterbrook claiming a change in temperature or temperature trends due to the PDO, from the previous 30-year warm period, which he gives as 1970-1999, to the subsequent PDO cool period since 1999. Compare Easterbrook's systematic evidence of long-term PDO oscillations. To support your argument, you must show that I knew Easterbrook's evidence of PDO oscillations to be false, and that I knew to be false his claim that a 30-year warmer/more-warming period is now being followed by a 30-year cooler/less-warming period.
        I know of no evidence refuting these PDO oscillations.
        You have the burden of proof to show evidence refuting those PDO oscillations, and that I knew of that evidence.
        You have made a defamatory false accusation against me regarding a statement regarding scientific information.
        See: Laws on False Accusation

        The law protects those who have been wrongly accused of crimes when those accusations are untrue and have caused damage to the person’s reputation. The law allows those falsely accused of a crime to pursue a cause of action in court, generally based on defamation of character, which requires the accused to prove that a false statement was made, the statement was conveyed to a third party and that statement caused harm.

        I ask you to withdraw your claim that I lied.

        3) “It’s not “libelous” to point out deception.”

        False accusation laws protect victims of libel and slander. Libel is a type of defamation that is made in writing, for example, through mediums such as newspapers or magazines. Slander is the spoken form of defamation.

        To show defamation:

        that there was a false statement of fact, that the statement was conveyed to a third party, and that the statement is understood to be about you and tends to harm your reputation. . . . Recognize the defenses to a defamation of character suit. Truth is an absolute defense to a defamation action. A privileged statement, such as a statement made in court, is also protected. Likewise, innocent dissemination and consent to the statement being made are also defenses.

        http://www.ehow.com/how_2063889_sue-defamation-character.html

        You have not provided any evidence at all, let alone sufficient to prove “deception”. I hold your speech to be libelous and defamatory.

        Raise your conduct out of the gutter and into the court of scientific discourse. Otherwise, withdraw and stop wasting our time.

        I ask you to withdraw your accusations of scientifically false statements, cherry-picking, and lying.

        If you persist with your guttermouth, you will be accounted scientifically reprobate and will be shunned.
        http://www.thefreedictionary.com/reprobate

      • You tend to prolixity.

        You (and Don Easterbrook) can’t claim “global cooling” based on a 10 or 11 year trend, no matter how tedious the curve-fitting exercises one undertakes.

      • David L. Hagen

        Thus you judge yourself as a stubborn reprobate.

      • David L. Hagen

        D64
        Vaughan Pratt’s analysis shows you have been the most promiscuous poster here. Focus on quality not bombast.

        Easterbrook's "Global Cooling" comments focus on the claim that "global" temperature has been cooling since 1999, per the change in the PDO, NOT that long-term "climate change" is statistically proven to be cooling. See his projections based on whether the solar cycle enters another Dalton minimum or a Maunder minimum as in the Little Ice Age.

      • Given that Easterbrook has been deceptive before (“hide the incline” and all that), I’m not disposed to believe anything he claims.

      • David L. Hagen

        D64
        See the Moderation Comments above regarding your ad hominem attacks.
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-25954
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26135

        By accusing Easterbrook of being "deceptive" you are again attributing moral failings without evidence, and in direct breach of the blog rules.
        Do not attribute motives to another participant.
        http://judithcurry.com/2011/01/01/blogospheric-new-years-resolution/#more-1538

        Again, on your accusation of "hide the incline", you have not shown evidence of that. The major scientific issue at stake in Easterbrook's use of prehistoric data is how to relate proxy temperature evidence to the modern temperature record. The blog you linked to claims the current global temperature is well above that of the Medieval Warm Period. That is a major bone of contention.

        See The Medieval Warm Period – a global phenomenon, unprecedented warming, or unprecedented data manipulation?
        http://wattsupwiththat.com/2009/11/29/the-medieval-warm-period-a-global-phenonmena-unprecedented-warming-or-unprecedented-data-manipulation/

      • See this previous article that has links to analyses of Easterbrook’s deceptive practices.

      • The PDO started flickering in 1999, when it dropped cool; then it came back warm for about 5 years, then flickered again. Since 2008, it has been decisively cool. So in principle you can call the switch anytime between 1999 and 2008; if you low-pass the time series, you might come up with 2005 or so for the switch, depending on how you do the time series analysis. We probably need a few more years of hindsight before we can interpret the switch unambiguously.

        Note, when the AMO switched phase in 1995, it was like turning on a lightbulb, very decisive. The PDO switch was a flickering one.

      • Dr Curry
        AMO 1995 phase switch vs. raw (not smoothed) de-trended NA Gateway data:
        year AMO NAG
        1990 -0.015 -0.534
        1991 -0.111 0.224
        1992 -0.200 -0.018
        1993 -0.19 -0.259
        1994 -0.149 -0.167
        1995 0.159 -0.409
        1996 -0.035 0.016
        1997 0.076 -0.225
        1998 0.4 0.200
        1999 0.152 0.958
        2000 0.058 0.717
        An Excel graph is even more telling.

      • Sorry, I made a mess of that; I copied the NAG data a year out.
        year        AMO        NAG
        1990      -0.015      0.224
        1991      -0.111     -0.018
        1992      -0.200     -0.259
        1993      -0.190     -0.167
        1994      -0.149     -0.409
        1995       0.159      0.016
        1996      -0.035     -0.225
        1997       0.076      0.200
        1998       0.400      0.958
        1999       0.152      0.717
        2000       0.058      0.300
        Must be a coincidence! (I only plotted and compared smoothed data before.)
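
        For anyone wanting to quantify the "coincidence": a minimal sketch in Python, using the values transcribed from the corrected table above, that computes the Pearson correlation between the two unsmoothed annual series.

import numpy as np

# Annual values transcribed from the corrected table above (1990-2000).
years = np.arange(1990, 2001)
amo = np.array([-0.015, -0.111, -0.200, -0.190, -0.149,
                 0.159, -0.035,  0.076,  0.400,  0.152,  0.058])
nag = np.array([ 0.224, -0.018, -0.259, -0.167, -0.409,
                 0.016, -0.225,  0.200,  0.958,  0.717,  0.300])

# Pearson correlation between the unsmoothed AMO and NA Gateway series.
r = np.corrcoef(amo, nag)[0, 1]
print(f"Pearson r, 1990-2000 annual, unsmoothed: {r:.2f}")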

      • I looked over Easterbrook’s presentation. It’s rife with carefully-chosen data, “redrawn” figures (I wouldn’t trust Easterbrook with those at all), and unproven claims – in short, what I’ve come to expect from a “skeptic”.

  9. I am interested in the idea that a number of oscillations are occurring with different periods. This would lead them to reinforce at some points and cancel each other at other points. If there aren’t too many, wouldn’t something as simple as the least common multiple give longer cycles for the more extreme effects?
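
    A quick way to see the effect being described: a minimal sketch (the 60- and 75-year periods are chosen purely for illustration, not taken from any dataset) showing that two superposed oscillations alternately reinforce and cancel, with the combined pattern repeating exactly over the least common multiple of the periods.

import numpy as np

# Two illustrative periods, in years (assumptions for the demo only).
p1, p2 = 60.0, 75.0
t = np.arange(0, 601)                        # annual samples over 600 years
combined = np.sin(2*np.pi*t/p1) + np.sin(2*np.pi*t/p2)

beat = 1.0 / abs(1.0/p1 - 1.0/p2)            # period of the beat envelope
print(f"beat period ~ {beat:.0f} yr; exact repeat every {np.lcm(60, 75)} yr (the LCM)")
print(f"combined amplitude ranges from {combined.min():.2f} to {combined.max():.2f}")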

  10. A fundamental problem in interpreting 60-year cyclicity from the global temperature record is that the cyclicity isn't actually global. The warming-cooling-warming pattern that generates it is confined dominantly to higher northern latitudes, i.e. to less than a quarter of the earth's surface. There's no good evidence for any cyclicity in the temperature record over the rest of the earth. This makes it difficult to explain the cyclicity in terms of external forcings such as Milankovitch cycles or Jovian motions.

    So what does cause the cyclicity? Well, it occurs mostly in and around the Arctic and North Atlantic Oceans, and the correlation between temperatures in these areas and the Atlantic Multidecadal Oscillation is far too close to be coincidental, so I’m going to postulate that it’s mostly if not entirely caused by the AMO.

    • The Scafetta paper mentioned earlier in this thread by juakola (http://www.fel.duke.edu/~scafetta/pdf/scafetta-JSTP2.pdf) looks at HadCRUT3 from 1850 to 2009 (monthly sampled) across global/N. Hemi./S. Hemi. and sea/land and claims to identify cycles of 58.7 – 67.4 years in the record across these various categories (see Table 1). The paper discusses some of the possible reasons for the range.

      (BTW I thought I posted a similar comment, but it failed to show, so if this repeats apologies)

      • HAS

        Thanks for your comments.

        Scafetta may well be right in claiming that the 60-70 year temperature cycle is detectable everywhere from power spectrum analysis. However, when we look at the actual temperature records we find that the amplitude of the 1900-1970 cycle (1900-40 warming minus 1940-1970 cooling) decreased from 2.0C in the Arctic to 0.5C at 45N latitude and to 0.1C south of 20N latitude. In other words, it was concentrated in a relatively small area. It appears as a visible feature in the global record only because its amplitude is so large.

        As I noted in my original post, the temporal and spatial correlation between this cycle and the AMO is too close to be coincidental. And if the AMO is indeed responsible for putting large cyclic dents in the global temperature record we should be paying more attention to it than we are.

    • The warming-cooling-warming pattern that generates it is confined dominantly to higher northern latitudes, i.e. to less than a quarter of the earth's surface.

      The proportion of land mass to ocean is higher in the Northern Hemisphere than in the Southern Hemisphere.
      The current satellite-measured NH land warming trend is 0.24 C/decade; the SH land trend is 0.08 C/decade.
      http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

      Given the ability of the oceans to act as a buffer, it wouldn't be unreasonable to expect NH surface temperature trends to swing more than Southern Hemisphere surface temperature trends.

  11. Good grief, just caught this. I mistakenly published this post before it was ready; now that it has comments I will leave it up. I should have the post finished tomorrow at noon.

    • David L. Hagen

      Judith
      Re: “Regional climate predictions of rainfall” for scenarios 2010-2030.
      See WJR Alexander et al. Linkages between solar activity, climate predictability and water resource development, Journal of the South African Institution of Civil Engineering • Volume 49 Number 2 June 2007 pp 32-44.
      http://nzclimatescience.net/images/PDFs/alexander2707.pdf
      See especially Fig. 1, Table 3.

      It is also shown with a high degree of assurance that there is a synchronous linkage between the statistically significant, 21-year periodicity in these processes and the acceleration and deceleration of the sun as it moves through galactic space. Despite a diligent search, no evidence could be found of trends in the data that could be attributed to human activities.
      —————–
      The average number of sunspots in the alternate cycles that make up the double cycles were +706 and –664, demonstrating a meaningful difference in sunspot activity in the alternating cycles. . . .
      The starting point was the incontestable, statistically significant (95 %), 21-year periodicity in the South African rainfall, river flow and other hydrometeorological data. . . .
      Another observation is that the magnitudes of the periodic changes relative to the long-term mean values, increase from evaporation (absent) to rainfall, to river flow, to flood peak maxima. Together, these characteristics indicate that the periodicity is amplified by the processes involved in the poleward redistribution of solar energy. . . .
      There is an almost three-fold, sudden increase in the annual flows in the Vaal River from the three previous years to the three subsequent years. This is directly associated with a six-fold increase in sunspot numbers. The second important point is the consistency in the range of sunspot numbers before and after the reversal. The totals for the three prior years varied between 25 and 60, and the totals of the three immediately subsequent years varied between 250 and 400. It is very clear that these are systematic changes associated with the sunspot minima, and are not random events.

      The 2010-2030 period is about the length of one 21-year double (Hale) solar cycle. From Alexander's South African evaluation, there are likely to be strong changes in precipitation just before and after the solar minimum within the asymmetric 21-year Hale cycle. The unusually deep solar minimum at the start of Cycle 24 may further amplify the roughly 300% difference that Alexander observed correlated with the 21-year cycle.
      Detailed analyses may be needed to see how this applies to other regions, or whether there are phase differences between Southern Africa and other regions.

      • David, thanks this is a very interesting paper. It supports my point that regional data analyses can often produce something useful in terms of decadal scale predictions, especially when there is a very long time series of observations.

  12. Tallbloke,

    I agree. Synchronized oscillations have a pronounced impact on global average temperature. Predicting the synchronizations is no easy task, nor is determining their impact. Determining the global average temperature is no simple task either. Pattern matching and jumping to conclusions, on the other hand, seem to be pretty easy. Arrhenius sought to prove that glacials/interglacials were caused by CO2. His preconception biased his results. He has a legacy that still lives in climate science, both pro- and anti-warming.

    The best paper I have read on the impact of natural variations on climate was A.A. Tsonis's, who doesn't have a horse in the race. While far from definitive, it indicates the relative strength of natural forcing. Chaos mathematics and satellite-era measurements seem to produce meaningful results. They don't improve the foresight or hindsight of the models significantly though, in my humble opinion, at least not yet.

    While I doubt that refining regional models to predict regional climate a decade in the future will be all that successful anytime soon, it should be enlightening. Regional impacts should have more global significance than they are currently given credit for. That now-fictitious Medieval Warm Period would have global implications (my totally uninformed thought). If I plugged estimated temperature values that would produce the anecdotal evidence of the MWP into the GISS temperature calculations, that big-ass red blob would make news. Then again, I don't have a horse in the race either.

  13. The water cycle isn't fully understood, which presents a roadblock to regional and decadal forecasting. Though cycles have been discussed in numerous studies (most devoted to the fishing industry and to glacials), an integrated model of the oscillations and their interrelationships has yet to be presented, and it may take decades to fully define and prove one.

    A 3D visualization tool that can represent this level of complexity has also yet to be developed.

    The Sensor Web will help to resolve many of these uncertainties, but it cannot be aligned with the past 30 years of the current 50-60 year cycle due to lack of data.

    ESTABLISHING THE GLOBAL FRESH WATER CYCLE SENSOR WEB
    Peter H. Hildebrand
    NASA Goddard Space Flight Center, Greenbelt, MD

    “The overall usefulness of the water cycle observational system will be limited by the ability to communicate, integrate and analyze this diverse set of water cycle information in a computational framework. Major development issues will include the data communications and pathways, the multi-dimensional data assimilation approaches, and the final reduction and delivery of data products in a manner suitable for use by diverse, non-technical users.”

    • statistical methods and natural climate variability (global and regional) on decadal time scales

      – Natural climate variability implies that all the assumptions about anomalies are removed from the incoming data without discounting the given atmospheric mix (I don't know if this is possible, as all the satellite information requires assumptions in the transforms).
      – Hard science is typically bottom-up logic, yet when given an apparently impossible problem (for example, emulating human hand motion with a robotic design) the engineering and design community frequently delivers top-down solutions like "Mimic".
      – A meta climate model framework (the current "state" of understanding) for the system, both resolved and unresolved, from its most general (first) to its most specific (last).
      – A foundation framework for all of the mathematics, before climate-assumption modules are defined and added on top of the framework (a road map to the current state of the worldwide climate science model).
      – Rules for open access to all aspects of the programming framework, which cannot be hacked by any "evil doer". ;)

  14. Judith:

    However much simpler Scafetta’s paper may appear than all the “powerful computers and GCM’s models,” it has a gaping flaw in its logic, namely the 0.28 °C shift it needs to get a reasonable fit between the period centered on 1910 and that 60 years later centered on 1970.

    In 1910, CO2 would have been around 292 ppmv, not much more than the pre-industrial base of 280 ppmv. In 1970 we can see from the Keeling curve that it is 326 ppmv. That's an increase over those 60 years by a factor of about 1.114, or 11.4%. The log base 2 of 1.114 is 0.156. Multiplying this by 1.8 °C per doubling (the observed instantaneous climate sensitivity, as distinct from the various IPCC notions of the concept, which mean something quite different), we can predict a temperature rise of 0.28 °C over that period.
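
    The arithmetic is easy to check; a minimal sketch in Python (the CO2 values and the 1.8 °C/doubling figure are simply the numbers assumed in this comment, not independently sourced):

from math import log2

# Back-of-envelope check of the logarithmic CO2 calculation described above.
c_1910, c_1970 = 292.0, 326.0        # ppmv, as assumed in the comment
sensitivity = 1.8                    # deg C per CO2 doubling, as assumed

ratio = c_1970 / c_1910              # ~1.12, i.e. roughly an 11-12% increase
delta_T = sensitivity * log2(ratio)  # ~0.28-0.29 deg C predicted over 1910-1970
print(f"ratio = {ratio:.3f}, predicted rise = {delta_T:.2f} C")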

    Far from refuting global warming, with his 0.28 °C shift Scafetta has confirmed it with devastating accuracy!

    It’s really too bad this very nice confirmation is undermined by all the polemics in the paper, which makes it hard for any legitimate climate researcher to take seriously what is clearly a legitimate and accurate result.

    • Yes, the need to de-trend the temperature series (using y = 2.8*10^-5*(x - 1850)^2 - 0.041) rather detracts, but attribution of this effect to CO2 alone isn't indicated by the paper.
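
      For concreteness, a minimal sketch of that quadratic de-trending (the temperature series below is only a placeholder; substitute annual HadCRUT3 anomalies to apply it to the actual record):

import numpy as np

# The quadratic trend quoted above: y = 2.8e-5*(x - 1850)**2 - 0.041
def quadratic_trend(year):
    return 2.8e-5 * (year - 1850)**2 - 0.041

years = np.arange(1850, 2010)
temps = np.zeros(len(years))                 # placeholder for HadCRUT3 anomalies
detrended = temps - quadratic_trend(years)
print("trend removed at 1850, 1910, 1970, 2009:",
      [round(quadratic_trend(y), 3) for y in (1850, 1910, 1970, 2009)])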

      • attribution of this effect to CO2 alone isn’t indicated by the paper.

        Correct. The paper indicates that the effect is attributable to the movements of astronomical bodies.

        My point is that astronomy need not be the only explanation, since CO2 alone can completely account for the observed effect.

        (But PLEASE NOTE, that is NOT the same thing as saying CO2 is the only agent of climate change. PLEASE STOP SAYING THIS. I’m really tired of hearing this over and over. I have never said this myself and I never will since it’s demonstrably false. Stop putting words in my mouth. Capiche?)

        In my response below to BH I take this a step further by pointing out that the solar system would have to be moving in a way that tracks the exponential growth of human population and technology. This would be a triumph for astrology.

      • Can you explain to a layman exactly what ‘technology’ you feel that you can measure.

        And show some figures that represent exponential growth.

        I can go and measure lots of things, but I don’t have a ‘technology meter’ that gives me numerical values AFAIK.

        Please clarify.

      • AnyColourYouLike

        Speaking of clarification, and given that Vaughan Pratt’s own website labels those who don’t agree with his views on CO2 induced climate change as either 1/ Stupid, 2/ Uneducated, 3/ Unbelievers, or 4/ Liars,

        http://boole.stanford.edu/dotsigs.html#ClimateChange

        rather than rebutting each post in technical detail Vaughan, would it not be quicker just to assign a number in response?

        Eg

        Latimer Alder

        “Can you explain to a layman exactly what ‘technology’ you feel that you can measure.

        And show some figures that represent exponential growth.

        I can go and measure lots of things, but I don’t have a ‘technology meter’ that gives me numerical values AFAIK.

        Please clarify.”

        Answer (1)Latimer is stoopido…next? etc.

      • would it not be quicker just to assign a number in response?

        There are two reasons not to do so. First, I consider myself a guest on Judith’s blog, and it would be ungracious of me to treat it as though I owned it, especially in the particularly immature manner you suggest—what would be the point? Second, while it’s true that I allow myself more impatience on my own website than on others, you may have overlooked that I also try to include reasons even there on occasion.

        While on this blog I will continue to respond to reasonable questions with what I hope are reasonable answers. If Latimer were later to repeatedly pretend that I never answered his question, I might eventually lose my cool the way I did yesterday with Arfur Bryant, who stubbornly refuses to listen to the point that “trend” connotes linearity while continuing to insist on fitting a straight line with slope 0.05 °C per decade to a graph that is obviously not linear.

        (I spent many fruitless exchanges trying to get AB to see this point about linearity of trends some weeks ago. I found it an effort to be patient, but from his responses he apparently appreciated the effort. But eventually, prompted by PDA, I decided no one could be that stupid and that he must be pulling my chain, his protestations to the contrary notwithstanding, so I called it quits at that point and tried to put that huge waste of time out of my mind. Seeing it again yesterday was something like a PTSD reaction. He and Jan Pompe are a pair.)

        If none of this makes any sense to you, I’m sure you’ll find my frustration incomprehensible. If I were someone else I wouldn’t let things bother me that way, but I’m not.

      • AnyColourYouLike

        “There are two reasons not to do so. First, I consider myself a guest on Judith’s blog, and it would be ungracious of me to treat it as though I owned it, especially in the particularly immature manner you suggest—what would be the point?”

        I think it would be obvious to anyone with a sense of humour that I was being facetious in order to make a point Vaughan, but perhaps you don’t have a sense of humour? Yes, you are a guest here, though given your self-confessed, hardline views on the dishonesty or stupidity of those you label climate “deniers”, I wonder what possible benefit you hope to derive from it?

        As for this "impatience" you ascribe to the musings on your own blog… hmmm, are you schizophrenic? Do you have two different personalities? You come here with your objective-scientist hat on, apparently debating in a spirit of open enquiry, whereas at home you take the hardest of lines with "deniers", 100% of whom are either 1/ Stupid, 2/ Uneducated, 3/ Unbelievers, or 4/ Liars.

        There seems to be a slight lack of consistency there; one might even say hypocrisy. Do you change personal beliefs like you change ties each morning? Mind you, what would I know? I'm obviously immature and either stupid, uneducated… etc. You get the picture. ;-)

      • There seems to be a slight lack of consistency there,

        Guess I’m not detecting your own sense of humour. You certainly don’t come across as gently poking fun.

      • Can you explain to a layman exactly what ‘technology’ you feel that you can measure.

        Depends on what you have in mind by “layman.” Let me know if the following is too technical for you.

        By far the most relevant measure of increasing technology is per capita fuel consumption as a function of time. Per capita automobile ownership in 1950 was 0.02 cars; today it is 0.08, an increase by a factor of four, a greater ratio than the population increase from 2.6 billion to 6.7 billion over the same period.

        Over the same period American consumption of meat and poultry rose from 144 pounds per person per year to 222 pounds, requiring proportionately more farm machinery, grain, etc.

        Production of plastic increased from 5 megatonnes a year in the 1950s to 100 megatonnes today. Dividing that 20x increase by the roughly 2.6x increase in population gives a per capita plastic increase of around 8x. The invention of plastics offloaded some of what was previously made from other materials, making it a bit harder to estimate the technology impact.

        But that’s true for all major inventions. Every new big invention allows each person to consume more power than before. Ships, trains, electricity, automobiles, piston-engine planes, jet planes all constitute per capita technology increases. Furthermore transport speeds have increased during the century, a further per capita impact. Each new invention imposes an additional load on the per capita energy budget.

        If I were tallbloke I’d be offering to bet you that the corresponding per capita numbers for 1900 were proportionally smaller than in 1950. But I agree with you that it would be great to have a comprehensive table in one place giving hundreds of statistics of this kind from one half-century to the next so we didn’t need to bet on it. Those numbers are out there, but it may take some work to round them up.

        Absent numbers, feel free to believe that the average modern human is consuming exactly the same amount of energy as in the year 1700. That would say something about your sense of history.
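
        A quick check of the per capita arithmetic above (all input figures are the ones quoted in this comment; the sketch only verifies the ratios):

# Ratios implied by the figures quoted above.
pop_1950, pop_today = 2.6e9, 6.7e9
cars_pc_1950, cars_pc_today = 0.02, 0.08
plastic_1950, plastic_today = 5.0, 100.0     # megatonnes per year

print("population ratio:", round(pop_today / pop_1950, 2))              # ~2.58
print("per-capita car ratio:", round(cars_pc_today / cars_pc_1950, 1))  # 4.0
print("per-capita plastic ratio:",
      round((plastic_today / plastic_1950) / (pop_today / pop_1950), 1))  # ~7.8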

      • Good.

        You actually had some measurements of something – in this case 'per capita fuel consumption', which you show has increased. (Didn't quite see the 'exponential' bit, but we can return to that favoured word of the Alarmists another day.)

        Fine. That is a much more meaningful statement than the generalised ‘technology’. Next time you want to make your point you will have something to back it up rather than just a Climatologist’s unsubstantiated assertions.

      • The de-trending is basically an unexplained residue that I don’t think the author was attributing to planetary motion. So I don’t think it’s quite right to say: “The paper indicates that the effect is attributable to the movements of astronomical bodies.”

        BTW I’m a bit surprised by the parenthetical outburst – I don’t think I’ve suggested you had any view on what caused climate change either in my earlier comment or elsewhere.

      • The de-trending is basically an unexplained residue that I don’t think the author was attributing to planetary motion. So I don’t think it’s quite right to say: “The paper indicates that the effect is attributable to the movements of astronomical bodies.”

        Scafetta acknowledges the quadratic curve as what the IPCC attributes to global warming, so he clearly understands that it’s not unexplained by the IPCC. He subtracts the global warming curve from the HADCRUT curve so they can be aligned, referring to the subtraction as “detrending,” but where does he claim it is unexplained? He can’t simply declare it to be unexplained on his own recognizance, he has to have an alternative to CO2 as an explanation for global warming or his argument doesn’t have a leg to stand on. That’s what all the planetary motion stuff is about. (Though in Section 8 he also suggests the possibility that global warming isn’t even happening but may be just urban heat island effect, closing of colder stations, etc.)

        BTW I’m a bit surprised by the parenthetical outburst – I don’t think I’ve suggested you had any view on what caused climate change either in my earlier comment or elsewhere.

        Sorry about that, I hadn’t intended it for you but for those who I was certain would read my statement “CO2 alone can completely account for the observed effect” (which is true) as meaning “CO2 alone can completely account for climate change” (which is false). The latter is frequently attributed to climate scientists. But you’re right, I shouldn’t get worked up about that attribution, better to just shrug it off as routine denier propaganda.

      • He can’t simply declare it to be unexplained on his own recognizance, he has to have an alternative to CO2 as an explanation for global warming or his argument doesn’t have a leg to stand on.

      • That's pretty close to a false dichotomy. CO2 is an explanation; it isn't the only one. Longer-period unforced variation comes to mind (Roman warm, Dark Ages cold, medieval warm, LIA cold, modern warm, e.g.). The CO2 hypothesis isn't proven true just because a credible alternative isn't offered.

      • Yes, but absent any other credible explanation we would choose the one that is consistent with other physical theory.

      • But then you reject things like continental drift because you don’t (at the time) know of a mechanism that allows continents to move (and besides, Wegener was really annoying). It wasn’t until the geological evidence that continents had been connected became overwhelming that anyone bothered to look seriously and found things like mid-oceanic ridges and very suddenly plate tectonics became the conventional wisdom. I was in grad school at the time. It was quite entertaining.

        The only reason we even need to choose is to support or reject an agenda. Otherwise it would be of purely academic interest. In fact, since we’re unlikely to actually do anything significant globally, it’s academic anyway. You don’t really expect China and India and pretty much the rest of the world to actually restrain the growth of CO2 emissions, much less reduce them, do you?

        If you’re a warmer, you better pray that Peak Oil has already come, Peak Coal isn’t far behind and shale gas is just another oil patch ploy to extract massive amounts of money from New York doctors.

      • The only reason we even need to choose is to support or reject an agenda

        You don’t believe in scientific curiosity? I’m one of those who is driven almost exclusively by curiosity. I don’t have any agenda except when I hear what I think are scientific misconceptions, and then I try to find the clearest possible arguments as to why they are misconceptions. If I can’t find those arguments then I’m willing to concede I may be wrong.

        I couldn’t care less about policy, only about the science. I’m easily entertained by people bent on destroying things, I view it as a learning opportunity, at least up to the point they destroy me. I am however very interested in trying to get people to see each other’s point of view, especially when both sides have merit.

      • BlueIce2HotSea

        “You don’t believe in scientific curiosity?”

        My take is that DeWitt Payne is advocating a suspension of judgment until taking a hard position becomes necessary.

        DeWitt’s comment reminded me of my first encounter with the phrase “Crackpot Theory”. It was in a geology textbook describing the label placed on the not yet mainstream idea of continental drift. The author then proceeded to explain the CD theory in great detail, which left me with no clue on how to judge which one was correct.

        It is clear that a hard-line authoritarian approach is antithetical to scientific curiosity. Yet we see it again and again.

        One reason may be that no matter how politely one may broach the subject of an alternative theory, it remains, at its core, an ever-so-subtle implied accusation of incompetence. To the extent that competence is a real issue, the louder the howls and the nastier the reaction to innocent curiosity.

      • That’s pretty close to a false dichotomy.

        Perhaps, but the following sentence from his abstract makes this question moot. “It is found that at least 60% of the global warming observed since 1970 has been induced by the combined effect of the above natural climate oscillations.”

        This, in combination with his attempts to argue that the remaining 40% has non-CO2 causes such as the urban heat island effect, closing of colder stations (which can't impact the record the way he thinks it can, because the record is anomaly-based and anomalies don't care whether they come from hot or cold stations), etc., makes it clear that he's proposing explanations of global warming that are alternatives to the CO2 explanation, as opposed to merely denying that global warming has any explanation.

    • Vaughan Pratt says:
      In 1910, CO2 would have been around 292 ppmv, not much more than the pre-industrial base of 280 ppmv

      This is just pure speculation.

      • This is just pure speculation.

        On the contrary, it is calculated from the Hofmann model for the dependence of CO2 on time. The Hofmann model is predicated on the exponential growth of population and technology, both of which we have a good handle on. If you have some evidence to show that the model is in error for some year, then it is incumbent on you to show that evidence. "Pure speculation" is not a valid objection to a model unless you believe every model of everything is pure speculation (which it may well be).
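
        For reference, a minimal sketch of the raised-exponential form referred to here, using the parameters quoted later in this thread (a 280 ppmv pre-industrial base plus an anthropogenic excess doubling every 32.5 years from 1790); it reproduces the ~292 ppmv figure for 1910 above and the 281.5 ppmv figure for 1810 mentioned just below:

# Raised-exponential CO2 law with the parameters quoted in this thread.
def co2_hofmann(year, base=280.0, t0=1790.0, doubling=32.5):
    return base + 2.0 ** ((year - t0) / doubling)

for y in (1810, 1910, 1970):
    print(y, round(co2_hofmann(y), 1))   # ~281.5, ~292.9, ~326.5 ppmv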

      • Ok, I looked. On the basis of the stomata record what would you say CO2 for, say, 1810 was? Hofmann’s formula predicts 281.5 ppmv, can you improve that estimate based on the stomata data?

      • I'd say that since the stomata values indicate bigger fluctuations in CO2 than the ice cores do, we'll find out what sort of fluctuations the modern record shows if cooling continues and Keeling does his work assiduously.

    • Thanks for this analysis. This Scafetta paper caught my eye but I haven't dug into it; the noise in the paper and at his website doesn't help his credibility though.

    • V Pratt says..

      Far from refuting global warming, with his 0.28 °C shift Scafetta has confirmed it with devastating accuracy!

      Quite possibly. But would we not then have to assume that all natural variations during that 60 year period amounted to zero?
      Are we aware of any other comparable 60 year periods where natural variations amount to zero?

      Also, if you are correct that sensitivity is 1.8 °C per doubling (including any natural variation), then we have absolutely nothing to worry about. Do we?

      • But would we not then have to assume that all natural variations during that 60 year period amounted to zero?

        Not at all. In effect Scafetta is doing an autocorrelation of the signal with itself, using a horizontal (time) shift of 60 years and a vertical (temperature) shift of 0.28 °C. This is a variant of my killAMO trick, where I smooth using a 60-year window (65.5 to be precise, which works slightly better than 60). Scafetta's approach matches the AMO (more precisely AMO + PDO) signal with a phase-shifted copy of itself, in a way that is not affected by the faster-moving natural influences on temperature (of which there are a lot, some quite substantial), which Scafetta's 60-year-wide window wipes out. While there may be slower-moving natural influences than the AMO, they appear to be too slow to show up in a mere 150-year span.

        Are we aware of any other comparable 60 year periods where natural variations amount to zero?

        When Scafetta’s autocorrelation technique is applied to 1850-1910 and 1910-1970, with the latter shifted up 0.16 °C, you get the same good agreement Scafetta found for 1880-1940 and 1940-2000 with the latter shifted up 0.28 °C. As in the other case this does not show there are no natural variations; rather it only shows that CO2 was not having as big an influence back then.

        Even more interesting is to take any year y between 1910 and 1950 and plot the periods [y-60,y] and [y,y+60]. You will notice the same thing Scafetta noticed in the case y = 1940, namely that the two plots have roughly the same shape except that the second is tilted up more or less—less for smaller y, more for larger y.

        If you then subtract (pointwise) the first plot from the second and take its mean, you will get a mean difference md(y). If you then plot md(y) for y from 1910 to 1950 you will find that it tracks CO2-induced warming cw(y) very well, where cw(y) = 1.8*lb(280 + 2^((y-1790)/32.5)) (lb being the base-2 logarithm) is the Arrhenius-Hofmann law for the warming in year y attributable solely to CO2, assuming 1.8 °C per doubling as the instantaneous observed climate sensitivity.
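
        A minimal sketch of that comparison (the temperature series here is only a placeholder; substitute annual HadCRUT3 anomalies indexed by year to reproduce the actual md(y) values). It prints md(y) alongside cw(y+30) - cw(y-30), the CO2-induced rise between the midpoints of the two windows, per the clarification a little further down this thread:

import numpy as np

# Arrhenius-Hofmann warming law as quoted above (lb = log base 2).
def cw(y, sensitivity=1.8, base=280.0, t0=1790.0, doubling=32.5):
    return sensitivity * np.log2(base + 2.0 ** ((y - t0) / doubling))

def md(y, temps):
    """Mean pointwise difference between the windows [y, y+60] and [y-60, y]."""
    first = np.array([temps[t] for t in range(y - 60, y + 1)])
    second = np.array([temps[t] for t in range(y, y + 61)])
    return (second - first).mean()

temps = {year: 0.0 for year in range(1850, 2011)}   # placeholder anomalies
for y in (1910, 1920, 1930, 1940, 1950):
    print(y, round(md(y, temps), 3), round(cw(y + 30) - cw(y - 30), 3))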

        In order for Scafetta’s astronomical explanation to hold, the orbits of the Moon, Jupiter, Saturn, etc. would have to be changing in a way that closely tracks the progress of the anthropogenic component of global warming throughout the period from 1880 to 1980. Since the latter is intimately linked to the exponential growth of population and technology, it would be a remarkable astronomical phenomenon if the planets were similarly linked, though astrologers would point out that they’d been saying this all along.

        Also, if you are correct in that sensitivity is 1.8DegC per doubling (including any natural variation) then we have absolutely nothing to worry about.

        What, me worry? ;)

        Assuming world population and fuel usage continue to follow the same exponential trends they have over the past century, then 1.8 °C per doubling, taking all known natural variations into account (which are very significant btw), will raise the temperature between 2 and 2.5 °C depending on whether you think anthropogenic CO2 doubles every 32.5 years (Hofmann’s estimate, giving the 2.5 figure) or every 42 years (my estimate, giving 2 °C and thus making me less of an alarmist than Hofmann).

        One benefit of that rise would appear to be new shipping lanes over the North Pole. And residents of Stockholm, location of Arrhenius’s university, will surely welcome the warmer winters.

        Another beneficiary is plants. Warming along with increased precipitation and nitrogen deposition are all beneficial consequences of increased CO2 for plant life. Since plants utilize CO2 in photosynthesis one might also expect plants to benefit directly from the increased CO2. However experiments at Stanford’s Jasper Ridge preserve over the past decade, described in the very interesting paper Grassland Responses to Global Environmental Changes Suppressed by Elevated CO2, indicate that CO2 in conjunction with the above side effects of CO2 suppresses root allocation thereby reducing net primary production. One possible cause is that the minerals in the soil are already maxed out.

        Then there are the hazards, for example ocean acidification leading to depletion of major components of the marine food chain, along with melting of the abundant frozen methane reserves in the Arctic, but quite enough has been said already about the hazards.

      • I should clarify that what md(y) is tracking is not cw(y) itself but cw(y+30) − cw(y−30), that is, the increase in CO2-induced warming between the midpoints of the two 60-year intervals. This is a steadily growing quantity.

        Let me emphasize the similarity between Scafetta’s autocorrelation method and my killAMO method, namely that we both average out the faster variations by using a large (60-65 year) window to smooth with. In Scafetta’s case the smoothing happens implicitly as a consequence of aligning his two 60-year plots vertically.

      • Assuming world population and fuel usage continue to follow the same exponential trends they have over the past century

        On what basis do you have to make that assumption?

        Pre-2002, the inflation-adjusted prices of coal and oil had actually declined since the late 1970s. Post-2002, the inflation-adjusted prices of coal and oil have increased substantially.

        China was exporting steam coal at $27/tonne in 2002. They were paying $118/tonne to import it just this week.

        Exactly where does the additional 6 billion tonnes of coal per year we would need to double CO2 come from?

        Asia is already a net importer. Europe is already a net importer. Africa can spare maybe 75 million tonnes per year if they forego having electricity for themselves. South America can spare maybe 75 million tonnes per year. The total capacity of all North American coal export facilities is in the neighborhood of 100 million tonnes a year.

        The Chinese have resorted to coal rationing. Maybe it’s because they have already figured out that ‘exponential growth’ in coal usage will only result in ‘exponential price increases’.

        My back-of-the-envelope calculation shows coal peaking between 2015 and 2020 at about 7 billion tonnes per year. It's purely a function of ramping up nuclear power industrial capacity at this point.

      • Thanks, Harry. Your arguments remind me of the equally compelling theoretical arguments that Moore’s Law of exponentially increasing speed, transistors, etc. has now hit the physics wall. Those arguments have been going on for some time now, but the ingenuity of the human race seems to be more than a match for them.

        Being a theorist myself I realize I should be more supportive of my fellow theorists. However unlike most theorists I’m also an empirically inclined pragmatist or you’d be seeing more support from me on your points.

        My back-of-the-envelope calculation shows coal peaking between 2015 and 2020 at about 7 billion tonnes per year. It's purely a function of ramping up nuclear power industrial capacity at this point.

        With a larger envelope you could distinguish between the energetic peak and the mass peak. The former comes much sooner than the latter. This is particularly relevant to global warming because the energetic peak benefits us by its efficiency while the mass peak harms us by the opposite.

        Exactly where does the additional 6 billion tonnes of coal per year we would need to double CO2 come from?

        I would have said the ground, except I’m not sure what you’re asking here. Are you saying the ground does not contain that much coal, or that we can’t afford to pay for it?

      • I am sceptical about the AMO "fiddle".
        There is a conflict between the UCAR and NOAA definitions of its period:
        http://www.cgd.ucar.edu/cas/catalog/climind/AMO.html (period of 60-80 years)
        http://www.aoml.noaa.gov/phod/amo_faq.php#faq_1 (phases last for 20-40 years, or x2 = 40-60 years)
        They both say "natural changes", since they have no clue what is causing it. The AMO is not simply "natural"; it has an identifiable source, which I call the North Atlantic precursor:
        http://www.vukcevic.talktalk.net/CET-NAP.htm
        or, if you take the rising trend out of it, the NA Gateway, as in
        http://www.vukcevic.talktalk.net/NPG.htm (last graph on the web page, with different smoothing).
        Take the source out and you get flat CETs, and the AMO disappears.
        The data on which this is based are relatively scant prior to 1650, but it does identify, within reason, all the major temperature changes.
        http://www.vukcevic.talktalk.net/NAP11-16.htm
        As it says: "physical process that climate science never considered".
        Take its source out and I think the CETs (1100-2000) would be flat as a pancake, except for a volcano here and there.
        Of course you and many others may, and most likely will, disagree; I just plot the data, and that is what the data show.
        But what is the data? Well, I have to write something up first; for the time being I am just putting down a foundation stone to pledge the case.

    • Sorry Dr Pratt but the gaping flaw is totally yours.

      1) Sensitivity to CO2 doubling depends on the feedback factor and may vary between 1.5 °C and 4.5 °C on average, and between 1 °C and 10 °C considering the huge uncertainties (Knutti & Hegerl, as referenced in Scafetta's paper).

      2) The 0.28 °C shift proposed by Scafetta applies to the complete 1880-2010 period, while your nice log calculation depends entirely on the chosen period, and the 0.28 result is only valid when applied between 1910 and 1970.

      For different periods the result would be totally different, which falsifies your approach:
      => 0.18 °C when applied between 1880 and 1940
      => 0.24 °C when applied between 1895 and 1965
      => 0.43 °C when applied between 1925 and 1985
      => 0.55 °C when applied between 1940 and 2000
      => 0.64 °C when applied between 1950 and 2010…

      3) There is no proven correlation between warming slope and CO2 concentration increase. The warming slope between 1910 and 1940 is the same as the one observed between 1970 and 2000 (+0.15 °C per decade), whereas the CO2 concentration increase was 2.5 times lower! This is also the main explanation for the low warming trend reproduced by models for the 1910-1940 period (+0.06 °C per decade calculated instead of +0.15 °C per decade measured, i.e. also a factor of 2.5 lower…).

      Basically the failure in your logic is the same as the one in the GCMs: they mainly rely on the hypothesis that warming is almost fully driven by CO2 concentration, which is formally falsified by simple comparison between the models' outputs and the temperature records.

      Whatever the origin (AMO, PDO, sun or CMSS speed, or just a mixture of all these factors), the fact is that we can observe an obvious 60-year cycle in the temperature records, and that the baseline pattern includes a 30-year warming trend (+0.15 °C per decade) and a 30-year slight cooling trend (-0.05 °C per decade). And the other fact is that the models totally fail to explain and reproduce these (multi)decadal patterns, and more especially the cooling trends.

      Therefore the models are, so far, formally invalidated and totally unable to provide any useful forecast of what Earth's climate could be in the coming decades.

      • Just noticed you posted this twice. My reply to your part 2) is here.

        Regarding 1), the wide range of climate sensitivities you're referring to is entirely theoretical. There can only be one instantaneous observed climate sensitivity, namely what the HADCRUT3 record shows in conjunction with the Keeling curve, both of which we know very accurately. Theoretical numbers like 4 °C have nothing to do with observed reality.

        Regarding 3),

        Whatever the origin (AMO, PDO, sun or CMSS speed, or just a mixture of all these factors), the fact is that we can observe an obvious 60-year cycle in the temperature records, and that the baseline pattern includes a 30-year warming trend (+0.15 °C per decade) and a 30-year slight cooling trend (-0.05 °C per decade).

        Yes, I’ve been making that exact point myself for weeks on this blog now. What we have here is a failure to communicate. If it’s my fault then I should stop trying to do the impossible and just go away.

        And the other fact is that the models totally fail to explain and reproduce these (multi)decadal patterns, and more especially the cooling trends.

        I’m the wrong person to comment on that since I’m not well informed about GCMs and other complex models. I work only with the simplest possible models, like the Arrhenius logarithmic law and the Hofmann raised exponential law. These in combination with the AMO, El Nino events and episodes, solar cycles, and volcanoes, all of which constitute natural agents of significant climate change, pretty much completely explain the temperature record. If you’re correct that GCMs don’t, then apparently simple models are better than complex, but I wouldn’t know.

        Perhaps Jupiter and Saturn also explain the temperature record as Scafetta suggests, but to do that they would have to be very well correlated with human consumption of fossil fuels, which the temperature record certainly is. Astrologers would feel vindicated.

        Let me say it again for emphasis.

        The AMO, El Nino events and episodes, solar cycles, and volcanoes are natural agents of significant climate change.

        I realize it has become a knee-jerk reflex of the denier routine that anyone claiming that CO2 is an agent of climate change is automatically claiming that it is the only agent of climate change. I try to make it my own knee-jerk reflex to point out that it isn’t. I’m sure we’ll get along just fine that way.

      • There is no sign of a 60-year cycle in the longest temperature record (the CET).
        http://www.vukcevic.talktalk.net/CETpr.htm
        The global temperature record is an unreliable "concoction".

      • The CET (temperature record for the Midlands of England) for the period 1850-now correlates very badly with HADCRUT3, plunging in 1880 when HADCRUT3 is peaking. If you can’t find the AMO in the modern CET period there’s no point looking for it earlier.

        The tree-ring data bank tracks climate for some 2000 locations spread over six continents, going back more than a century before the CET while being much more representative of global temperature. In "A tree-ring based reconstruction of the Atlantic Multidecadal Oscillation since 1567 A.D.", Geophys. Res. Lett. 31, L12205, doi:10.1029/2004GL019932, Stephen Gray et al. find clear signs not only of the AMO but even of the beats where the two periods drift out of phase and cancel at around 1730-1750, which is about where they should cancel if there are two periods of 56 years and 75 years (which also seems to give the best fit to the HADCRUT3 record).

        An interesting point they make is “Agreement between the observed and reconstructed values increases in the 20th century, which is likely a result of better sampling of the SST field after ~ 1900.” If they’re right about the latter, that would make tree-ring data more reliable than pre-1900 temperature data.

      • Dr. Pratt
        I am not sure that I entirely understand your interpretation of the CET/AMO relationship, so I have made my views known in a new post here:
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-25988

  15. Why does ARGO have to mathematically adjust the data from the oceans toward the warmer scale?
    Depending on the program created, the data can show what the programmer wants the model to achieve rather than letting the raw temperature data do the work.
    Hmmm… sort of sounds like other model work we've seen.

  16. Judith,
    I don't think NASA trusts its current climate model predictions.
    Here is an e-mail I received:

    Final text for ROSES 2010 Appendix A.17: NASA Energy and Water Cycle Study.

    The overarching long-term NASA Energy and Water Cycle Study (NEWS) grand challenge can be summarized as documenting and enabling improved, observationally based, predictions of water and energy cycle consequences of Earth system variability and change. This challenge requires documenting and predicting trends in the rate of the Earth’s water and energy cycling that corresponds to climate change and changes in the frequency and intensity of naturally occurring related meteorological and hydrologic events, which may vary as climate may vary in the future. The cycling of water and energy has obvious and significant implications for the health and prosperity of our society. The importance of documenting and predicting water and energy cycle variations and extremes is necessary to accomplish this benefit to society.

    NASA’s Energy and Water Cycle Study solicits projects to mine the vast data and model resources through innovative analyses to make progress against the NEWS goals. These projects should eschew focusing their efforts on product generation, model capability revision, or extensive model simulations. Instead, they should exploit existing resources that can be gained from previous or ongoing NASA sponsored research. Potential NEWS PIs are encouraged to leverage previous and existing NEWS funded activities, see http://nasa-news.org/ (see tabs for “Resources” and “Projects”).

    Amendment 29 releases the final version of the text of Appendix A.17: NASA Energy and Water Cycle Study, which replaces the draft text in its entirety. Notices of Intent to propose are due on February 16, 2011. The due date for proposals is March 22, 2011.

    On or about December 27, 2010, this Amendment to the NASA Research Announcement “Research Opportunities in Space and Earth Sciences (ROSES) 2010” (NNH10ZDA001N) will be posted on the NASA research opportunity homepage at http://nspires.nasaprs.com/ (select “Solicitations” then “Open Solicitations” then “NNH10ZDA001N”). You can now track amendments, clarifications, and corrections to ROSES and subscribe to an RSS feed at: http://nasascience.nasa.gov/researchers/sara/grant-solicitations/roses-2010

    Questions concerning this program may be addressed to Dr. Jared K. Entin
    Earth Science Division, Science Mission Directorate, NASA Headquarters, Washington, DC 20546-0001
    Telephone: (202) 358-0275 E-mail: Jared.K.Entin@nasa.gov

    • I've been on the NEWS science team since its inception; I need to write a new proposal. The announcement says "These projects should eschew focusing their efforts on product generation, model capability revision, or extensive model simulations. Instead, they should exploit existing resources that can be gained from previous or ongoing NASA sponsored research." So this isn't about model development or simulation; it is more about making use of the satellite data products developed in the previous NEWS research. Documenting and understanding the earth's energy and water budget is a big part of this.

      • Judith,
        One of the things not looked into is that the shape of our planet has a big influence on the different energies at different latitudes.
        With solid materials, semi-solid materials and gases, these interact differently, along with the differing densities in the two hemispheres.
        There are more, but you have seen the past posts.

      • it is more about making use of the satellite data products developed in the previous NEWS research.

        I’m strongly in favor of that. I believe we can learn a lot more about climate by observing it directly and drawing conclusions than trying to predict its trajectory as though it were a ball flying through the air. It’s way too complex to predict with any accuracy. Modeling is good for understanding particular phenomena, but there are far too many phenomena modeled by far too few models to hope for sensible predictions. It’s like trying to model the economy.

  17. Sorry Dr Pratt but the gaping flaw is totally yours.

    1) The 0.28 °C shift proposed by Scafetta applies to the complete 1880-2010 period, while your nice log calculation is only valid when calculated between 1910 and 1970.

    Using the same calculation you should obtain:
    => 0.18 °C when applied between 1880 and 1940
    => 0.24 °C when applied between 1895 and 1965
    => 0.43 °C when applied between 1925 and 1985
    => 0.55 °C when applied between 1940 and 2000
    => 0.64 °C when applied between 1950 and 2010…

    2) There is no proven correlation between warming slope and CO2 concentration increase. The warming slope between 1910 and 1940 is the same as the one between 1970 and 2000 (+0.15 °C per decade), whereas the CO2 concentration increase was 2.5 times lower, which is also the main explanation for the low warming trend reproduced by models for the 1910-1940 period (+0.06 °C per decade calculated instead of +0.15 °C per decade measured).

    Basically the failure in your logic is the same as the one in the GCMs: they mainly rely on the hypothesis that warming is almost fully driven by CO2 concentration, which is formally falsified by simple comparison between the models' outputs and the temperature records.

    Whatever the origin (AMO, PDO, sun or SSCM speed, or just a mixture of all these factors), the fact is that we can observe an obvious 60-year cyclicity in the temperature records, and that the baseline pattern includes a 30-year warming trend (+0.15 °C per decade) and a 30-year slight cooling trend (-0.05 °C per decade). And the other fact is that the models totally fail to explain and reproduce these (multi)decadal patterns, and more especially the cooling trends.

    • 1) The 0.28 °C shift proposed by Scafetta applies to the complete 1880-2010 period, while your nice log calculation is only valid when calculated between 1910 and 1970. Using the same calculation you should obtain: [rises for 1880-1940, 1895-1965, 1925-1985, 1940-2000, 1950-2010]

      Two points. First, Scafetta only performs one shift, whereas you’re proposing five shifts. My original post on this considered only Scafetta’s single shift. In subsequent posts I generalized to the function md(y) along the lines you suggested, which would give a different shift for every year y at the center of a Scafetta-style match.

      Second, we can see how Scafetta’s one shift works on page 19 of his longer document where he says “[Figure 10] clearly suggests the existence of an almost perfect cyclical correspondence between the periods 1880-1940 and 1940-2000.”

      Those two 60-year periods are the sole basis for his alignment, not the rest of the 1880-2010 period. Hence in order to predict, on the basis of the Arrhenius and Hofmann laws, what vertical alignment Scafetta will need to use for those two periods, one must ask how much higher the 1940-2000 period is than the 1880-1940 period. Since Scafetta is comparing these two 60-year periods pointwise, one must compare 1880 with 1940, 1881 with 1941, and so on. The most reliable way to estimate the vertical shift is to use the two middle years, 1910 and 1970, which is what I did.

      So while it may seem I’m only considering the region from 1910 to 1970, actually these are just the midpoints of Scafetta’s two regions. I’m really looking at the period 1880-2000, that is, 30 years on either side of those midpoints (since each midpoint is 30 years from each end of its window). In that regard I am merely duplicating what Scafetta himself claims to have done. Had he claimed to have performed the match by pairing up some other two regions then naturally I would have used that instead. I certainly did not make up these regions on my own.

  18. Richard S Courtney

    Dr Curry:

    You assert:
    “Note that Scafetta is not the first one highlighting such reproducible climate patterns (about 60 years period) mainly resulting from Oceans’ oscillations and especially AMO & PDO switching into positive or negative mode.”

    [snip]

    “But Scafetta is obviously the first one proposing such a crystal clear analysis and forecast.”

    Say what!?

    I have been making that forecast repeatedly since 1998 (i.e. since before the recent halt to the last warming period). It may be true to say that Scafetta was the first to publish it in a refereed paper, but I have been saying it in public presentations and on the web for over a decade. I repeated it again in the first part of this thread where I again wrote:

    “Richard S Courtney | December 23, 2010 at 7:16 pm | Reply

    Dr Curry:

    There is no possibility of using GCMs to predict the next 20 years. This is demonstrated by the last 10 years: no GCM predicted that there would be effective stasis in mean global temperature over the last 10 years.

    The best that could be done to predict global climate before 2030 is to assume the established pattern will continue. And this pattern consists of several cycles of which two are dominant in the recent record.

    One apparent cycle seems to have a length of ~900 years. It gave us
    the Roman Warm Period (RWP)
    then the Dark Age Cool Period (DACP)
    then the Medieval Warm Period (MWP)
    then the Little Ice Age (LIA)
    then the Present Warm Period (PWP).

    The other major apparent cycle has a length of ~60 years. It gave us 30-year periods of alternating negation and enhancement of the warming from the LIA and, therefore, there was
    cooling or no warming before 1910
    warming from ~1910 to ~1940
    cooling or no warming from ~1940 to ~1970
    warming from ~1970 to ~2000
    cooling or no warming after ~2000.

    If this pattern continues then
    cooling or no warming will continue until ~2030 when warming will resume towards temperatures of the peak of the MWP
    or, alternatively,
    cooling or no warming will continue until cooling towards temperatures of the LIA will initiate before 2030.

    Richard”

    Richard

    • Richard, that text in the draft was from a comment on the blog that I pasted in so I wouldn't lose it. Stay tuned for the bigger picture.

      • Richard S Courtney

        Dr Curry:

        I apologise if I ‘jumped the gun’. I do observe that your article was part of a draft that was inadvertently posted at this time.

        I intended no offence.

        Richard

    • I plotted the trend from Hadcrut3v and Crutem3 using only the annual temps to reduce the amount of noise, and I calculated the following trends using Excel.

      Hadcrut3v 0.0059 C per year

      Crutem3 0.0169 C per year

      sources here http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt
      and
      http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/2009/global-data-sets/LAND_crutem3.txt

      So I think it is still warming

      • bobdroege

        Implicitly what you are testing here is whether the series is of the form:

        Temp(Year) = Trend*Year + Constant + Random Error(Year)

        Because of the Random Error you can’t be certain of your estimates of Trend (or Constant), but provided the Random Error is well behaved you can estimate a probability distribution for Trend and Excel’s regression function will do this for you.

        I quickly looked at annual Crutem3 for 2000/9 and got the same 0.0169 C per year you did, but the 95% confidence limits were +/- 0.0263 (the 90% limits are +/- 0.0212). This means you can’t reject the hypothesis that there is no trend with 90% confidence (i.e. the 90% confidence limits cover the possibility of no trend).

        Now I should note that there are lots of reasons why we might not be satisfied with this simple linear model as an explanation of what is going on (if only for the simple reason that it doesn’t work back beyond 2000), but if the discussion is about trends over 2000/9 then we can’t say with 90% confidence that there has been a statistically significant trend over the period.
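
        For anyone who wants to reproduce this outside Excel, here is a minimal sketch in Python (my own illustration of the calculation described above, not the actual Excel workings):

        import numpy as np
        from scipy import stats

        def trend_with_limits(years, temps, level=0.90):
            # Ordinary least squares fit of Temp = Trend*Year + Constant + error.
            res = stats.linregress(years, temps)
            dof = len(years) - 2
            # Two-sided confidence half-width for the Trend estimate.
            halfwidth = stats.t.ppf(0.5 + level / 2, dof) * res.stderr
            return res.slope, halfwidth

        # Usage: paste the ten annual Crutem3 values for 2000-2009 from the link
        # above into `temps`, then call
        #   trend_with_limits(np.arange(2000, 2010), np.array(temps), level=0.90)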


      • Thanks HAS,

        Your grasp of statistics is beyond mine.

        I see you can calculate the 90% and 95% confidence limits for the 2000/2009 trend, and the results could be reported as

        -0.0094 to 0.0432 degrees per year for 95% and
        -0.0043 to 0.0382 degrees per year for 90%, and that leads me to conclude that there is a confidence level that excludes the negative range, such that one could state that there is warming from 2000 to 2009 with a certain confidence level.

        Though finding a confidence level for the converse, one that excludes the positive trend, would be impossible? I might study a little more statistics in order to do the analysis myself.

        But thanks for your efforts and I note others do not provide data to support the cooling or no warming after 2000.

        And totally off topic, but has anyone noticed what has not happened in the Hudson Bay this season?

      • Yes there will be a level at which you can say with confidence the trend is non-zero. I think it’ll be somewhere in 70 – 80% range for this series from memory (I’m avoiding rerunning the numbers). By convention 90% and 95% are used.

        And you are right on your second point. I slipped over the question of whether you were testing for Trend ≠ 0 or Trend ≤ 0, because Excel by default produces the results for the former (you'll see I referred to not being able to reject that there was no trend). If you are testing the latter you tighten your confidence limits (half the extreme values no longer count against you).

        In this case I think you may get close to rejecting the hypothesis that the Trend was ≤ 0 with 90% confidence, but not with 95% confidence.

  19. Having read your full article, I would simply reiterate my contention that none of the climate models have been validated for the purposes to which the proponents of CAGW use them. So, once again, most of what you write is all sound and fury signifying nothing.

    With respect to the sun, you have omitted any reference to Livingston and Penn. When L&P first tried to publish their findings, their paper was rejected. Why, I have no idea. Possibly because it vaguely argued against CAGW. L&P were persuaded to publish their findings online, and many of us have been following whether their alleged straight line to no sunspots is continuing. It is. If L&P are correct, and I agree it is a big IF, then sunspots may disappear by around 2017 to 2020. What modern instrumentation will show us then remains to be seen. But far from a Dalton minimum being in the offing, we may be facing a Maunder-type minimum. For the sake of proving CAGW is wrong I welcome this, but I hate to wish a Maunder minimum on the world, with all the misery that that will entail.

    • Part II has nothing to do with climate models. The introduction states why climate models don’t do well. So I move on and take a look at statistical modeling. I have no idea who Livingston and Penn are, would appreciate a web link to some references.

      • Thx, I'll add it to the main post.

      • The original paper by L&P is at

        http://www.probeinternational.org/livingston-penn-2008.pdf

        An up to date plot is at

        http://solarcycle24com.proboards.com/index.cgi?board=general&action=display&thread=855&page=34
        reply #497

        My point about any modelling, statistical or otherwise, is that if the models have not been validated for the purpose for which they are being used, then they don't mean anything.

      • Judy, pay attention to L&P. Leif Svalgaard points out that the last time the sunspots became sparse or invisible it became cold, but there were also volcanoes to raise the albedo. Isotope studies are ambiguous about the action of the sun at that time. If the L&P Effect continues, we may well get to see whether Cheshire Cat Sunspots cool the earth or not. If they do, Katie Bar the Door.

        By the way, on several boards I’ve been amazed that otherwise knowledgeable commenters didn’t know about Livingston and Penn. Their observations may well be a huge game changer.
        =============

      • Their observations may well be a huge game changer.

        For whom? Those affirming that CO2 is getting to be a serious problem or those denying it?

      • That CO2 is steadily rising is a well supported observation. That “CO2 is getting to be a serious problem” is still an opinion based on wide assumptions in terms of climate sensitivity and paleo reconstructions which may or may not prove to be valid.

      • That CO2 is steadily rising is a well supported observation.

        Agreed. But that pretty much splits the skeptics into those who accept rising CO2 and those who don’t. For now I’ll say the split is 50/50, we can refine that later.

        That “CO2 is getting to be a serious problem” is still an opinion based on wide assumptions in terms of climate sensitivity…

        What is this concept “climate sensitivity” of which you speak? How is it defined?

        While I don’t normally bad-mouth the IPCC, I’m hypersensitive to their ill-defined concept of so-called “climate sensitivity” . It’s a low point of every one of their reports.

      • “What is this concept “climate sensitivity” of which you speak? How is it defined?

        While I don’t normally bad-mouth the IPCC, I’m hypersensitive to their ill-defined concept of so-called “climate sensitivity” . It’s a low point of every one of their reports.”

        Hmmm,
        Temp rise in response to an increase in radiative forcing is probably the short definition from the IPCC.

        And you have a better way to express this climate response to increasing GHGs?

      • And you have a better way to express this climate response to increasing GHGs?

        No but I can point out some problems with what you believe to be the IPCC definition (which it isn’t as we’ll see).

        There are two problems with your “response to an increase”.

        1. Response: you don’t say when. Do you mean the response one second later, one week, one year, one decade, one century, one thousand years, infinity, or what?

        The IPCC has two notions, one for infinity, called “equilibrium climate sensitivity,” and one for two decades, called “transient climate response.” It turns out that there is a huge difference between each of 0, 1, 2, and 3 decades, more than enough to account for the wide spread of climate sensitivities. I have no idea what the IPCC has in mind by infinity, in particular I have no way of estimating it.

        2. Increase. You don’t specify whether the increase is a step function (the IPCC definition in the case of equilibrium climate sensitivity), an exponentially growing function (the IPCC definition in the case of transient climate sensitivity, with a doubling rate of 70 years), or a raised exponential (a function of the form b+exp(t), which with a doubling rate of 32.5 years for the anthropogenic exponential part is what is actually happening according to Hofmann et al at NOAA ESRL Boulder).

        Again these make a huge difference. We missed our one chance at measuring the response to a step function during WWII, which seems likely to have dramatically stepped up CO2 emissions for the duration. And the IPCC’s exponential rate will be matched briefly around 2060 while being pointless at any great length of time before or after 2060.
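
        To make the contrast concrete, here is a minimal sketch of the three concentration profiles being compared (my own illustration; the 70-year and 32.5-year doubling rates are the ones quoted above, while the 280 ppm baseline and 1790 onset are the values used elsewhere in this thread):

        def step_doubling(t):
            # Instantaneous doubling at t = 0: the idealized equilibrium-sensitivity case.
            return 280.0 if t < 0 else 560.0

        def exponential_70yr(t):
            # Pure exponential growth with a 70-year doubling time: the transient-response case.
            return 280.0 * 2.0 ** (t / 70.0)

        def raised_exponential(year):
            # Hofmann's raised exponential: constant 280 ppm background plus an
            # anthropogenic term doubling every 32.5 years from an onset near 1790.
            return 280.0 + 2.0 ** ((year - 1790) / 32.5)

        print(raised_exponential(2011))   # about 391 ppm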

      • The complexities and inconsistent choices in defining climate sensitivity, and also radiative forcing, are indeed a nuisance, even if some people may draw material for their own research from them, as a student of mine did for her thesis.

        The subject of the thesis was how to estimate the contribution of one country to climate change. One of the issues that had to be considered was the dynamics of CO2 concentration and radiative forcing given the emissions.

  20. “So what to make of these ideas? They are interesting, and if correct, they would certainly be useful for decadal-scale predictions. But they are at the frontier border with ignorance. How can we test these ideas?”

    Effective hindcasting through long temperature series for deviations from normals, through long precipitation series, and through flood/drought cycle records is the test.
    Basically, the only possible means of deterministic forecasting is to explain what happened in the past.

    • Richard S Courtney

      Ulric Lyons:

      You assert:
      “Basically the only possible means of deterministic forecasting is to explain what happened in the past.”

      Actually, no.
      The only possible means of deterministic forecasting is to repeatedly forecast what will happen in the future with measured degrees of accuracy.

      There are many possible ways of “explaining” the past, and there are dozens of climate models that each do it in a different way.

      But proving an ability to forecast the unique “explanation” of what will happen in the future is a completely different problem.

      In other words,
      there is one past that can be modelled in an infinite number of ways
      but
      there is an infinite number of possible futures and only one correct forecast of the one possible future which will occur.

      Richard

      • Richard, I'm talking about what actually causes the so-called anomalies, in detail at a monthly or finer resolution, through every season in at least the last 2,000 yrs! There can only be one way to do that, and that is by showing the cause. Then you can happily discard all the pointless models.

      • Richard S Courtney

        Ulric Lyons:

        OK. I do understand what you are claiming, but your claim is both impossible and misleading.

        It is impossible because nobody knows and (in the absence of a time machine) nobody can know “the so-called anomalies, in detail at a monthly or finer resolution, through every season in at least the last 2,000 yrs”. The causes of unknown phenomena cannot be determined.

        And it is misleading because – if it were possible – determination of all the causes would not indicate the reliability of an inference of the correct interactions of those causes for future prediction of the “anomalies”. The reliability of that inference could only be known by its demonstrated forecast accuracy obtained from comparison of a series of predictions to the observed real outcomes.

        Richard

    • Forecasting for a period of 2-3 years may be possible due to propagation lags (ocean currents, the Arctic ice flow, etc.), but for the longer term it may just be a fruitless effort if the forecasting models are based on inclusion of the AMO and PDO, which according to my findings are unpredictable.
      http://www.vukcevic.talktalk.net/NPG.htm
      http://www.vukcevic.talktalk.net/NAP-AMO.htm
      The two indices may be cyclical, but the length of their cycles is uncertain (!!) and, to make matters worse, they are de-trended information, while possible drivers (as identified in the above links) have definite long-term trends.

  21. The real question is: do current climate models make useful decadal-scale predictions?
    I am not aware of any to date from the consensus group of climate scientists.
    Have there been any?
    Have any predictions by climate scientists to date helped anybody do anything?

    • There are strong commercial reasons for acting on the predictions of the ‘consensus’ group of climate scientists and some businesses are doing so. The most obvious examples concern the predicted changes in the Arctic.
      Here is one company that is already exploiting the lack of ice in the summer Arctic, and investing heavily in new ships to exploit this new shipping route. –

      http://www.beluga-group.com/en/#News-News

      “For the years to come Beluga Shipping is going to tackle more projects which again contains sailings through the Arctic Ocean to Siberia – including full or partial transits of the Northern Sea Route. Siberia is considered as economically strongly growing and now accessible market. This market is of particular interest regarding the large potential coming along with the location of industries, i.e. companies and plants related to the segments energy, oil and gas. Being a future-oriented company Beluga Shipping regards Siberia as important market of the future.”

      Now it is obviously early days for this project, and if the ‘consensus’ prediction of continued ice loss in the Arctic is wrong the Beluga group is going to lose a lot of money. But it is a clear example of how predictions of future climate change, together with the changes that have already occurred, are enabling some companies to do something.

      • They must be watching the recovery of the summer minimum since 2007 with interest.

      • It is possible that polar shipping could be reduced by weather patterns that distributed the winter ice over the N.E. passage in summer.
        But they won’t have to worry about thick multi-year ice; just about all of that went this summer, so they will only have to contend with the thin first-year ice.

        ‘Recovery’ of the Arctic ice seems to rather overstate the case. While this year’s ice extent has not fallen as far as it did in the exceptional summer of 2007, the extents for Aug, Sep and Oct are ALL below the long-term downward trend that is already established from the last three decades of data.
        There is credible evidence that this year the summer Arctic ice volume dropped to the lowest seen in the observational record, even if the extent still exceeded the surprising extreme of 2007. That massive melt and egress of ice from the Arctic took levels down to those that, given the trend, we should not have seen for another 20 years.
        By comparison this year was only (!) about 6 years ahead of the decadal trend.

        On that basis I reckon that Beluga Shipping is still ahead; the conditions that climate scientists have predicted, and that the company is hoping to exploit, seem to be arriving quicker than expected.

      • “Recovery” to what?

        Can a 5’7″ anorexic who has gone from 76 lbs to 84 lbs be considered to have “recovered”? Perhaps if her doctor is an anthropogenic climate change “skeptic”…

      • I don’t know if the Discover AMSU site has been hacked or what, but take a look at this dive in the global temp!! Over half a degree C down in 4 days.

        http://discover.itsc.uah.edu/amsutemps/execute.csh?amsutemps

      • Oops! Here’s a link that works:

        http://discover.itsc.uah.edu/amsutemps/

  22. Hamish McDougal

    Akasofu (2010)
    http://www.scirp.org/Journal/PaperInformation.aspx?paperID=3217&JournalID=69#abstract
    would seem, by Occam’s Razor, to be the most believable.

  23. If climate is chaotic and/or exhibits long-term persistence (Hurst-Kolmogorov statistics), then there is reason to expect quasi-periodic behavior at all time scales, including periods much longer than 60 years; Dansgaard-Oeschger events at a period of ~1,500 years and Heinrich events at ~10,000 years may be examples. D-O events may not be easily visible during an interglacial period, but that doesn’t mean they aren’t still happening. Attributing 100% of the ~60-year smoothed temperature increase to anthropogenic greenhouse gas increases seems highly questionable when there is evidence (Alpine glacier retreat) that warming started before CO2 increased significantly. Something drove the barbarians out of Asia near the end of the Roman period too.
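
    For readers unfamiliar with Hurst-Kolmogorov statistics, here is a rough sketch (my own illustration, using synthetic white noise as the test input) of the aggregated-variance estimate of the Hurst exponent; H near 0.5 indicates no long-term persistence, while H approaching 1 indicates strong persistence:

    import numpy as np

    def hurst_aggregated_variance(x, block_sizes):
        # For a Hurst-Kolmogorov process the variance of m-sample block means
        # falls off as m**(2H - 2), so the log-log slope gives 2H - 2.
        x = np.asarray(x, dtype=float)
        log_m, log_var = [], []
        for m in block_sizes:
            n_blocks = len(x) // m
            if n_blocks < 2:
                continue
            block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            log_m.append(np.log(m))
            log_var.append(np.log(block_means.var()))
        slope, _ = np.polyfit(log_m, log_var, 1)
        return 1.0 + slope / 2.0

    # White noise should give H near 0.5:
    rng = np.random.default_rng(0)
    print(hurst_aggregated_variance(rng.normal(size=4096), [2, 4, 8, 16, 32, 64]))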

    • Weather is certainly chaotic and shows quasi-periodic patterns.
      Climate, much less so. It clearly shows responses to volcanic eruptions (Pinatubo), changes in solar energy (Maunder minimum), changes in the distribution of that energy (Milankovitch cycles), changes in the way oceans transport heat from the equator to the poles (closing of the Panama straits), and of course changes in the CO2 levels in the atmosphere (PETM).

      Weather is a chaotic trajectory.
      Climate is the averaged envelope of that trajectory and is best explained by the thermodynamic inputs.

      • Richard S Courtney

        izen:

        You say:
        “Weather is a chaotic trajectory.
        Climate is the averaged envelope of that trajectory and is best explained by the thermodynamic inputs.”

        If your definition is true then climate predictions are useless and – by definition – cannot provide useful information at decadal scales.

        The “trajectory” of weather is of interest. The envelope that encloses all possible trajectories contains no information of practical importance.

        Richard

        PS I keep pointing out the inconvenient truth in this post and climate modelers keep pretending they do not know it.

      • Does averaging a chaotic time series give you a non-chaotic result?

    • David L. Hagen

      Persistence is a major issue that will be reflected in the 2010–2030 period.

      The PDO oscillations are one example of persistence in weather. Easterbrook observes that record cold periods occurred during PDO cold phases:

      Don Easterbrook says: December 30, 2010 at 1:13 am

      It’s interesting to note that each of these records occurred during cool periods—coincidence?

      1. January 1814: -2.2°C [1790-1820 cool period]
      2. January 1881: -0.9°C [1880-1915 cool period]
      3. December 2010: -0.2°C [1999-? cool period]
      4. February 1855: 0.0°C [1840-? cool period] and January 1963: 0.0°C [1945-1977 cool period]
      5. February 1895: 0.2°C [1880-1915 cool period]
      6. February 1947: 0.4°C [1945-1977 cool period]
      7. January 1985: 0.5°C [1945-1977 cool period] and December 1878: 0.5°C [1880-1915 cool period]

      The list above also puts it in perspective with respect to other extreme years in living memory – most notably 1963 and 1947 [1945-1977 cool period]

      http://wattsupwiththat.com/2010/12/29/december-cold-%E2%80%93-unprecedented/#more-30554

      Easterbrook notes that the PDO is again in the cold phase until 2030.
      http://wattsupwiththat.com/2008/12/29/don-easterbrooks-agu-paper-on-potential-global-cooling/

      d’Aleo and Easterbrook show that global temperature correlates well with AMO+PDO.

      Consequently, this suggests there will be a greater frequency of record cold temperatures until 2030.

  24. Changes in TSI are only part of the forcing related to solar cycles.

    Various theories hold that sunspots end up being a proxy for cosmic rays, and cosmic rays may affect cloud cover.

    A recent study from November 2010 found a correlation between cosmic rays and cloud cover as a second-order forcing.
    http://www.atmos-chem-phys.net/10/10941/2010/acp-10-10941-2010.pdf

    • Yes, the authors do find a small correlation between GCRs and clouds.
      They also explicitly state the amount of warming effect this would have if (as most observations indicate) the cosmic ray flux shows no overall trend for the last 50 years.
      Here’s how they put it in the paper –

      “Based on the relationships observed in this study, and assuming that there is no linear trend in the short-term GCR change, we speculate that little (~0.088 degC/decade) systematic change in temperature at mid-latitudes has occurred over the last 50 years.”

      So even if this result is robust, the contribution it would make to the last fifty years of warming is about 10% of it.

  25. The Laken et al paper is interesting.
    Something seems strange about their statement that 0.088 deg C/decade represents a SMALL systematic change in mid-latitude temperatures. Does anyone know the derivation of the estimate, or whether this is a typo in the original paper?

    • I haven’t scrutinized the paper in detail, but I surmise that the estimate is not a typo. Rather, it reflects the authors’ estimate of potential interdecadal climate variability rather than of multidecadal trends.

      • Laken repeated it in a comment on WUWT, but it could just be a cut-and-paste. He got some follow-up traffic on the issue, but I couldn’t see any resolution in a quick look.

  26. A few observations having made my way through the thread.

    First (and this also relates to some traffic at CA) it is important to distinguish between predictions of, say, 2°C temperature increases on 30-year time frames that carry a low probability of happening (say, <10%) and those that say this is very likely (say, >50% certainty).

    I draw this distinction because the former is harder to predict (e.g. years of GCM simulations to quantify) and is more difficult to respond to in a policy sense (this isn’t like an earthquake, where the probability of it occurring somewhere in a region over a period is very high, and isn’t like a regularly repeated event with a 10% probability of failure, like an air trip). In any event I don’t think this is what the IPCC is arguing as the basis for current action, even if this could well be the issue for many scientists.

    So putting that aside for another day, we are dealing with how best to predict what the middle of the distribution of futures looks like in 30 years time. The question before us is do we take the design of the multi-scale weather forecasting models that were developed to describe events over days and extend those through to describe events over multi-decadal time scales (aka GCMs), or do we design a multi-scale model de novo specifically for the purpose of predicting global temperatures in 30 years time?

    There has been quite a bit of discussion around the fitness of GCMs for this purpose that must cause doubt. They are good laboratories with which to investigate well developed subsystems, and they no doubt will improve over time, but even when compared with empirical observations in the relatively small number of papers cited in this thread they are found wanting.

    Developing a multi-scale model for any purpose is an art, but starting at the macro level, then identifying areas that are material to the purpose at hand and concentrating development effort there, isn’t a bad strategy.

    The thread has been useful in this regard (and will no doubt get better). Akasofu (2010) offers a useful starting point whereby empirical effort goes into at least describing the longer-term trends in the temperature record, and following on from this are the various attempts to identify and postulate causality for shorter-term periodic influences, even if the latter are not always convincing (and I should add that the propensity of some, like DelSole et al, to look to GCMs as comparators is unhelpful except to the extent that it shows how the GCMs are not replicating the temperature record).

    This then starts to provide an agenda for research. The effort would go into those relationships that are more material either because they are uncertain or causality is unclear, and Laken et al is a useful example of this. And once that framework is established it then allows the focus to go onto the important remaining shorter-term variability in the record, some of which might well be susceptible to analysis by GCMs (perhaps now incorporating appropriate trends/external forcings into the runs unforced by GHGs – although there may be simpler ways to model these on the timescales and level of aggregation involved).


    • I like your analysis of this. Can you provide more info on the Laken et al ref? Maybe I missed it previously.

      • I hadn’t intended anything particularly deep about the reference to Laken et al (http://www.atmos-chem-phys.net/10/10941/2010/acp-10-10941-2010.html – harrywr2, December 28, 2010 at 6:04 pm), just that this was part of a body of work investigating how amplification of cosmic radiation changes might occur.

        A few overnight reflections on the above.

        First, my hunch is that investigation of the forecasts may well lead to the real problem moving from 2C warming being the most likely outcome to it being the more intractable (both research wise and politically) 10% problem. This just means we can’t really put this problem aside.

        Second, part of what I do in my day job is help applied research labs develop research strategies within domains of interest (and I focus on the physical and engineering based sciences).

        My next step in thinking about the problem at hand would be to have a good look at recent literature reviews of research into climate on a century/millennium scale. (In this particular case and given my general predilections I’d be focused on those with a strong statistical base.)

        What I find interesting is that I’m not too bad at finding this kind of literature in new domains, but on a quick look now I can’t see anything obvious outside quite specific disciplines. This is probably just a consequence of my own lack of knowledge of the discipline, but I’d say this is an essential starting point and if no such review exists it should be done. It’s fun to pick out the odd article like Akasofu (and that is weak on its statistical base) but I’d want to see a much more systematic assessment of the literature at this level. This then helps identify if we need to do more at this level and to identify material areas of uncertainty etc that indicate priorities for research at the next levels.

        So it’d be really good to identify any good literature reviews people are aware of, and if they don’t exist work out how to get one done.

      • Richard S Courtney

        HAS:

        I agree with your principles. And I think a materials science approach is needed.

        It is a fundamental principle of materials science analyses that if data from adjacent levels of complexity (e.g. micro and macro) do not agree then there is not sufficient understanding of a system for the system’s behaviour to be predicted.

        Climate science fails this benchmark; e.g. cloud processes are not sufficiently understood for the microstructural properties and macrostructural properties of clouds to be determined with clear agreement between them. But cloud behaviours are an integral part of the climate system. Hence, it is not possible for climate system behaviour to be known with predictive capability.

        Richard

      • I spotted a website once (I think it was from northern Europe) that picked geophysical topics and assembled journal references on them. I’ve tried to find it since but can’t – is anyone familiar with this?

      • Thanks for this and the references to the Tsonis/Swanson stuff that I’d missed on my cruise through. There’s some quite interesting, what I’d call “top-down”, stuff going on. Where does this activity all sit vis-à-vis those engaged in GCMs and data collection, which seems to be the higher-profile end of climate science?

        Anyway it’s motivated me to have a more systematic look when I get a bit of time.

      • Actually I am motivated to do a new post on those papers; they look really important to me.

    • HAS: So putting that aside for another day, we are dealing with how best to predict what the middle of the distribution of futures looks like in 30 years time. The question before us is do we take the design of the multi-scale weather forecasting models that were developed to describe events over days and extend those through to describe events over multi-decadal time scales (aka GCMs), or do we design a multi-scale model de novo specifically for the purpose of predicting global temperatures in 30 years time?

      I’m afraid I’m going to take the deniers’ side on this one, HAS, and declare models hopeless at making 30-year predictions, regardless of whether they’re modern GCMs or future “de novo multi-scale models,” which would take 30 years to develop and hence would be useless today for making a 30-year prediction, since once you had one working you could ignore it and just look out the window.

      Instead I would simply take our naive understanding of what we’ve observed to date, consisting of both natural and anthropogenic agents of climate change, and blindly extrapolate it.

      We can see how well this would work by deleting all knowledge of what the climate has done since 1981 (today’s date is 2010.997), blindly extrapolating a naive model based only on data up to the beginning of 1981, and asking just how ridiculously wrong it was.

      I did this using nothing but the HADCRUT3 data up to December 1980, the Keeling curve from 1958 to December 1980, Arrhenius’s logarithmic dependence of surface temperature on CO2, and Hofmann’s raised-exponential law (which any sensible physicist could have thought of three decades ago, albeit not with Hofmann’s onset year and doubling-period parameters for it which were based on more modern data).

      I recomputed the best-fit onset year and doubling period for anthropogenic CO2 using only the 22 years of Keeling data, which advanced Hofmann’s estimate of onset year from 1790 to 1802, and shortened his 32.5 year doubling period to a more pessimistic 30.5 years.

      I then did a joint least-squares fit of the composite Arrhenius-Hofmann law and the Atlantic Multidecadal Oscillation (the only climate agents that really matter for 30-year predictions) to the HADCRUT3 data from 1850 to 1981. This fit made the climate sensitivity 1.832 and the AMO amplitude 0.1334, not far off the 1.837 and 0.1321 figures I’m getting using an additional 30 years’ worth of data.
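
      In rough outline the joint fit can be set up as follows (a sketch under my own assumptions: two fixed-period sinusoids standing in for the AMO, as in the calculator code further down, plus a baseline offset b because the HADCRUT3 anomalies are relative to an arbitrary reference period):

      import numpy as np
      from scipy.optimize import curve_fit

      def co2(year, onset=1790.0, doubling=32.5):
          # Raised-exponential CO2 law; the onset year and doubling period can
          # themselves be refitted to the Keeling data as described above.
          return 280.0 + 2.0 ** ((year - onset) / doubling)

      def model(year, S, A, b):
          # S: sensitivity per CO2 doubling; A: AMO amplitude; b: baseline offset.
          ahl = S * np.log2(co2(year))
          amo = A * 0.5 * (np.sin(2 * np.pi * (year - 1925) / 56)
                           + np.sin(2 * np.pi * (year - 1925) / 75))
          return ahl + amo + b

      # `years` and `anoms` are assumed to hold the annual HADCRUT3 values up to
      # 1980, loaded separately from the CRU file:
      #   popt, pcov = curve_fit(model, years, anoms, p0=[1.8, 0.13, -15.0])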

      I then extrapolated the above two principal agents of climate change 30 years into the future, namely to today, to see what they had to say. You can see the result as the purple curve that could have been predicted 30 years ago.

      You can judge this for yourself by comparing it with WoodForTree’s picture of today.

      One thing to notice here is that if you look at the 12-year moving average of the HADCRUT3 record up to 1981 (the red curve in my graph), anyone predicting that it was about to turn up with an unprecedented slope and rise to an anomaly of over 0.4 °C would have been told that their model was obviously “ridiculously wrong.” Not by climate deniers (there weren’t any in 1981, the public didn’t care back then and who can blame them looking at the red curve up to 1981) but by just about every serious climate scientist on the planet! They would have dismissed out of hand either the AMO or the composite Arrhenius-Hofmann Law AHL: one or both would have to be wrong, probably the latter.

      Yet no data at all from the future was used in 1981 to make that projection. Any climate scientist with the gumption to stick to those two laws, AMO and AHL, might not have lost his or her tenured position, but would surely have been held up to the same ridicule as the discoverer of plate tectonics.

      ——————————————-

      So: if we wait 30 years and do the same thing time-shifted, will we find as good agreement?

      I would say yes subject to two riders. First, that we continue with business-as-usual consumption of fossil fuel. Second, that there is no serious methane release from the Arctic.

      Even a small decrease in fossil fuel consumption will throw any such prediction off very badly. This is because for over a century now we’ve been in an exponentially accelerating race with nature, with nature always about three decades behind. If for example the Koch brothers suddenly take it into their heads that their grandchildren deserve better, we might all equally suddenly lose focus on profligate consumption, giving nature a chance to catch up. Even a 10% loss of focus would considerably increase the error in my prediction (which I’ll give in a moment) on the negative side: I’d turn out to have been a pessimist.

      If on the other hand excessive Arctic melting releases methane in significant quantities, then I will turn out to have been an optimist.

      If both, with any luck they’ll cancel and my prediction will be borne out.

      My prediction for global warming in 30 years time: 0.57 °C above today (an anomaly of around 0.97 °C).

      I’m probably the last person on the planet still using the old Unix bc calculator from the 1970s (under Cygwin on Windows 7, but your Mac may well have it if you can find your way to the shell, aka terminal). Here’s my explicit climate predictor (“closed form formula” as we used to say back in the day) in bc language, which you should not find too hard to translate into something your own calculator can handle. e(x) = exp(x), l(x) = ln(x), s(x) = sin(x). This is simply realizing the function in purple at the bottom of my graph.

      define hofm(y) { return e((y-1790)/46.9) + 280 }
      define lb(x) { return l(x)/l(2) }
      define ahl(y) { return 1.8373*lb(hofm(y)) }
      pi=3.1415926535
      define amo(y) { return 0.0660*(s(2*pi*(y-1925)/56)+s(2*pi*(y-1925)/75)) }
      define clim(y) { return ahl(y) + amo(y) - ahl(2011) - amo(2011) }

      With this loaded, I can have the following conversation with bc.

      clim(2041)
      .57326791406127938756

      (Ok, so I rounded after two digits when I said 0.57 °C above today’s temperature. bc can give you as much precision as you want, a feature I tend to take more advantage of than might seem reasonable to most.)

      clim(2100)
      2.61149280954452629739

      Hofmann and his colleagues at NOAA ESRL Boulder are such pessimists. Based on my more optimistic fit to the Keeling curve I would have said more like 2.08436870654235514294. ;)
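
      For readers without bc, here is a line-for-line Python translation of the functions above (same numbers, nothing new):

      import math

      def hofm(y):
          # Raised exponential: 280 ppm background plus anthropogenic CO2
          # doubling every 32.5 years (46.9 = 32.5 / ln 2), onset 1790.
          return math.exp((y - 1790) / 46.9) + 280

      def ahl(y):
          # Arrhenius-Hofmann law: ~1.84 C per doubling of CO2.
          return 1.8373 * math.log2(hofm(y))

      def amo(y):
          # Two-sinusoid stand-in for the AMO, periods 56 and 75 years.
          return 0.0660 * (math.sin(2 * math.pi * (y - 1925) / 56)
                           + math.sin(2 * math.pi * (y - 1925) / 75))

      def clim(y):
          # Warming in year y relative to 2011.
          return ahl(y) + amo(y) - ahl(2011) - amo(2011)

      print(clim(2041))   # about 0.573, matching the bc output above
      print(clim(2100))   # about 2.611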


      • Forgot to thank our Information System Laboratory’s Stephen P. Boyd, whose suggestion it was earlier this afternoon that I delete the last 30 years of data to see what my model would predict.

      • Dr. Pratt
        Not having much confidence in the meaningfulness of global temperatures, I’ll fall back on the CETs. If, as expected, the next few years equal the last two, then your calculation’s result would return us (temps-wise; I hope not to wait 30 years, next year will do) back to the nice and warm 2000s. Let’s drink to that. Happy New Year.

      • Not having much confidence in the meaningfulness of global temperatures, I’ll fall back on the CETs.

        Not seeing much correlation between the temperature record of the Midlands of England (the CET) and the many other sources of temperature data from around the world, I’m happy to be a counterpoint here to your fallback position.

        If, as expected, the next few years equal the last two, then your calculation’s result would return us (temps-wise; I hope not to wait 30 years, next year will do) back to the nice and warm 2000s.

        Huh? How did you get that from my formula for climate change in the year y relative to the year 2011?

        1.83*lb(280 + 2^((y-1790)/32.5)) + .066 * (sin(2π(y-1925)/56)+sin(2π(y-1925)/75))

        But happy new year anyway.

      • I didn’t calculate, just hope that the last two winters are an exception; otherwise I am off back to the Adriatic.
        Trends can vary greatly even between the places where they could be expected to be similar. The east and west coasts of Greenland (1985-1995) are a good example.
        http://www.vukcevic.talktalk.net/LFC10.htm
        Ocean currents and heat exchange are the key here.

      • Trends can vary greatly even between the places where they could be expected to be similar.

        Indeed. This is one reason to prefer global over local climate data.

      • Begs the question of what practical use ‘global climate data’ is though. If real things only respond locally, then it is pretty meaningless.

        If I am a plant I care about the climate in London, not at all about Melbourne or California.

      • Thanks.

        Did you expect me to look at your graph and wave my little blue bootees in the air shouting ‘Eureka’ like Socrates (sic)? Because so far I have resisted that temptation.

        If however you felt able to explain why this particular plot might be of interest, then there’s still a chance…….

      • I thought YS1&YS2 could be interested in researching an inverted ‘hockey stick’ from the start; I just assumed you live in London too. HNY.

      • @vukcevic

        Fine.

        An original commentary of that one sentence would have helped me (and anybody else who is following the discussion) to understand your contribution. And even with it the ‘inverted hockey stick’ has still escaped me.

        In the commercial world where I work, you get extra brownie points for explaining your points so that the argument is clear to all concerned. It is a quicker way to get to the nub of the argument and to test it. In technical sales, if the customer doesn’t understand what you are selling, he is very unlikely to buy it.

        I see no reason why academia/climatology should be any different. Allusive/obscure references without explanation may work well in the Senior Common Room over the port, but do absolutely nothing to convince Joe Sixpack that there is anything sensible being said.

      • Agreed. There seems to be some really interesting stuff behind vukcevic’s analyses, but the diagrams themselves are invariably incomprehensible. A short para of explanation would be a big help.

      • Begs the question of what practical use ‘global climate data’ is though. If real things only respond locally, then it is pretty meaningless.

        To post frequently on a topic one considers meaningful may be regarded as one’s thing; to do so on a meaningless one looks like OCD.

      • Try ‘The Emperor’s New Clothes’, by Hans Christian Andersen.

        I note too that you haven’t come up with any good reasons why global temperature data should be meaningful.

      • Try ‘The Emperor’s New Clothes’, by Hans Christian Andersen.

        And this refutes what? Couldn’t you save a few words by calling me a moron, or mentioning Hitler, or something?

        I note too that you haven’t come up with any good reasons why global temperature data should be meaningful.

        Let me know why this doesn’t count as a good reason and we can go from there.

        These however are mere decadal phenomena. The AMO is a 65-year phenomenon, while global warming is exponential with all that implies.

        If you don’t care about phenomena on a 30-year time scale or more then we have nothing in common to talk about.

      • @Vaughan

        You wondered why I posted. The story by HCA gives a fine account.

        No point in my calling you a moron or Hitler or anything. I prefer letting others come to their judgments without me guiding them. And trading insults is not really my style.

        But I do wonder why your preferred style of discourse seems to be assertion first, and only backed up with evidence if challenged. Perhaps this works well in a lecturer/student master/slave relationship, but that is emphatically not what you find in the blogosphere. However great your achievements elsewhere, here you are represented only by the words on the page, not by any other extraneities. ‘Because I said so’ does not work well when your audience has a fair proportion of self-selected sceptics.

        Your link is to a forum about an Indonesian volcano. Perhaps buried in there is a good reason for global average temperature to be used. But I fear it is not obvious to me. Nor perhaps to any other readers. Could you expound? Or did you leave it as a New Year puzzle in Pratt’s Cryptology?

      • Richard S Courtney

        Vaughan Pratt:

        In response to a comment saying to you:
        “I note too that you haven’t come up with any good reasons why global temperature data should be meaningful.”

        You have responded saying;
        “Let me know why this doesn’t count as a good reason and we can go from there.”

        As an interested observer of the conversation, I fail to observe the relevance of your answer.

        Are you saying that you know of a method to predict volcanic eruptions?

        I would appreciate an explanation of your point because your response seems to be an evasion instead of an answer.

        Richard

      • And while we’re on the topic of obscurantism, could you translate

        ‘The AMO is a 65-year phenomenon, while global warming is exponential with all that implies’ into something that Joe Sixpack can understand.

        I’ve remarked before on the difficulties of getting straight answers from climatologists. I wonder if your resolution for 2011 is to help me demonstrate it even more effectively than you have done in 2010?

      • From the tone of your questions I’d say you were trying to bait me. I do rise to the bait sometimes, but I think not this time, thanks. May I recommend derecho64, to whom you’re currently running a close second in number of posts on the religion and Craven threads?

      • Latimer Alder

        Thanks for confirming my earlier view about straight answers.

        I’ll leave others to decide the relevance of Indonesian volcanoes to global average temperatures.

        Perhaps they will conclude, like me, that beneath your rather pontificating style, there is little of any substance to say.

        But still you say it frequently and at length.

      • Thanks for confirming my view about your attempting to bait me. You make it painfully obvious to all but those on your team that your questions are not asked in good faith. Off with you now and engage D64, he’ll rise to your bait every time.

      • Latimer Alder

        If you only feel capable of answering ‘friendly’ questions, I’ll leave you in peace. A true scientist would welcome the hostile ones as a reason to sharpen his thinking and/or prove his point. Which might be his/her purpose in coming on a balanced blog like this. If a climatologist wants to play to the home gallery there are plenty of heavily moderated sites where this is positively encouraged. This is not one of them.

        As to my motives, I make absolutely no secret that I am deeply sceptical of many of the climatologist’s claims and probe them with some ‘Joe Sixpack’ questions. And as I’ve noted elsewhere the quality of most responses has been disappointingly poor. But evading/avoiding the question entirely is surely a true sign of lack of a case.

        As a reminder, you were the one who claimed to have an explanation of why ‘global average temperature’ was a good metric of something or other. On reading the link you provided it proved to be about something completely different. RTC and I both called you on it, and you have not proposed any alternative. Instead you have chosen to attack the messenger.

        My case rests.

      • I imagine coffee drinkers will find pretty meaningless yesterday’s Associated Press article on the impact on Assam tea of a long-term temperature increase, namely 2 °C in Assam over the past 80 years. (Much of that rise must have been relatively recent or we’d have heard about this a decade or more ago.)

        Tea drinking climate sceptics might however find the article presumptuous. One offending bit would be “The U.N. science network foresees temperatures rising up to 6.4 degrees Celsius (11.5 degrees F) by 2100. NASA reported earlier this month that the January-November 2010 period was the warmest globally in the 131-year record. U.N. experts say countries’ current voluntary pledges on emissions cuts will not suffice to keep the temperature rise in check.”

        Where on earth did they get that ridiculous 6.4°C number? And since we all know 1998 was the hottest year on record, and that we’ve now entered a period of global cooling, what could possibly be motivating NASA to pull the wool over our eyes about 2010?

        And what do emissions have to do with tea? As is well known, at least to the denizens of this blog who keep themselves better informed on these things than those ivory tower climate scientists, real things such as tea only respond locally, so even if emissions had something to do with climate they wouldn’t be affecting a remote part of India like Assam.

        We’ve seen the same sorts of lame excuses for jacking up prices from the insurance industry, who two decades ago discovered they could argue for higher premiums on the basis of dubious claims about global warming causing “increased storm damage.” That they’ve been able to get away with this also establishes price fixing on their part, since in a free market they’d be undercut by those charging fair premiums that reflect the absence of any real increase in storm damage.

        We now see the same sort of collusive price fixing between the Assam tea growers, who have cottoned on to the insurance industry’s neat trick.

        And they’re not the only ones to twig to this trick. The article reports that wheat farmers in northern India are also in collusion on the same deal.

        Something similar also seems to be happening to French wine. However in that case the vintners clearly have failed to establish a total cartel, because they’re now finding themselves being undercut by growers to the north of them who claim to be seeing their climate becoming more favorable to wine-growing. Hmm, that one’s more subtle; the plot seems to be thickening.

        It should be clear what’s going on. Everyone and their dog is jumping on this excuse to jack up prices. They’re picking the pockets of those who aren’t. No self-respecting climate denier would jump on that bandwagon, it would defy everything that climate scepticism stands for.

        In the long run there’ll be the haves who jumped on this evil climate bandwagon and the have-nots who declined to compromise their principles.

        I saw that coming long ago and made that move quickly. Whether God will forgive me for it is something that keeps me awake at nights.

        Or maybe it’s the darn cats on the bed that insist on moving around every couple of hours.

      • Time to buy and store ice wine, I suppose.

      • “Fear feeds ignorance,” said James Lovelock in The Ages of Gaia, “and a great niche was opened for fear when science became incomprehensible to those who were not its practitioners.”

        “The attachment of a number to anything or anyone implies a significance that was missing from its physical description. A telephone number is a valuable tool; in comparison, the observation that the atmospheric abundance of perfluoromethyl cyclohexane is 5.6×10⁻¹⁵, or that whilst you have read this line of text a hundred thousand of the atoms in your body will have disintegrated, whilst interesting, confers neither benefit nor significance to your health.

        But once numbers are attached to, say, an environmental property, the means will soon be made to justify their recording, and before long a data bank of information about the distribution of substance x or radioactive isotope y will exist. It is a small step to compare the different data banks, and in the nature of statistical distributions there will be a correlation between the distribution of substance x and disease Z.

        It is no exaggeration to observe that once some curious investigator pries open such a niche, it will be filled by the opportunistic growth of hungry professionals and their predators. A new subset of society will be occupied in the monitoring of substance x and disease Z, as will the makers of the instrumentation. Then there will be the lawyers who make the legislation for the bureaucrats to administer, and so on.”

      • You are an old pro; you do the science. I am new to this game, and not particularly good at it. But that’s not the aim – it’s more fun living dangerously on the edge of science.
        Dr. Curry’s recent comment, ‘…I find it interesting, but have no idea what to make of it’, is like a Xmas present.
        Some comments can even be entertaining.

      • That’s all good – you’ve developed a model with good short-term predictive power in under 30 years.

        The problem to think about is whether it is good enough to carry a political consensus to cut emissions in the face of people drawing attention to its poor ability to hindcast and producing models that fit just as well but show counter-cyclic trends over the next 30 years.

        In other words I suspect more sophisticated models will be required, but it’s a start.

      • In other words I suspect more sophisticated models will be required but its a start

        The first step in evaluating this statement would be to rank models by sophistication.

        The second step would be to see how far into the future each model can reliably see.

        Only then could one ask whether the sophisticated models can see further than the naive models.

        In 1981 there would have been the temptation to make the sophisticated models agree with intuition. No such luxury would be permitted the naive models. In retrospect we can now say that the naive models would have beaten the sophisticated models quite handily. But that was far from obvious back then.

        For 2011-2041 the reverse will be the case if we’re now in for a spot of global cooling, as many seem to think.

      • The first step in evaluating this statement is to restate the evaluation criteria.

        I was simply suggesting that if the models are to be useful for aiding political decision making they probably will need to be more sophisticated (in fact one can probably say this with absolute certainty, given that your model hasn’t been picked up by responsible authorities around the world as the basis for doing something about CO2).

        I quickly mentioned a few issues in my earlier response, but consider the problem you will have with even other advocates of simple determinism like Akasofu. They’ll want to stick in the linear trend of temperature rise since the LIA and suddenly your CO2 component will change. You are already into a discussion about more sophisticated models and you haven’t even had to deal with those who demand evidence of causality.

      • Vaughan – you seem to be saying some climate models can predict future climate. How can that be if climate is chaotic? I truly want to know; I’m not being disingenuous.

      • > The problem to think about is whether it is good enough to carry a political consensus […]

        This kind of “good enough” criterion never seems to suffice for market regulation, but is oftentimes found “good enough” for deregulation. This works so well as to create a path of infinite resistance against any kind of regulation for complex systems.

        Here are four easy steps. First, develop the best econometric models around in the secret labs of your basement. Second, see what kind of criteria would be enough to reject these models as not “good enough”. Third, come out in public saying that all models are not good enough for any restrictive regulation. Fourth, question the pledge to empiricism of anyone who opposes your audit.

        It might even be possible to formalize this rhetorical path. (Game semantics, anyone?) We could compare this trick to Gorgias’s, and ponder how econometrics has turned into contemporary sophistry. Nonetheless, we might prefer to refer to this trick as **Procrustean testbeds**. I like the idea that we talk of beds in the two cases.

        We could then reserve the epithet **Gorgian way** to the overall skeptic scheme. More on the idea of Gorgian skepticism here:

        http://neverendingaudit.tumblr.com/post/872135681/gorgian-skepticism

      • (Game semantics, anyone?) We could compare this trick to Gorgias’s,

        “Let me put it this way. Have you ever heard of Plato? Aristotle? Socrates? Morons.”

        That classic line of Vizzini’s from “The Princess Bride” would have lost its punch with the substitution of Melissus, Xenophanes, and Gorgias. At least for me, since I hadn’t heard of any of them, and had to go to the web to figure out Gorgias’s relevance to climate science. (So thanks for that, Willard!)

        The former three all came after, and were generally critical of, the latter three. Apparently history has preferred the former to the latter.

        Makes one wonder how history will treat the “climate skeptics,” or should we follow your suggestion and call them climate Gorgians? Certainly Baa Humbug is confident time is on their side.

      • Haha Good on you Vaughan.
        A more interesting question might be “I wonder how history will treat climate alarmists”?

        happy new year mate

      • Right, my mistake, I should have wondered how history will view the debate rather than the debaters. There’s almost always a losing side and a winning side. If Germany had won WWII the history books would be judging the participants quite differently (“WWII was WWI executed correctly,” for example).

        30 years is a long way off however. For most people I imagine ten years would be of more immediate interest.

        One can see two things from my graph. First, if the black curve labelled “Residue” were to flatline, then the smoothed HADCRUT3 curve would track the purple model curve perfectly and therefore rise 0.16 °C. (The extra 30 years of data turns out to reduce that predicted rise to 0.15 °C, close enough to count as a sign of unusual robustness for any empirically based climate model today.)

        Second, the black curve never flatlines over 10-year periods, though it does a pretty fair imitation over half a century or more. Various forces, seen (volcanoes? other aerosols?) and unseen, can easily push it 0.04 °C or more either way—WWII and the baby boom with no reliable emission controls on cars, neither under the bonnet nor in the back seat, may have pushed it down a full 0.1 °C.

        Unfortunately none of this has much bearing on the Intrade horse race that 2019 will be 0.2 °C warmer than 2009 and its mate that replaces 0.2 °C by zero. This is because my 12-year smoothing flattens faster moving events such as El Nino and solar cycles, which will play a big role in that bet. Besides the oscillations and CO2, only the longer term effects of volcanoes and maybe the Tunguska event and WWII appear to remain with 12-year smoothing.

        This is intentional on my part because I’m more interested in what happens 30 years out and beyond than in predicting the global temperature in 2019. For shorter term projections like these Intrade bets one needs the sort of analysis found in Lean and Rind‘s paper, which takes much faster influences on surface temperature into account.

        To put this in perspective, consider the difference between 12-hour and six-month projections at the equator and the poles. At the equator there’s a huge difference between noon and midnight while six months makes a difference mainly in wetness (monsoon season) rather than temperature. At the poles it’s the dead opposite: there is negligible temperature variation over any 24-hour period, but a huge variation between summer and winter.

        Obviously neither is relevant to global climate. So we apply a minimum of 12-month smoothing, which simplifies things by removing most of this huge dependence on latitude. (But not all as Arrhenius noted in his 1896 paper: climate sensitivity should be a tad higher at the poles, he argued.)

        But unless we’re placing bets on 2019 instead of 2040, we don’t really care about the decadal events either. This is why I’ve applied 12-year smoothing, to simplify things yet further to rock bottom, namely just the long term ocean oscillations and the influence of CO2. The Arrhenius-Hofmann law as I’ve dubbed it, which is the most plausible law for those with no political agenda, seems to do quite a reasonable job of modeling the latter.

        (Those with agendas will of course find complex models wonderfully obfuscatory. As Chairman of the Royal Australian College of Knowledge Prevention through Obfuscatory Terminology I can speak with some authority on that very effective approach. Ptolemy got away with it for a millennium and a half before Brahe, Kepler, and finally and definitively Newton stepped in. The obstacles faced by Kepler are not unlike those faced today by debaters of global warming—may the best side win.)

        The same principle of averaging out irrelevant short term phenomena is used in tracking the moon, which wobbles a little in the short run but these are smoothed out longer term. In my day high school teachers (and hence we kids) knew nothing about these wobbles, we all just pictured the moon sailing gracefully and tremendously predictably through the sky. For all I know this is still true today, and in any event is essentially correct with the requisite smoothing. The moon’s wobbles are like the black residue curve in my graph. In the short run, 20 years or so, they can be significant, but longer term they flatten out.
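
        For anyone who wants to play with the idea, here is a minimal sketch of the kind of 12-year smoothing and log-CO2 (Arrhenius–Hofmann style) fit I have in mind. It runs on synthetic data with round illustrative numbers, not my actual spreadsheet, so treat it strictly as a toy:

```python
import numpy as np

# Toy illustration only: synthetic annual "temperature" = log-CO2 trend
# + a 65-year oscillation + noise, then a centered 12-year moving average.
# The Hofmann-style raised-exponential CO2 curve and the 2 C/doubling figure
# are assumed round numbers, not a fit to HADCRUT3.
years = np.arange(1850, 2011)
co2 = 275.0 + 2.0 ** ((years - 1790) / 32.5)      # ppm, roughly Hofmann-shaped
trend = 2.0 * np.log2(co2 / co2[0])               # assumed 2 C per CO2 doubling
oscillation = 0.1 * np.sin(2 * np.pi * (years - 1880) / 65.0)
noise = np.random.default_rng(0).normal(0.0, 0.1, years.size)
temperature = trend + oscillation + noise

def smooth(x, width=12):
    """Centered moving average; the window simply shrinks at the endpoints."""
    half = width // 2
    return np.array([x[max(0, i - half): i + half + 1].mean() for i in range(x.size)])

residue = smooth(temperature) - smooth(trend)     # what is left after the smoothed CO2 part
print("residue range: %.2f to %.2f C" % (residue.min(), residue.max()))
```

        With 12-year smoothing the fast components (El Niño, the solar cycle, the noise term here) largely wash out, and the residue is dominated by the slow oscillation, which is the whole point of the exercise.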

        Anyway, you’re going to have a happy New Year in 5 hours (if you’re on the East coast) while I have 24 hours to go. On the other hand I can wish you a Happy New Year in 5 hours (consider it done) but in the other direction you jumped the gun by 36 hours. I guess it evens out that way. :)

      • Vaughan Pratt – do you wonder whether or not climate scientists have missed something about WW2 that might have exacerbated the dip?

        For instance, could they have unintentionally engineered and dispensed an ultra aerosol that stayed in the atmosphere longer than normal? Seems very unlikely, but when I look at the graphs…

      • Well, the RAF did put much of Hamburg (July 24–31, 1943), Dresden (February 13–15, 1945), Pforzheim (February 23, 1945) and other German cities (Nuremberg, Lübeck, …) into the atmosphere while setting what remained on fire, and the US did something similar with Tokyo (March 9–10, 1945), Hiroshima (August 6, 1945), and Nagasaki (August 9, 1945).

        As solid buildings those cities were built to last. Conceivably that was also true of them as dust.

        Since that dust has presumably long since dissipated, and since we don’t seem to be on the verge of an opportunity to repeat the experiment just yet, it might be difficult to decide whether powdered and burnt construction materials make longer-lasting aerosols. But you raise an interesting question there.

        Then there were the endless bombing raids and dog fights throughout the war. WWII might be described as a sporting event to the death, and highly motivated participants in such events naturally tend to exert themselves more strenuously, with a resulting higher-than-usual consumption of fuel.

        Unlike the steady exponential growth of fuel consumption during the rest of the century, its vigorous consumption during WWII would have been our one example of a step function in increase of both CO2 and aerosols. I interpret what we’re seeing there as the immediate cooling effect of the stepped-up aerosols, much as with a volcano, followed some decades later by the warming effect of the stepped-up CO2, whose normally exponential growth otherwise masks this delay. (I found the data best fits the theory when a delay of around three decades is postulated, which makes the actual climate sensitivity closer to 3 °C, rather than the 1.83 °C obtained by assuming no delay.)

      • It was global; it was combustion intense.

        Think about this. My father’s 75mm SPM platoon had four vehicles. On Iwo Jima they fired roughly 12,000 rounds each in 26 days of action. One tiny artillery unit, 48,000 rounds. Prior to the invasion, the Japanese evacuated the island’s civilian population. They had scratched out a living by fishing, raising sugar cane, and mining sulphur. The Japanese quickly saw a defensive use for the sulphur mines, and significantly excavated additional tunnel networks. Lots of dust lying around.

        Pre-invasion bombardment lifts on Iwo Jima, sulphur island:

        http://www.history.navy.mil/photos/images/g410000/g415308a.jpg

        The number of explosions on Iwo Jima, including the Japanese practice rounds when they prepared their precision, ballet-like rain of death, is in the multiple millions.

        Refined oil sheen on the oceans? Had to be there.

        On 100-octane aviation fuel, I found a source that says US production topped out at 600 million barrels per day. Another source says just under one billion gallons of toluene was produced, mostly for TNT.

      • JCH, your anecdotes are fascinating. This would be really interesting to dig into more deeply. About the 600 million barrels a day, do you mean during WWII?

        From time to time I bug our Mark Jacobson on these sorts of things. I have his book “Atmospheric Pollution: History, Science, and Regulation” on my desk right now, but his history section considers only the pollutants and not the polluters, and does not mention anything about wars.

        This article says “After World War II, the industrial economies of Europe and the United States were revving up to a level of productivity the world had never seen before.”

        That’s NASA’s view. This would seem to suggest that no climate scientist has considered the war itself as a source of aerosols (other than aerosol sprays which came into widespread use then), only the postwar boom.

        Definitely worth getting better calibrated on.

      • Substantive phytoplankton Fe experiment in Fe-limited areas in WWII:

        EG http://www.battleships-cruisers.co.uk/merchant_navy_losses.htm

      • And those are just British merchant vessels. Mindboggling. What about other kinds of ships, and other countries?

        What’s the connection with phytoplankton? Are you saying they thrived on all that sunken iron? Was there a noticeable impact other than in the vicinity of the sunken ships?

      • Vaughan – apologies, I remembered it wrong. By 1943 they were making 300 thousand barrels per day and they peaked at producing 600 thousand barrels per day – 100-octane aviation fuel.

      • What’s the connection with phytoplankton? Are you saying they thrived on all that sunken iron? Was there a noticeable impact other than in the vicinity of the sunken ships?

        Any ferrous substrate in a marine environment will attract life as bacteria convert it to ferric iron, making it bioavailable for phytoplankton.

        It’s only a passing observation, as the tonnages in WWII seem so high. If we use the example of iron fertilization from volcanics, as was previously identified, Watson (1997) suggested the tonnage into the Southern Ocean needed for an observable decrease in pCO2 and subsequent increase in O2 was only around 40,000 tons, e.g.:

        By November 1991 the Pinatubo stratospheric aerosol plume had reached the southern latitudes, a recent estimate of the mass deposition flux there being 9×10^-13 g cm^-2 s^-1 at that time. If 1% of this flux was iron, sustained for 3 months over the area of the Southern Ocean, this would amount to roughly 4×10^10 g iron. Given a typical carbon/iron molar ratio of 10^5 for phytoplankton in iron-limited regions, this would enable additional new production of about 7×10^13 mol carbon. Such an increase would then give rise to the observed pulse of the order of 10^14 mol of oxygen into the atmosphere (Keeling 1996).
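
        A rough numerical check of that arithmetic is below. The Southern Ocean area (taken as roughly 6×10^7 km^2, i.e. south of about 40S), the 90-day window and the O2:C ratio of ~1.4 are my assumed round numbers; the rest are the figures quoted above.

```python
# Back-of-envelope check of the Pinatubo iron-fertilization figures quoted above.
# Assumed (not from the quoted sources): Southern Ocean area ~6e7 km^2,
# a 90-day deposition window, and ~1.4 mol O2 released per mol C fixed.
flux = 9e-13                 # g cm^-2 s^-1, total aerosol mass deposition flux
iron_fraction = 0.01         # 1% of the flux assumed to be iron
seconds = 90 * 24 * 3600     # ~3 months
area_cm2 = 6e7 * 1e10        # 6e7 km^2 expressed in cm^2

iron_g = flux * iron_fraction * seconds * area_cm2
iron_mol = iron_g / 55.85            # molar mass of Fe in g/mol
carbon_mol = iron_mol * 1e5          # C:Fe molar ratio of 10^5
oxygen_mol = carbon_mol * 1.4        # assumed photosynthetic O2:C ratio

print("iron:   %.1e g   (quoted ~4e10 g)" % iron_g)
print("carbon: %.1e mol (quoted ~7e13 mol)" % carbon_mol)
print("oxygen: %.1e mol (quoted ~1e14 mol)" % oxygen_mol)
```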

        It’s something that should be kept in mind when we look at the wiggly bits.

      • The obstacles faced by Kepler are not unlike those faced today by debaters of global warming

        Kepler’s non-scientific beliefs were, of course, his rationality: he was a Zoroastrian, i.e. a sun worshipper.

        e.g. Bertrand Russell, Sceptical Essays, p. 27:

        It was primarily by such consideration as the deification of the sun and its proper place at the centre of the universe that Kepler in the years of his adolescent fervour and warm imagination was induced to accept the new system

      • Putting aside whether it is just sophistry to imply I’m a Nihilist (h/t Plato), the important point is that modeling is purposeful, and therefore utility is a basic criterion with which to judge a model.

        I’ve been pretty explicit in linking climate modeling in the context of this discussion back to its use in aiding political decision making, and in democratic states that introduces requirements around political saleability (in totalitarian states the issue revolves around saleability to power elites) .

        In any event for models to be useful for this purpose requires an objective test that goes beyond just satisfying willard.

      • In any event for models to be useful for this purpose requires an objective test that goes beyond just satisfying willard.

        Amen to that.

        Since it was your suggestion, you get the right of first refusal to suggest an objective test. Go for it.

        I’ve been pretty explicit in linking climate modeling in the context of this discussion back to its use in aiding political decision making

        Hopefully I’ve been at least as explicit in not doing so. Unlike you I couldn’t care less about the politics. All I care about is getting the science right, which the politics seems to be screwing up right royally just now. Much like the Scopes monkey trial.

      • On your last point, that’s a perfectly admirable and legitimate goal, although you might find that “getting the science right” as an absolute concept is a slippery eel to grab hold of, and I wonder if your proposed model would be widely regarded as meeting that criterion (even in a methodological sense your treatment of uncertainty is weak).

        I should add that having a goal of aiding political decision making has nothing to do with whether climate change is happening or not (the issue with the monkey); it’s about (1) quantifying the risks and uncertainty associated with it (2) in a credible fashion. The first criterion speaks to the purpose of the model (and is something engineers and applied scientists deal with on a day-to-day basis) and the second speaks to the quality of the science (but adds a fitness-for-purpose dimension that avoids some of the issues around absolutism).

      • I should add that having a goal of aiding political decision making has nothing to do with whether climate change is happening or not.

        To the extent that climate change is irrelevant to political decision making, that seems like a fair statement. So are you saying then that it is irrelevant? If so then I take it you would judge the series of United Nations Climate Change Conferences a complete waste of people’s time.

      • Remind me of the achievements of those conferences…the last two in particular?

        Apart from learning how to build snowmen in Copenhagen, working on the suntans in Cancun, burning zillions of tons of aviation fuel and taxpayers money on pre-Christmas shopping trips and jollies. As well as plotting new ways to spend public money on schemes more akin to sacrifices to Gaia than anything practical or useful.

        Was there some other stuff I missed?

      • The irony is that your military and NHS waste far more money in a year than the sum total of all climate research done in the UK, ever. You’re arguing over pence when many millions of pounds have been waylaid.

      • D64, come back when the tax you pay on fuel comes remotely close to what we pay.
        It costs me around 15% of my take-home pay just to commute to work. And it’s getting worse all the time.

      • I won’t bother to ask about alternatives; your complaints remind me of those who choose to live in remote areas and then complain about not having quick and cheap access to the goodies associated with higher-density living.

      • D64, you have absolutely no idea what you’re talking about.

      • Like I said, your choices carry costs; it’s extremely unlikely that your choices are so constrained as to make your current lifestyle immutable.

      • Listen buddy, you know absolutely nothing about my circumstances, so I’d suggest you leave it there before you inadvertently say something insulting and I retaliate in kind. I’m having a hard enough time of things without know-it-alls sticking their butts in.

      • I apologize – your commuting costs are high.

      • Latimer Alder

        So we are all agreed then.

        There were no achievements worth mentioning at either of the last two UN conferences. They were a total waste of time and money and liberated a lot of unnecessary CO2.

        I can’t quite see our politicos queuing up to be associated with the next one. Not so much Air Force One on the tarmac..more like the Wright Brothers Flyer. Perhaps the US will send Big Al to demonstrate its commitment……………….

      • Sorry my comment was unclear.

        You suggested that the act of aiding political decision-making around climate change through science was somehow akin to state-sponsored suppression of the teaching of evolution. My response was a clumsy way to draw attention to your non sequitur.

        The point I was trying to make was that doing the science with the above purpose has nothing to do with taking political action to suppress views on whether climate change exists or not. Au contraire.

      • The point I was trying to make was that doing the science with the above purpose has nothing to do with taking political action to suppress views on whether climate change exists or not.

        I think I’m losing the thread of our discussion here. Is the above in response to my question of whether you’re saying that climate change is irrelevant to political decision making? If so, is your answer a yes or a no? If not, please back me out to the relevant point.

      • Vaughan Pratt | January 1, 2011 at 2:26 am |

        My comment was not a response to your question, which arose because you made an irrelevant aside about monkeys and evolution that I gave a casual flick at en passant and that you then turned into the subject of the conversation.

        Less time on send and more on receive would be my suggestion.

      • Unfortunately that didn’t clear up anything for me. Sorry to seem dense, but if you were a newspaper reporter I suspect you’d be driving your editor crazy. Editors like their reading public to have clear crisp statements they can understand without having to have a Ph.D. in philosophy. I remain clueless as to what you’ve said so far.

      • Derecho64 is Boulder’s agent provocateur.

      • Vaughan Pratt | January 1, 2011 at 4:50 am |

        Your response arrived 7 minutes after I last commented, which suggests you neither reread the thread nor understood my last sentence (which I thought was reasonably clear).

      • Your response arrived 7 minutes after I last commented, which suggests you neither reread the thread nor understood my last sentence (which I thought was reasonably clear).

        You are just confirming my point that if you were a reporter you would drive your editor crazy.

        Editor: HAS, I’ve spent 7 minutes on this point you’re trying to pitch to our readers and I can’t make head or tail of it.

        HAS: That’s because it’s a difficult point, it would take longer than 7 minutes for our readers to understand.

        Editor: Unfortunately our paper is in competition with papers that can communicate a point to the average reader in ten seconds. Please try to make your point readable in less than 7 minutes.

        If your point cannot be clearly summarized in a single self-contained sentence then I’m afraid I will never find out whatever it was you thought it was important for me to know. When someone asks you for clarification, the only thing clear about the response “but it’s perfectly clear” is a clear failure to communicate.

        You must be on the other team here, the one that feels that CO2 is not a serious concern. The two teams have traditionally demonstrated a total failure to communicate.

      • I’m in two minds whether it’s useful to comment further but it might help increase the S/N ratio in the longer-term.

        By way of context I had an early interchange with Vaughan over Scafetta’s papers. Vaughan headed off with a hiss and a roar misinterpreting the papers and on that basis being sarcastic about them (December 28, 2010 at 3:12 am and December 28, 2010 at 2:20 pm). Over the next day I think we got that straightened out, although not without Vaughan grabbing other statements out of context and making strawmen out of them.

        This particular thread has all the same characteristics. It started with my commenting (December 28, 2010 at 11:53 pm) on possible alternative models to GCMs to forecast 2030 temps.

        Vaughan leapt in to say (December 30, 2010 at 2:33 am) that alternative models would be hopeless, apart from his simple model. I responded (December 30, 2010 at 3:29 am) that simple models were all well and good (and in fact an example of what I had been proposing), but I didn’t think Vaughan’s model was “good enough to carry a political consensus to cut emissions”.

        I doubt that there is much more content in the thread from that point on.

        Rather than address the issue of what models would meet this criterion, or even whether this was the right criterion, Vaughan chased a number of other issues under the pretext of dealing with the issue at hand.

        He argued that naive models are better at prediction (December 30, 2010 at 4:31 am); that he didn’t care about politics, only the science (December 31, 2010 at 3:47 am), although in this comment he also implied the State is interfering with good science just as it had suppressed the teaching of evolution in days gone by; and finally he has me cast as someone who thinks political action isn’t warranted (December 31, 2010 at 5:22 pm), despite my clearly stated views about informing politics being an important role for modelling.

        In my various responses to this (December 30, 2010 at 3:28 pm, December 30, 2010 at 2:56 pm – to Willard as well, December 31, 2010 at 5:05 am) I tried to deal with the various hares that were running while returning each time to the need for models to meet the political consensus test.

        By now the Scopes monkey trial analogy that had no relevance to any issue under discussion had a life of its own. So I tried to kill that (December 31, 2010 at 5:52 pm).

        I do however confess that after 3 days I had some irritation at having statements picked at random from my last comment and used as strawmen just to keep the comments flowing. So I had my first flick at this behaviour. I must say that in having that flick I did rather wonder whether Vaughan could clearly remember any of the multiple conversations he has on the go at a time.

        I got what I deserved: a couple of lectures on my ability to communicate.

        I have tried over the years to improve myself in this regard but I think that the various editors and PR people I’ve had working for me over the years would agree with Vaughan that I’m a hopeless case.

      • I am going to start a new scenarios thread in the next few days. Personally I found your comments to be lucid and interesting.

      • curryja | January 3, 2011 at 8:14 am

        A further thread will be interesting.

        Despite appearances I have been poking around a bit on other things and turned up Environment Canada’s resource on climate literature at http://www.ec.gc.ca/sc-cs/default.asp?lang=En&n=4752148A-1 and I’ll have a closer look at the detection and attribution area as time permits. I am interested in this area (when applied to observations, not model output) as a means to identify the material factors that might go into a top level black box (or is it “skin”) model of the atmosphere, and to then also identify which subsystems need dissection (perhaps a thread on identifying and ranking the top 10 factors that influence temperatures).

        I suspect all this has already been done but I haven’t yet found it in my poking around.

        In this regard I’ve also had a bit of a closer look at some of the work applying Granger causality to climate problems – because it does offer some ability to inform the debate about causality and feedback in climate time series. (A minimal sketch of the mechanics is appended at the end of this comment.)

        In doing this I found a couple of recent papers out of machine learning/data mining from IBM’s T J Watson labs that touch on this (“Spatial-temporal Causal Modeling for Climate Change Attribution” (2009) A. Lozano et al and “Learning Temporal Causal Graphs for Relational Time-Series Analysis” (2010) Yan Liu et al). One suspects that the quality of the CO2 data may be problematic (and it’s not clear if they transformed the data in any way, or are just working with ppm, or if this particularly matters on short time frames).

        Also of idle interest, “Tracking Climate Models” (2010) by Claire Monteleoni, Gavin Schmidt, and Shailesh Saroha references Lozano et al., but only as an example of prior work using data mining techniques (and what the paper refers to as “data-driven climate models” – I guess that’s what I should be defining my predilections as). The references in this paper on this point are on my list of things to look at, but of additional passing interest is the discussion in section 2.1 on the problems with GCMs.
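
        As flagged above, here is a minimal sketch of the mechanics of a bivariate Granger test using the statsmodels implementation on synthetic data. The series, coefficients and lag choice are purely illustrative assumptions, not any of the data sets discussed in those papers.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Purely illustrative: two synthetic stationary series in which y depends on x
# lagged by two steps; we then ask whether x "Granger-causes" y.  Nothing here
# is real forcing or temperature data -- it only shows the mechanics of the test.
rng = np.random.default_rng(0)
n = 200
x = np.zeros(n)
y = np.zeros(n)
for t in range(2, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.2)
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + rng.normal(scale=0.1)

# statsmodels tests whether the SECOND column helps predict the FIRST.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
for lag, res in results.items():
    print("lag %d: ssr F-test p-value = %.3g" % (lag, res[0]["ssr_ftest"][1]))
```

        A low p-value at the relevant lag says only that x improves the prediction of y; as those papers note, that is a statement about predictability, not a demonstration of physical causation.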

      • Sigh.

        Please ignore everything that has been written before

        > Let’s analyze HAS’s response.

        I should not work on all my writings in the same file.

      • Let’s analyze HAS’s response.

        First, note the rhetorical mode. Willard calling HAS a nihilist. Democracy and the public opposed to totalitarian states and the power elites. The need for HAS to satisfy myself.

        Second, the topical mode. The need of utility of models. The need of political saleability. The need of objectivity in testing.

        Putting aside that I have not called HAS a nihilist, the false dichotomy between democracy and totalitarianism, and the personalization amounting to a red herring, we could underline more important points.

        A first one would be that connecting models (specifically those that implement empirical theories) into a utility function is easier said than done. That both look like bunches of numbers does not guarantee that we can rely on some “bridge laws”. Lots of assumptions are at stake behind this program, many of which carry already known epistemological problems.

        A second one would be that connecting utility functions with saleability has never really been done before. At the very least, I have never seen an engineer-level formal derivation of the two. Talking about the democratic saleability of utility functions leads us outside the comfort zone of any formal apparatus known to date. Unless we’re ready to get up from our armchairs and do field work, all the good things that have been said so far rest on a dispositional concept void of any empirical content.

        A third one would be that connecting all this with “objectivity” is the question that is not to be begged. Oftentimes this concept is conflated with rationality, testability, or whatnot. If we’re to believe in objectivity, we must presume that there exists a world independent of our knowledge, our instruments, and our formal criteria.

        The world was there first. It’s even more stubborn than the strictest econometric mandarins.

      • I know I’m going to regret this but …

        First, are you saying the act of building a model is possible without a purpose? In my day we would have called that a low redefinition of the word “model”.

        Second, perhaps you should talk to an engineer going through a public consent process about utility of design and its connection to saleability. For more formal treatment have a look at public policy analysis.

        Third, you suggest that there is a risk of confusing models et al. with the real world. Perhaps there is in some circles (I often see GCM output treated as if it were empirical observation) but let me assure you, not in my mind.

      • Here is the Procrustean testbed in a nutshell:

        /1. We need sophisticated (“good”) models to make policy decisions.
        /2. The models we have for now are not sophisticated (“good”) enough.
        /3. Therefore we should wait until we have better models to make policy decisions.

        An important presupposition in that argumentative scheme is that doing nothing is to be the policy decision to take by default. Another important one seems to be some kind of rational model of policy analysis:

        http://en.wikipedia.org/wiki/Policy_analysis#Rational_model

        We can see some troublesome assumptions: “the problem is unambiguous”, “there are no limitations of time and cost”, “the government is rational”, etc. This model deserves due diligence.

        Whatever we may think of this scheme, it serves a purpose, a purpose that is not the same kind of beast as a “utility function”. This scheme too deserves due diligence.

        How the decision will be sold depends on what we want to sell and to whom. This presumes a model of policy analysis that has a purpose, after all, a purpose that again is not the same beast as a “utility function”.

        Perhaps we should leave all these theoretical considerations and return to Vaughan’s challenge:

        > [HAS gets] the right of first refusal to suggest an objective test.

        Anyone up to the task should go for it, be it by way of Granger causality or otherwise. Let us wait and see what kind of reasoned analysis for policy inaction will show up.

  27. More on sun-climate interactions – (hope this isn’t redundant of something I missed here):

    If we cool significantly in the next 25 years, Sun-climate interactions and the PDO seem to be the ones that have to do a lot of the heavy lifting. I’m happy to wait for the results, but we’re not off to a good start, given that 2010 is likely to be the warmest year in recorded history. This graph gives the solar-climate connection people reason to worry:

    http://www1.ncdc.noaa.gov/pub/data/cmb/images/indicators/solar-variability.gif

    Nevertheless, there’s still hope for a speculative connection between solar activity and cosmic ray flux. The Beryllium-10 concentrations:

    http://en.wikipedia.org/wiki/File:Solar_Activity_Proxies.png

    very closely reflect century scale observed temperature trends. In fact, it’s a remarkable fit, IMHO, thus worthy of serious attention.

    The underlying theory, discussed sensibly here
    http://www.space.dtu.dk/English/Research/Research_divisions/Sun_Climate.aspx

    still has a long way to go before it is solidly confirmed. There is a significant chain of cause and effect here: cosmic rays react in the upper atmosphere and cause increased cloud condensation nuclei further down. These in turn cause increased cloud cover and/or increased cloud optical thickness (particularly for marine stratocumulus in remote oceanic areas). But there are lots of other natural causes for changes in cloud condensation nuclei – changes in land cover (desertification) increase dust, and changes in wind cause more dust and also more oceanic salt nuclei. So this connection is *very* intriguing but still not as firmly established as the direct radiative effect that CO2 has as a ‘blanket’ (I like that analogy better than the greenhouse analogy).

    • Dr. Wetzel
      Beryllium (10Be) data from the two major Greenland research projects are highly unreliable, so much so as to render them useless. I had a short look into this subject and found it requires more scrutiny.
      http://www.vukcevic.talktalk.net/CET&10Be.htm

    • Dr. W,
      I think declaring something significant from such paltry evidence as is provided by the temperature anomaly system is a fundamental fallacy.
      Have you established
      – anything like confidence that global temperature numbers claiming accuracy to 2 or more decimal places are meaningful? No.
      – that temperature anomalies are showing anything significant going on outside the world of those compiling them? No.
      – that temperature alone, in general, is showing anything significant about the climate? Doubtful at best.
      – that 20- or 30-year claims about time horizons for climate are actually climate horizons in the first place? Not really.

  28. A number of scientists and commentators postulate that the AMO index is somehow detached from, and independent of, the underlying North Atlantic temperature movements.
    In this short study I show that the AMO and North Atlantic temperatures (as expressed in the CET data) are closely linked, insofar as they appear to be driven by the same source:
    http://www.vukcevic.talktalk.net/NAP-AMO.htm

    • Vukcevic,
      “The physical process of climate science never considered”
      BRAVO!!! BRAVO!!! EXCELLENT!!!
      This is where I find the clash of current physics to ACTUAL PLANETARY MECHANICS.

      This is exactly the area I am following with pressure changes, salinity changes, wind changes and weather pattern changes.
      H2 16O and H2 18O are water BUT have huge differences in density and characteristics.
      Huge differences from sea level to the heights of land in the atmosphere, as well as at differing pressures.

    • Vukcevic,
      Mountain ranges are starting to have an interesting pattern to the current weather and stalled systems.

    • In this short study I show that AMO and the North Atlantic temperatures (as expressed in CET data)

      It would be nice, Milivoje, if you’d say something about my objections to your using the Midlands of England (CET since the 17th century) as the basis for global climate projections.

      If they correlated well with more global data it would be reasonable, but they don’t, and it therefore isn’t. As you point out yourself, CET shows little sign of the AMO. However more global data such as the tree-ring data bank using data from thousands of sites over six continents clearly shows the AMO.

      CET is about the last thing anyone researching global climate should be using.

  29. The Laken paper seems to indicate, according to its authors’ claims, that the correlation between short-term changes in GCR flux and cloud cover is not a cloud condensation nuclei process – because it happens over land and sea, where the nucleation processes are different – and may be due more to the atmospheric electric field modulated by the varying ionisation caused by GCRs.

    However, that paper and others looking for a solar influence, direct, indirect or second order, have to face the fact that the solar effects are predominantly equatorial and imply a ‘fingerprint’ involving different changes to the troposphere/stratosphere ratio and to polar/equatorial effects than is observed.

    Some of these solar theories seem to be contradictory. The Laken paper indicates that while some correlation between GCR and clouds can be found, there is no correlation between other solar indices and cloud/temperature beyond the expected response to the small energy changes of the Schwabe cycle.

    At the most simplistic level, climate prediction based on extrapolation of any existing trend would be a start. In the Arctic we have at least 30 years of good data which can be reduced to a monthly falling trend in ice extent, greatest in summer, smaller in winter.
    A continuation of the same trend is a reasonable first order prediction unless there is a KNOWN reason for the trend to change.

    In fact the present rate of ice-loss in the Arctic appears to be accelerating. The rate of loss is greater than the simple straight-line extrapolation would predict, and the ice extent today is well below the trend.
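
    As a minimal sketch of what such a first-order extrapolation looks like in practice (the extent numbers below are made-up round figures for illustration, not the actual satellite record):

```python
import numpy as np

# First-order extrapolation of a falling trend: fit a straight line (and, for
# comparison, a quadratic) to an ice-extent series and project both forward.
# The September extents here are synthetic round numbers, not the NSIDC record.
t = np.arange(0, 32)                                  # years since 1979
extent = 7.5 - 0.06 * t - 0.001 * t ** 2              # million km^2, synthetic
extent += np.random.default_rng(3).normal(0.0, 0.3, t.size)

linear = np.polyfit(t, extent, 1)
quadratic = np.polyfit(t, extent, 2)
for year in (2020, 2030):
    tt = year - 1979
    print("%d: linear %.1f, quadratic %.1f million km^2" %
          (year, np.polyval(linear, tt), np.polyval(quadratic, tt)))
```

    If the underlying series really is accelerating, the straight line will sit above the data at the end and under-predict further loss, which is the point being made about the present rate.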

  30. Judith, can you have a look at this:
    http://cdnsurfacetemps.wordpress.com/2010/12/29/simulating-random-variation/
    Previously here I have been trying to explain that random noise plays a significant role in what the temperature profile would be for any given location. Hence, when locations are added together to give a ‘global’ temperature, there will be significant random noise which would swamp any trend from supposed global warming, given the short time frame over which we have made our measurements so far.

    This is not meant to be a simulation of the climate, but an example of how random variation plays an important role. My graphs don’t match the real world perfectly – how could they, if randomness is important – but they look quite close.
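
    To give a flavour of the idea without the graphs, here is a stripped-down toy version (not the simulation from the linked post; the persistence and noise numbers are just assumptions):

```python
import numpy as np

# Toy version of the point: trend-free red (AR(1)) noise still produces
# decade-scale stretches that look like warming or cooling when a straight
# line is fitted to them.  The persistence and noise level are assumptions.
rng = np.random.default_rng(1)
n_years, phi, sigma = 100, 0.6, 0.2
x = np.zeros(n_years)
for t in range(1, n_years):
    x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)

window = 15  # years per fitted trend
slopes = [10 * np.polyfit(np.arange(window), x[i:i + window], 1)[0]   # C per decade
          for i in range(n_years - window)]
print("apparent %d-year trends: %.2f to %.2f C/decade" % (window, min(slopes), max(slopes)))
```

    With these assumed numbers the fitted 15-year windows typically range from apparent cooling to apparent warming of a couple of tenths of a degree per decade, with no trend built in at all.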

    I’d appreciate you telling me what you think of this and how much you think random variation might be playing in global climate.

    • Interesting analysis. At any given location you see huge interannual variations, sometimes no trend, and sometimes huge multidecadal oscillations. The random variations are undoubtedly important, but it’s not clear that they add up into any sort of trend (provided that you look at a long enough time series?)

      • Of course, but our short lifetimes and scientific measurements are not long enough. The test was just a quick sampling of what variations caused by randomness can contribute in the short range. And yes, over a much longer range of several hundred years it would level out into a flat trend with wild fluctuations. But add into that other cycles hundreds of years long, cycles with 200- or 400-year periods, and our short time frame only appears to show a trend in one direction.

        The point being, statistically, how is it possible for climate models to predict anything in the future with various-period random cycles? It’s just not possible. It also raises the question: how much of such randomness is incorporated in these climate models? For if there were, would they not get a different result with every run? Doesn’t sound like they do.

      • The point being, statistically, how is it possible for climate models to predict anything in the future with various-period random cycles? It’s just not possible.

        Is your pessimism hereditary or acquired?

        There is no point looking at 400-year periods when asking about the likely temperature in 2050 or 2100. But the same reasoning makes annual and even decadal variations irrelevant.

        For projections to 2050 or 2100, we should be paying attention only to periodic phenomena with periods between 30 and 100 years, and to any aperiodic phenomena such as global warming and whatever else you might have in mind.

        Since that rules out a lot of the phenomena you’re arguing as being confusing, what would you suggest as relevant here?

        My impression is that there is remarkably little in the frequency range of interest for those making 2050 and 2100 projections.

      • Since random variation over time increases the inability to predict the future, predicting the future climate 50 or 100 years from now would be just as accurate as reading tea leaves.

      • Amundsen-Scott station will be colder than Kuwait City for the 2100-2120 period.

        Am I wrong?

      • Since random variation over time increases the inability to predict the future,

        That’s argument 108 at skepticalscience.com, favored for example by Lord Monckton.

  31. I’d like to add an interesting observation from this summer. I do target shooting, and one of our sessions was from 900 meters (I couldn’t hit the damn target from that range, but most of the rest of the shooters had no problem hitting bullseyes!). Anyway, I had the chance to look through another shooter’s scope, which he had set at 40 power. It was nothing but a shivering haze as the sun heated the ground, sending up heat plumes; you couldn’t even tell there was a target. But as soon as a cloud came over the range, it virtually stopped right away and the target was visible (hence the bullseyes). Seems to confirm that the earth does not retain much of the sun’s heat; it’s sent into the atmosphere right away.

    • At Bisley, many shooters still favour the old short magazine Lee-Enfield WW1 service rifle for long range up to 1000 yds. Something serendipitous about the bolt stiffness. And a good coach reading the wind flags.

      • many shooters still favour the old short magazine Lee-Enfield WW1 service rifle for long range up to 1000 yds.

        .303, standard issue for cadets, the Aussie (etc.?) counterpart of US high school ROTC. Our high school armory (Knox Grammar School) had hundreds of them.

        I ended up in signals and the transceivers back then were so heavy we in signals had to forgo carrying rifles. And a good thing too in my case: before I went into signals in 1960 I once shot someone at close range with a blank while on manoeuvres, aka silly buggers. Hard to imagine doing anything more stupid, except maybe using live ammunition. Electronics rendered me harmless.

        I had a Lee-Enfield in my dorm room as a college freshman in 1962 when I was on the rifle team—had there been a war on I would have either been a sniper or in signals. It never occurred to me to try to sneak some ammunition back from the range with me, god was I naive back then. Nowadays it’s the gun they don’t let you take home, forget the ammo. (Post 9/11 anyway—in 1987 I was able to take a six foot bow and arrow on board as carry-on on a flight from Manaus to Miami by arguing strenuously with the lady at the gate in Manaus that it was just a harmless native artifact.)

        Living in the boonies of north-east NSW in 1957, some of us made guns out of a bicycle pump clamped in a vice, a 3″ firecracker drawn down via a thin wire attached to the fuse, and a marble, all easily obtained for a few pennies. The marble would go straight through a 44 gallon drum (not sure why no one ever complained to us about the holes). Amazingly we never saw a pump explode though we certainly worried about it. Sometimes the pump would shoot backwards out of the vice so we were careful not to stand behind it when lighting the fuse.

        My immediate family is heavily into gun control and is naturally appalled at what we did as kids. I suppose they’re right to think that way, sigh.

  32. Hamish McDougal

    Dr Curry

    I have previously asked for a response, but never mind – I expect no more. You must be overwhelmed.
    Do you:
    1. Accept Sir Karl Popper’s view that science (not post normal) is defined by falsifiability?
    2. Not wonder about non-falsifiable Climate ‘Science’? (viz: the current contortions about the world-wide low temperatures – except in areas where there is no measurement).
    3. Not consider Dr. Syun Akasofu’s paper, being purely evidence-based, and being strongly in accord with Occam, though simplistic, significant?

  33. Brandon Shollenberger

    Okay, this is getting ridiculous. I hadn’t looked at this site much lately due to the holiday season, and today I decided to catch up on what I had missed. While I found the blog posts interesting, reading the comment sections was a chore. Specifically, whenever I saw a comment from Derecho64 or those responding to him, I wanted to close the window.

    I have no intention of pointing fingers, but the sheer inanity of many of these exchanges is ridiculous. There is no attempt to engage people. Nobody is trying to actually have a discussion. It is just people being rude to each other. Heck, there are a large number of comments now which are nothing but insults. This sort of nonsense serves no purpose other than to degrade this blog.

    So as a reader of this blog, I’d like to make some requests. Can people try to avoid rudeness in their posts? Can people try to avoid exchanges which will become nothing but petty bickering? And finally, can we perhaps get some sort of moderation input?

    It is crazy just how much room in these comment sections is being devoted to childish behavior.

    • Brandon, thank you. While no one has been violating blog rules, there has been much silliness and rudeness. I am going to go back to stricter moderation on technical threads.

    • David L. Hagen

      I strongly second Brandon’s request to raise the standard of discussion to professional scientific discourse, not gutter trash talk.

  34. MODERATION NOTE: This is a technical thread. Please send your political comments or put downs onto one of the open threads, or better yet minimize this kind of exchange.

  35. I would agree strongly with Brandon on his following remark.

    the sheer inanity of many of these exchanges is ridiculous.

    I would also strongly disagree with him on the remarks into which he segued smoothly from the above.

    There is no attempt to engage people. Nobody is trying to actually have a discussion.

    Given that a number of us on this thread are trying to have a discussion, several in fact, I would have to say Brandon is not one of them.

    Brandon, shut up and start paying attention to what people are saying on this thread.

    Judith, pay no attention to Brandon.

    • Brandon Shollenberger

      Vaughan Pratt, before you decide to be rude to someone, you should make sure you understand what they have said. You complain about me saying, “Nobody is trying to actually have a discussion.” Your complaint stems from the fact people on this page are trying to have discussions.

      Unfortunately, your complaint makes no sense. I referred to a specific set of exchanges, saying, “[T]he sheer inanity of many of these exchanges is ridiculous.” Of course, “these exchanges” was referring to the exchanges I mentioned in my previous paragraph, “[W]henever I saw a comment from Derecho64 or those responding to him.” Everything I said referred to those specific exchanges. Nothing I said referred to all exchanges.

      I clearly criticized a subset. You got upset and were rude because you applied my criticisms to the whole set. There was no reason for this. If you had just taken some time to think about what I said, you probably would have realized what I meant. If not, you could easily have asked for clarification before resorting to hostility.

      There is nothing to be gained by attacking people. It only serves to degrade discussions. If you are right with your attacks, you look petty. If you are wrong, you look foolish. With this in mind, perhaps next time you take issue with something someone says you could try talking to them before assuming the worst.

      • You’re quite right about my being petty, and I was feeling badly later about it even before I read your response. I apologize for that.

        I clearly criticized a subset. You got upset and were rude because you applied my criticisms to the whole set.

        Oh. That wasn’t obvious from how I was reading it. Guess I need to hone my mind-reading skills, sorry about that.

        But if as you say you were limiting it to Derecho64 and his respondents, namely Jim, hunter, tallbloke, Bob, Bruce Cunningham, juakola, and TomFP, why don’t you do what the non-inane people do and simply skip over whatever you consider inane instead of weighing in with your opinion of what is and isn’t inane here? Metacomments on inanity are themselves inane (including this comment of mine).

        In any event I don’t understand the dynamic of the exchanges with Derecho64, whom I took to be trying to discuss the Scafetta paper, contrary to your claim. If you disagree with his points then argue them with him in the way you’re advocating, i.e. have a real discussion with him about what’s right and wrong with the Scafetta paper instead of dismissing him out of hand as inane. Otherwise you’re just the eighth inane respondent to Derecho64, which then makes you a hypocrite.

      • Brandon Shollenberger

        Vaughan Pratt, you’ve once again misrepresented the extent of my criticisms. I have never leveled criticisms against every comment made by Derecho64 and those responding to him. My criticisms were clearly leveled against “many” of the exchanges. In no way does my criticism preclude Derecho64 (or those talking to him) from making good posts. Saying someone has made inane comments is not saying every comment he has made is inane. Moreover, you refer only to one exchange Derecho64 has had on this page, even though he has had several. Your representation of me is made all the more absurd as I had clearly referred to exchanges both on this page and on others.

        As for why I didn’t skip over the offending comments, the reason is simple. I hoped by highlighting a problem, the problem might be fixed. Seeing as Judith Curry has thanked me for my comment and said she will be using stricter moderation now, it would seem my comment had a positive effect like I hoped.

        The criticisms and insults you have leveled against me are completely based upon misrepresenting what I’ve said. Given that, I cannot see anything of value coming from continuing this. My initial comment has had some positive influence like I had hoped, and that is enough for me.

        If you wish to continue criticizing me, I cannot stop you. All I can do is ask you to read what I said. I have no problem clarifying things if need be, but nothing you have said could possibly be justified by my comments.

      • I have never leveled criticisms against every comment made by Derecho64 and those responding to him.

        Wait, what? You wrote “whenever I saw a comment from Derecho64 or those responding to him, I wanted to close the window.” How is that not leveling a criticism against every comment of his?

        You have a bad habit of telling us what was really on your mind only after I’ve criticized what you actually wrote, and then claiming what you really intended was obvious. It was the same when you wrote “While I found the blog posts interesting, reading the comment sections was a chore” and later claimed that what on the face of it looked like merely an illustrative example was actually intended to narrow the scope of your complaint from the whole comments section to just that example (which I did not pick up on at all from any part of that whole comment).

        It’s like someone who asks “Why did they do that with this to them?” and expects everyone to read the speaker’s mind as to what all those pronouns are referring to because it’s obvious to the speaker.

        The criticisms and insults you have leveled against me are completely based upon misrepresenting what I’ve said.

        That’s certainly how you try to make it appear. Fortunately what we’ve both said is in plain sight on this thread, which should make it unnecessary for us to rehash it. Those who care either way (hopefully no one) ought to be able to make up their own minds at this point without further hints from us.

        Sorry to find myself ending the year on this sour note.

      • Vaughan – it IS obvious he meant he wasn’t criticizing every post of Derecho64.

      • It occurs to me that maybe the members of the same team in the climate debate can finish each other’s sentences, but not members of opposing teams. One would then expect a lot of talking past each other and other symptoms of failure to communicate.

        I’m relatively new here and there are many threads on this blog that I haven’t even begun to explore. (How the chairman of an academic department finds the time to manage that many threads must set a record in academic multitasking.) Of the two main teams in the climate debate, which dominates on this blog?


      • Brandon Shollenberger

        I really don’t care to spend my time dealing with questions like this:

        How is that not leveling a criticism against every comment of his?

        After seeing dozens of derogatory comments which added nothing to anything, I had no desire to read more comments from the same people. Perhaps my meaning wasn’t perfectly clear, but it wasn’t particularly hard to figure out either.

        If there is something which still remains unclear about what I meant by my initial remarks, anyone is welcome to ask me for clarification. Otherwise, I am quite finished. I really have no desire to subject myself to further hostility over asking people to be less hostile.

      • Brandon, people have now figured out you were referring to Derecho64 specifically. Thanks for bringing up this issue, it has been a recurring one, and I am working on dealing with it.

      • Brandon Shollenberger

        While I was referring to him in part, he was not the only one I was referring to. I listed his name because he was a particularly problematic offender, and he seemed to be a focal point for many of the issues.

        I didn’t want to point fingers, but I also didn’t think my point would be made without providing any example. I did make sure to include “those responding to him” in my comment to indicate he wasn’t the only one I felt had crossed the line.

        In any event, hopefully it won’t matter who said what from here on.

    • Actually, the thread that Brandon commented on was pretty good; some of the other threads that I suspect Brandon was reading were more deserving of the descriptor inane. I’ve engaged with Brandon periodically before; he is a credible character IMO.

      • the thread that brandon commented on was pretty good

        Right, that had been my impression too. What’s a particularly egregious example of an inane thread?

      • well the religion thread is pretty entertaining, see also the greg craven thread

      • Brandon Shollenberger

        I don’t know why you two are talking about “threads.” It seems you are suggesting my criticisms were leveled against the entirety of the comment sections of certain blog posts. Perhaps I misunderstand you two, but if not, you two misunderstood what I said.

        My problem is with specific exchanges within the comment sections, not with any comment sections as a whole. I suppose I could provide examples of what I’m referring to, but I don’t know what would be accomplished by it.

      • Not to worry, it had stopped being about you.

      • I see what you mean. :) Not quite as entertaining as the Moscow Circus, but it’ll do.

        Here are the commenters common to both threads, ranked by the geometric mean of the number of their comments on each thread. (The geometric mean beats the arithmetic mean at promoting the indiscriminate commenters.)

        81.4 Derecho64 – 59.2 Latimer Alder – 54.7 hunter – 35.7 kuhnkat – 26.7 tallbloke – 25.6 curryja – 23.0 Jim Owen – 22.4 randomengineer – 19.7 Brian H – 19.2 TomFP – 15.5 Baa Humbug – 14.4 Richard S Courtney – 14.2 Saaad – 10.3 Dave H – 10.0 Nullius in Verba – 9.7 Labmunkey – 7.7 John – 7.0 Jim – 6.9 chriscolose – 5.7 Barry Woods – 4.4 andrew adams – 4.4 David Wojick – 3.7 David L. Hagen – 3.4 Peter317 – 3.1 kim – 3.1 Steven Mosher – 2.8 BlueIce2HotSea – 2.0 Willis Eschenbach – 2.0 RobB – 1.7 harrywr2 – 1.7 Kan – 1.4 Pat Cassen – 1.4 Mike Smith – 1.4 Michael Larkin – 1.4 Mark F – 1.4 Ken Coffman – 1.0 Michael Tobis – 1.0 Bob

        This answers my all too naive question about the Derecho64 dynamic. I should have been more simpatico with Brandon.

        Incidentally the number of posts to date on these two threads are, religion 829, Craven 743, with geometric mean 785. So Derecho64 is carrying just over 10% of the load of these two threads alone. Counting the responses to his posts he must be generating over 20% of the traffic on them.
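
        For the curious, the arithmetic behind those two figures is just this (D64’s 81.4 comes from the ranked list above):

```python
import math

# The ranking arithmetic in a nutshell: geometric mean of the two thread
# totals, and D64's share of that combined traffic (his 81.4 is taken from
# the ranked list above).
religion_total, craven_total = 829, 743
thread_gmean = math.sqrt(religion_total * craven_total)      # about 785
print("thread geometric mean: %.0f" % thread_gmean)
print("D64 share of traffic:  %.1f%%" % (100 * 81.4 / thread_gmean))
```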

        Those of you who’ve followed any of the several Amazon discussion groups on climate change will recognize D64’s doppelganger, TruthSeeker, who posts even more than the notorious Mug Wump (conjectured to be a computer program with human assistants) summed over all the relevant Amazon groups, on the same team as D64.

      • David L. Hagen

        Judith
        Recommend policing D64 for seriously degrading your blog.

        D64 is acting as a self-appointed chatbot, with a corresponding level of integrity and value.

        He is particularly abusive with ad hominem attacks and has refused to reform:
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26132
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26305

      • Judith
        Recommend listing the non-sinners and moderating down all first stones cast by sinners.

        As long as you keep the non-sinner list empty these threads will remain refreshingly free of stones.

        I would suggest throwing stones be made a sin were that not a bit too obvious.

      • I have added moderation notes to these posts, will be watching D64 closely

      • Hello Dr. Curry,

        A little history.
        I arrived at your site somewhat by accident. Maybe a link from Dr. Jerry Pournelle’s blog, I’m not sure. At any rate, my inaugural visit was to the thread re religion, conservatism, and resistance to ‘solving the problem of CAGW’.

        Although your site concentrates on climate SCIENCE, you specifically sought opinion on this thread. I provided mine.

        D64 was not amused. At all.

        After a couple of exchanges, it became apparent that ‘resistance was futile’, so I quit.

        I also noted the commentary of some of your regulars about D64 and his contributions, and decided to rummage around on other more science oriented threads to see how he (?) behaved on those. Turns out, pretty much like he behaved in response to my opinions on the thread addressing religious/conservative resistance to proposals for ameliorating CAGW.

        Which brings me here.

        I make no claim to be a scientist, climate or otherwise, but I will make the following observations:

        Although you seem to be in basic agreement with the 'climate establishment' in that climate change is real (probably; has it ever been otherwise?), it is (likely) catastrophic (not unless the change is toward an ice age), and it is anthropogenic (so far in the noise as to be opinion rather than science), you are very fair with posters who disagree and take the time to address their points directly. And politely. Most of your regulars, to varying degrees, take the same approach. With one obvious exception, which is the reason I am writing on this sub-thread: for the specific purpose of agreeing with regulars Vaughan Pratt and David L. Hagen, above.

        As I pointed out, I am a newbie to the site and lack the scientific credentials to contribute as a regular. I have also not conducted anything like an exhaustive search of all threads, but based on a cursory browse, if D64 has actually contributed anything useful, I didn’t run across it (my opinion). And I encountered lots of samples.

        As part of my ‘browsing’ I did read some of your congressional testimony (Follow-up Part II). Without committing to whole-hearted agreement with every word, including ‘a’, ‘and’, and ‘the’, I will have to say that you did an excellent job. Thank you.

        Bob Ludwick

      • Man up, David. To make the claim (actually, by Don Easterbrook) that “Global cooling has occurred since 1999″, you need more data – which means more time. 10 or 11 years isn’t enough – I know that, you know that, Judy knows that. It’s not *ad hominem* to point out Easterbrook’s claim is false – the result of a mere cherry-pick.

      • This is a much better way to make your point than your previous statement. Need more time for what? Relative to what? It depends on the specific point that is being made.

      • Unfortunately, Tamino's analysis of how long a time series of this type one needs to determine a trend is no longer available. The minimum is 15 years. 11 ain't enough.

        Here’s one example of Don Easterbrook’s deceptive cherry-picking… And another.

      • Latimer Alder

        Did this Tamino guy give any reasons for his opinion? If so, can you reproduce them for us?

      • Like I said, Tamino’s analysis has been lost, until the wayback machine somehow gets ahold of it. Scott Mandia does an analogous exercise and similarly finds the claims of “global cooling” over such a short period to be false.

      • I don't buy Tamino's trend argument (I remember it). The issue is what you are trying to demonstrate. If you are trying to illustrate a trend associated with anthropogenic forcing, then you need to start and end at places that make sense in the context of the major modes of natural internal variability (e.g. compare the 1930s with 1995-2005; averages are better because otherwise you need to factor out ENSO). Trying to do a trend of 10 or 15 years doesn't make sense in the context of climate attribution. But you can still make legitimate statements about one segment of temperature vs another; you just can't say anything about attributing the cause of the differences or trend.
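The "compare period averages rather than short trends" suggestion is easy to try for yourself. A minimal sketch, assuming monthly anomalies have been saved to a two-column text file (decimal year, anomaly); the file name is a placeholder, and the period choices simply follow the comment above:

```python
import numpy as np

# Placeholder input: column 0 = decimal year, column 1 = anomaly in deg C.
data = np.loadtxt("monthly_anomalies.txt")
t, y = data[:, 0], data[:, 1]

def period_mean(start, end):
    """Mean anomaly over [start, end)."""
    m = (t >= start) & (t < end)
    return y[m].mean()

print("1930s mean anomaly:     %+.2f" % period_mean(1930, 1940))
print("1995-2005 mean anomaly: %+.2f" % period_mean(1995, 2005))
print("difference:             %+.2f"
      % (period_mean(1995, 2005) - period_mean(1930, 1940)))
```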

      • I don’t recall that Tamino was attempting attribution, he was just showing that cherry-picking a short period to show “cooling” was deceitful. There’s also the issue of autocorrelation, and of course, determining significance.

        We’ve all seen “skeptics” draw a straight line from 1995 or 1998 to 2008, 2009, etc., and say “global warming ended in 1995 (or 1998)”, and we know that’s just fakery.

      • Yes, and you can even generate your own Tamino-style graphs as I describe below in response to LA.

      • Latimer Alder

        He showed it was ‘deceitful’?

        Not just ‘incorrect’ or ‘without foundation’ or ‘an invalid period to draw such a conclusion’?

        But actually 'deceitful', i.e. being able to demonstrate that it was intended to deceive? Brave guy, to be able to work that out rather than settle for the milder remarks above.

      • What would you call deciding *a priori* to show “cooling” and then finding a too-short period that apparently illustrates that? Honest? Nope.

      • Latimer Alder

        The question isn’t what you or I would call it, it is what the Tamino guy called it. You say he called it ‘deceitful’ in a paper that is no longer accessible. Perhaps we will never be able to confirm this one way or another.

      • Latimer Alder

        Joe Sixpack might wonder why ten years is not long enough and, if there really are good reasons for it being too short, how long is needed to determine whether cooling or warming has actually taken place.

        Because Joe’s cynical brother Jem has noticed that the required period is always too short to demonstrate cooling, whereas warming can apparently be shown within a very few years…But that’s just Jem being difficult.

        Perhaps the way to keep him happy is to put down a definite time (in advance) that can be unarguably agreed to be ‘the right amount’. And that no claims one way or another are made until that time has passed.

        Any objections to this obvious wheeze?

      • Rather than simply taking Tamino’s word for these graphs you can easily generate them yourself using Paul Clark’s WoodForTrees software.

        What I like about ten years or so as the ideal time frame for judging which way the climate is going is that it offers great flexibility, easily seen using WoodForTrees.

        Let’s say you wanted to demonstrate that the rapid rise of the last three decades was all caused by a single decade, the middle one, and that the other two barely climbed at all. For this you would use this graph at WoodForTrees.org.

        It shows that the rise of 0.48 °C from 1980 to 2010, which is 0.16 °C per decade (the violet line), is almost all due to a rise of 0.25 °C per decade during the 1990s (the green line). The red line shows 0.05 °C per decade for the 1980s while the blue line shows an even flatter 0.03 °C per decade for 2000-2010.

        Now let’s start each of these three decades a mere one year earlier while shortening them by a mere one year. We then have this graph.

        This shows the 1980s barely increasing at all at 0.01 °C per decade. Each of the next two decades however rises at very close to the same rate as the overall 30-year trend. Moreover instead of the 1990s rising faster than 2000-2010 it is now rising slightly slower.

        There is no deception here, you can see all the numbers in plain view, and you can experiment with further adjustments yourself using the menus on the right.

        But now try breaking up those 30 years into two 15-year periods. By fiddling the endpoints a little you can still create some variation, but none as dramatic as with 10-year periods. About the flattest you can get each 15-year period is with the two periods 1980-1995 and 1995-2010. Just about anything else is closer to the 30-year trend line.

        For really representative trends, 20 years is about right. You can see this by applying a 20-year moving average to the last 50 years, namely this graph, whose good linearity tells you that looking at the last 20 years of data gives a pretty good idea of what’s going on no matter what year you look back from (though 30 years ago the previous 20 years looked a bit flatter then than it does today).
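The same window-sensitivity exercise can be done offline rather than through the WoodForTrees menus. Here is a minimal sketch, assuming the monthly anomalies have been exported to a plain two-column text file (decimal year, anomaly); the file name and column layout are assumptions, not a WoodForTrees convention:

```python
import numpy as np

# Placeholder export: column 0 = decimal year, column 1 = anomaly in deg C.
data = np.loadtxt("hadcrut_monthly.txt")
t, y = data[:, 0], data[:, 1]

def trend_per_decade(start, end):
    """Least-squares slope over [start, end), in deg C per decade."""
    m = (t >= start) & (t < end)
    return 10.0 * np.polyfit(t[m], y[m], 1)[0]

# Decades aligned two different ways, as in the two graphs discussed above.
for windows in ([(1980, 1990), (1990, 2000), (2000, 2010)],
                [(1979, 1989), (1989, 1999), (1999, 2009)]):
    print(["%+.2f" % trend_per_decade(a, b) for a, b in windows])

# 20-year (240-month) running mean, the smoother behind the last graph.
twenty_year_mean = np.convolve(y, np.ones(240) / 240, mode="valid")
```

Shifting the decade boundaries by a single year, as the two print lines do, is enough to reshuffle which decade looks "flat", which is the point being made above.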

      • Sorry Vaughan,
        Tamino's argument falls apart pretty quickly. By his logic, 20th century warming is just noise on longer timescales and he is simply cherry-picking a sample period to exaggerate his view of AGW:
        http://westinstenv.org/wp-content/postimage/Lappi_Greenland_ice_core_10000yrs.jpg

      • He has written, I think, more than one article that addresses statistically significant temperature trends, but this may be the one in question:

        http://web.archive.org/web/20080501120647/tamino.wordpress.com/2007/08/31/garbage-is-forever/

        In 2007-2008 the blogs were all excited by the La Nina. It was cold outside.

      • Oops!
        http://www.woodfortrees.org/plot/hadcrut3vgl/from:2001/to:2010/trend/plot/hadcrut3vgl/from:2005/to:2010/trend/plot/hadcrut3vgl/from:1998/to:2010/trend
        A trend is simply a trend. It gives us a visual picture of what was happening during a very specific time period, no more and no less (a short numerical check of the three linked trends follows below). The temperature trend over the last 5 years is down (cooling), but this gives us little to no information about the next 5 years, 10 years, or 100 years. It is very easy to over-analyze a period in the past and misinterpret the data when forecasting future trends. Prior trends give little indication of future trends until we have a clear understanding of all the mechanisms involved. We know that ever-increasing GHGs must cause increased warming at some level. What value are 20th century warming trends achieved during a solar grand maximum if the sun goes quiet long term and the ocean cools 2 °C in the 21st century?
        http://www.ncdc.noaa.gov/paleo/pubs/solanki2004/fig3a.jpg
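A numerical version of the three linked trends (fixed end point, varying start year) takes only a few lines; it uses the same assumed two-column export as the earlier sketch:

```python
import numpy as np

# Assumed export: column 0 = decimal year, column 1 = anomaly in deg C.
data = np.loadtxt("hadcrut_monthly.txt")
t, y = data[:, 0], data[:, 1]

# Trends ending in 2010 but starting in 1998, 2001, and 2005,
# matching the three WoodForTrees plots linked above.
for start in (1998, 2001, 2005):
    m = (t >= start) & (t < 2010)
    slope = np.polyfit(t[m], y[m], 1)[0]
    print("%d-2010 trend: %+.3f deg C/decade" % (start, 10.0 * slope))
```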

      • I would say an objective measure would be the detrended standard deviation divided by the trend. This might give 10-20 years. I haven't done it myself, but I think the detrended SD might be about 0.3 degrees. This ratio gives a measure of how long the trend takes to displace the variability by one standard deviation, which should be significant, and it requires a longer averaging period for noisier data or smaller trends.
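That ratio is straightforward to compute. A minimal sketch on synthetic annual anomalies (the 0.016 °C/yr trend and 0.15 °C noise level are assumptions chosen for illustration, not measurements):

```python
import numpy as np

def years_to_emerge(t, y):
    """Detrended standard deviation divided by the fitted trend: the number
    of years the trend needs to displace the variability by one SD."""
    slope, intercept = np.polyfit(t, y, 1)            # deg C per year
    residual_sd = np.std(y - (slope * t + intercept))
    return residual_sd / abs(slope)

# Synthetic annual anomalies: assumed trend plus white noise.
rng = np.random.default_rng(0)
t = np.arange(1960, 2011, dtype=float)
y = 0.016 * (t - t[0]) + rng.normal(0.0, 0.15, t.size)
print("%.0f years" % years_to_emerge(t, y))
```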

      • Yes, but how to determine the correct standard deviation?

        First of all annual variation must be left out. Thus only full year averages should be used, but this is not enough as ENSO influence should also be averaged out as should all other autocorrelation effects.

        How well do we then know the standard deviation, and how strongly do autocorrelated effects like the ENSO phase still influence the result?

      • I think this idea that there is some period that will legitimately allow the derivation of a “trend” is a distraction.

        What should be done is to postulate a model and test it. This can be a simple linear model (see http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26990) or a complex ARIMA or even GCM model. The testing tells you whether you are going beyond the limits of the data.

        This also forces assumptions out into the open, and in the case under discussion gets people to be clear about what they are saying when they refer to a “trend”.
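As an illustration of "postulate a model and test it", here is a minimal sketch comparing a plain linear trend with a linear trend plus AR(1) errors using statsmodels; the synthetic series and its parameters are assumptions chosen only to make the example self-contained:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

# Synthetic annual anomalies standing in for an observed record (assumption).
rng = np.random.default_rng(1)
years = np.arange(1950, 2011, dtype=float)
y = 0.01 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Model 1: linear trend with independent errors.
ols = sm.OLS(y, sm.add_constant(years - years[0])).fit()
print("OLS slope: %.4f  AIC: %.1f" % (ols.params[1], ols.aic))

# Model 2: the same linear trend but with AR(1) errors.
ar1 = ARIMA(y, exog=(years - years[0]).reshape(-1, 1), order=(1, 0, 0)).fit()
print("AR(1)-error model AIC: %.1f" % ar1.aic)

# Diagnostic: is there autocorrelation left in the residuals?
print(sm.stats.acorr_ljungbox(ar1.resid, lags=[10]))
```

Comparing the two fits and checking the residual diagnostics does formally what the "10 vs 15 years" argument above is trying to do informally.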

      • HAS,
        As you wrote, for a proper statistical test the hypothesis must be well specified, or rather the null hypothesis must be well specified so that it can be excluded.

        This is always model-specific. It is always necessary to make some assumptions concerning the nature of the deviations from the model. A far too common assumption is that all deviations are random and uncorrelated in all respects. This assumption is often so badly in error that the conclusions are totally worthless. Time series data are particularly bad in this respect, as typical time series in most cases contain some autocorrelation, and very often also systematic errors and other factors that may be very important.
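One concrete version of the point about autocorrelated deviations is the familiar AR(1) effective-sample-size correction to the trend's standard error, n_eff = n(1 − r1)/(1 + r1). A minimal sketch, with synthetic AR(1) noise and an assumed 0.016 °C/yr trend (both are illustrative assumptions):

```python
import numpy as np

def trend_with_ar1_correction(t, y):
    """OLS trend with its naive standard error and an AR(1)-adjusted one,
    using n_eff = n * (1 - r1) / (1 + r1) on the residual autocorrelation."""
    n = len(y)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # Naive slope standard error, assuming independent errors.
    se_naive = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    # Lag-1 autocorrelation of the residuals.
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1.0 - r1) / (1.0 + r1)
    se_adj = se_naive * np.sqrt((n - 2) / max(n_eff - 2.0, 1.0))  # guard tiny n_eff
    return slope, se_naive, se_adj

# Synthetic 16-year series: assumed trend plus AR(1) noise (phi = 0.6).
rng = np.random.default_rng(2)
t = np.arange(1995, 2011, dtype=float)
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(0.0, 0.1)
y = 0.016 * (t - t[0]) + noise
print(trend_with_ar1_correction(t, y))
```

On short, autocorrelated series the adjusted error bar tends to widen substantially, which is the formal version of the "too short to call" argument running through this thread.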

      • In reply to Pekka, yes, annual averages only, but I would not try to do anything about El Ninos because these go into the trend itself. The detrending is the tricky part, but would have to use a linear trend that minimizes the standard deviation (i.e. the best fit). So for any given time range this mean trend can be defined uniquely, as can its detrended SD, thus allowing a time period for significance to be determined by this method.

      • Pekka Pirilä | January 2, 2011 at 5:59 am |

        I totally agree that tests of models must include testing that the statistical (and the physical) assumptions hold; that's why I think being formal about the model specification is critical. Without that there is no formality about the assumptions, and you get reduced to vague statements about variances etc.

        In your and Jim D's discussion you can specify an ARMA model to take out El Nino and autoregression, test whether it's a good model (assumptions and fit), and get more formal indications of the appropriate sample size.

      • HAS,
        Using ARMA is certainly much better than ignoring the whole issue, but ARMA is still only one model out of an infinity of models. ENSO is another example. Calculating the trend over an interval containing only a few ENSO cycles makes the resulting trend dependent on the starting and ending phases of ENSO. The presence of ENSO is visible in AR, but AR cannot tell everything about ENSO. If a better model of ENSO can be justified, its use leads to a more accurate determination of the trend. Similar issues apply to AMO etc.

        Paleoclimatic temperature series provide another example of difficulties of estimating the accuracy, and in this case very significantly the skill of the methods in finding out the correct signal covered by a lot of noise, a multitude of auto and cross correlations and also potential systematic errors, whose consequences may be amplified by the statistical methods. I do not agree with all criticism presented by skeptics, but on the other hand I feel that the risk of certain systematic errors has been overlooked even by the skeptics.

      • David L. Hagen

        See Judith’s Blogospheric New Year’s resolution especially:

        Respond to the argument, not to the person. . . .No ad hominem attacks, slurs or personal insults. Do not attribute motives to another participant.

        Your accusation was explicitly ad hominem, attacking the person and accusing me of lying, which is why I called you on it. So too your false accusation of "cherry-picking". Read it again:
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26132
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26305
        Go learn what “ad hominem” means and avoid it.
        http://en.wikipedia.org/wiki/Ad_hominem
        http://www.fallacyfiles.org/adhomine.html
        http://philosophy.lander.edu/logic/person.html

        See my response re "global cooling" in the context of the PDO cooling cycle, not millennial change.
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26750

      • The earth has not been “cooling”. Period.

      • Latimer Alder

        Ahh. Assertion by Authority.

        But no evidence attached I fear. :-(

      • Go to woodfortrees.org and do your own analysis, Latimer. Don’t take my word for it. Just be careful you don’t fall into the trap of cherry-picking, as Easterbrook does – which Hagen still hasn’t figured out yet.

      • That may be correct, but 0.6 °C ± 0.5 °C in the last century is only just warming

      • Derecho64 – You can argue the cooling isn’t a climate trend, but there are definitely cooling periods in the temp record. Calling someone deceptive for pointing out the obvious seems over the top.

      • David L. Hagen

        The pontiff of global warming has spoken.
        So let it be written. So let it be done.
        As in the 70s, so it is today. http://wattsupwiththat.com/2011/01/01/time-magazine-and-global-warming/
        As affirmed by his apostle Kevin Trenberth in 2009:

        “The fact is that we cannot account for the lack of warming at the moment and it’s a travesty that we can’t.”

        And as confirmed in November 2009 by his spokesman Tim Flannery (The Weather Makers):“In the last few years, where there hasn’t been a continuation of that warming trend,”
        http://icecap.us/index.php/go/joes-blog/noaas_magic_wand_waves_away_2000_2009_cooling/
        As further supported by NOAA evidence for the 2000-2009 period showing a cooling trend of −0.73 °F/decade.
        http://www.paulmacrae.com/wp-content/uploads/2010/08/screen-shot-2010-07-31-at-121626-pm.png

        As further confirmed by: “there appear to have been periods of ice free summers in the central Arctic Ocean.” From: New insights on Arctic Quaternary climate variability from palaeo-records and numerical modelling

        And as further supported by Peter Hodges:
        “Quantitative reconstructions of LIG (Last InterGlaciation) summer temperatures suggest that much of the Arctic was 5C warmer during the LIG than at present”
        http://wattsupwiththat.com/2010/09/10/study-arctic-was-5c-warmer-during-the-lig-last-interglaciation-than-at-present/

        We pray his greenship will spare us further pontifications to preserve us from great consternations.

      • If you want to impress, don’t reference WUWT.

        Here’s Trenberth’s paper that provides additional context to his context-free remark from the stolen emails:

        Trenberth, K. E., 2009: An imperative for climate change planning: tracking Earth’s global energy. Current Opinion in Environmental Sustainability, 1, 19-27, doi:10.1016/j.cosust.2009.06.001.

        As well as a video presentation on the same subject.

        Yes, we know that the Arctic has been ice-free in the past. Doesn’t mean that AGW isn’t happening now. There can be more than one reason.

        As for "cooling" over 2000-2009, I must reiterate that a decade isn't long enough to change a longer-term trend. Is that not clear?

      • David L. Hagen

        Derecho64
        My reply just above was responding to you again breaching the net rules at:
        http://judithcurry.com/2010/12/27/scenarios-2010-2030-part-ii-2/#comment-26731

      • Easterbrook has engaged in deceptive practices before, as I’ve detailed. Do you not have a problem with that?

      • Latimer Alder

        Seems to me that there are 'deceptive practices' all over the place in climatology. I've said before that it seems to be impossible to get a straight answer from a climatologist.

        Which is why it pays to be deeply suspicious of everything that is claimed. And even more so when it is claimed without evidence or explanation. And even more so when the claimed evidence is in fact no more than ad hom attacks.

        Ad hom attacks are the last refuge of the BS merchant when out of their intellectual depth or understanding.

      • Unfortunately, your “skepticism” runs only one way, Latimer.

      • Do we need to lighten up this thread a little?
        Here you go

        http://stevengoddard.files.wordpress.com/2011/01/liberalsheadupass.jpg?w=400&h=305

      • Latimer Alder

        ‘Unfortunately, your “skepticism” runs only one way, Latimer’

        Since it is for the AGW proponents to make their case, clearly that is the case that needs to be probed and prodded.

        As Einstein said, it doesn't matter how many people agree with him to show that he was right; it takes only one experiment to show that he is wrong.

        I certainly don’t agree with a lot of the detail of what other sceptics say here and elsewhere. But that is not the issue. The issue at the moment is whether the AGW proponents have made their case.

        Sorting out which of the other views is the correct one will come later, and I will be just as suspicious of people’s claims when that day comes.

        But (sadly) we ain’t there quite yet.

      • 'Course, your "probing and prodding" is extremely cursory, to be extremely generous.

        Do some of your own reading and research – somewhere other than a blog. That will benefit you greatly.

      • Latimer Alder

        @D064

        Thanks for the advice. I will resolve to be even more sceptical and ask even more Joe Sixpack questions in future.

        But there's still quite a stack of unanswered ones already asked to go through. Like the length of time needed to show a real trend…and why.

      • Do about a year’s worth of reading first – and see how many you can answer all on your own.

      • Good one, BH. In addition to lightening up we should also all strive to be fair and balanced.

      • @VP

        Touché

        Though I don’t know which would fit easier for Rush, his head in his ample a$$ or his a$$ in his mouth.

  36. Dr. Curry
    here is my closing comment for year 2010:
    – No climate model is viable without including AMO and PDO events.
    – De-trending is not conducted properly
    – There is no understanding of what the source of these events is.
    – Such inadequate information is grafted into climate models as if they were predictable variables (which is nonsense).
    In this link
    http://www.vukcevic.talktalk.net/CD.htm
    I show the true relationship between the AMO and PDO (properly de-trended, on an 11-year moving-average basis, hence only up to 2005), as well as a variable unknown to climate science, the North Atlantic Precursor (rounded to the nearest integer), which is not a function of any solar or climatic parameter. The result is either an extraordinary coincidence or, alternatively, one of the possible climate drivers that climate scientists will have to pay more attention to.

  37. ok, the comment is interesting, but no idea what the graph means, or what the north atlantic precursor is.

    • The AMO and PDO are de-trended going back a number of decades from reconstructed data. Once all trends are removed, leaving just the inter-annual oscillations, there is a close correlation between the AMO and PDO (see the above link for the graph's blue and green lines). Prior to the 1950s the correlation is more intermittent.
      If the pulsing of SST in the North Atlantic and Pacific is so closely correlated on an annual basis, then the cause could be solar irradiance, but there is no trace of it in the TSI.
      Some months ago I discovered the existence of another factor that may affect North Atlantic climatic events (the NA precursor, NAP). Plotting crudely rounded-off raw data (I wish to keep its nature private until I am certain of its mechanism) shows a degree of correlation, which could identify the NAP as a possible 'driver' of the inter-annual AMO/PDO oscillations.
      I shall email you with some more details sometime in January; then you will be able to properly assess the value of my observations. HNY.
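For readers who want to reproduce the general procedure being described (detrend, smooth with an 11-year moving average, then correlate), here is a minimal sketch on two made-up annual index series; the synthetic series are assumptions and say nothing about the actual AMO/PDO data or the undisclosed NAP:

```python
import numpy as np

def detrend(t, y):
    """Remove the least-squares linear trend from an annual index."""
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def smooth(y, window=11):
    """Centred 11-year moving average."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

# Two made-up annual indices sharing a low-frequency component (assumptions).
rng = np.random.default_rng(3)
t = np.arange(1900, 2006, dtype=float)
shared = np.cumsum(rng.normal(0.0, 0.1, t.size))
index_a = 0.002 * (t - t[0]) + shared + rng.normal(0.0, 0.1, t.size)
index_b = -0.001 * (t - t[0]) + shared + rng.normal(0.0, 0.1, t.size)

a = smooth(detrend(t, index_a))
b = smooth(detrend(t, index_b))
print("correlation of smoothed, de-trended indices: %.2f"
      % np.corrcoef(a, b)[0, 1])
```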

  38. I don’t know if this has been brought up, but I had never seen this chart before. It is temp vs. length of solar cycle. It shows one heck of a correlation. This casts doubt on the entire global warming hypothesis.

    http://wattsupwiththat.com/2011/01/02/do-solar-scientists-still-think-that-recent-warming-is-too-large-to-explain-by-solar-activity/

  39. HAS, a comment on your previous post: yes, this is the sort of thing that the new post will discuss. Re Granger causality, it is very interesting; I played around with it a bit, and got some things that didn't quite make sense for my data set.
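For anyone wanting to try the same exercise, statsmodels ships a basic Granger-causality test. A minimal sketch on synthetic data (the one-year lead of x over y is invented purely so the test has something to detect; this is not the data set referred to above):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic series where x leads y by one year (an assumption for illustration).
rng = np.random.default_rng(4)
n = 100
x = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for i in range(1, n):
    y[i] = 0.5 * x[i - 1] + rng.normal(0.0, 0.5)

# Tests whether the second column helps predict the first, up to 3 lags.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
print("lag-1 F-test p-value: %.3g" % results[1][0]["ssr_ftest"][1])
```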
