by Judith Curry
What do these three papers have in common? All were written by scientists well outside the fields of atmospheric and climate science.
Minding our Methods: How Choice of Time Series, Reference Dates, and Statistical Approach Can Influence the Representation of Temperature Change
Kalb Stevenson, Lillian Alessa, Mark Altaweel, Andrew Kliskey, Kacy Krieger
From the closing section, Future Recommendations:
Choice of time scale, reference dates, and statistical approach can severely impact the representation of climate change. In particular, special consideration must be given to major climatic events, such as the 1976 PDO shift, which has the potential to interact with nearby reference start dates and introduce erroneous information into reports. Having a time range that covered well before and after the 1976 PDO would help to minimize the effects of this event; a method that de-emphasized extremes, such as a running mean, could be used to estimate long-term temperature trends. A cautious approach should be taken when comparing temperature trends from multiple studies that do not use identical methods or when considering the selection of reference start date from a year exhibiting a temperature extreme. Furthermore, different types of climatic patterns and anomalies are captured when using various local and global methods, suggesting that caution is needed when comparing estimates from these different classes of methods. The chosen method should be able to capture trends in a data set without being overly sensitive to variation. A subfield of robust statistics has been developed to handle this.
The ability to adequately describe temperature change may, at some point, be compromised given the increase in temperature extremes in contemporary climate change. Therefore, an essential part of future estimations of multidecadal temperature trends in Alaska and elsewhere should be the implementation of a comprehensive and comparative analysis of time series and sensitivities to start dates and statistical methods. Applying multiple methods with different start and end dates, along with varying window size and filters used, should be considered and outputs tested using sensitivity analysis in order to determine how sensitive results are to extreme variations. Methods that over-emphasize extreme variations are likely to be less useful to climate scientists and policy makers interested in long-term temperature trends. Furthermore, comparisons of studies carried out by different groups would be best served by using standardized time periods along WMO guidelines (see above). This is especially important when different methods are used to analyze data.
There are a number of complex drivers underpinning arctic and subarctic climate, and therefore caution should be used against making large, broad-scale, or sweeping statements about climate and climate change. In the recent past, some regions of Alaska have been warming while others appear to have been cooling. However, the summarized or reported direction and degree of change depends heavily upon the choice of time scale, reference date, and statistical approach. For Alaska, greater variation in microclimates could lead to temperature trend estimates being more sensitive to reference start dates, and thus greater discrepancy between temperature changes reported by different statistical methods. This has implications for management practices that rely either on historical trend estimates or on anticipated temperature trajectories. It also has strong implications for Northern cultures that either directly or indirectly depend on a certain level of predictability in seasons and seasonal events (e.g., freeze−thaw events, cold snaps, first snowfall, spring melt, density of the snowpack, storm frequency, sea ice availability or thickness, river ice thickness) for their acquisition of food and fuel, their socio-cultural identity, and their safety while traveling for subsistence purposes.
If scientists are able to utilize the methods and approaches described above appropriately, there is likely to be better representation of changes in climate, shifts in seasonality, and any resulting influence on subsistence fish and game species or existing infrastructure. As policy makers contend with developing responses to climate change and its impacts in Alaska and beyond, it is imperative that the use and interpretation of scientific studies to support policy development minimizes any potential for bias by giving due consideration to the methods used to estimate temperature change.
Published online in the ACS journal Environ. Sci. Technol., full paper is available online [here].
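To make the paper's start-date warning concrete, here is a minimal sketch (my own illustration on synthetic data, not code from the paper) comparing an ordinary least-squares trend with a robust Theil-Sen trend across reference start dates straddling a 1976-style step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual anomalies, 1950-2010: a small linear trend plus noise,
# with a step in 1976 standing in for the PDO shift (illustrative numbers).
years = np.arange(1950, 2011)
temps = (0.01 * (years - 1950)
         + 0.3 * (years >= 1976)
         + rng.normal(0.0, 0.15, years.size))

def ols_trend(x, y):
    """Ordinary least-squares slope."""
    return np.polyfit(x, y, 1)[0]

def theil_sen_trend(x, y):
    """Median of all pairwise slopes -- a robust estimator of the kind
    the robust-statistics literature mentioned above provides."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))]
    return np.median(slopes)

# The fitted trend depends strongly on the reference start date,
# especially for start dates adjacent to the 1976 step.
for start in (1950, 1975, 1976, 1977):
    m = years >= start
    print(start, round(ols_trend(years[m], temps[m]), 4),
          round(theil_sen_trend(years[m], temps[m]), 4))
```

Running the loop shows the fitted slope jumping as the start date crosses the step, which is the interaction between reference dates and the 1976 shift that the paper warns about.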
Biosketches: Kalb Stevenson is a postdoctoral scientist affiliated with the Department of Biological Sciences and Resilience and Adaptive Management Group at the University of Alaska Anchorage. Lilian Alessa is a Professor of Biological Sciences at the University of Alaska Anchorage and co-leader of the Resilience and Adaptive Management Group at UAA. Mark Altaweel is a Lecturer at University College London and a visiting scientist at the University of Chicago and Argonne National Laboratory. Dr. Altaweel is interested in researching past and modern social ecological systems as they pertain to water, agriculture, and transportation issues. Andrew Kliskey is a Professor of Biological Sciences and Environmental Studies at the University of Alaska Anchorage, where he co-leads the Resilience and Adaptive Management (RAM) Group. Kacy Krieger is a Geospatial Scientist with the Resilience and Adaptive Management Group at the University of Alaska Anchorage. Kacy has a background in geomorphology.
Evaluating explanatory models of the spatial pattern of surface climate trends using model selection and bayesian averaging methods
Ross McKitrick • Lisa Tole
Abstract. We evaluate three categories of variables for explaining the spatial pattern of warming and cooling trends over land: predictions of general circulation models (GCMs) in response to observed forcings; geographical factors like latitude and pressure; and socioeconomic influences on the land surface and data quality. Spatial autocorrelation (SAC) in the observed trend pattern is removed from the residuals by a well-specified explanatory model. Encompassing tests show that none of the three classes of variables account for the contributions of the other two, though 20 of 22 GCMs individually contribute either no significant explanatory power or yield a trend pattern negatively correlated with observations. Non-nested testing rejects the null hypothesis that socioeconomic variables have no explanatory power. We apply a Bayesian Model Averaging (BMA) method to search over all possible linear combinations of explanatory variables and generate posterior coefficient distributions robust to model selection. These results, confirmed by classical encompassing tests, indicate that the geographical variables plus three of the 22 GCMs and three socioeconomic variables provide all the explanatory power in the data set. We conclude that the most valid model of the spatial pattern of trends in land surface temperature records over 1979–2002 requires a combination of the processes represented in some GCMs and certain socioeconomic measures that capture data quality variations and changes to the land surface.
Online version published by Climate Dynamics. Full paper is available online [here].
Ross McKitrick is no stranger to those who follow the climate debate. Ross is a Professor of Economics at Guelph University, specializing in environmental economics and policy analysis.
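For readers unfamiliar with the technique, Bayesian Model Averaging enumerates candidate models and weights each by its posterior probability, so every regressor gets a posterior inclusion probability. The sketch below is a deliberately simplified stand-in (BIC-approximated weights on synthetic data), not a reproduction of McKitrick and Tole's setup:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y depends on x0 and x2; x1 is an irrelevant regressor.
n = 200
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0.0, 0.5, n)

def bic(y, X_sub):
    """BIC of an OLS fit with intercept (lower is better)."""
    m = y.size
    A = np.hstack([np.ones((m, 1)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    return m * np.log(rss / m) + A.shape[1] * np.log(m)

# Enumerate every subset of regressors; weight each model by exp(-BIC/2),
# a rough approximation to its posterior model probability.
names = ["x0", "x1", "x2"]
models, scores = [], []
for r in range(len(names) + 1):
    for subset in itertools.combinations(range(len(names)), r):
        models.append(subset)
        scores.append(bic(y, X[:, list(subset)]))

w = np.exp(-0.5 * (np.array(scores) - min(scores)))
w /= w.sum()

# Posterior inclusion probability: total weight of models containing x_j.
pip = {names[j]: sum(wi for wi, mdl in zip(w, models) if j in mdl)
       for j in range(len(names))}
print(pip)
```

The true regressors x0 and x2 come out with inclusion probabilities near one while the irrelevant x1 is typically small; the paper's (more sophisticated) version of this exercise is what identifies which GCMs and socioeconomic variables carry explanatory power.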
Did the global temperature trend change at the end of the 1990s?
Abstract. The apparent leveling of the global temperature time series at the end of the 1990s may represent a break in the upward trend. A study of the time series measurements for temperature, carbon dioxide, humidity and methane shows changes coincident with phase changes of the Atlantic and Pacific Decadal Oscillations. There are changes in carbon dioxide, humidity and methane measurement series in 2000. If these changes mark a phase change of the Pacific Decadal Oscillation then it might explain the global temperature behaviour.
Accepted for publication in the Asia-Pacific Journal of Atmospheric Sciences. Link to full paper [here].
Tom Quirk’s biosketch: Tom Quirk trained as a physicist at the Universities of Melbourne (M.Sci.) and Oxford (D. Phil). He has been a Fellow of three Oxford Colleges and has worked as a high energy physicist in the United States at Fermilab, the universities of Chicago and Harvard and at CERN in Europe. In addition he has been through the Harvard Business School and subsequently worked for Rio Tinto. He was an early director of Biota, the developer of Relenza, a new influenza drug. In addition he has been involved in the management of gas and electricity transmission systems as a director of the Victorian Power Exchange (electricity) and Deputy Chairman of VENCorp, the company that managed the transmission and the market for wholesale natural gas in South East Australia.
JC comments: Each of these papers provides a fresh perspective on interpreting surface temperature data trends. The authors of all these papers have academic training and expertise that lies well outside of the field of atmospheric science and climatology. I have long said that fresh perspectives from outside the field of climate science are needed. Congrats to Ross for getting his paper published in Climate Dynamics, which is a high impact mainstream climate journal. My main question is whether the lead authors of the relevant IPCC AR5 chapter will pay attention to these papers; they should (and the papers meet the publication deadline for inclusion in the AR5).
Thanks Dr. Curry.
There didn’t seem much besides the obvious in the first paper, though perhaps the obvious needs pointing out to some who manipulate the stats. I’d add reliance on just a few climate stations in an area of microclimates to the list of pitfalls. Also, nice to see the 1976 PDO shift get a mention – it seems to be on its way to being airbrushed out of AGW-style climate literature.
Dr McKitrick’s paper seems more substantial, but it’s difficult to see the IPCC welcoming his conclusions that many climate models are less accurate than chance.
I guessed that the ES&T article (first paper) was a “feature” in ES&T lingo as opposed to an original research article. I was right. I’m pretty familiar with that journal having struck out twice there. Features are somewhere in between academic research articles and magazine pieces (this is not meant as a slight). Here’s what they say:
‘Balanced examination of significant developments relevant to environmental science & technology community; written in magazine style, assuming 1 year of university level physical and environmental sciences/engineering knowledge; peer-reviewed.’
Presumably that was why it was very easily accessible to non specialists like me. I could read it in one go and grasp the content.
If climatologists could be persuaded to adopt a style somewhere between the impenetrably turgid and the arrogant, smug superiority that so many feel is the birthright of ‘real climate scientists’, they might be more persuasive in making the case that not all of climatology is pseudoscience.
I published several papers in ES&T, in bygone years.
Editors of ES&T and other research journals were as blind as the rest of us to the manipulation of information and society in response to a rapid sequence of events at the end of the Second World War, that included:
a.) The destruction of Hiroshima on 6 Aug 1945 by releasing energy from the cores of uranium atoms.
b.) The destruction of Nagasaki on 9 Aug 1945 by releasing energy from the cores of plutonium atoms.
c.) The establishment of the United Nations on 24 Oct 1945 to “save the world” from destruction by “nuclear fires”.
d.) Two landmark papers [2,3] in 1946, abruptly changing the nature of “nuclear fires” in the cores of stars, were adopted worldwide without debate or discussion.
e.) In 1948 George Orwell wrote “1984”, a futuristic novel about life under a tyrannical government that used control of information and electronic surveillance.
Much like the drone planes used now for surveillance of US citizens
And UN Agenda 21 adopted in 1992 and 2002
With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
1. K. Y. Chiou and O. K. Manuel, “Chalcogen elements in snow: Relation to emission source”, ES&T 22, 453-456 (1988), etc.
2. Fred Hoyle, “The synthesis of the elements from hydrogen,” Monthly Notices Royal Astronomical Society 106, 343-83 (1946). http://tinyurl.com/8aal4oy
3. Fred Hoyle, “The chemical composition of the stars,” Monthly Notices Royal Astronomical Society 106, 255-59 (1946). http://tinyurl.com/6uhm4xv
4. Fred Hoyle, Home Is Where the Wind Blows: Chapters from a Cosmologist’s Life (University Science Books, Mill Valley, CA, USA, 1994, 443 pages) pp. 153-154
5. George Orwell, “1984″ (Harcourt Brace Jovanovich, Inc, 1949, First Signet Classic Printing, 1950, 328 pages)
Is it obvious that climate is warming? Or could it be cooling?
Furthermore, climate has been warming since the last glaciation, cooling since the Holocene Climatic Optimum, warming since the Little Ice Age, little warming since ~2000, and presumed cooling to the next glaciation. Yet if one does not “believe climate change is real” one is castigated with pejorative adjectives by those who equivocate between “climate change” and “catastrophic anthropogenic global warming”!
cui bono | June 21, 2012 at 6:15 pm |
..many climate models are less accurate than chance..
A model that is different from pure chance may be quite interesting.
Sure! It can tell you, inadvertently, what is NOT likely to happen!
Brian H | June 22, 2012 at 3:29 am |
Predictive power resides in any significant outcome differing from chance. Whether that difference is greater or less, the fact that some part of it reliably differs (if it’s reliable; given the context, of course, it isn’t) from random results means there is something to the method that bears study.
A student who gets 25% on a four-option multiple choice exam likely knows nothing, or spent too much time studying issue trees. A student who gets zero certainly is far more interesting.
But a student who gets 24%, 23%, 22% etc is not so interesting. I went to school with at least one who managed this feat.
Michael Hart | June 24, 2012 at 8:53 pm |
Is that 24%, 23%, 22% down to 0% in order on 25 consecutive fair multiple choice exams?
That would be a feat taking some doing.
The point was, and is, that such low scorers can provide a reliable guide to what not to accept/believe. I have found that taking the 180° converse of (C)AGW/IPCC-supporter positions has reliably put me on the data-supported side of things.
The use of models in the same bucket, so to speak, as data-based variables in McKitrick’s paper is interesting. Has this been done before in climate studies?
Have just read summaries, but not yet the whole papers themselves, but to your question:
Based on past performance I’d say that, regrettably, IPCC will only “pay attention” to papers that help them sell the consensus premise that AGW has been the principal cause of late 20th century warming and, thus, represents a serious potential threat to humanity and our environment, unless actions are taken to curtail GHG emissions.
The link for the full text of McKitrick’s paper takes me to a paywalled version.
See http://www.RossMcKitrick.com where Ross provides a preprint link. See too McKitrick’s opeds in the Financial Post: Part I and Part II.
wow and wow. I was aware of the weak predictive power of the models, but the level surprises me still.
Simulation models can’t tell you anything you don’t already assume about the inputs and how they behave, but calibrating them to reality can tell you a lot about those assumptions.
Was the use of the Bayesian approach merely climatologists’ way of improving communication between experts and the public? Or, was it a way to undermine the public’s confidence in the usefulness of the scientific method?
Data analysis is part of atmospheric science. There are more data issues than physics issues in the climate debate.
The problem of attempting to gain an understanding of the world through reductionism – as GCM model-makers attempt to do – means that the degrees of freedom can never be known. Accordingly, statistical significance can never be achieved…
The GCM model-makers would like to indulge the notion that the more components we have, the more we will understand the system. Actually the reverse is true: the more components we have, the more we will have to combine, the more interactions there will be that we do not really understand, and the more complex the system becomes to understand and model as it seemingly acts independently of all the rules we developed that it is supposed to follow and refuses to obey.
In other words, the climate system becomes more not less mysterious.
“Dr McKitrick’s paper seems more substantial, but it’s difficult to see the IPCC welcoming his conclusions that many climate models are less accurate than chance.”
Paywalled, unhappily … but at least this is now in the peer-reviewed literature
Stated before…but the full pre-print version of McKitrick & Tole’s paper is here:
Thanks for the DL link
BTW, the “stated before” sneer from Gates – the relevant comment was posted at almost the same time as mine, so Gates’ comment is just superfluous
Facts are facts: it’s all just academic–literally. Is the Earth warming? The answer clearly is absolutely ‘Yes’ but, ‘Not Really’…
It is trivially true that the Earth has warmed over the last 20 thousand years and also over the last 150 years. An alternative viewpoint is that the Earth has cooled over the last 10 thousand years; it all depends upon the length of your piece of string. But most importantly of all, and over the time scale that counts for testing the hypothesis of dangerous global warming, since 1998 the Earth has failed to warm at all despite an increase in atmospheric carbon dioxide of more than 5 per cent. ~Bob Carter, 23-Nov-2010
One comment about McKitrick & Tole’s paper. In the conclusion they state:
“Thus we conclude that a good model of the spatial pattern of surface climatic trends likely requires a combination of the processes represented in some GCMs and measures of data contamination induced by regional socioeconomic variations.”
Now, one of their “socioeconomic variations” is the change in coal consumption. Coal consumption!? What immediately occurs to me is that coal consumption has two direct forcings on the climate…one long-term and one short-term, and both in opposite directions. In the short term, coal increases aerosols and causes a negative forcing on the climate. But (once the burning of the coal stops) these aerosols are fairly quickly removed from the atmosphere and the negative forcing ends. In the longer term the CO2 produced from the burning of coal produces a warming effect.
Why does this matter? McKitrick & Tole find the combination of some GCMs and socioeconomic variations to best represent the spatial pattern of surface climatic trends. Though coming at it from a completely different direction, this is exactly what other attribution studies have found when looking at the “Asian brown cloud” and economic activity from China and other Asian countries. Some amount (though certainly not all) of the flattening of temperatures over the past decade has certainly been caused by the rapid increase in industrial activity in Asia, fueled greatly by the burning of coal and a measured increase in anthropogenic aerosols. Those attribution studies, such as Foster & Rahmstorf 2011 and others, have aptly noted this, and, though they might not admit it, McKitrick & Tole 2012 are saying something similar: include the change in the amount of coal burned along with some of the GCMs and you get a much better model of the surface climate trends. Without intending to (or probably being willing to admit it), McKitrick is agreeing with Rahmstorf. Funny world…
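The two-timescale point is easy to put in toy form. The decay constants and magnitudes below are purely illustrative assumptions, not calibrated forcings:

```python
import numpy as np

# Net temperature forcing from a single pulse of coal burning in year 0:
# a short-lived negative aerosol term plus a long-lived positive CO2 term.
years = np.arange(0, 101)
aerosol = -0.4 * np.exp(-years / 1.0)   # aerosols wash out in about a year
co2 = 0.1 * np.exp(-years / 100.0)      # CO2 persists for about a century
net = aerosol + co2

print(net[0] < 0, net[10] > 0)  # prints: True True (cooling first, then warming)
```

A sustained ramp-up of coal burning keeps replenishing the short-lived cooling term, which is the aerosol-masking mechanism the attribution studies mentioned here rely on.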
” What immediately occurs to me is that coal consumption has two direct forcings on the climate…one long-term and one short-term, and both in opposite directions. ” That is more true currently than over the long haul. Black carbon and other pollutants vary with time and regional regulations and can have unpredictable longer-term impacts. Black carbon deposited on a glacier a century ago could only now be exposed by melt. Combined aerosol impacts look to be pretty tricky business.
It takes a leap when a paper shows models have worse predictive power than random, to say that it is making claims based on control data that is not even supposed to be detectable : ).
The coal and other socioeconomic variables had apparently been filtered out of the temperature data already (or it had been shown UHI doesn’t exist and not filtered, etc). So it isn’t measuring temps vs growth, it is measuring temps with growth filtered out vs growth. All it says is there is a problem with the filter if whatever remains has a stronger signal than the models themselves.
Every time something runs through a computer, people tend to think, ahh, now we know! Even when it just told us we don’t know : ).
Rather begs the question, of course. Par for the course.
Temperature evaluations need to include “Type B”, systematic or bias error. More importantly, conventional temperature analyses need to incorporate longer-term natural variability that appears to be about twice the conventional standard deviation. See:
Markonis, Y., and D. Koutsoyiannis, Hurst-Kolmogorov dynamics in paleoclimate reconstructions, European Geosciences Union General Assembly 2010, Geophysical Research Abstracts, Vol. 12, Vienna, EGU2010-14816, European Geosciences Union, 2010. http://www.itia.ntua.gr
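For readers wanting to see what Hurst-Kolmogorov persistence means operationally, here is a minimal rescaled-range (R/S) estimator of the Hurst exponent. It is my own sketch, not the method of the cited paper, and R/S is known to be biased upward on short series:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent by the rescaled-range (R/S) method.
    H ~ 0.5 suggests independent noise; H > 0.5 suggests the long-term
    persistence characteristic of Hurst-Kolmogorov dynamics."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range
            s = chunk.std()                         # scale
            if s > 0:
                ratios.append(r / s)
        sizes.append(size)
        rs.append(np.mean(ratios))
        size *= 2
    # H is the slope of log(R/S) against log(chunk size).
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
print(round(hurst_rs(rng.normal(size=4096)), 2))  # white noise: roughly 0.5-0.6
```

On persistent series the estimate rises well above 0.5, which is why Hurst-Kolmogorov variability widens the uncertainty bands far beyond the conventional standard deviation.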
The first paper investigates the importance of the starting date (and other statistical parameters) on the conclusions drawn from time series data. I am surprised that no one has noticed that the UN’s IPCC used that trick in publications by applying or condoning the use of a false zero line for their estimate of normal and anomalous atmospheric temperatures. I refer to the zero-line positioning in graphs showing anomalous atmospheric temperatures, which makes the 1905 temperature appear anomalous to the extent of -0.5C while the 1940 and 1980 temperatures show no anomaly. By this means the climate experts were able to avoid explaining the 1905–1940 rise of nearly 0.5C. Why has this subterfuge been allowed for all these years? See my paper on my web site.
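The zero-line issue raised here is a choice of baseline period. A quick check on a synthetic series (my own illustration, assumed numbers) shows that changing the baseline shifts every anomaly by a constant, relabeling which years look "anomalous" without changing any trend:

```python
import numpy as np

# Assumed toy series, 1900-2000: slow warming plus a multidecadal wiggle.
years = np.arange(1900, 2001)
temps = (14.0 + 0.005 * (years - 1900)
         + 0.2 * np.sin(2 * np.pi * (years - 1900) / 60.0))

def anomalies(years, temps, base_start, base_end):
    """Temperatures relative to the mean over [base_start, base_end]."""
    base = temps[(years >= base_start) & (years <= base_end)].mean()
    return temps - base

a1 = anomalies(years, temps, 1951, 1980)  # one reference period
a2 = anomalies(years, temps, 1901, 1930)  # another reference period

offset = a2 - a1
assert np.allclose(offset, offset[0])     # zero line shifts by a constant...
assert np.isclose(np.polyfit(years, a1, 1)[0],
                  np.polyfit(years, a2, 1)[0])  # ...but the trend is unchanged
```

So the position of the zero line can change which years look warm or cold relative to "normal", though it cannot by itself manufacture or hide a trend.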
#1 assumes greater climate extremes in contemporary climate change. Fail.
#2 Welcome, Citizen Kulak.
#3 A Baker’s Dozen years before the fork comes out clean.
IMPORTANT INTERPRETATION OF GLOBAL MEAN SURFACE TEMPERATURE (GMST) DATA
Effect of trend period on global mean surface temperature trends for Hadcrut3 => http://bit.ly/MkdC0k
Global mean temperature trends are cyclic, with the amplitude of oscillation dependent on the trend period. For the 30-year trend, the amplitude is about 0.1 deg C per decade. This 0.1 deg C per decade must be removed from climate trend calculations.
In view of this result, we can respond to IPCC’s claim as follows.
1) IPCC: “For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.”
No, IPCC. That the GMST trend for 1970-2000 is about 0.2 deg C per decade does not mean that this trend will continue. This is because the 0.2 deg C per decade trend includes an oscillating warming rate of 0.1 deg C per decade as shown in the above graph.
2) IPCC: “Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.”
No, IPCC. We will move to 0.1 deg C per decade when the cyclic component of the global mean temperature trend moves from its peak to its neutral position as shown in the above graph, and this movement to 0.1 deg C has nothing to do with either greenhouse gases or aerosols.
AGW is not supported by the data.
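Whatever one makes of the attribution claim, the mechanical observation above — that the oscillation in windowed trends shrinks as the trend period lengthens — is easy to reproduce on a synthetic series (my own illustration, not the linked graph):

```python
import numpy as np

# Synthetic GMST-like series: a linear warming plus a ~60-year oscillation.
years = np.arange(1880, 2011)
temps = (0.006 * (years - 1880)
         + 0.12 * np.sin(2 * np.pi * (years - 1880) / 60.0))

def rolling_trends(years, temps, window):
    """OLS slope (deg C per decade) in every sliding window."""
    slopes = []
    for i in range(len(years) - window + 1):
        s = np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
        slopes.append(10.0 * s)
    return np.array(slopes)

# Peak-to-peak swing of the windowed trend shrinks as the window grows.
for window in (15, 30, 60):
    t = rolling_trends(years, temps, window)
    print(window, round(t.max() - t.min(), 3))
```

Note this only shows that windowed trends on an oscillating series behave this way; it does not by itself establish what the real temperature series is made of.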
Girma | June 21, 2012 at 8:02 pm |
Three moderately decent papers, and not even a dent in the tinfoil hat.
Such a shame.
It is definitely cool that different disciplines are looking at the unmistakable signature of anthropogenic CO2.
Maybe someone will actually build an Aqua model and use something like that mystic dark science formerly known as thermodynamics.
More like the unmistakable signature of Aqua Budda.
Aqua Budda versus Carbonyl Karma, coming this weekend to a Moshpit near you, yabetcha :)
The Left is practically ready to ban CO2 and yet the EPA treats Dihydrogen Monoxide like it’s water.
Psycho Alpha Disco Beta Bio Aqua Do-Loop (Aqua BOOGIE not Budda):
I have given up on attribution and have decided instead to work on reducing my carbonara footprint.
Climate science has failed to stake out a field or domain of excellence. Others, whose craft has some cross-over, view the climate scientists as poorly prepared to make the assertions they do. Were it not for the shenanigans of the climate minority, climate science might have developed along the lines of biology or geology rather than weather forecasting or some social sciences. As it has evolved, climate science has become devoid of science, mainly encrusted with advocacy, and projected for a demise reminiscent of eugenics. Dr. Curry, I realize you have hitched your wagon to climate science. I am afraid, that too few of you have spoken out at critical times and the whole shebang will collapse, participants scattered and engaged and funded in other endeavors. Arrhenius’s equation will pass as an anecdote in history, briefly stirring a flurry of ill-conceived papers whose conclusions over-reached the actual data. I say this with some sadness, as I see climate science now as the quintessential collaboration of environmentalism and politics instead of an exploration of how our world works.
No matter who is warping the B/S, no matter in what colour of wrappings – as long as they put the big / small, constant climatic changes with the ”phony GLOBAL warmings” in the same basket – intention is: to confuse even more, the already brainwashed!
Selectively using individual places, where is warmer than normal – but avoiding my proofs and the laws of physics, that: in the same time, other place / places MUST be colder – it’s the same Evil Cult. Communists lie about one future GLOBAL warming / communist haters lie about serial ”proxy” phony GLOBAL warmings – lucky psychiatrists. I’m glad, not to be a B/S Addict.
“What Is Found
What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.
The inevitable conclusion of this is that we are depending on the various climate codes to be nearly 100% perfect in removing this warming shift, or of being insensitive to it, for the assertions about global warming to be real.
Simple changes of composition of the GHCN data set between Version 1 and Version 3 can account for the observed “Global Warming”; and the assertion that those biases in the adjustments are valid, or are adequately removed via the various codes are just that: Assertions…. ”
Here is a graph that shows the global warming rate of the 20th century => http://bit.ly/MkdC0k
The global warming rate has a cyclic and a linear component. The amplitude of the cyclic component varies with the trend period. Longer trend periods have smaller amplitudes.
As the secular global warming rate is the linear component, the global warming rate has increased at a uniform rate from about 0.04 deg C per decade in the 1910s to about 0.09 deg C per decade in the 2000s.
Girma | June 22, 2012 at 12:01 am |
You really have never heard of the concept of regression to the mean?
“Longer trend periods have smaller amplitudes.”
The more data you include, the more strongly the result is weighted to the mean.
Congratulations Mr. Orssengo. You have, in Statistical terms, reinvented the wheel.
Did not the IPCC claim “accelerated warming” because the shorter trend period has greater trend?
IPCC: Note that for shorter recent periods, the slope is greater, indicating accelerated warming.
Girma | June 22, 2012 at 12:52 am |
What, because the IPCC does something, makes it right?
Though, to be fair, I did it too. The difference between what you did and what rational people do is that the rational way uses the shorter trends all on lines with the same smoothing.
You’re smoothing to multiple different levels. Entirely not the same. And that you don’t understand that says you simply do not grasp the fundaments of statistical reasoning.
A fascinating result: the global warming rate did not depend on the length of the trend period three times in the 20th century, at points E1, E2 and E3.
Or two cyclic periods, one short and one long. Measuring the upslope of a sine wave and predicting a linear increase is not particularly clever.
JamesG | June 24, 2012 at 3:54 pm |
Two cyclic periods, one short and one long, tell us also that what we have is not a regular periodic progression; indeed, it tells us we have no single cycle whatsoever.
And while measuring the upslope of a sine wave isn’t a linear increase, measuring the increasing slope of like points on successive supposed waves is:
(I regret BEST land-only is the only dataset I have easy access to that is long enough in duration to show three or more successive periods of the supposed 60 year or longer ‘waves’; however, on all of the datasets longer than 120 years, the claimed waves degenerate, and the appearance of acceleration for pretty much any proposed period longer than 50 years persists across all global datasets.)
So, no. The periodic global temperature argument is so far a complete bust, subscribed to only by those who fail to grasp graphical analysis.
Just because you see a unicorn does not make unicorns real.
Sorry, a mouseover error made an (inconsequential) mistake in my last post.
But note even 54 years won’t get you what you want (www.woodfortrees.org/plot/best/mean:59/mean:61/plot/best/from:1965/to:2009/trend/plot/best/from:1911/to:1965/trend/plot/best/from:1857/to:1911/trend/plot/best/from:1803/to:1857/trend) — and why should it? 54 years doesn’t really correspond to any known major driver, does it?
But 88 years also fails you, and that’s the Gleißberg (www.woodfortrees.org/plot/best/mean:59/mean:61/plot/best/from:1921/to:2009/trend/plot/best/from:1833/to:1921/trend); but then, we knew that would fail after 1960 because the Hale cycle’s correlation to global temperature degenerated after the 1950s.
Do you think the Suess cycle or the Hallstatt might furnish the 800+ year long period it would take to account for the 200+ year long upward arm of your sine curve? Suess is far too short. Hallstatt is out of phase. Both are just hypotheses. Orbital eccentricity, precession and obliquity are also out of phase with this sine curve claim. Solar activity? Also doesn’t fit well enough to explain away the anthropogenic contribution, nor to make CO2 rise less of a concern; if anything, the added uncertainty of inputs from the Sun makes it more of an issue for consequences.
The data say you’re just plain makin’ stuff up.
Some comments, and some things I wish I knew the answer to, and maybe someone here can answer.
1. in their paper, McKitrick and Tole say:
“If users believe the [GCM] models were never designed to meet this [spatial trend variation] standard, then our results will not be surprising, and could be interpreted merely as confirmation that this is not something that should be expected of climate models. But since the socioeconomic variables do significantly improve such forecasts, it is arguable that the climate models ought to be able to do so as well…”
On the face of it, this sounds wrong to me. A good “whole U.S. economy macromodel” will not do a very good job of predicting the trends in a randomly selected state. I think we had adequate demonstrations of this from the 1960s to the 1980s, when it seemed pretty clear from experience that those 500-equation models were no better than (and usually worse than) a half-dozen-equation model at predicting the U.S. economy. Yet obviously the latter does a shitty job of telling you what’s going down in Cleveland, while the former might have a snowball’s chance at that.
Scale and levels of explanation matter. But, they follow up with this:
“extensive use [of GCMs] for the prediction and interpretation of the spatial patterns of temperature change at the Earth’s surface, and the use of such projections in reports for policymakers (e.g. Parry et al. 2007), leads us to the view that it is appropriate to assess their usefulness in this regard.”
Well maybe, but the other interpretation is that Parry et al. 2007 are attempting something that no sensible person ought to.
2. I would like to know more about which GCMs add useful information to spatial trend variance and which don’t. In the paper we have a cryptic insider’s paragraph, listing off an alpha-numeric soup of GCM titles, but nothing along these lines: “The GCMs that add something are differentiated by characteristics X, not shared by the GCMs that add squat.” I would have REALLY liked to know what X was!
3. As Bart says above, a model that contributes LESS than chance (to explaining spatial trend variance) can actually be a source of important information. Specifically, what NOT to do, if you want to explain spatial trend variance! I would have really liked to hear more about that too.
Well, I have a question for Ross.
I downloaded his data. In his data package he has a spreadsheet named
This contains his input data of things like population, GDP etc.
In line 195 he has the following data
Latitude = -42.5
Longitude = -7.5
Population in 1979 =56.242
Population in 1989 = 57.358
Population in 1999 = 59.11
Land = 240940
In his code he performs the following calculation
SURFACE PROCESSES: % growth population, income, GDP & Coal use
// land is in sq km, pop is in millions; scale popden to persons/km2
// gdp is in trillions; gdpden is in $millions/km2
generate p79 = 1000000*pop79/land
generate p99 = 1000000*pop99/land
So, at latitude -42.5, longitude -7.5 he has a 1979 population
of 56 million people on 240940 sq km,
and a population density in the middle of the ocean that is higher than
50% of the places on land. Weird.
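To make the arithmetic concrete, here is a minimal Python sketch of the Stata calculation quoted above, using only the values copied from the comment (nothing here comes from the paper itself):

```python
# Check the density implied by the Stata line
# "generate p79 = 1000000*pop79/land" for spreadsheet line 195,
# using the values quoted in the comment above.
pop79 = 56.242          # population in millions
land = 240940           # land area in sq km

p79 = 1_000_000 * pop79 / land   # persons per sq km
print(round(p79, 1))             # -> 233.4 persons/km2, for a cell in the open ocean
```

That is indeed a higher population density than much of the inhabited world, assigned to a grid cell in the middle of the ocean.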
My guess: that is supposed to be Latitude = 42.5, Spain, although it shows zero coal use, which would be wrong according to this:
No, I think he has made a mistake. I’ve written him to ask.
It’s even more clear when you look at his population data for the United States.
Let’s take the Chicago area.
Line 425 of his spread sheet
42.5 -87.5 225.06 246.82 272.88 9573110
What he is doing is the following: he is taking the TOTAL US population
for 1979 (225 million) and he is UNIFORMLY spreading those people
across the entire country, so that every 5 degree lat/lon cell for the US has the SAME population density.
EVEN Alaska, line 407:
62.5 -147.5 225.06 246.82 272.88
Oh gosh, I think that explains line 190. It is in the middle of the Pacific, but has a land area of 549190 – which is exactly the land area of France. OK, then if you look closely at that spot in the Pacific, it is actually French Polynesia. Yikes.
Yup! What a boneheaded mistake. Now get this: he has used that data since 2007 and nobody caught it.
Man, one has to assume further mistakes; if this has more predictive power than most models, we are seriously in trouble :).
Apparently there is a weather station on Gough Island, potentially the most remote place in the world. I’m guessing it is your intrepid 195 : ).
You think he has some issues with the French, wanting to smear them across the ocean around French Polynesia? I hope you realize that you are now officially a Merchant of Doubt because you questioned the peer-reviewed literature. Oh wait! I think that is incorrect: finding heat being smeared makes you a Merchant of Doubt; here you found people being smeared, which is something else.
Did you check to see if they got the right answer? Getting the right answer using the wrong method I believe is allowed :)
If this is McKitrick you are talking about, so what else is new?
Years ago, Deltoid described how McKitrick screws up yet again
Whether it is getting radians and degrees mixed up, or doing elementary sanity checks on the data, this stuff isn’t that hard to verify for quality. Could it be that some people just don’t have a feel for the data? Or that they rely too much on blindly shoving numbers into stats packages? McKitrick’s paper has that sheen of mathematical formalism that can obscure the fact that he lacks some of the skill of a practical analyst. Beats me as to his real skill level, or whether he is just sloppy.
That’s why lots of people work on these topics; the poor performers get weeded out and the cream can rise to the top.
I love it; demonstrable errors and better than the models.
robin | June 22, 2012 at 3:30 am |
With enough tuning, and enough Economic data, anyone can outperform GCMs.
The surprise is the output didn’t perform with the same scale of precision as Mr. Orssengo’s similar tunings.
Perhaps if the accuracy had been that good, it would have caused less adept auditors to look into the data.
Steven Mosher can only be so many places at once.
And line 195 would be Britain, probably Tristan da Cunha island. Also includes part of the Antarctic Peninsula, line 181:
yes, the shape file he used for administrative boundaries would call those places “england” and since he doesn’t really check geography he gets it wrong.
Lines 371 and 372. Antarctica. 56 million people and an interesting GDP.
I wish you had your own blog, Mr Mosher.
There is another on line 190:
populations of 53.606 56.436 59.082
Wiggling the negative there gets three pacific ocean hits, and one in Northern Territories AU. Hmm.
ya but their literacy rate is 99%. I don’t think Ross is familiar with spatial stats. I just plotted up his population density using R and it’s clear that he merely smeared the whole population of each country uniformly across all its land. He didn’t calculate the population density for each 5 degree bin; he took a country’s total population and just smeared it everywhere.
So New York City and Nome, Alaska have the same population density and growth in Ross’ model.
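A hypothetical sketch of the difference between the two procedures being described, i.e. smearing a national total uniformly versus computing density per grid cell. Only the US totals come from the spreadsheet figures quoted above; the per-cell populations are invented for illustration:

```python
# US totals quoted above (1979 population, land in sq km)
us_pop = 225.06e6
us_land = 9_573_110

# Two 5-degree cells with invented per-cell figures, illustration only
cells = {
    "Chicago area":    {"pop": 7.0e6,  "land": 150_000.0},
    "interior Alaska": {"pop": 0.05e6, "land": 150_000.0},
}

smeared = us_pop / us_land          # the "smear" assigns this value everywhere
for name, c in cells.items():
    actual = c["pop"] / c["land"]   # per-cell density, what the model should use
    print(f"{name}: smeared {smeared:.1f}/km2 vs actual {actual:.1f}/km2")
```

Under the smear, both cells get about 23.5 persons/km2, badly overstating density in Alaska and badly understating it around Chicago.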
Well if everyone can read that is something.
yes, literacy is a predictor… and explains that… err, people who can read cause UHI… kinda sorta
Funny how nobody checks the data when they like the answer
So true – though I always like to do a sanity check on data at least, like just plotting things on a grid to see a rough world population map emerge. Just to verify data entry, if nothing else. Not sure what type of sanity check this would have passed, given that you were able to spot the problems just by looking at the raw data.
Good spotting. It will be interesting to hear what Ross says about it.
What about any similar errors in the other two papers?
The dataset has been in every paper since 2007.
Since there is a “flag” for land and water he may just set this to zero at the end. However, he has significant populations in Antarctica, and his method
gives Alaska the same population density as the continental US.
“Funny how nobody checks the data when they like the answer”
Nobody in the denier community. Real scientists do it all the time.
‘Real scientists do it all the time’?
Who checks Phil Jones’s data?
‘The most startling observation came when he was asked how often scientists reviewing his papers for probity before publication asked to see details of his raw data, methodology and computer codes. “They’ve never asked,” he said’
Nice catch. Any insight on how the bad data for these remote locations gets used in the downstream analysis of regional predictive power?
Ross tells me that he removes Antarctica from the analysis. Nevertheless, there are substantial differences between his version of population density and the truth. There is also an issue with how the coastal cells are handled. In fact, you can’t even call the variable population density. It may be the case that his results will be STRENGTHENED by using the correct procedure. As it stands, the data is not what he thinks it is and not what is represented in the paper.
Did anyone look at Greenland verses Denmark?
0-0 draw. Denmark through on penalties.
Reducing yr carbonara footprint is cheaper than a carbon tax policy and more humane lol.
I get very confused about the competing claims for global temperatures in recent years. It’s either going up, going down or broadly static.
If it’s doing any of those for a couple of years only, that’s not significant. 5 years or so becomes fairly interesting. 10 years or so starts to become intriguing if the trend is counter to what was expected, i.e. going down or remaining static.
How about some of the data modelers here showing the rest of us what is happening in recent times with some graphs and an explanation as to why you are right and someone posting contrary information is wrong.
You have raised some interesting questions.
First of all – I’m not a “data modeler”.
The problem is that there are several temperature records, which all show slightly different trends.
There is BEST, which is limited to land data only, there are the HadCRUT3, NCDC and GISS surface temperature records covering both land and sea (IPCC has used the HadCRUT3 record as its principal benchmark indicator) and there are the tropospheric (satellite) records of UAH and RSS, which give truly global values with no distortions of urbanization, poor sea surface coverage, etc.
For ease of comparison, I’ve tended to stick with HadCRUT3 (as does IPCC), “warts” and all, recognizing that there might be a distortion due to the UHI effect and poor sea temperature coverage and reliability (issues which you have raised in the past).
As can be seen from the curve below, the HadCRUT3 “globally and annually averaged land and sea surface temperature anomaly” shows slight (if statistically insignificant) cooling over the past 15 years (180 months).
Kevin Trenberth has referred to this “unexplainable lack of warming” as a “travesty”, and has suggested that it may be a result of energy going back out to space, with clouds acting as a natural thermostat. Others have tried to tie it to Chinese aerosol emissions, unexplained natural variability, etc., etc. But however one rationalizes it, this “lack of warming” has caused quite a bit of a stir in climate circles (as witnessed by this and many other blog sites).
There is some disagreement about how long a record must be before it becomes “statistically significant”, but a paper by Ben Santer has concluded that 17 years should be the minimum time period.
So we still have two years to go.
Let’s assume (for argument’s sake) that the current “lack of warming” continues for another 24 months, bringing the total time period to 17 years.
Will the 17-year goal-post be moved (say to 30 years)?
Will more papers be published that show the “lack of warming” could be caused by a) human aerosol emissions, b) natural variability or c) a combination of the above, which offset the GHG warming from added human CO2 that would otherwise continue according to IPCC model forecasts?
Or will the models be reprogrammed with a lower 2xCO2 climate sensitivity to reflect the observed results?
But I do not personally believe that IPCC will abandon its consensus CAGW premise just because the empirical data on the ground (or in the air) no longer support it. CAGW has become a multi-billion dollar big business, and these do not die easily (until they finally collapse, as in the case of Enron).
On the other hand, I also do not believe that the CAGW hysteria can survive another few years of slight cooling.
As Abraham Lincoln once said:
Max and tonyb. I have tried to make the case that looking at the recent trend in temperatures is looking through the wrong end of the telescope. If CAGW is true, then adding more and more CO2 to the atmosphere, as we are still doing, should cause a CHANGE in the temperature trend. That is, temperatures should cease to trend as they have done since we have had recent records, and a new trend should appear. Temperature/time graphs should be seen to have a CO2 signal, from which climate sensitivity can be directly measured.
So, in other words, the proponents of CAGW need to identify a CO2 signal in the temperature/time graphs; HAD/CRU, NCDC, GISS, RSS and UAH. No such signal has appeared that can be demonstrated to have been directly caused by CO2. There needs to be a change from what has happened in the past, and no such change has occurred.
It is this absence of a CO2 signal that is important; not the fact that global temperatures stubbornly refuse to go on rising.
Abraham Lincoln sent this along:
JCH So? Where does this either prove there is a CO2 signal, or show a new trend that has not been present since records first became available? Just because temperatures rise, does not prove that the rise is caused by adding CO2 to the atmosphere.
How’s that ice recovery doing?
Do you mean Abraham Lincoln, Vampire Hunter?
Whoda thunk it possible?
And if you go back 17 years look what happens
If we use your data, the warming for the last 17 years is very, very minor.
It does not support belief in CAGW!
#Least squares trend line; slope = 0.00704807 per year
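The “slope = … per year” line above is woodfortrees output. For readers who want to reproduce that kind of figure themselves, here is a minimal sketch using synthetic monthly anomalies (not the actual HadCRUT/BEST series behind the plot):

```python
import numpy as np

# Synthetic monthly anomalies over 17 years -- illustration only,
# not the actual data behind the woodfortrees link above.
rng = np.random.default_rng(0)
years = np.arange(1995, 2012, 1 / 12)            # decimal years, monthly steps
anoms = 0.007 * (years - 1995) + rng.normal(0, 0.1, years.size)

# Ordinary least squares trend; the slope is in degrees per year,
# the same units as the "slope = ... per year" line quoted above.
slope, intercept = np.polyfit(years, anoms, 1)
print(f"least squares trend: {slope:.5f} deg per year")
```

The recovered slope will sit near the 0.007/year built into the synthetic series, but with a sampling error that is large relative to the trend over so short a window, which is exactly the point at issue in this exchange.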
But if you go back 15 years it does seem to be true that you get a slight cooling trend. Do you think politicians or the general public would be surprised to hear that?
1. Estimating the trend with a linear fit is physically wrong.
2. Even then, the cooling trend is not significant.
3. The subjective reaction of surprise is scientifically meaningless.
Let’s not get overcomplicated about this. If a politician - and they are simple souls - asked you if it was true that there had been no overall increase in temperatures so far this century, and asked you to provide a graph illustrating how long this trend had been going on, what graphic would you show them?
By 2015, it will be ~flat/cooling (1995 – 2015).
5 years of cooling/static temperature you can ignore. 15 years is significant. 20 years, should it occur, would surely be the death knell for CAGW. It does perhaps already give the lie to calls for urgent action. It seems to me that we have plenty of time to consider the subject in a rational manner without the need to be stampeded into futile actions, like investing in certain forms of renewable energy.
Yes, 20 years of no warming should be the death knell, and it’s needed to stop the madness. Nature will demonstrate – solar activity is very low, the PDO is already negative and the AMO will (maybe) follow soon. I think the real cooling will start after the SC24 max (~2015) and the globe will cool faster than it warmed in the 80s/90s.
Thanks for that. I’m out for the next few days but it would be interesting if anyone could produce the yearly data points in bar graph form as that might allow more ‘eyeballing’ to try to discern the trend.
Anyone here care to challenge Max’s analysis, which in the absence of any competition is the current ‘gold standard’?
Steven Mosher | June 22, 2012 at 3:52 am:
“Funny how nobody checks the data when they like the answer”
Oh, come now, Mosh. It’s not that, because I liked the result, I didn’t check. I didn’t check because I wouldn’t have a clue how to do that, and I guess that would apply to many here! :-)
But luckily for me and for science, there are folk like you who do have a clue how to check, and that’s perfectly fine. Let’s see how RM deals with your crits and how they affect his conclusions. If they blow them out of the water, so be it.
But let’s remember that RM made his data and code available, and if he responds appropriately, all well and good. If he doesn’t respond appropriately, then, sceptic or not, I’ll be just as annoyed at him as I would be with any warmist who refused to deal with criticism.
Let’s wait and see how he responds.
Would not understanding our planet and the temperatures generated include looking into the many areas that change our climate temperatures daily?
From planetary tilting to velocity differences to solar activity to atmospheric density differences, and also the sun’s angles of rays on our planet.
But the temperature data excludes any physical component, and the scientific conclusions are just guesses from the data, as they have NEVER looked at the physical components of changes.
Yes, Ross does make his code and data available. However, he is using old data and has had every opportunity to improve its accuracy. I suspect he will say it makes no difference, but the only way we will know is if he reruns his model using the correct population density. In the end it will be up to him to rerun it. I mean, giving Alaska the same population density as CONUS is just wrong. I don’t care what the answer is.
I’m in contact with him and have sent along the data. I think with finer resolution his findings might even be stronger. We will see. Basically,
I like to see things done with the best data and methods.
Plus one on that.
Through temperature data, they are still just hunting for trends to the exclusion of the parameters the planet and sun have changed.
Notice that altitude also was not a factor, nor were velocity differences.
Just plotting data and making guesses again through projected trends, and NOT looking at any physical components.
Interpreting trends without a closer look into the data’s spectral composition is misleading. The global temperature’s primary spectral component is identical to the solar magnetic cycle:
Arctic temperature oscillations appear to match those found in the geomagnetics
on the other hand, changes in geomagnetics are inversely correlated to changes in the solar magnetic output;
this would indicate that the temperature trends and cycles are fundamentally related to solar activity; no surprise there.
Refusing to acknowledge the obvious is not ignorance; it is obscurantism.
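Claims about a series’ “primary spectral component” can at least be checked with a simple periodogram. Here is a minimal sketch on a synthetic series with a known 22-year component; nothing here is the actual temperature or solar data:

```python
import numpy as np

# Synthetic monthly series: a 22-year sinusoid plus noise, to show how a
# dominant period is read off an FFT periodogram.
n = 12 * 132                              # 132 years of monthly data
t = np.arange(n) / 12.0                   # time in years
rng = np.random.default_rng(3)
y = 0.2 * np.sin(2 * np.pi * t / 22.0) + rng.normal(0, 0.1, n)

power = np.abs(np.fft.rfft(y - y.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1 / 12.0)    # cycles per year
peak = freqs[1:][np.argmax(power[1:])]    # skip the zero-frequency bin
print(f"dominant period ~ {1 / peak:.1f} years")
```

On a real temperature record the interesting question is whether any such peak stands significantly above the red-noise background, which this sketch does not test.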
Judith said in the post: “Each of these papers provides a fresh perspective on interpreting surface temperature data trends.”
Had I been a reviewer, I would have sent two back for revisions.
Stephenson et al indicate the 1976 shift in Alaskan temperatures results from the 1976 PDO shift. The PDO is simply reflecting the impact of the 1976 Pacific Climate Shift on the 1st PC of detrended North Pacific SST anomalies. A PDO shift is not the cause. The SST anomalies of the entire eastern Pacific basically shifted up 0.19 deg C after the 1973/74/75/76 La Niña event:
And I would have asked them to use the Northeast Pacific (Alaskan Coastal Waters) SST anomalies instead of the PDO in their analyses. The Northeast Pacific SST anomalies exhibit an upward shift in 1976 of about 0.36 deg C:
By using the Sea Surface Temperature anomalies instead of the PDO, Stephenson et al could then have shown the negative trends of the SST anomalies before and after the 1976 Pacific Climate Shift…
…and how they related to the negative trends in Alaskan surface temperatures before and after 1976.
The Quirk paper, as far as I can tell, makes a common mistake. It assumes the PDO and AMO are similar datasets. They’re not. The AMO is detrended North Atlantic Sea Surface Temperature anomalies, while the PDO is inversely related to the detrended sea surface temperature anomalies of the North Pacific (north of 20N) on decadal timescales. In other words, the Quirk paper needs a total makeover, using detrended North Pacific SST anomalies instead of the PDO.
So you would have accepted the obvious errors in the data of the third paper?
I haven’t read the third paper, Steven.
Quirk’s paper shows evidence for the positive water vapor feedback.
The long term warming is 1/2 degree per CENTURY.
[according to CRU]
Since 1998 this rate has gone to ZERO !
How does that lead one to believe in CAGW ?
I wasn’t even talking about the temperature trend, but since you mention it,
No it hasn’t.
Since 1998 the trend is -0.005 +/- 0.155 (gosh, the error is 31 times the trend); that is by HadCRUT3v.
The all new and improved fancy HadCRUT4 shows +0.083 +/- 0.172, with the error only a little over twice the trend.
Could it be a cooling trend due to the cool phase of the PDO offsetting the warming trend due to CO2? Maybe the recent extended solar minimum had an effect.
I don’t even believe in CAGW, though I think the evidence supports it.
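The “trend +/- error” figures quoted above come from an uncertainty calculation of roughly the following shape. This is a minimal sketch on synthetic, trendless data; a real analysis of monthly anomalies would also inflate the error for autocorrelation, which this omits:

```python
import numpy as np

# Trendless noisy monthly series since 1998, illustration only.
rng = np.random.default_rng(1)
t = np.arange(1998, 2012, 1 / 12)
y = rng.normal(0, 0.1, t.size)

# OLS slope and its standard error; quoted as slope +/- 2*SE per decade,
# the same form as the HadCRUT trend figures above.
n = t.size
slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)
se = np.sqrt(resid @ resid / (n - 2)) / np.sqrt(((t - t.mean()) ** 2).sum())
print(f"trend = {10 * slope:.3f} +/- {10 * 2 * se:.3f} deg per decade")
```

When the quoted error dwarfs the slope, as in the -0.005 +/- 0.155 figure, the trend is statistically indistinguishable from zero, which is the point both sides of this exchange are circling.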
positive absolute humidity?
Bob, water vapor is a regulator. It responds to energy and regulates with both positive and negative feedbacks. None of these three papers is all that illuminating; it is just nice to see more disciplines jumping into the fray :)
An appeal to Steven Mosher and JCH. Like Willis, I try to put up science; maybe not very good science, but the best I can do. I ask for evidence of a CO2 signal in temperature/time graphs, which can be proven to be caused by adding CO2 to the atmosphere. I get references to temperature/time graphs, which I am already fully aware of, with no discussion of how they are supposed to relate to the subject I am trying to discuss.
Please, if these graphs relate to what I am interested in, could you please explain how they are supposed to add to the discussion.
You will never see the “evidence” of the “cause” in a plot of temperature versus time. Never. You can never prove that CO2 or anything else is the cause of the rise in temperature. That you ask the question proves you don’t understand science or how “proof” works.
That said, if you have a long enough time series ( wait a little bit ) you can illustrate how the temperature is consistent with the physical theory.
Simply. A plot of temperature versus time is just that and only that.
steven, you write “You will never see the ‘evidence’ of the ‘cause’ in a plot of temperature versus time. Never. You can never prove that CO2 or anything else is the cause of the rise in temperature. That you ask the question proves you don’t understand science or how ‘proof’ works.”
Let us assume you are correct. Then logically it will never be possible to observe a rise in temperature which can be shown to be caused by adding CO2 to the atmosphere. Presumably, this means that natural causes are always going to be so strong that they mask any rise due to the additional CO2. In addition, it will never be possible to actually measure the climate sensitivity of CO2. If there were a discernible CO2 signal which could be shown to be caused by the additional CO2, then it would be a simple matter to measure the climate sensitivity. If this is never going to happen, then we can never actually measure climate sensitivity.
So, if we can never measure the climate sensitivity, we will always be left with hypothetical and theoretical estimates of what this value might be. I have been a student of the history of physics for some time, and I can think of no other instance in the whole history of physics where it was never possible to measure the value of something, yet all the appropriate scientific organizations have declared that we can attribute a value to this number that is beyond dispute. If you can give me a similar instance from the history of physics I would be grateful.
So, you claim that I don’t understand how science works. If I am correct that CAGW is unique in the annals of physics, then I would suggest that, on the contrary, you are the one who does not understand. I still believe in the spirit of what William Wordsworth wrote, which I remember reading on the front cover of Nature when that magazine really was the gold standard of scientific reporting.
“To the solid ground of Nature; Trusts the mind that builds for Aye.”
It would be possible to prove, if we could measure every square meter of outgoing and incoming radiation, every cubic meter of ocean heat content, and every square meter of surface temperature, all over a period of several years. It is just the size of the measurement that is daunting, unlike other branches of physics where you can prove things by enclosing everything in a controlled environment.
No; then you are using a model and correlating the results of the model to the time series, which is why you need physical models, of course :)
Well, OK, you may need the full IR spectrum, but these measurements could rule out conceivable competing theories. Also it may not be as pessimistic as I put it, because these measurements are getting better at constraining what could be happening to fewer possibilities all the time.
“You will never see the ‘evidence’ of the ‘cause’ in a plot of temperature versus time. Never. You can never prove that CO2 or anything else is the cause of the rise in temperature.”
That is because the effect of CO2 is so small. But if the warming from CO2 is amplified by a factor of 2 or 4, one should be able to see it.
Another way to see the evidence of small change is to accurately measure the temperature.
If we had had the same quality of satellite measurements of earth that we have had since 1979, but with that level of accuracy since, say, 1850, it seems one should be able to see any “fingerprint”, if there was one.
Another huge fail in terms of measurement is the carbon cycle.
The biggest and most obvious lie regarding this whole business is that we somehow lack time - that waiting a decade or two is a kind of failure to take responsibility.
Even the dullest of persons should realize we aren’t going to do their heart’s desire [essentially killing a billion people in the near term], if for no other reason than that the sensible Chinese will not allow us to do this.
The whole guilt trip of America making the most CO2 no longer works. America and its neurotics are no longer in the driver’s seat in terms of any effect upon global CO2 emissions. The dreamed-of world conquest, with world government taxing us to death, isn’t going to happen.
This whole thing has been non-serious from the beginning; it was never possible. The only thing vaguely possible was creating something like the Great Depression, and perhaps in that regard they have had some success.
The good news is the crazies may exceed the debt limit on being crazy,
and as a result better governance may be on the horizon.
gbaikie, you write “That is because the effect of CO2 is so small. But if the warming from CO2 is amplified by a factor of 2 or 4, one should be able to see it.”
If we measure the amount that global temperatures actually rise in the real atmosphere when CO2 concentrations increase by some amount, we measure TOTAL climate sensitivity. That is we measure both the no-feedback effect, AND the effect of all the feedbacks. So what you are trying to say, I have no idea.
“If we measure the amount that global temperatures actually rise in the real atmosphere when CO2 concentrations increase by some amount, we measure TOTAL climate sensitivity. That is we measure both the no-feedback effect, AND the effect of all the feedbacks. So what you are trying to say, I have no idea.”
Hmm. Well, the 20th century was a warming period which followed a centuries-long period that was one of the coldest in the last 10,000 years.
During this recovery, some people have been looking for any warming which may be caused by increasing global CO2. They can guess about the no-feedback effect and all the feedbacks, but there is no correlation between the doublings of human emissions from fossil fuels [or the slight rise of global CO2] and temperature.
It is claimed we are at the highest levels of global CO2 on a timescale of 100,000 years, but we probably aren’t warmer than we have been in various periods within the last several thousand years, which should indicate that CO2 has little effect upon global temperature; instead some think the expected warming must be hiding somewhere.
I think you need to review The Consistent With chronicles by Professor Pielke Jr.
A dependent variable plotted as a function of time can never carry any information whatsoever about causality.
The Everything Else Held Constant case might provide some clues about causality. But then Everything Else Held Constant never occurs in the physical domain.
Having read through some of the comments of this thread, the conclusion seems to be that jumping to a new field of science is not easy (I thought so also before reading this thread). While the outsider may have some good and fresh ideas, he or she cannot easily avoid errors, even errors so basic that people who have worked longer in the field and who have the proper educational background would never make them.
Should these papers be given attention and be noted by IPCC authors? The answer appears to be in the negative, although perhaps one of the papers may be revised and then have real value, assuming that the results remain interesting when corrected.
These conclusions are not based on my analysis, but that’s the impression that I have got after reading this thread.
Agreed, they suffer from all the weaknesses of enthusiastic amateurs.
A decade or so is far too short to discern a trend reliably against significant natural variability. Some indication can be gained by looking at peaks and troughs – but only very roughly, and I would suggest that monthly data captures the variability better than annual data.
Changes are within the envelope of the 1998/2001 ENSO dragon king. Of course dragon kings bring up all sorts of questions about expectations for the decadal future.
Some better ideas on decadal variability should emerge from AR5 – but I have been disappointed before.
Monthly data suffers from autocorrelation, terribly
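For readers wondering what that objection means in practice, here is a minimal sketch on a synthetic persistent (AR(1)) series, not real data, showing lag-1 autocorrelation and the standard effective-sample-size correction n_eff = n(1 - r1)/(1 + r1):

```python
import numpy as np

# Synthetic persistent AR(1) monthly series, illustration only.
rng = np.random.default_rng(2)
n, phi = 240, 0.7                          # 20 years of months, strong persistence
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal(0, 0.1)

r1 = np.corrcoef(x[:-1], x[1:])[0, 1]      # lag-1 autocorrelation
n_eff = n * (1 - r1) / (1 + r1)            # effective number of independent samples
print(f"r1 = {r1:.2f}, effective sample size ~ {n_eff:.0f} of {n} months")
```

The point is that 240 autocorrelated monthly values carry far fewer independent observations than 240, so significance tests on monthly data must account for this or they will overstate certainty.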
Hmmm – you mean the next month is pretty much like the last – a little warmer or cooler as the case may be – except for how ENSO is behaving in the month? Well that’s a surprise.
Actually – that’s pretty funny, Eli. What we have is a temperature that is changing with time, and we are discretely averaging the quantity. Autocorrelation is a statistical procedure to distinguish signal from noise. In this case we are definitely into noise. These quibbles seem to autocorrelate to ever more desperation.
I have suggested elsewhere that this is the result of anomalies appearing in the fabric of the AGW theory with renewed but doomed efforts to incorporate or reject disparate information. I call it the AGW space cadet syndrome.
We have commenced building a rehab centre in Somalia – that bastion of libertarian thought.
Uh, it is the Sahel, Somalia cancelled due to some background check issues :) Liberia came up with the name, UNtopia for the facility.
I like it Captain – I will cogitate a UNtopian poem made of wind, sun and desert stars. It will ignite a new world.
Kangaroo, don’t talk about stuff above your pay scale. You always embarrass yourself when you venture into that territory. Autocorrelation can be a stochastic property of the system, characterizing the probability of occupying specific states. The noise may in fact be the signal; a physicist would not jump to conclusions on that point. Daily and seasonal variations are the autocorrelation that Eli is referring to, giving the behavior a more deterministic character.
Amazing that you can be that ignorant on the one hand, and then have an obsession with dragon-kings that makes you look like a real poseur on the other. Only one person uses that “dragon-king” terminology, Sornette, and he is talking about outliers beyond the power-law tails, which is twice removed from anything remotely quantitative.
I will post this to Eli’s site should he not see it. He will get a good laugh out of it.
Tubhead – I am sure my pay scale is way beyond yours – I have been doing it for such a long time.
‘We emphasize the importance of understanding dragon-kings as being often associated with a neighbourhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king.’ http://arxiv.org/abs/0907.4290
I noticed Judith using the term in the recent ‘state shift’ post. It is such an important concept, otherwise known as a noisy bifurcation or some similar term. Dragon-king is so much more dramatic a way to describe events of such power and gravitas.
Autocorrelation is child’s play – and quite irrelevant in this context. All we want to see is the total anomaly averaged over the month. It is not a ‘stochastic’ process but completely deterministic, based on the partitioning of energy between the atmosphere and oceans.
The variance is likewise simple. It is just the difference between early 1998 and early 2000 – much of which can be attributed to ENSO. Averaging the data over a year reduces the peaks. I am always surprised at the vehemence of your typically quite idiotic utterances. You quote CK and then without any rationale at all declare his ignorance.
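The claim that averaging the data over a year reduces the peaks can be checked with a quick sketch. The anomaly values below are invented (a short warm excursion standing in for an ENSO-like event), not actual data:

```python
import numpy as np

# Synthetic monthly anomaly series with one sharp warm excursion
# (hypothetical values chosen for illustration).
rng = np.random.default_rng(1)
anom = rng.normal(0, 0.1, 120)      # 10 years of monthly anomalies
anom[60:66] += 0.8                  # a six-month "ENSO-like" spike

# 12-month running mean -- the kind of filter the paper's
# recommendations suggest for de-emphasizing extremes.
window = np.ones(12) / 12
smoothed = np.convolve(anom, window, mode='valid')

# The six-month spike only half-fills any 12-month window, so annual
# averaging roughly halves the peak amplitude.
print(anom.max(), smoothed.max())
```

This is the same mechanism by which a running mean de-emphasizes the 1998 and 2000 extremes in the real record: a short excursion contributes only a fraction of each averaging window.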
You always seem to throw around ideas at random – ‘the noise may be the signal’ – autocorrelation may happen by chance – etc. Perhaps you can tell us Eli – do you find his sycophantic, sociopathic attentions more or less of an embarrassment? I would certainly not hire him.
There are better ways of doing that, which kinda shows that what you don’t think about is important.
CK, yer can’t have a UNtopian (Dystopian) poem and bring in desert stars.
Desert stars are … well … ‘heavenly.’
The topic of climate change has attracted widespread attention in recent years and is an issue that numerous scientists study on various time and space scales. One thing is for sure: the earth’s climate has changed and will continue to change as a result of various natural and anthropogenic forcing mechanisms. This page features the trends in mean annual and seasonal temperatures for Alaska’s first-order observing stations since 1949, the time period for which the most reliable meteorological data are available. The temperature change varies from one climatic zone to another as well as for different seasons. If a linear trend is taken through mean annual temperatures, the average change over the last 6 decades is 3.0°F. However, when analyzing the trends for the four seasons, it can be seen that most of the change has occurred in winter and spring, with the least amount of change in autumn.
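The 3.0°F figure comes from fitting a straight line to the annual means and reading off the total change implied by the trend. A minimal sketch of that calculation, using made-up annual temperatures rather than the actual Alaska station data:

```python
import numpy as np

# Hypothetical annual mean temperatures (deg F) for 1949-2008,
# invented for illustration -- NOT the actual Alaska record.
years = np.arange(1949, 2009)
rng = np.random.default_rng(2)
temps = 26.0 + 0.05 * (years - 1949) + rng.normal(0, 1.0, years.size)

# Least-squares linear trend: slope in deg F per year.
slope, intercept = np.polyfit(years, temps, 1)

# Total change over the record implied by the fitted trend line.
total_change = slope * (years[-1] - years[0])
print(total_change)
```

Note that the head post's caution applies directly here: with a different start date (say, just before or after the 1976 PDO shift) the same calculation on the same station data would yield a different slope.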
Anchorage has a subarctic climate (the Köppen climate classification is Dfc) but with strong maritime influences that moderate temperatures. Average daytime summer temperatures range from approximately 55 to 78 °F (13 to 26 °C); average daytime winter temperatures are about 5 to 30 °F (-15 to -1.1 °C). Anchorage has a frost-free growing season that averages slightly over 101 days.
People had commented on the spatial distribution. It was just people who didn’t agree with the skeptics, so the criticism was ignored.
“JC comments: Each of these papers provides a fresh perspective on interpreting surface temperature data trends. The authors of all these papers have academic training and expertise that lies well outside of the field of atmospheric science and climatology. I have long said that fresh perspectives from outside the field of climate science are needed. ”
That makes no sense at all. Should we have climate scientists telling brain surgeons how to do their job, or chemists telling theoretical physicists how to do theirs? Given all of McKitrick’s many errors, you could at least pick a better economist to tell climate scientists how to do their job.