by Richard Tol
There has been some brouhaha over a guest post by Lüdecke, Link, and Ewert. I think the quality of work is so bad that Judith should never have spotlighted these papers. Publicly berating Judith, I of course drew more attention to their work, and her response added to that.
So why do I think this work is so bad that it should be ignored?
There are two papers. The first is by Lüdecke, Link, and Ewert (LLE). It opens with: “It is widely accepted that global surface air temperatures have increased in the past 100 years, with a pronounced rise in the last 50 years. However, the question is not yet answered which forcings cause the actual global temperature change. One assumes [people have blamed] natural fluctuations, the Atlantic and Pacific Multidecadal Oscillations, increasing green house gases, urban heat island effects (UHI), natural impacts as volcanic eruptions, condensation nuclei generated by cosmic rays in which their concentration is governed by the sun’s magnetic field, anthropogenic aerosols from insufficiently altered coal power plants and further impacts.”
This is an important question: Why has the world warmed?
LLE address this question using statistical methods. Their analysis uses one particular model and one particular method. The model is persistence. Some time series are uncorrelated: An anomaly at time t has no effect in later periods. Other time series are autocorrelated: An anomaly at time t is repeated in later years, but its magnitude fades away at an exponential rate. Yet other series are integrated (of order one): An anomaly at time t remains forever. And some series are somewhere in between autocorrelation and integration: Anomalies fade but only slowly. This is called persistence, long-term memory or fractional integration, and is associated with such people as Hurst and Mandelbrot. Persistence was first analyzed in the temperature record by Bloomfield and Nychka and more recently by Rea, Reale and Brown. It is a perfectly respectable line of inquiry.
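To illustrate these categories, here is a small simulation sketch; the AR(1) coefficient of 0.7, the series length, and the lags shown are arbitrary choices for illustration, not anything from LLE:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
eps = rng.standard_normal(n)

# Uncorrelated: an anomaly at time t has no effect later.
white = eps

# Autocorrelated, AR(1): anomalies fade at an exponential rate.
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.7 * ar1[t - 1] + eps[t]

# Integrated (of order one): an anomaly remains forever.
walk = np.cumsum(eps)

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for name, series in [("white", white), ("AR(1)", ar1), ("walk", walk)]:
    print(name, [round(acf(series, k), 2) for k in (1, 10, 50)])
```

The white-noise autocorrelations hover near zero at every lag, the AR(1) correlations decay quickly toward zero, and the integrated series stays strongly correlated even at lag 50. Persistent (fractionally integrated) series sit between the last two cases, with correlations that decay, but only slowly.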
As their method, LLE use detrended fluctuation analysis (DFA). DFA is a perfectly respectable method to estimate the correlation function, power spectrum and Hurst exponent in one go.
So far so good. Unfortunately, fluctuation analysis does not work on trending variables. Therefore, LLE use DETRENDED fluctuation analysis. That is, they first fit a polynomial of order two to the data, remove this trend, and study the deviations from the trend.
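For readers unfamiliar with the method, here is a bare-bones sketch of DFA along these lines; the window sizes, the order-two detrending, and the white-noise test case are illustrative choices of mine, not LLE's actual settings:

```python
import numpy as np

def dfa(x, scales, order=2):
    """Detrended fluctuation analysis: estimate the DFA exponent.

    x      : 1-D series
    scales : window sizes n at which to measure fluctuations
    order  : order of the polynomial removed within each window
    """
    y = np.cumsum(x - np.mean(x))          # the cumulative "profile"
    fluct = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, order)       # local polynomial trend
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    # Slope of log F(n) vs log n; for 0 < H < 1 this estimates the
    # Hurst exponent (0.5 for uncorrelated noise).
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(1)
alpha = dfa(rng.standard_normal(10_000), scales=[16, 32, 64, 128, 256])
print(round(alpha, 2))
```

Applied to white noise, the estimated exponent should come out near 0.5; persistent series give values between 0.5 and 1.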
Having removed the trend from their data, LLE cannot answer the question: What caused the warming? They eliminated from their analysis the very thing in which they are interested.
There are other flaws. You need a lot of data to reliably estimate the Hurst exponent. 100 years of data is not enough. Therefore, LLE switch to monthly data. Although this increases the number of observations to 1200, it simultaneously reduces the time step of their analysis by a factor of 12. 1200 months is still only 100 years. No information was added. They cannot estimate the annual Hurst exponent with 100 years of data. With 1200 months of data, they can estimate the monthly Hurst exponent – but they still do not know the annual Hurst exponent.
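One way to see that the monthly and annual quantities differ: aggregate a hypothetical monthly AR(1) series to annual means and compare the persistence at the two resolutions. The coefficient of 0.8 and the use of AR(1) rather than a long-memory process are simplifying assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1200 hypothetical "monthly" anomalies from an AR(1) process.
phi, n_months = 0.8, 1200
x = np.zeros(n_months)
for t in range(1, n_months):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# The same 100 years, as annual means.
annual = x.reshape(100, 12).mean(axis=1)

def lag1(s):
    """Sample lag-1 autocorrelation."""
    s = s - s.mean()
    return np.dot(s[:-1], s[1:]) / np.dot(s, s)

print(round(lag1(x), 2), round(lag1(annual), 2))
```

The monthly series shows strong month-to-month persistence, while the very same data, averaged to annual steps, shows much weaker year-to-year persistence: a persistence parameter estimated at one time step does not carry over to another.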
In fact, in a further act of pre-analysis data manipulation, LLE removed the seasonal cycle. Such data manipulation is known to affect the later analysis, perhaps substantially so. LLE do not provide any detail, however, so the impact is hard to assess.
LLE then estimate the Hurst exponent. The paper omits information on the adequacy of the statistical fit. No indication is given on the precision of the estimates.
LLE then use the estimated model to create a large number of synthetic temperature records, and thus compute the probability of observing the actual temperature record. They find that the observed trend is actually not that rare, which is in line with their Hurst exponent of about 0.65. Again, confidence intervals are not provided, but as they cannot reliably estimate the annual Hurst exponent, confidence must be low.
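The surrogate-series logic can be sketched with a simpler AR(1) stand-in for LLE's long-memory model; the "observed" record, its trend size, and all parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def trend(x):
    """OLS trend slope per time step."""
    return np.polyfit(np.arange(len(x)), x, 1)[0]

def ar1(n, phi, rng):
    """AR(1) series: anomalies fade exponentially."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

# Hypothetical "observed" record: AR(1) noise plus an imposed trend.
n, phi = 100, 0.6
observed = ar1(n, phi, rng) + 0.05 * np.arange(n)
obs_trend = trend(observed)

# Fraction of pure-persistence surrogates with a trend at least as large:
# an estimate of how probable the observed trend is under the null model.
n_sim = 2000
exceed = sum(abs(trend(ar1(n, phi, rng))) >= abs(obs_trend)
             for _ in range(n_sim))
p = exceed / n_sim
print(f"p = {p:.3f}")
```

The stronger the persistence assumed in the surrogates, the larger the trends they produce by chance, and the less "rare" any observed trend appears; which is why the uncertainty in the estimated Hurst exponent matters so much for this calculation.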
Crucially, LLE use the 20th century record to define natural variability. That is, they use the observations of the 20th century to assess whether or not the 20th century was natural or otherwise. This is tautological.
LLE do not test the hypothesis of “natural variation” against any other possible explanation of the warming of the 20th century.
The second paper, by Lüdecke, addresses some of the concerns. Instead of 2250 100-year temperature records, five 200-year records are used, again in monthly time steps. Furthermore, two 2000-year proxy records (annual time steps) are used. The same procedure is used: The trend (of interest) is removed, and DFA is used to estimate the Hurst exponent. In the temperature records, the Hurst exponent is about 0.6. In the proxy records, the Hurst exponent is about 0.9.
There can be various reasons for this. Maybe something changed in the last 200 years, maybe you cannot compare annual and monthly Hurst exponents, maybe the proxy records are bad proxies. No confidence interval is given for the monthly Hurst exponents, so it may be that the difference is not even statistically significant.
Lüdecke then computes the Hurst exponents of the sunspot cycle, using annual observations for the period 1700-2000, and decadal reconstructions for the period 9,500 BC-1900 AD. Again, the long-term record shows a higher Hurst exponent (1.0) than the medium-term record (0.6). As sunspots follow an irregular 11-year cycle, chances are that the difference is caused by the different resolution of the two sunspot records. Lüdecke argues that the sun has caused the difference in the Hurst exponent between the temperature observations and the temperature proxies. However, he does not compute the long-range cross correlation (the bivariate generalization of the Hurst exponent), so this is speculation.
Oddly, Lüdecke omits carbon dioxide.
In sum, these two papers are badly done. We do not learn anything. The authors argue that natural variability may well have caused the observed warming in the 20th century, but the first paper does not test this and the second paper is inconclusive. The second paper makes a link to the sun, but fails to bivariately test this hypothesis, let alone against other hypotheses. No conclusions can be drawn from this work.
JC note: This is a technical thread that will be moderated for relevance. Make your general comments on EIKE or disinformation brouhaha on one of the previous threads. I invited Richard Tol to make this post, based upon his comments made in the previous threads.
It warmed faster earlier in this century.
No problem. They can “adjust” for that.
Ah, yes, more vacuous accusations of scientific fraud.
You have no evidence and you’ve been found to have been wrong by a number of investigations so far. Bummer for you, eh?
Penn State does good investigations I hear …
I believe that this is the main valid point to be taken from the LL papers. As stated in the conclusion of their post here:
“However, the station density over the Earth is strongly irregular, which makes any global record but also the results given by LL disputable.”
Same goes for BEST, and the rest. Go to the satellites.
That’s the reason why the so-called “gridding” is performed; their criticism is therefore a straw man. Worse, it’s an argument against their own paper LL, because most of the 2,400 stations used are sited between 30° and 60° in the Northern Hemisphere.
Another problem in the second paper is the missing homogenisation. Lüdecke uses the raw data time series from the Hohenpeissenberg station. It’s likely that it contains some periods of bias; somebody linked a paper about the problems in the Hohenpeissenberg data in the first thread on this issue.
So both problems should lead to higher uncertainties in their Hurst exponents, but as Richard Tol says, unfortunately no error bars are given.
Who really cares about their papers? I agree with those who say that this crap should not have been posted here. But we ain’t running this blog. Now you explain how “gridding” or “kriging” or any amount of “kluging” makes up for the fact that there aren’t any thermometers in the vast uninhabited regions of the earth’s surface. Tell me about those thermometers that can feel the heat 1,200 km in the distance.
Tell me at what distance you think thermometers “can feel the heat.”
Thermometers don’t actually work that way, but they do work. Do you understand why?
I know how thermometers work, and I am not interested in getting into any bout of semantic quibbling with a pedant nitpicker. If you want to expound on how thermometers work at great distances, have at it. Maybe Josh will volunteer to argue with you, if he is not too busy filibustering elsewhere.
Robert: “Tell me at what distance you think thermometers “can feel the heat.””
Is there any point to that question? It’s obvious that using a thermometer that is 1200 km, and in the case of BEST even 2000 km, away from the place where you are assigning a temperature is incorrect. If you are looking at shore thermometers, as is the case in most of the Arctic, there is little relationship between what you read there and 2000 km inland. And yet extrapolation, or kriging, is done to get those temperature values.
Go have a look at the discussion at Climate Audit. Apparently, the BEST methods of adjustment can lead to noise that is as much as 50 C off. Along with their negative UHI, talk about a result that should never have been published.
Has anyone explained the effect of gridding on statistical calculations? Obviously, choice of grid can influence statistical results. Is there a quantified explanation of gridding’s effects that is up to scientific standards?
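To make the mechanics concrete, here is a toy sketch of cell averaging with cos(latitude) area weights; the station values are invented, with three stations clustered in one 5-degree cell and one lone station elsewhere:

```python
import math

# Invented station anomalies: (lat, lon, value).
stations = [
    (50.0, 10.0, 1.2),
    (51.0, 11.0, 1.1),
    (51.5, 10.5, 1.3),    # clustered with the two above
    (-30.0, 140.0, 0.2),  # a lone station elsewhere
]

def gridded_mean(stations, cell_deg=5.0):
    """Average stations within each lat/lon cell first, then combine
    cells weighted by cos(latitude), a proxy for cell surface area."""
    cells = {}
    for lat, lon, t in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells.setdefault(key, []).append((lat, t))
    num = den = 0.0
    for vals in cells.values():
        lat = sum(v[0] for v in vals) / len(vals)
        t = sum(v[1] for v in vals) / len(vals)
        w = math.cos(math.radians(lat))
        num += w * t
        den += w
    return num / den

naive = sum(s[2] for s in stations) / len(stations)
print(round(naive, 3), round(gridded_mean(stations), 3))
```

The naive station average is pulled toward the clustered warm readings, while the gridded mean counts the cluster as one cell and weights it by area, so the two answers differ noticeably. How much they differ depends on the cell size chosen, which is the sense in which the choice of grid influences the statistics.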
It seems you are raising similar issues with comparisons of too short instrumental records with too uncertain proxy records and inadequate statistical methodology to analyze the two less than adequate series, that have been the subject of debate in the past. Bravo!
What should be the correct methodology and level of uncertainty given the available data?
Many a statistician would answer that there is no correct method, but there sure are wrong methods.
The best advice to the applied statistician is to test, test, test and test some more. There is a large body of statistical research into climate change, using every combination of methods and data feasible. That’s how it should be.
There seems to be a fair amount of agreement that the atmospheric greenhouse gases are a statistically adequate explanation of the observed warming (with one important exception: http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf). There is the important caveat that no one quite knows the extent of natural variability (see http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoas/1300715170), so it is hard to test that.
“There seems to be a fair amount of agreement that the atmospheric greenhouse gases are a statistically adequate explanation of the observed warming”
Bright Sunshine is up according to many papers by Wild et al and even the UK Met. Albedo changed according to Project Earthshine.
The W/sqm change dwarfs the 2.xW/sqm claimed for CO2.
Greenhouse gases fail to explain the significant pauses in the last 110 years.
Final sentence is correct. The short term pauses are not explained by greenhouse gases. Greenhouse gases are only a part of the equation, and they mainly drive a long term rise, not short term variations and oscillations.
Bruce, no, please, no. Surely the warmist meme will have disappeared after 110 years of cooling!
Personally I don’t believe the global temperature can be measured with any accuracy, and certainly not to an accuracy of 0.01C. I believe that it’s warming and has warmed, but there are too many areas of the world where temperatures have to be “interpolated” (that’s scientific speak for “guessed”), which gives scope for anyone (and I have septuagenarians especially in mind here as a group prone to this malady) to adjust the record to suit their agenda. Given that where we have thermometers they have many many problems with continuity, quality of data keeping (bear in mind that the people taking the thermometer record in the past had no idea an entire scientific discipline, well perhaps not entire, would try to get the west to de-industrialise on the basis of a few tenths of 1C increase in temperature), location data, changes in the environment, the disappearance of thousands of stations, oh, I could go on all day. But let’s say that the data are so complicated that large adjustments must be made (again scope for the septuagenarians of our mutual acquaintance to lower temperatures in the early twentieth century), and once these data have been tortured and the interpolated results put into the data and averaged, we are led to believe they have an accurate assessment of the global temperature. Pull the other one!
I have always regarded the attempts at trying to understand past records, worthy as they may be, as more likely to attract people who in another life would be likely to have collections of photographs of radio masts than boogie the nights away. But at least they wouldn’t believe, and promulgate the belief, that the photographs were the real thing.
At least I hope not, otherwise we’d think they were nuts.
Just to put a number on this… global anomalies are generally quoted as being accurate to about 0.1C for measurements in the present. HadCRUT3 data is publicly available with error bounds as part of the data. The basis of the accuracy estimates is described in the journal articles associated with the dataset.
0.1C is laughable as well when El Nino / La Nina alone can change the temperature by 0.7C.
I don’t see the problem.
Global anomalies are calculated with a 95% confidence range around about 0.1C.
El Nino/La Nina events can lead to excursions of 0.7C and indeed significantly more, but that’s the effect on local anomalies. One side of the Pacific is warmer and the other is cooler than normal. So you can’t really compare that with the accuracy of the global anomaly.
These events also have a significant effect on the global anomaly, up to around 0.25C or so.
What is laughable here?
Exactly, I have no doubt the conclusions in the papers are overreaches, but I did see some interesting possibilities with their altitude comparisons. Even a flawed paper may have something of value.
I have also wondered why more analyses do not include Tmax, Tmin and Tmean comparisons with dew point temperatures where available. It is artificially inflating the data base, but possibly enough to provide more information on which methods may be better.
Could you point me to some references to the body of literature suggesting “atmospheric greenhouse gases are a statistically adequate explanation of the observed warming”.
Ever notice that economists like to avoid charts and figures in their research papers, preferring tables of numbers? This paper follows this pattern as it was written by a pair of economists. I don’t know why other economists tolerate that, as it has a turn-off effect to scientists and engineers.
Undergraduate economics is taught graphically. Postgraduate economics is taught mathematically. The use of graphs signals that you do not have an advanced degree in economics.
Most academics are snobs, but academic economists are probably the worst.
Thanks for explaining this.
I can handle the math part, no problem, but when they print huge tabulated sets of data, my eyes glaze over. On occasion I end up doing an OCR scan so I can plot the numbers out and visualize the trends for myself.
That’s curious: The use of graphs signals that you do not have an advanced degree in economics.
My PhD is in statistics, and I use graphs with maths all the time. Deming said “Graphs reveal the information in the data, statistics conceal the information in the data”. It is really a poor practice to accept empirical claims based on mathematical analysis without supporting graphics.
I sense that there is an element of attempted humor in that post, but I am not sure.
“The use of graphs signals that you do not have an advanced degree in economics.”
That is surely an inaccurate statement, to put it mildly. Three random examples, out of so many.
There is the important caveat that no one quite knows the extent of natural variability (see http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoas/1300715170), so it is hard to test that.
That is an understatement. No one has produced a demonstrably accurate representation of natural variability, that I have seen. All suffer from the fact that the reliable records are short time series.
There seems to be a fair amount of agreement that the atmospheric greenhouse gases are a statistically adequate explanation of the observed warming
Up until about 7 years ago, I’d have agreed with that. I think now there is a good chance that the role of CO2 is not well understood, as well as the role of clouds. Also, most of the details of heat transfer are not known to high accuracy, though numbers are occasionally printed with 3 significant figures where about 1.5 would be appropriate. I have begun to feel that the “fair amount of agreement” is a mystery.
With something this complex you have to bite the bullet and be prepared for surprises. Over confidence is poor preparation.
Thank you for taking the time and effort to write this critique–and for making it very clear.
If you have the time and energy, would you be able to offer suggestions to the papers’ authors on how they could improve their work–that is, assuming you feel it may be able to be improved?
“There seems to be a fair amount of agreement that the atmospheric greenhouse gases are a statistically adequate explanation of the observed warming”
And I am certain that you can find people who are certain of it. I’m also certain that they are too certain. And for that it seems that I can find a fair amount of agreement.
My advice would be to try to avoid any statement that begins with a red flag like that. History is chock full of contentions about which there seemed to be a fair amount of agreement, only to be proven false. Climate science has a recent history that might be seen as an effort by its practitioners to vault to the top of that disreputable heap. Or at least it certainly seems that way.
Thank you for this post.
Any chance you could elaborate on your comment on the Beenstock paper, particularly why you think it is important, and if possible is it “reasonable”?
Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
It’s science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation.
“The best advice to the applied statistician is to test, test, test and test some more.”
I do not see validation and testing of the science based on observations. I only see testing of a virtual earth science that has nothing to do with the earth we live in.
1) Main-stream science suggests a warming earth, which is false. The surface of the earth is warming, but the upper atmosphere is cooling equally, and the earth as a whole is not warming. Have they tested this?
2) Main-stream science suggests that precipitation is increasing. We know that this is false. Precipitation has been decreasing. Have they tested this?
3) Main-stream science suggests a rising water vapor content in the atmosphere. We know that this is false. Have they tested this?
4) All computer models that are based on the main-stream science failed simultaneously suggesting a grossly flawed science. This is a good test of the failure of the main-stream science. So what testing are you talking about?
Has that Beenstock paper ever actually been published? It seems to have been knocking around for a couple of years but I can’t see any record of it appearing in any form of journal.
Richard Tol: “There seems to be a fair amount of agreement that the atmospheric greenhouse gases are a statistically adequate explanation of the observed warming (with one important exception: http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf). ”
Statistically adequate? Now there is some statistical sophistry for you. How can it be statistically adequate when we have no agreed feedback value and when we cannot identify the percentage of variation that is caused by natural variation. In fact, we cannot even say with any certainty that we know the sources and magnitudes of most natural variation.
Actually, the significance of solar and natural variability increases as the estimated impact of CO2 doubling decreases. We are approaching the lower end of the IPCC range and most of the arguments appear to be based on the upper end of the range.
So the arguments should have a base sensitivity as a reference before evaluating the methodology.
So short instrument records that show hockey sticks, require hiding the decline, or allow claims of catastrophic climate change are great, but those that show nothing much is going on are garbage.
I am pleased we have an editor and expert on quality control to tell us this because with that level of authority, we can know it must be so.
The hockey stick is a feature of longer term proxy based temperatures; not instrument records.
We know that, OK ?
Here are Richard T’s summary statements from the previous thread:
>So, we have one hypothesis (CO2 etc) that cannot be rejected, and one hypothesis (natural variability) that seems implausible but cannot be tested. You need long-term proxies to estimate long-term natural variability, but unfortunately the proxies are not very good<
The sound of silence …
That’s a technical argument?
You may know that the hockey stick is supported on the basis of proxies; I was responding to a comment which said it was from instrument records. So we are both agreed on this, which helps underline the only point I was making in response to hunter.
As for Richard’s summary statements; I don’t see any problem with them; but by all means spell out a specific criticism of them if you like. I’m happy to engage differences in good faith and with good will.
Cheers — Chris
The proxy stick is an artifact of statistical abuse.
My point is that at the end of the day we have another defender of the faith who is relying on his authority and circular reasoning.
Shrug. I disagree. There was no “abuse”. There were questions about the best way to use PCA, and subsequent work using a proposed improved statistical method confirmed the original shape. All part of the normal replication and review in science. You may disagree, as is your prerogative — but this is now moving off topic.
Whether you agree with the proxy based analysis or not, I was simply pointing out the error in thinking the hockey stick was based on the instrument record. Fair enough?
Chris, you do know that the “blade” of the hockey stick was created by grafting instrument records onto the proxies, right? And that this was done because the proxies for the same time period drifted downwards, not up, right?
Chris, I will cheerfully send you a complimentary copy of The Hockey Stick Illusion which is a thorough and definitive analysis of Mann’s iconic and deeply flawed graph. Or, you might find The Hockey Stick Blues to be fun and informative.
K Scott, it is only the northern high latitude tree ring proxies which are seen to diverge drastically from measured temperatures post 1960, but which correlated well to measured temperature pre 1960. The divergence has been a focus of a fair amount of additional research, and the evidence to hand indicates it is most likely a unique phenomenon of the post 1960 conditions and not a repeating issue with the proxy.
You still get a hockey stick shape, though not as well defined, when you use proxies only, as proxies have much larger error bars. Subject, of course, to avoiding the post 1960 high latitude tree ring proxies for which a known divergence problem exists.
If you are inclined to use this divergence to dismiss the utility of proxies altogether, then sure; we don’t know anything much about temperatures before measurements were taken. That’s a rather extreme position. I recognize that the uncertainties are larger than with direct measurement, but I also accept the research on proxies which indicates this divergence is not a general problem invalidating the whole idea of proxies.
Ken, I’ll be happy to read the book as fairly as I can, to the best of my honest ability, if you really would like to send it to me. In that case, I would publish my comments at my blog (presently inactive, but still ready to go when I want it).
But this is still all off topic here. We got here only because I pointed out that the hockey stick is a reference to the shape of inferred temperatures over a longer span than you can get with the instrument record. Can we all just agree on that much at least??!
The rest is interesting, but really a totally different subject from the thread. The thread is looking at papers relating to instrument data; not proxy records.
Chris Ho-Stuart –
… it is only the northern high latitude tree ring proxies which are seen to diverge drastically from measured temperatures post 1960…
The high altitude tree ring series from SW California diverged drastically post-1980 and that inconvenient data was also truncated.
The proxy stick is an artifact of statistical abuse.
The famous graph was an artifact of abuse.
The statistical result was overstated and flawed, but defensible. Some of the criticisms were over-wrought. For the latest word, check out the Annals of Applied Statistics, vol 5, number 1 March 2011, pp 1 – 123 and the supporting online material.
Was not instrumental data often enough added onto, or otherwise grafted into, the H.Stick to form the blade which either replaced or hid the proxy decline?
On another tangent, here is Richard Alley further exploring the subject of what the case for GW is built on:
a ) “Hockey Stick and Global Warming”
b ) “Short-term Instrumental and Global Warming”
Richard Alley declares that the Global Warming conclusion isn’t based on what ignorant skeptic non scientists think it is ( hockey stick), rather it’s based on what satellites measure.
I think there is a typo in your question, and I am not sure what you are asking. Be that as it may, the hockey stick is a feature (or alleged feature) of temperature data at least 400 years long. It is not “short instrument records that show hockey sticks”, which is what the comment to which I was originally objecting claimed.
The “blade” (or alleged blade) is an upturn in the present, where we do have instrument data, and the instrument data is usually presented along with the proxy data; proxies are validated using periods where there’s an overlap. You can, however, simply use the proxies alone. You get a fuzzier blade, but still an upturn.
I do understand your simple point from the get-go, but while on the surface it would seem that Hunter’s remark would have been better without the mention of the H.Stick, it actually brings something to my attention regarding what your appreciation of the HStick is.
It’s my understanding that instrumentals are found to be twined into some of the proxy recons, which are then used in other studies as if “pure” proxy… which would be another “instrumental” influence being worked in, one which is distinct from the usually discussed blending, grafting, splicing, and hiding with instrumental ala Jones/Mann with the Briffa MXD series.
That extra HStick input from instrumental would seem to be something you are not familiar with.
Corporate Message, that is indeed going beyond what I am comfortable with, and also well beyond the topic here, so I won’t comment further. If we are agreed on the much simpler original point then I am content for the nonce, and will leave it at that. Thanks — Chris
“Crucially, LLE use the 20th century record to define natural variability. That is, they use the observations of the 20th century to assess whether or not the 20th century was natural or otherwise. This is tautological.”
But Richard, don’t you and other warmenizers use the 20th Century to prove warming in the 20th Century was unnatural?
Not data from the 20th century alone, no, they don’t.
Furthermore, the primary use of data from the 20th century for the causes of warming is not correlation-based study of time series (which would indeed be a problem of the same kind as Tol identifies) but more direct causal study of the actual forcings in play, and comparing the natural and anthropogenic forcings.
I think this is getting off topic.
“but more direct causal study of the actual forcings in play, and comparing the natural and anthropogenic forcings.”
Chris, I am quite sure the warmenizers do everything in their power to ignore changes in albedo and sunshine.
BEST had a great opportunity to grab all of the world’s bright sunshine data (and cloud data etc) along with the temperature data. They chose to ignore it all.
That’s incorrect. The sorting out of natural and anthropogenic forcings certainly includes albedo and sunshine as well. They are a vital part of the study to which I refer of direct actual forcings, both natural and anthropogenic.
Any study has a particular focus. BEST was specifically aimed at investigating the temperature record, and potential errors therein. I don’t think it is fair to say they are “ignoring” a completely different possible research topic. It’s all a part of research — study of temperature records, of albedo changes, of carbon cycles, of solar forcings, and of everything else. It is not reasonable to say people addressing one issue are therefore ignoring others! They all are being studied.
Chris, please quote to me the sections of IPCC that discuss bright sunshine and albedo. I couldn’t find bright sunshine when I looked.
All the various drivers impacting recent global climate are reviewed in Chapter 2 of AR4. Section 2.7.1 looks at solar variability. Albedo is not so much a forcing by itself as a way in which some processes impact climate. Cloud albedo is in 2.4.5. Surface albedo is in section 2.5.
The IPCC reports are really summaries or reviews of the literature. They are a handy way to get started looking to find what work is being done.
Cheers — Chris
Solar variablity (TSI) is not bright sunshine.
IPCC Total Fail.
As for albedo, it changes a lot within a year.
What do you mean by “bright sunshine”? I’m presuming this is related to global dimming or brightening, and that is albedo related.
Albedo IS discussed, at length, in the scientific literature, which is the point. You were incorrect to say that the natural forcings or albedo or “brightness” are being ignored.
To speak of an “IPCC fail” in this context is very strange. I have no idea what you mean. I showed you the sections you requested where albedo is considered, in many contexts. Or are you objecting to the fact that they also considering the natural forcings related to TSI variation? I can’t follow at all what you are objecting to here, sorry.
You said that albedo and bright sunshine is ignored; but that’s simply not true.
Is not climate science to FIND what is causing the changes on our planet?
Whether it is water or atmosphere?
How can just tracking temperature data do this?
It can’t. That’s why science does more than only tracking temperature data. Tracking temperature data is only a part of finding how climate works — an important contribution rather than the whole deal.
On an earlier thread, someone referred to Dr Tol as a straight shooter. In my limited dealings with him, I’d agree with that. So while it’s a bit disappointing that he is shredding a paper (a version of which was published in one of my journals, Energy & Environment), I don’t object to his doing so. Debate, discussion, about energy/policy issues is good. That was the original vision of E & E. What has, over such a long time, been so disappointing, is the lack of debate; or at least, the lack of engagement from one side. Possibly things are moving forward, and we are leaving behind us juvenile sloganising like “the science is settled”.
Bill, can I suggest that you might attract some actual debate by tightening the review process at E&E. At the moment I wouldn’t consider publishing even a comment paper at E&E because I have little confidence that it would get a competent peer review, given the very obvious errors in papers that have actually been published. For example, see doi:10.1260/0958-305X.21.8.953, which has two obvious errors in the first line of the abstract! I offer this as genuine constructive criticism; in my opinion, E&E has encouraged debate, but it has actively harmed the skeptic side of the debate by promulgating unsound arguments.
Some parts of the science are pretty much settled (for instance the subject of the two errors in the paper mentioned above), others are not.
Pretty much settled? Please. Science is NEVER settled. But, I’ll give you the chance to explain what is and isn’t settled.
I think the fact that the post-industrial rise in atmospheric CO2 is anthropogenic is pretty much settled, don’t you? There is a reason why I said “pretty much settled” rather than merely “settled”: science is, as you suggest, never completely settled. However, that doesn’t mean there are not things we can take as established beyond reasonable doubt.
The “juvenile sloganising” of “settled science” is more often a straw man argument from “skeptics” claiming that mainstream scientists claim the science is settled, when actually they are generally only too happy to talk about the uncertainties etc.
Funny how you linked to RC, who routinely delete posts from people they don’t like.
Let’s differentiate what is settled and what isn’t. Events are settled, as they are measurements of the real world. Subject to measurement errors, empirical observations are “settled” because they are the facts.
Mechanisms, the causes of those events, are where “settled” has no business even being discussed.
The rise in CO2 being from us is a measurement. What that does as a cause of other events is not settled in any way.
Interesting you attack us skeptics for pointing out this use of “settled” when it came from the AGW side to begin with. Suzuki uses it ALL the time. Though not always using the word “settled”, they will use many other words with the same meaning.
This is very interesting as per BEST:
“There are certainly a number of climate issues for which the scientific evidence is so conclusive and robust, they can reasonably be called “settled.” For example, the fact that the increase in atmospheric CO2 levels is due to human emissions, the fact that the planet is warming (as confirmed once again by the BEST study), and the fact that the increase in CO2 is responsible for a substantial fraction of that warming.”
“Substantial fraction”? Yeah, like what? If you agree with that statement, maybe you can answer what that fraction is.
Richard, if you are going to start your posts with an ad hominem then sorry, that doesn’t encourage me to want to discuss science with you, especially since, if you actually read the RC article, you will find that it is about the unsettledness of the science!
I didn’t attack RC, I made a statement of fact. They do delete comments they don’t like or don’t want to answer. They have deleted mine many times, as they have others’. They weren’t even nominated for best science blog of the year. If you dislike us complaining about settled science, then maybe you should contact those on your side who use it all the time. Start with Suzuki.
You persist in making this kind of statement. (I’ll also note that a similar comment was made upstairs but I ignored it because this was deemed a “technical” thread. At some point, however there isn’t much point in my respecting that designation if no one else does, because one more non-technical comment does not affect the overall character of such a large thread).
Anyway, when you made a comment previously, I asked you for some evidence. You didn’t provide any. Would you care to take this opportunity to provide some evidence backing up your claim?
Your tactic of pretending you don’t know of the egregious or outrageous things said by believers, along with deciding what criticism of believers is acceptable, needs updating.
Strange that you give the DOI for Lüdecke’s paper instead of saying that’s what you’re talking about – however… here is the first line of the abstract (plus a couple of others). “This paper analyses the monthly instrumental temperature records from 5 stations in the northern hemisphere, each of which are reliable, local and well over 200 years in length, as well as two reconstructed long-range yearly records – from a stalagmite near Innsbruck and from tree rings in the northern hemisphere that are about 2000 years long”. Perhaps it’s because I’m neither a climate scientist nor a reviewer that I can’t see the ‘obvious errors’ you refer to. He just seems to be saying what he’s going to do; I don’t see how that qualifies as containing ‘obvious errors’. Or maybe you have just wandered in from the disinformation thread?
Bill, the DOI was not for Lüdecke’s paper, but for another paper published in E&E, with two errors in the first line of the abstract:
Ryunosuke Kikuchi, “External Forces Acting on the Earth’s Climate: An Approach to Understanding the Complexity of Climate Change”, Energy & Environment, Volume 21, Number 8 / December 2010, pp 953-968
Which begins “The Intergovernmental Panel on Climate Change defines lifetime for CO2 as the time required for the atmosphere to adjust to a future equilibrium state, and it gives a wide range of 5-200 years [incorrect it is 50-200 years]; however, a number of published data show a short lifetime of 5-15 years. [no, lifetime and residence time are not the same for CO2]”
Both of these errors should have been picked up in peer-review, neither was (one is possibly a typo). Anyone with a basic understanding of the global carbon cycle should have picked this up, and as it is the first sentence of the abstract, I can’t believe that the reviewers didn’t see it.
Bill, “Or maybe you have just wandered in from the disinformation thread?”
It is ironic that you complain about the lack of discussion from one side of the debate and then end your reply to a piece of well meant constructive criticism with rhetoric like that.
Dikran, I was baffled by the DOI taking me to Lüdecke’s paper. Now you have clarified that, thanks, and thanks for your comments too.
Peculiar, I have checked the DOI and it is correct for Kikuchi’s paper. I hope my comments are useful; I am also keen on discussion of the science, and addressing the peer-review problems at E&E is likely to encourage bilateral discussion of the science there.
Did you have a chance to address that Trenberth issue I left for you?
Did you answer my question about Trenberth? You asked for the page number; I gave you the full cite and quote.
No, I haven’t, can you remind me of the page number (or which thread the comment was left on)?
Don’t bother, I have found it and replied on the other thread. Yes, Trenberth should have given a reference.
(I have copied the first part of this comment from the “Two new papers…” thread, as it seems to address potential deficiencies in the Lüdecke paper also. If this is bad blog etiquette, please commence with the verbal flogging.)
In the first paper (LLE), “Long-Term Instrumental and Reconstructed Temperature Records Contradict Anthropogenic Global Warming”, a cooling is observed in the 1800s, followed by a warming in the 1900s. Along with the small number of stations presented, I am concerned with the readings shown from the late 1700s through the 1800s. Following citations mentioned in comments made earlier, I read “Revision and necessary correction of the long-term temperature series of Hohenpeissenberg, 1781-2006” by P. Winkler (Theor. Appl. Climatol. (2009) 98:259-268), doi:10.1007/s00704-009-0108-y.
This paper goes into great detail on the known errors inherent in this data set (+0.5 R until at least 1878). The discussion raises reasonable questions as to the usefulness of uncorrected data from the Hohenpeissenberg observatory, as well as from other similar observatories, as I would assume they also used similar instruments subject to similar error sources.
I do not see that Lüdecke removed this bias from the datasets before analysis. It would be useful if this problem were addressed before any great stock is put in the records shown from the 1800s. Has anyone found any papers detailing the thermometers and observational methods and practices at the other stations included in this paper?
Is it reasonable to look at the ending portions of Figure 2 in LLE and ask whether the slopes of the temperature plots for all four stations show greater increases than the 100-year regressions? That is, if one applied 30-year regressions, the similarity with the 1800s would not be as clear.
Regarding Figure 3, please correct me if I am wrong in my interpretation, but it seems to indicate that the 100-year trends are increasing as time increases, while the figure tells little about more recent, short-term (less than 100-year) changes. In particular, I was looking at the portions of the curves above the zero line versus the portions below, rather than the number touching 2.
“Crucially, LLE use the 20th century record to define natural variability. That is, they use the observations of the 20th century to assess whether or not the 20th century was natural or otherwise. This is tautological.”
Of course it is tautological but is it mistaken? I think not. If the temperature record is trustworthy and our topic is temperature change then whatever variation is found in the temperature record must be included within natural variation for that time period. The record does not define natural variation for that time period but natural variation must encompass whatever is found in the temperature record. Could you refine your comment in light of these remarks?
“LLE do not test the hypothesis of “natural variation” against any other possible explanation of the warming of the 20th century.”
What do you mean? Natural variation is exactly the range of variation recorded. It is not hypothesis but data. If not, please explain.
If all hypotheses which might explain warming fail to explain something that cannot be attributed to natural variation then the hypotheses are uninteresting precisely because everything they explain was expected to happen naturally.
Please explain how you test the hypothesis of natural variation against any other possible explanation of warming during a time period.
Suppose that I want to answer the question: Is Theo Goodwin smaller or taller than the average man? Would it be enough for me to measure Theo’s height? Or should I measure more men to compute an average?
LLE look at the 20th century only, and conclude that it is average.
The standard way of testing alternative hypotheses is including them in a multivariate regression analysis of some sort.
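As a concrete (and entirely synthetic) illustration of that kind of multivariate test, the sketch below regresses a made-up “temperature” series on two hypothetical candidate explanations at once; the data, coefficients, and variable names are all invented for illustration and do not come from any of the papers discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # e.g. 100 annual observations

# Two hypothetical candidate explanations (invented for illustration):
# a slow monotonic "forcing" and an oscillatory "natural" mode.
forcing = np.linspace(0.0, 1.0, n)
oscillation = np.sin(2 * np.pi * np.arange(n) / 60.0)

# A synthetic "temperature" series driven mostly by the forcing.
temp = 0.8 * forcing + 0.1 * oscillation + rng.normal(0.0, 0.1, n)

# Multivariate OLS: both candidates enter the design matrix together,
# so each coefficient is estimated controlling for the other.
X = np.column_stack([np.ones(n), forcing, oscillation])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Crude t-statistics from the residual variance.
resid = temp - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stats = beta / np.sqrt(np.diag(cov))
print(dict(zip(["const", "forcing", "oscillation"], np.round(t_stats, 1))))
```

Because both candidates sit in the same regression, each coefficient (and its t-statistic) measures one explanation’s contribution while controlling for the other, which is exactly what an analysis of a single series in isolation cannot do.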
A better analogy is comparing the height of the average 20th century human to the average 13th century human. Now where are those thermometer fossils?
I think a better analogy is that you want to measure the height of the average man and you choose to measure 39,000 men’s heights … but not all at the same time. Some you measure in 1940, 1941 and 1942. Some you measure in 2010 only. Some you measure and then attribute that person’s height to the owner of the house he lived in after the original owner has moved. And most of the men you measure are in the Northern Hemisphere, almost none in Africa, and certainly none of the mermen in the ocean unless they fit in a canvas bucket.
And then you average all those together with special “adjustments” thrown in and it turns out there are other people doing the same thing but the answer is different in each case.
Please explain why natural variability is an implausible explanation for the recent warming? Isn’t your only reliable comparison found in relatively modern temperature data? How do you know that there have not been around 4,322 similar temperature excursions in the last 30,000 years, for example?
Is Theo Goodwin smaller or taller than the average man?
That’s not the question. The question is “Has Theo gained or lost weight?” For that we need an estimate of the random variation (e.g. independent of his diet) in order to decide whether there has been a change.
“Richard S.J. Tol | November 8, 2011 at 5:44 pm | Reply
Suppose that I want to answer the question: Is Theo Goodwin smaller or taller than the average man?”
No, this would be the question: Is Richard Tol’s blood glucose level, recorded between 3 and 6:30 in the afternoon, unusual? We then take Richard Tol’s blood glucose level every 20 minutes over the next 24 hours and examine the fluctuation in his blood glucose. We then see if the readings taken at 3 and 6:30 in the afternoon are in any way unusual.
Things like this are done routinely.
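For what it’s worth, the routine test described here is easy to sketch; all the numbers below are invented stand-ins, not anyone’s actual readings:

```python
import numpy as np

rng = np.random.default_rng(1)

# 24 hours of readings at 20-minute intervals (72 samples); the values
# are synthetic, standing in for the glucose example.
readings = rng.normal(5.5, 0.4, 72)
afternoon = [6.1, 5.4]  # the two readings under scrutiny (made up)

# Are the afternoon readings unusual relative to the day's fluctuation?
mean, sd = readings.mean(), readings.std(ddof=1)
for r in afternoon:
    z = (r - mean) / sd
    print(f"reading {r}: z = {z:+.2f}, unusual: {abs(z) > 2}")
```

The point of the analogy: you judge whether a particular observation is unusual against the distribution of many observations, not against the observation itself.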
Two weeks ago my wife’s blood cholesterol was measured and she had a level of mg/dL. This in isolation would be very bad. However, her HDL was >100 mg/dL. So all the cholesterol is bound.
In isolation neither cholesterol nor HDL means anything. In the same manner, knowing temperature, without knowing the heat content of the medium, means nothing.
“In isolation neither cholesterol nor HDL means anything. ”
That’s incorrect. The ratio of total cholesterol or LDL to HDL is significant, but it’s not the only thing that matters. If your LDL is 2, for example, you have a severe hereditary dyslipidemia regardless of what your HDL is.
Natural variation is not a statistical concept as it is applied in climate science. It is a physical concept. As regards temperature, natural variation in the 20th century is the range from the lowest recorded temperature to the highest recorded temperature.
The reason that natural variation is talked about in climate science is that it serves as a test for claims to the effect that temperature increases have become “unprecedented.” If temperature increases have not exceeded the highest recorded number then they fall within natural variation and need no further explanation.
In the field of science known as Population Genetics, natural variation does have a meaning that is more closely linked to statistics. However, in Population Genetics, the statistical measures are defined against living populations of critters. Climate science has not been able to identify anything that might correspond to a living population of critters and for that reason cannot apply the methods of Population Genetics.
Actually one can. The BEST data, with its ability to dice and splice, allows one to do exactly this. One takes the diced data and reverses half the data sets in time order (so that 1900-1950 becomes inverted).
Then splice the diced data and do a reconstruction.
Do this a few thousand times and observe the greatest slopes for the greatest time period. Can one generate ‘L’s as often as hockey sticks?
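A rough sketch of the reverse-and-splice experiment described above, using synthetic persistent (AR(1)) segments in place of actual BEST station pieces; the segment lengths, persistence parameter, and slope threshold are all arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_segment(length, phi=0.9, sigma=0.1):
    """One short persistent (AR(1)) segment standing in for a diced piece."""
    x = np.zeros(length)
    for t in range(1, length):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

n_trials = 500  # a few hundred here; the comment suggests a few thousand
steep = 0
for _ in range(n_trials):
    pieces = [ar1_segment(50) for _ in range(4)]
    # Reverse half the pieces in time order, as described, then splice.
    for i in range(0, len(pieces), 2):
        pieces[i] = pieces[i][::-1]
    series = np.concatenate(pieces)
    # Trend over the final 100 points of the spliced reconstruction.
    slope = np.polyfit(np.arange(100), series[-100:], 1)[0]
    if abs(slope) > 0.002:  # arbitrary "steep" threshold
        steep += 1

print(f"{steep} of {n_trials} spliced series end with a steep trend")
```

The tally of steep end trends in these null reconstructions is what you would compare against the trend claimed to be unusual.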
It is the monkeys typing away writing Shakespeare, or the radio operator in ‘On the Beach’ listening to the Morse signal from the tapping bottle.
I volunteer for the voyage of discovery, monkeys or bottles, I gotta know.
“If temperature increases have not exceeded the highest recorded number then they fall within natural variation and need no further explanation.”
You started out so well, and then this. If “natural” were a complete scientific explanation, science would be an utterly trivial and pointless undertaking.
Understanding why things are happening in nature is science. Whether at some point in four billion years there have been more extreme conditions is totally irrelevant.
We are discussing the null hypothesis. The null hypothesis is natural variation and natural variation is the recorded limits. If no phenomenon violates the recorded limits then it is attributed to natural variation and needs no further explanation.
Once climate scientists develop some genuine physical hypotheses that go beyond Arrhenius, we can forget all this statistical nonsense.
You missed the point; well, actually the entire post. The question about natural variation in climate science can be translated into your example as follows: Does Theo’s height exceed through excess or defect the range of heights measured for non-diseased people during his lifetime. If the answer is “No” then Theo’s height can be attributed to natural variation. That is the way the concept of natural variation is used in climate science. It has no other use in climate science.
If you believe that the concept of natural variation has a statistical meaning in climate science then you have confused climate science with Population Genetics, the only scientific discipline in which natural variation does have a sound statistical meaning.
Population Genetics can define the objects and events that its statistical generalizations are about. Darwin gave them this gift in his Theory of Evolution. The objects are the actual living members of one population of a species in a designated territory on the physical earth and the events include items such as ordinary birth and birth of a genetic mutant. PG’s statistical laws are objective because they are constantly checked against actually existing objects and events.
By contrast, climate science has not been able to specify some set of objects and events which correspond in the appropriate ways to a population of animals or other critters. Just ask yourself what those events would be. Would they be temperature changes? Temperature changes in what? Would they be changes in heat content of a portion of ocean? Climate scientists have not a clue what the objects and events should be. For that reason, when they attempt to apply the statistical concept of natural variation from Population Genetics they will produce nothing but nonsense.
In saying that climate science has been unable to identify the objects and events that it is talking about, I am simply saying that climate science has produced no physical laws that apply outside an artificially constructed laboratory. Arrhenius created physical laws but their use beyond the laboratory is based on sheer speculation for the simple reason that we do not know what physical objects and events they should apply to. In the terminology of philosophy of science, climate science has not a clue what is found in its ontology.
Theo, do you really think that is true?
Over on CA, Gavin gave a pair of points from 24 models, and claimed a ratio and SD: 0.95 (±0.07).
I took the same data and treated it like a biological sample and got the slope and 90% slope CIs: 0.357–1.039–2.995.
Fred appears to think that if you change the name of a flux to a ‘forcing’ you then change its properties.
Yes, I do. No model has ever produced a physical hypothesis. Models are great for analytic purposes but cannot be used for prediction.
What you describe is coincidence.
“Fred appears to think that if you change the name of a flux to a ‘forcing’ you then change its properties.”
No, totally predictable. When you do not know what your objects and events are then any change is confusing. I have been saying for years that if climate science were a genuine science, if it consisted of relatively well confirmed physical hypotheses, it would not use terms such as ‘forcing’ but would replace them with references to relevant physical hypotheses.
Isn’t the approach to define “unnatural” (or external) as a linear trend in the data, and then to test for the existence of that trend having adjusted for artifacts from noise in the data?
You might question the tests used, and whether they detrended the linear trend out by using a 2nd-order polynomial, but I don’t understand the point about Theo Goodwin’s height.
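Polynomial detrending of the kind mentioned here can be sketched in a few lines; the series below is synthetic and is not LLE’s data or procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(120.0)
# Synthetic series: a small linear trend plus noise (invented numbers).
series = 0.01 * t + rng.normal(0.0, 0.3, t.size)

# Fit a 2nd-order polynomial and subtract it; whatever remains is the
# "noise" whose statistical properties would then be tested.
coeffs = np.polyfit(t, series, 2)   # [quadratic, linear, constant]
detrended = series - np.polyval(coeffs, t)

print(f"linear coefficient removed: {coeffs[1]:.4f}")
```

The objection in the comment is that the answer can depend on what you choose to remove before testing: a 2nd-order polynomial soaks up more low-frequency variation than a straight line would.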
If you want to know whether Goodwin’s height is normal (or not), you should look at more men than Goodwin.
LLE study the 20th century to see whether the 20th century is normal.
I thought they studied the 20th century to see if it had a linear trend in it.
And what do “Climate Scientists” do to get their “Anomalies”?
Please give us a definition of “Anomaly”.
We don’t want to know if Goodwin’s height is normal, we want to know if it is within *natural variability*. For that, we need only confirm the existence of two other men: one taller and one shorter. Having confirmed the natural existence of these two, we can logically conclude that Goodwin’s height is within natural variability.
Except the average human height has changed over the past 3 million years. Looking at just today has the same problem as AGW has, too short a time frame. At least we have physical artefacts that show us human height changed throughout human history. The temperature equivalent to that does not exist.
“Having confirmed the natural existence of these two we can logically conclude that Goodwin’s height is within natural variability”
EXACTLY! Absolutely analogous to the temperature trend of today. The claim that today is anomalous disappears as soon as one looks at the history of the planet. Tol really should turn his criticisms back onto his own position.
Note that you observe three people. LLE observe only one.
Richard S.J. Tol | November 9, 2011 at 10:18 am | @Scott
Note that you observe three people. LLE observe only one.
Yet AGW scientists only use a small part of one: the 25-year period from 1975, as an indicator of unprecedented temperatures and 100 more years of runaway global warming.
Richard, this goes both ways then. AGW supporters cannot claim the current warming trend is unnatural when they have no temperature records prior to the 20th century to compare to. The issue then is one of a default position and the null hypothesis. The null hypothesis demands that the default position be that all events have a natural mechanism for their cause. The onus in science is on the AGW side to make the empirical link between temperature and CO2 emissions. Empirical because computer models are not evidence.
So I think you should turn your critical analysis of this paper back onto the AGW side with the same enthusiasm.
“Richard, this goes both ways then. AGW supporters cannot claim the current warming trend is unnatural when they have no temperature records prior to the 20th century to compare to.”
Denying the hockey stick isn’t going to make it go away. And of course there are thermometer records prior to the twentieth century as well as other proxies.
Deniers have completely failed to undermine this science, despite many attempts. You just embarrass yourself by continuing to cling to this fallacy.
McIntyre debunked the hockey stick long ago; the IPCC stopped using it. It’s nothing today. There is more than enough evidence that today is not hotter than periods in the past: http://wattsupwiththat.com/2011/11/10/new-study-shows-temperature-in-greenland-significantly-warmer-than-present-several-times-in-the-last-4000-years/
Right on the money.
Above, you write:
“Except the average human height has changed over the past 3 million years. Looking at just today has the same problem as AGW has, too short a time frame. At least we have physical artefacts that show us human height changed throughout human history. The temperature equivalent to that does not exist.”
This reasoning that you allude to through the example of human height has been fully explicated within the discipline of Population Genetics. They could do that because their statistical claims are about actually existing populations of critters. Climate science has nothing remotely resembling such a well behaved set of objects as a population of critters. This is just another way of saying that climate science has no well confirmed physical hypotheses that go beyond Arrhenius. Those hypotheses would identify in a rigorous way the objects and events that climate science studies. Climate science does not know what it is talking about in the very serious sense that it has no rigorously formulated hypotheses that specify its objects and events.
No statistical genius can make up for the lack of rigorously formulated physical hypotheses. All of the statistical genius exercised in behalf of climate science will necessarily come to naught.
“So why I do think this work is so bad it should be ignored?”
Sorry, but you cannot get away with this statement. Your claims and especially your attitude made it eminently clear that you held Dr. Curry responsible for not suppressing the work on her blog. Please explain yourself.
Full credit to Dr Curry for inviting this guest post. She has said this is a technical thread, and that should be the focus here; so Dr Tol is making the kinds of statements Dr Curry wants for the thread. I think we should respect her directions for the thread, and applaud it as generous and constructive.
You must have failed to notice that what Theo quoted and commented on was part of Dr. Tol’s post. What are you, a schoolmarm? Judith has been handling this blog for a while.
Yes, I did notice that. I also notice that Tol does go on to explain precisely why he thinks the work is so bad it should be ignored; and that Theo was using that quote to disparage the technical focus and asked Tol to defend instead the earlier remarks on personal issues and responsibility.
I have wanted to express support and appreciation for what Dr Curry has shown with this thread; and that is all I did in my comment. The technical focus is very welcome to me and, evidently, to Dr Curry. I’m supporting and appreciating that.
Yet you persist in a non-technical food-fight. How many times are you going to interrupt the discussion to carry on with your chastising of Theo? Why don’t you read Dr Tol’s post again, and explain how it is purely technical.
“There has been some brouhaha over a guest post by Lüdecke, Link, and Ewert. I think the quality of work is so bad that Judith should never have spotlighted these papers. Publicly berating Judith, I of course drew more attention to their work, and her response added to that.”
He seems to be bragging about the public berating. Do you think that Dr Tol might have been gracious enough to apologize to Dr Curry for publicly berating her? Or does his technical explanation serve to justify the public berating?
I wonder who raised some of you people.
“I also notice that Tol does go on to explain precisely why he thinks the work is so bad it should be ignored…”
My God, man, if Tol thought that it should be ignored when he saw it on Dr. Curry’s blog he would have said to himself “It is too late to tell her to ignore it” and would not have posted on it. No, no, no. Please stop whining. Ignoring it was never Tol’s gambit. He upbraided Dr. Curry for not censoring it. He owes us a clear statement of his views on censorship of peer reviewed articles.
Theo, I’m not whining at all. Just the opposite. My comment was entirely an appreciation of what Dr Curry has done here in allowing this discussion. Look at it again. The technical focus requested, and provided, is very welcome. It’s light driving out heat, as Dr Curry says, and I agree. Again, bravo to both for the direction they’ve taken here; and to which I also return. Cheers — Chris
You changed the topic from ethics to science. If you are not trying to prove that you know nothing of ethics as a discipline that exists independently of science, please explain yourself.
No; you changed the topic from science to ethics. I’ve got nothing to explain here. I just expressed appreciation for the technical focus requested by our blog host.
“So why I do think this work is so bad it should be ignored.”
That is a blatant whitewash of his statements on the earlier threads and it gives an obviously false impression of the attitudes expressed there. He called for censorship, plain and simple, and he criticized Dr. Curry for not practicing it.
Tol introduced the comment in this thread, gratuitously, so he has to live with it.
Both papers you are deconstructing were concerned with historic temperatures, which are a special interest of mine.
Dr Mann created a certain view of climate which the Met Office endorses:
“Before the twentieth century, when man-made greenhouse gas emissions really took off, there was an underlying stability to global climate. The temperature varied from year to year, or decade to decade, but stayed within a certain range and averaged out to an approximately steady level.”
Do you agree with Dr Mann and the Met office-and basically the orthodox climate community, that the climate was as stable as is claimed? What do you base this on, because observations by real people at the time and associated records seem to tell a very different story.
I never trusted Mann’s analysis (but have nothing to prove that I didn’t). I think that McShane and Wyner set the new standard in this field.
That is an interesting admission, thank you. Dr Mann’s unusual view of the world has set the climate agenda since 1999. It is a view of the world being promulgated to this day by the Met Office who, as you know, are a very major contributor to the IPCC.
The variability of the climate in past centuries is astonishing. The world has, I believe, been on a warming trend since 1607; Giss et al merely plugged into the end of this warming trend, not the beginning of it.
Do you agree that the warming trend well predates our ‘excessive’ CO2 emissions? Thank you.
I have never looked into that in any detail. McShane and Wyner argue that the proxy records contain very little information, so I guess we just don’t know.
We have very good proxy records in the tens of thousands of first-hand contemporary observations stretching back many centuries. I have just spent three days at the Met Office archives reading them. Better than a tree ring any day.
“……………..the proxy records contain very little information, so I guess we just don’t know.”
This should be tattooed on every climate scientist’s forehead.
I agree with this and with your incisive demolition of these two papers. I note the absence of any credible defense of the papers and a high incidence of topic changing (“look at the sunshine”, for example).
A bad paper neatly dispatched, as you did here, is a good teaching tool.
However, some won’t learn, and they use the bad paper as an occasion to thread-jack.
steven mosher, on at least one issue Richard S.J. Tol has been argued to be mistaken by multiple people, and no rebuttal has been provided. It may not be a “credible defense of the papers” as it is only one issue, but you do a disservice to act as though Tol’s word is the last word on this issue.
As it happens, I think he is wrong on at least one other issue, but it’s a more complicated matter, so I’d rather stick with the simple, unanswered point for now.
I’m suspicious of teaching by bad example. There are always a few students who drop the “not” in “this is how it should not be done”.
Do you agree with Tol’s public berating of Judith, over her “spotlighting” of these papers? Did she really “spotlight” the papers, or just expose them to the light of day and present them as an inviting target? Is this really something for this Tol character to get so hysterical about? He could have made his point without the public berating, and then the bragging about the public berating, when he was given the “spotlight” to criticize the papers, by the victim of the public berating. Climate scientists, for the most part, seem to be a very petty lot. Don’t you agree, Steven?
Richard, I would say that your critique is a good example. So it’s not teaching by bad example. YOU are the example I am using, so I am saying you are a positive example of a good critique.
In short, it is logically impossible to teach critique without having something to critique.
I read his tweet. I think he’s wrong. I think there are reasons, on occasion, to post crazy, wrong, bizarre thinking. I would love to debate that with richard, but not on this thread. Here I think he’s right.
Go figure. he can be right about science and wrong about Judith
or vice versa (Monktopus does my Latin)
I agree absolutely with Steven Mosher.
This also highlights the illogical nature of your original objections, attacking Dr. Curry with heavy-handed accusations (near libel) for not suppressing the papers because they are bad in your view. Instead, prove they are bad, as you have done, and then the next time things like this crop up, we can point back to your deconstruction and stop them before they can go anywhere. And additionally, people can learn a great deal about statistics and how things are done by reading your critique, being able to apply this information elsewhere. This is how constructive discussions and advancement of science are done: by debunking bad methodology and hypotheses, not through suppression of ideas. Only in evaluation can we tell a good idea from a bad one in the first place.
So thank you for manning up and making the case instead of meaningless rhetoric.
Why would climate scientists need to be reminded that you are ignorant?
I rather think they suspect that already.
You don’t know much about climate science; that’s not a general condition.
Maybe we should tattoo on deniers’ foreheads “MY IGNORANCE OF SCIENCE IS MY OWN RESPONSIBILITY AND IS NOT CAUSED OR SHARED BY SCIENTISTS.”
Does your mother know you’re on the internet?
I think that McShane and Wyner set the new standard
I am glad that you wrote that. I think that the whole compendium of papers, including critiques and rejoinder and supporting online material is worth the time and effort.
My only objection to the original Mann et al work was that they were obstinate in response to criticism. There are at least two confounded signals in the proxy records: temperature fluctuation and precipitation fluctuation. Ideally, if these are not too confounded, they ought to be separable by PCA or factor analysis. But only if both signals are strong enough.
If you read M&M, there’s a lot more than simply moisture, although that’s the big one. A couple of others are disease (which can be anthropogenic), and the fertilizing effect of anthropogenic NOx. I’m sure there are others, as well as unknown unknowns.
If the proxies contain little information, we are really in bad shape on attribution and the IPCC statement is on very thin ice. You say CO2 is an adequate explanation. Still, that’s a much weaker statement than most climate scientists make. It seems to me that in fact the evidence is that “natural” variability must be large or we would not have had ice ages and warm periods.
For me, the interesting thing about the papers was the attempt to look at longer records. The attribution is just as questionable as the IPCC one.
“It seems to me that in fact the evidence is that “natural” variability must be large or we would not have had ice ages and warm periods. ”
I think ice ages and interglacial periods can mostly be explained by the precession of Earth’s orbit.
And recent cooling [tens of millions of years] by Antarctic location via plate tectonics.
I would guess we have increased “natural” variability due to cooler periods [ice ages and the very recent LIA].
I think one has amplification of cooling from year-long snow/glacial coverage in temperate zones. And also as glaciers melt there is additional warming, but with less glacial coverage at present it seems one would get less warming as the remaining glaciers melt.
It seems the most unexplained is the recent Little Ice Age. Was it merely from low sunspot activity, or were there other equally strong factors which caused this cooling?
That is why I applied the Ludecke DFA analysis to interglacial proxies. See the bottom of this thread for a synopsis.
No, the proxies and Hockey Stick are just one of several streams of research. There are multiple, independent streams of research being done because redundancy is important for making the case. The IPCC statements still stand without any reference to paleoclimatology.
Thank you for stating that plainly and clearly.
We clearly spar on many fronts, but I think this is a deal maker.
Occasionally analysts overlook the basic fact that natural oscillation trends are never simple sine waves. The Northern Hemisphere’s Atlantic area 100 year pattern, which you can see here:
with maxima at 1740, 1840 and 1940, is a result of cross-modulation, and there is no 100 year fundamental component. Such trends eventually are reversed, i.e. previous maxima are replaced by minima at 2040, 2140 and so forth.
Non technical, but I’ve got to say this is a superb display of the blogosphere at its best. As much as I dislike Professor Tol’s previous statements, I give him credit for taking the podium in an attempt to defend his position.
agreed, I am delighted the way this thread is evolving. Seems like we sometimes have to get the heat out of the way to get some light.
“we sometimes have to get the [heat] misdirected hot air out of the way to get some light.”
Fixed it for ya ;-) But, seriously … I admire your class, diplomacy (even while telling it like it is, as we used to say!) – and patience with us all!
And, sorry, but I’ve got to say it … My guess is that had Richard Tol sent you an E-mail along the lines of, “Judith, I don’t agree with the two papers you’ve just posted. Could I do a guest post on this?” Your instant reply would have been, “Absolutely! When can you get it to me?” (So I still don’t understand why anyone would choose a different path!)
When you are delighted you are always right. However, Tol seems downright reticent to me.
Any way you could get Ludecke back here to defend the papers? Tol’s argument is compelling, but still one-sided so far… would love to hear how they would counter his criticisms.
There is some thread jacking going on. It’s a good idea to show people what you mean by staying technical and staying on topic by engaging in some snipping. Over time people will learn that they cannot come on and thread jack a discussion about a paper into a discussion of their hobby horse. This will be vital if the other authors show up to discuss.
Problem is, with WordPress, there’s an art to snipping without destroying the threading. Every time she snips, the comments end up all over the place, and it’s a roll of the dice where the next one will end up.
Having thought about the problem a bit and done a small amount of simulation, I am not convinced that the Hurst formulation is particularly helpful.
As you say in your post, the model is one of a number of linear operators, each containing delays (using delay as a term to include “persistence”) that influence the time series. Assuming that the system is linear (which a summation of an exponential decay or integration implies), the impulse response of the system can be estimated from the ACF provided one either knows the inputs into the system or one makes an assumption about the bandwidth and distribution of “noise” driving the system. This leads to direct simulation under specific assumptions about the statistics of the driver of the system. There is no need to assume an analytical form of the ACF, which in any real system is likely to be complex; a simple model may underestimate its tail. (While it is possible to characterise the IR as a set of exponential decays, this is a numerically fraught procedure and is frustrated not only by the length of the data, but also because there is essentially only one temperature record.) From this, estimation of the distribution of trends is possible through simulation.
The most important feature of the LL and the LU papers is that they do attempt to quantify trends that arise from delays within the system. The problem being how to model, and estimate the values of, the delays and what assumptions are reasonable. I would agree that the estimate of century long delays is highly dubious, but there are still important questions that can be asked over a shorter time scale, if one can establish the distribution of shorter trends. The BEST study shows an increased rate of temperature rise from 1960 to 2000 and there appears to be a slowing, or cessation of temperature rise over the last 11 years. The variability of trends from a standpoint of a model containing delays is important in interpreting these observations.
My conclusion is that the methods and the analytical model in the LL and LU papers are contentious, but the question asked is extremely important. I would add that the delays are caused by physical processes, and analysis of these processes, and their individual magnitudes, should lead to more insight into the model.
I think you agree that the ACF as calculated from the data is not exactly the same as the impulse response, since we have a noisy sequence of impulses driving the real data. If we can make a delta function from all the forcing functions, then we would have an idea of what the real impulse response is.
So one can get close if we have knowledge of the forcing function, in which case we can try a convolution and try to fit an impulse response that would match the ACF. This may be a different way of phrasing what you stated.
Unfortunately, nowhere in these Ludecke papers are they addressing such a model-based approach.
Yes, with noisy data, the assumption is that the ACF multiplied by a step function at t=0 (because the ACF is an even function) is Rxy, and Rxx is nominally an impulse. With Gaussian noise, the power spectrum distribution is Chi2 with 2 DF at each harmonic, which enables one to get estimates of the cross power spectrum. It’s pretty hideous and I haven’t done it since a PhD course work exercise 35 years ago. The advantage of the ACF in this context is that if it is determined by direct integration, and effectively approaches zero, you can perform a DFT without having to window or detrend the time domain signal.
Re your last remark, I think there is a bit of a disconnect between statistical and “signal processing” treatments of time series. Delays in a system are a property of the phase spectrum, and I suspect that we should be looking at phase more closely in this problem.
I understood the point of LLE was that previous temp analyses could not ‘detect and attribute’ an external trend over a natural fluctuation. So, their approach to resolve that problem is to use DFA to estimate the natural fluctuation, then see what is left over in the original record that could be the external trend. If that is so, then the criticism of the (external?) trend removal by DFA is mute.
John Whitman, this is absolutely correct (though the word is moot, not mute). Richard S.J. Tol argues the authors cannot explain the warming because they removed it from the series, but the process used to remove it only removes that which does not (necessarily) need to be explained. It is akin to saying an analysis using monthly “averages” of temperature is wrong because it cannot explain why the individual values deviate from that average.
Not being able to explain something which needs no explanation is hardly a fault.
LLE ask what caused the trend? They then removed the trend, and analyzed the detrended data. The detrended data does not tell anything about the trend.
They certainly did not ask what caused the trend. They asked what the probability of the trend being non-natural was, and demonstrated by conclusions like:
Assigning a probability of warming being non-natural is not the same as asking what caused that warming.
That should obviously be “as demonstrated by.” Spellcheck can’t catch everything.
Sure, they can get any answer they want by comparing it against a limited time series.
Yet compare it against the kind of fluctuations one sees in ice core data and the probability drops down to a percentage point with it diving even more quickly if the recent warming continues.
I can show this either through DFA or a histogram-based cumulative probability analysis of ice core data.
So their statement should probably read that the warming trends have some naturally variable components but can’t be completely attributed to natural causes unless we are coincidentally at the 1% historical rate at which this could happen.
Bottom line, I think they used a limited data set to come up with questionable numbers and they then made extravagant claims to what it meant.
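The histogram-based cumulative probability idea mentioned above can be sketched in a few lines. To be clear, the series below is a synthetic AR(1) stand-in, not actual ice core data, and the window and thresholds are arbitrary; the point is only the shape of the calculation, not the percentages quoted for Vostok.

```python
import random

def change_exceedance(series, window, threshold):
    """Empirical probability that the change over `window` steps is >= `threshold`."""
    changes = [series[i + window] - series[i] for i in range(len(series) - window)]
    return sum(1 for c in changes if c >= threshold) / len(changes)

# Synthetic stand-in for a long proxy record: a strongly persistent AR(1) process.
rng = random.Random(11)
series, v = [], 0.0
for _ in range(50000):
    v = 0.95 * v + rng.gauss(0.0, 0.05)
    series.append(v)

p_small = change_exceedance(series, 100, 0.05)  # modest century-scale rise
p_big = change_exceedance(series, 100, 0.3)     # large century-scale rise
# Large excursions over a fixed window should be much rarer than small ones.
```

On a real proxy record the same two-line function, applied at the century window, gives the kind of exceedance percentages discussed above.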
Normally I prefer to focus on one issue at time, starting with the simplest. In this case, if we cannot agree about what question the authors asked, I find it unlikely we can agree about the validity of their answer. However, conversations on blogs are intermittent, so it may help if I don’t wait for an answer to my last point before raising the next.
You claim “[The authors] then removed the trend, and analyzed the detrended data.” I dispute this. Detrended fluctuation analysis (DFA) was used to find the “Hurst coefficient,” not to detrend data. Because of this, the data analyzed did not have its trend removed. Instead, the data was preserved, and the results of DFA were used in the calculation of exceedance probabilities.
My position directly contradicts yours. If it is right, a major criticism you have raised against these papers is bogus. If it is wrong, you need to demonstrate such. You cannot simply rely on the fact the word “detrended” exists in the name of the methodology to convince people the analyzed data was actually detrended.
It is a curious thing here. http://polymer.bu.edu/hes/articles/hiccs01.pdf makes it clear that “These results indicate that a linear trend has no effect on the scaling obtained from DFA-2. The reason for this is that by design the DFA-2 method filters out linear trends ….”, and LLE talk consistently about using detrended data in their comparisons, but you do wonder why they approached it this way (and even if they did).
In LLE’s words
The detrended fluctuation analysis of order v, the DFAv, can be assessed as an improved FA which automatically ELIMINATES trends of the polynomial order v from the profile Yj
Richard S.J. Tol, I wish you would take the time to make more substantive responses. This one of yours is completely bogus. You highlight this text:
You’ve simply taken this out of context. The profile referred to in this quote is one “segment” of the total series. A polynomial fit (of order based upon the value of v) is generated for the profile. The variance from that fit within the profile represents the (squared) fluctuation value of the segment. The reason you “eliminate” trends in this process is so you can find the variance of the profile without being hampered by the fact that the series is non-stationary (which is what makes normal fluctuation analysis invalid).
This calculated variance (amongst the various segments) is then used to calculate the “Hurst coefficient.” The Hurst coefficient is then used in calculations of the exceedance probability, calculations which use the actual data series, not a detrended one.
Put bluntly, you’ve simply taken text from a standard description of detrended fluctuation analysis out of context to claim DFA does something it doesn’t do. Even the most basic understanding of DFA is enough to show your “interpretation” makes no sense.
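The segment-fit-and-fluctuation procedure described above is easy to check numerically. Below is a minimal, generic DFA-1 sketch (linear fits within segments; the papers use DFA2, i.e. quadratic fits), using only the Python standard library; the scales and the white-noise test series are illustrative, not from the papers.

```python
import math
import random

def dfa_alpha(x, scales):
    """Crude DFA-1: linear detrending within segments, alpha from the log-log slope."""
    # 1. Cumulative profile of the mean-subtracted series.
    mean = sum(x) / len(x)
    profile, run = [], 0.0
    for v in x:
        run += v - mean
        profile.append(run)
    log_n, log_f = [], []
    for n in scales:
        # 2. Fit a straight line in each non-overlapping segment of length n
        #    and accumulate squared residuals ("eliminating" the local trend).
        sq, count = 0.0, 0
        t_mean = (n - 1) / 2.0
        sxx = sum((t - t_mean) ** 2 for t in range(n))
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            s_mean = sum(seg) / n
            slope = sum((t - t_mean) * (s - s_mean)
                        for t, s in enumerate(seg)) / sxx
            intercept = s_mean - slope * t_mean
            sq += sum((s - (intercept + slope * t)) ** 2
                      for t, s in enumerate(seg))
            count += n
        # 3. F(n) is the RMS fluctuation about the local fits.
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sq / count))
    # 4. alpha is the slope of log F(n) against log n.
    n_mean = sum(log_n) / len(log_n)
    f_mean = sum(log_f) / len(log_f)
    return (sum((a - n_mean) * (b - f_mean) for a, b in zip(log_n, log_f))
            / sum((a - n_mean) ** 2 for a in log_n))

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128])
# Uncorrelated noise should give alpha near 0.5; persistent series give alpha > 0.5.
```

Note that nothing here detrends the data that gets analyzed afterwards: the local fits exist only to make the fluctuation function well defined on a non-stationary profile, and the output is a single scaling exponent.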
You seem to have missed the point: this is the way randomization tests and some Monte Carlo tests work (e.g. bootstrapping). Eliminate the difference between the groups (or “trend” in this case), estimate the random variation from the residuals (random sample paths in this case), and sample repeatedly from the random variation to see whether the “apparent trend” can occur without there being a “real trend” behind it. If the “apparent trend” occurs frequently when there is no “real trend”, then there is no evidence from the “apparent trend” that there is a “real trend”. That’s what Lüdecke did.
Lüdecke performed a statistical test of the null hypothesis that the “trend” could appear in the data without a “real trend” generating it. They showed that the null hypothesis cannot be rejected: indeed, the data are really close to what is expected if the null hypothesis is true.
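The randomization logic just described can be sketched as a small Monte Carlo test. AR(1) noise stands in here for the persistence model actually used in the papers, and the slope, phi, and sigma values are hypothetical, chosen only to make the sketch run.

```python
import random

def ols_slope(x):
    """Ordinary least-squares trend of a series against its time index."""
    n = len(x)
    t_mean = (n - 1) / 2.0
    x_mean = sum(x) / n
    num = sum((t - t_mean) * (v - x_mean) for t, v in enumerate(x))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def exceedance_prob(observed_slope, n_years, phi, sigma, trials, rng):
    """Fraction of persistent-noise sample paths whose |trend| >= the observed trend."""
    hits = 0
    for _ in range(trials):
        path, v = [], 0.0
        for _ in range(n_years):
            v = phi * v + rng.gauss(0.0, sigma)  # AR(1) "persistence"
            path.append(v)
        if abs(ols_slope(path)) >= observed_slope:
            hits += 1
    return hits / trials

rng = random.Random(0)
# Hypothetical: a 0.004 K/yr trend over 100 years against moderately persistent noise.
p = exceedance_prob(0.004, 100, 0.6, 0.25, 2000, rng)
# If p is not small, the "apparent trend" is consistent with no "real trend".
```

Swapping the AR(1) generator for a long-memory one is the only structural change needed to mimic the papers' test.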
There you go Brandon. The real statistician has stepped up and confirmed your suspicions. The famous CS’s statistical abilities do not qualify him to criticize these papers. And he still owes Judith an apology.
This only works well if the trend is a smaller perturbation on a larger dynamic range. If the entirety of the natural signal also happened to have the same period as the nuisance trend, then by removing the trend you have also removed the natural signal. I say this because Ludecke is claiming that 40% to 90% of the trend is made up of natural variations.
For the German data set, they are only dealing with 18 years of records (see Fig.2 in LL) and the monthly data is really useless as that has a seasonal trend built into it. So they have from 1 year to 18 years to discern a bootstrapped sample from, while the trend can be anywhere from 1 year to 18 years. Where are they getting the bootstrapped sample set from if they have detrended the important natural variations and have really no temporal dynamic range to deal with?
That’s why I have been using the Vostok data which has several hundred thousand years available to work with. The probabilities then go down to 3% for the chance possibility of a 0.7 degree change per century as a natural variation.
The fact that I am using an ice core proxy for temperature is a different issue. I appreciate the chance to show how the problem should be formulated.
About the second paper:
Having removed the trend from their data, LLE cannot answer the question: What caused the warming? They eliminated from their analysis the very thing in which they are interested.
That is not much of a problem. They eliminated the apparent trend in order to estimate the background “noise” or “residual” process (which they label as “natural”). With their estimate of the noise process, they compute some summary statistics, principally the autocorrelation function and the Hurst exponent of a particular model of the autocorrelation function. With that as a model for the noise, they simulated realizations of the noise, and then showed that even pure noise produces apparent signals a non-negligible number of times. It’s a straightforward test of the hypothesis that the apparent trend could occur in a process with no real trend, and the result is in line with the null hypothesis.
There are other flaws. You need a lot of data to reliably estimate the Hurst exponent. 100 years of data is not enough.
This is applicable to every attempt to test the null hypothesis that has yet been attempted. No adequate model for natural variation (independent of CO2 or anything in particular) has yet been presented.
For this case, I would recommend a procedure similar to their hypothesis test. Draw 10,000 samples of length 1200 months from the estimated noise distribution, and use that to get a confidence interval for the Hurst exponent. The confidence interval is bound to be wide. Sample 100 values of the Hurst exponent from within this confidence interval, and repeat the hypothesis test above. That will give you a range of probabilities of rejection/acceptance, and you can, if you wish, weight them by the distribution of the obtained Hurst exponents. That will give you a better idea of how reliable that result is. Now that Lüdecke et al have published their papers, it is obvious why this should be done next.
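The variability of the exponent estimate that this procedure is meant to expose can be illustrated with a parametric bootstrap. The crude single-scale rescaled-range (R/S) estimator below is only a stand-in for a DFA-based estimate, applied to 1200-point white-noise paths where the true exponent is 0.5; the spread of the estimates is the thing to look at.

```python
import math
import random
import statistics

def rs_hurst(x):
    """Very crude single-scale rescaled-range (R/S) estimate of the Hurst exponent."""
    n = len(x)
    m = sum(x) / n
    run, lo, hi = 0.0, 0.0, 0.0
    for v in x:
        run += v - m          # cumulative deviation from the mean
        lo, hi = min(lo, run), max(hi, run)
    return math.log((hi - lo) / statistics.pstdev(x)) / math.log(n)

rng = random.Random(42)
# 200 independent 1200-point white-noise sample paths (true H = 0.5).
estimates = sorted(rs_hurst([rng.gauss(0.0, 1.0) for _ in range(1200)])
                   for _ in range(200))
ci_low, ci_high = estimates[4], estimates[194]  # rough 95% interval
# Even with the generating process known, single-path estimates scatter noticeably.
```

With only one observed sample path (the 20th century record), the true exponent could sit anywhere in an interval at least this wide.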
A few weeks or months ago Santer et al published simulations showing that a 17 year record was necessary in order to have a reasonable chance of detecting whether a change in the climate oscillation had occurred. Their method of devising a null distribution has even less hope of producing an accurate result than Lüdecke et al, wouldn’t you agree? They used none of the information available in the records with respect to natural variations.
We do not learn anything.
I think that depends on who “we” are. With the Lüdecke papers as guides, we can follow up your critiques with other analyses, and see to what degree the inaccuracies in the model and data pervert the actual result. The Padilla et al paper that Dr. Curry linked used an Ornstein-Uhlenbeck model for the natural variation; using a stochastic dynamical model that produces sample paths with oscillations like those that have been observed might give a better null (“natural”) model for the Lüdecke instead of relying on the sample estimate of the Hurst exponent, especially since the GCMs produce trajectories with apparent oscillations with 40-70 year periods.
These papers should be elaborated upon and improved upon, not ignored.
What caused the warming?
Answer, whatever caused the oscillations that occurred before anthropogenic CO2.
I absolutely agree with your first point about showing the trends using a reasonable noise model. This seems to be a reasonably straightforward Monte Carlo approach that is widely accepted.
The exact model of the delays, lags, or persistence is less important than the idea that they will produce trends. Much of the argument has been on the minutiae of the Hurst model, rather than the main idea, which is to get an estimate of the distribution of the trends. Their modelling shows that the recent fluctuations (post 1970) lie well within their limits, and this needs to be resolved.
As regards the length of time required to get the Hurst exponent, I’m slightly confused by this, having only dimly heard of it before. If the ACF becomes negligible at a certain point, it can be truncated and the impulse response of the system can be calculated. If the IR has decayed to being negligible at, say, 20 years, which corresponds to a 30 year truncation of the ACF, it can still produce trends that last 50 years (p~0.2) and 100 years (p~0.1) when driven by broadband Gaussian noise. The results don’t seem to be grossly sensitive to the analytical form of the IR, i.e. multi-exponential, diffusional. The idea that you need an ACF that is as long as the trend it produces is just wrong.
I guess an important approach is to flesh out the model of the processes underlying the various delays physically, so that a better model can be built with some idea of its parameters, rather than a possibly arbitrary model from a single ACF (which is, I think, your point in your last paragraph). The variability should also, I think, be thought of in terms of the IR of the system, because it gives better physical intuition (or at least it does to me).
“I guess an important approach is to flesh out the model of the processes underlying the various delays physically, so that a better model can be built with some idea of its parameters, rather than a possibly arbitrary model from a single ACF (which is, I think, your point in your last paragraph).”
That is a fairly complex fluid dynamics problem which would require a different model approach which may not provide enough accuracy to help, but it is possible to improve understanding somewhat. What the heck, there are billions of dollars worth of data points and several basic approaches that should converge.
As regards the length of time required to get the Hurst exponent, I’m slightly confused by this, having only dimly heard of it before.
I believe that this is only part of Dr. Tol’s objection. Lüdecke et al are testing a hypothesis about 100 year sample paths: could a sample path such as we observed likely have arisen naturally (independently of CO2)? Treating the 20th century record as 1 such sample path, we would really like to have some more sample paths of 100 years in order to estimate the natural variability of such sample paths, and Dr. Tol says that 10 such sample paths would be a reasonable minimum. Each sample path will give its own estimate of the Hurst coefficient, and without many sample paths you can’t tell whether the obtained Hurst coefficient is near the mean or far deviant from the mean of the population of relevant Hurst coefficients. On the previous thread I wrote in parentheses that he has a good case.
Bootstrapping the Hurst exponents is a poor substitute, but I do believe that Lüdecke et al have done the best job so far at creating a reasonable distribution of sample paths against which to test the hypothesis that recent warming is not unnatural. I have not yet seen anything this good that rejects the hypothesis of natural variation.
If the autocorrelation function declined to 0 after 20 years, you might think of the 100 years as 5 correlated samples (the second and fourth 20 year periods are not independent of the first, third and fifth, but the first, third and fifth are independent of each other). That’s better than 1. So you are right about the dependence on the size of the exponent. But the 20th century records are not really like that. The record shows big temperature swings, with periods approaching 50 years.
Stephen Schwartz of Brookhaven gave a talk at the recent Santa Fe conference that Dr. Curry told us of in which he did explicitly model an impulse response function for the climate mean temperature. I saw his name on the conference schedule and asked him for it, and he sent it to me. I expect from your comment that you would be interested in it.
Thanks for your reply.
I am beginning to see the problem more clearly. If one is going to account for a 50 year period, one will have to have at least a 50 year ACF to resolve it (since one needs to resolve the power spectrum at that frequency), although this should be possible in theory with a 100 year plus record.
I’m beginning to see how one might approach the problem of trends in the record from a spectral perspective. Shape in a signal is dependent on its phase spectrum. For example, the estimated amplitude spectrum of noise is flat with frequency, as is also the amplitude spectrum of an impulse. The difference between them is determined by an impulse having a zero phase spectrum (if it is at t=0), while noise has a random phase spectrum. Similarly, cross-correlation is determined by the ratios of the power spectra of the signals and their cross-phase spectrum. Here we are thinking about the delays in the system contributing to the shape of the record. This is dependent on the phase spectrum of the system response. In an arm waving way, a pure delay is a linear phase shift with frequency, and in a first order system the phase is asymptotic to -Pi/2. Phase information is recoverable from the ACF, although its power spectrum is wholly real, because the ACF, formally, is symmetrical about t=0, and if you multiply the ACF by a unit step at t=0, it becomes causal and the imaginary part of the spectrum is recoverable (Hilbert).
A trend is a component of a triangular wave, which has components at 1, 3, 5, … times the fundamental period; a step is a component of a square wave with the same harmonics. Whether it is a step or a square wave depends on their relative amplitudes. The key point is that they have zero phases.
Thus if we are going to consider how trends occur, we are going to have to look at how the phases of multiples of the trend frequency become aggregated. If we are driving the system with Gaussian noise, the distribution of each harmonic of the power spectrum is a negative exponential (because it is a chi squared variable with 2 degrees of freedom), and therefore I think that one can compute the probabilities of phases at harmonics n, 3n, 5n being within certain limits with respect to one another and so producing a trend. This seems to be quite attractive, because it may go from the ACF to trends more directly.
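The impulse-versus-noise point about amplitude and phase spectra above can be verified directly with a naive DFT (standard library only; quadratic cost, but fine for a short illustrative series).

```python
import cmath
import math
import random

def dft(x):
    """Naive discrete Fourier transform; O(n^2) but fine for short series."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

n = 64
impulse = [1.0] + [0.0] * (n - 1)       # unit impulse at t = 0
rng = random.Random(3)
noise = [rng.gauss(0.0, 1.0) for _ in range(n)]

impulse_amps = [abs(c) for c in dft(impulse)]
impulse_phases = [cmath.phase(c) for c in dft(impulse)]
noise_phases = [cmath.phase(c) for c in dft(noise)[1:]]
# The impulse has an exactly flat amplitude spectrum and zero phase at every
# harmonic; white noise is flat only on average, with scattered random phases.
```

The same machinery, applied to the system response rather than to the driver, is what the phase-aggregation argument above would operate on.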
I agree that the LLE analysis is at least superior to the Santer analysis. Yet the latter has become doctrine while the former is supposedly not worth publishing. Go figure.
I would like to hear Tol’s take on Santer et al. By the way, Richard Tol, you have redeemed yourself on this thread, in my mind anyway. Everyone sometimes says things off the cuff that are not quite right. I would take as a lesson from this that Twitter is very dangerous to your reputation!!
They detrended using the DFA but never said what the value of “a” was from the detrending. This is the DFA calculation:
The term “a” is the trend which gets removed.
Tol states that they go through all the detrending and then throw away this information, which is why he calls it a tautological exercise. In other words, they remove the exact thing that is the most interesting!
But it sounds like you suggest that the trend can pop back in just from random-walk statistics, which is described from the natural fluctuations.
To show a significance level to this conjecture, I would imagine that you would need to do many Monte Carlo calculations or better yet an analytical argument to find out what the likelihood of that same trend is. With a random walk, it may happen but some other trend is equally likely. So what is the statistical probability that this will happen for a random walk with power law of 1/2?
WebHubTelescope, I cannot find the equation you listed anywhere in either paper. I’ve checked every numbered equation, and none match it. Could you point to where I can find it?
Because they didn’t. You have to go elsewhere to find out what the heck they are doing mathematically. Search on Detrended Fluctuation Analysis.
I forgot to add, just go to the Wikipedia page for Detrended Fluctuation Analysis.
Actually, they remove the trend a1*j + a2*j^2.
They indeed then argue that the trend can pop back in from random fluctuations plus persistence.
It can, of course, but the probability is not very high; and a downwards trend is as likely as an upwards one.
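The symmetry claim is simple to check by simulation: for a driftless random walk (the power law of 1/2 mentioned above), an upward OLS trend over the sample should be exactly as likely as a downward one. The walk length and trial count below are arbitrary.

```python
import random

def walk_trend_up_fraction(n_steps, trials, rng):
    """Fraction of driftless Gaussian random walks with a positive OLS trend."""
    t_mean = (n_steps - 1) / 2.0
    up = 0
    for _ in range(trials):
        walk, pos = [], 0.0
        for _ in range(n_steps):
            pos += rng.gauss(0.0, 1.0)
            walk.append(pos)
        w_mean = sum(walk) / n_steps
        slope_num = sum((t - t_mean) * (w - w_mean) for t, w in enumerate(walk))
        if slope_num > 0.0:  # denominator is positive, so its sign is the trend's sign
            up += 1
    return up / trials

rng = random.Random(7)
p_up = walk_trend_up_fraction(120, 1000, rng)
# By symmetry p_up should sit near 0.5: an upward trend is as likely as a downward one.
```

Matching a trend of a given magnitude, by contrast, is a question about the tail of the slope distribution, not its symmetry.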
I now most definitely agree and said essentially the same thing right above. How many Monte Carlo trials from a random walk experiment would you think it would take to duplicate that specific trend that got removed?
Notice also the trend is upward concave with a positive a2 term, just like an accelerating warming!
How many Monte Carlo trials from a random walk experiment would you think it would take to duplicate that specific trend that got removed?
The exact trend has 0 probability of being realized.
Ah yes, I referred to it as a specific trend and probably deserved that response.
OK, rephrase the question, what is the probability of matching or exceeding that trend? That answer won’t be zero.
I was wrong: DFA2 removes a quadratic trend from the cumulative record and a linear trend from the original record.
It can, of course, but the probability is not very high; and a downwards trend is as likely as an upwards one.
On what evidence do you assert that the probability is not very high? Their simulations show that the probability is indeed high. Unless you have better simulations, the evidence supports them.
At this point, I retract my earlier comment that you seemed to have missed the point about randomization and bootstrap methods.
If we use historical proxy numbers going way back in time, the probability is very low that we would have this kind of temperature swing in 100 years. The historical numbers are such that it is equally likely to make an up or down standard deviation excursion of 0.2 degrees.
I did this chart a few weeks ago mapping out the cumulative changes for Vostok data:
Look at where a +1 degree temperature per century change occurs on the cumulative. No denying that this is a rare event. Whether it can occur just by chance, sure, no doubt it could, but it is not a very high probability in contrast to what Ludecke imply.
The up or down 0.2 seems very likely, but that doesn’t preclude a rare excursion of 0.4, does it? From the thermo and the synchronization of natural internal variability with a prolonged solar minimum, I am estimating 0.2 to 0.4 with an AMO switch to negative. Which is roughly a 50/50 CO2/natural split at this current CO2 concentration.
It is really amusing that you believe that you can jump to those conclusions using only the Vostok compressed ice mess data as your springboard. You should at least throw a couple of bristlecones and an upside-down sediment into the mix. Those are not historical numbers, telescope. They are guesses.
A jump of at least 0.4 degrees per century would have a 10% chance of occurring according to that chart, and a jump of at least 0.7 only a 3% chance. It starts to fall off very rapidly after that.
Looking at it, these are small numbers that plausibly could occur just by chance. This is perhaps why climate scientists don’t like to look at them too closely, as they hover around that 5% significance level, which makes it a tough statistical call.
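As a sanity check on how fast such a tail falls off, here is a toy Monte Carlo (my own sketch, assuming century-scale changes are Gaussian with the 0.2 degree standard deviation quoted above; the empirical Vostok distribution is fatter-tailed, so its tail probabilities would sit somewhat higher than these):

```python
# Toy Monte Carlo of century-scale temperature excursions, under the
# ASSUMED model: change per century ~ Normal(0, 0.2 degC).
import numpy as np

rng = np.random.default_rng(42)
changes = 0.2 * rng.standard_normal(1_000_000)   # simulated century changes

for threshold in (0.2, 0.4, 0.7):
    p = np.mean(changes >= threshold)            # one-sided exceedance
    print(f"P(change >= +{threshold} degC/century) ~ {p:.4f}")
```

Under this Gaussian assumption an excursion of at least +0.4 comes out at roughly a couple of percent and +0.7 is far rarer, which is consistent with the point that these probabilities hover near conventional significance levels.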
Web, it is not so much the jumps. The Planck response is good at dampening jumps, and the ocean thermal coefficient is good at dampening drops, but neither is so good at dampening gradual drops over approximately 5 years.
In 1999, for example, average surface temperature dropped 0.26K, the upper end of your range. That is roughly a 1.5Wm-2 decrease over that year, or ~2.5e22 Joules in a year’s time, which was dampened and stabilized in a normal pattern.
Over 5 years for example, that could be approximately a 50% greater heat loss (~.75Wm-2 average over five years) with a much smaller dampening constant. That is a lot of ocean energy potentially released in a prolonged solar minimum without manufacturing any fancy mechanisms.
Kinda what the ARGO data was indicating before the recent solar bump.
OK, I didn’t mean to say jumps then, I just used jump as a shorthand to define positive changes of at least X amount in 100 years. And a drop would be a negative change of at least X amount in 100 years.
It would be interesting to get Steve McIntyre’s take on this. Judith?
William Briggs posted Ludecke’s essay also, with no comments from him. I find that rather surprising.
Do you think that will take a tol on his reputation?
This is interesting. Appreciate your e-salon for encouraging these kinds of dialogues.
Judith, I suggest you allow a response guest post by ‘Lüdecke et al’.
It would be very enlightening.
The BEST data show approximately 0.35°C of warming over the past three decades, as do all the other datasets, so regardless of whether the global warming ended after 1998 or 2002, there is still warming concurrent with the 57.1% increase in CO2 emissions from fossil fuels over the 31 years when this warming took place.
The two sides of the climate change debate are represented by those who follow AGW orthodoxy and claim, without any actual physical proof, that the increase in CO2 concentration from 336.78ppmv in 1979 to 389.78ppmv in 2010 is the prime causative agent for this observed 0.35°C temperature increase; and those who look at physical evidence, which tells them that the increase in global temperature is primarily natural, resulting from an increase in solar energy reaching the Earth’s surface.
According to global warming orthodoxy the increased atmospheric insulation caused by increased CO2 concentration reduces outgoing energy and the retention of this energy causes the surface to warm. As the surface warms it radiates more energy, eventually reaching a temperature at which the outgoing longwave radiation is equal to the net incoming solar radiation (incoming energy minus albedo).
If there is no significant change to net incoming solar energy, then the OLR will not have changed and will be more or less the same as it was in 1979 before the 0.35°C temperature increase, and according to the climate models the 5.35ln(389.78/336.78) = 0.78watts/m^2 of CO2 forcing is primarily responsible for this observed 0.35°C increase.
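The arithmetic in that expression does check out; a quick verification using the standard simplified CO2 forcing formula ΔF = 5.35 ln(C/C0) W/m²:

```python
# Check the CO2 forcing arithmetic quoted above:
# dF = 5.35 * ln(C/C0), the standard simplified expression.
import math

C0, C = 336.78, 389.78          # ppmv CO2, 1979 and 2010
dF = 5.35 * math.log(C / C0)    # forcing in W/m^2
print(round(dF, 2))             # 0.78, as stated in the comment
```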
If, on the other hand, the scientists who base their work on data instead of just published (peer reviewed) conjecture are correct, and the observed 0.35°C of warming is primarily due to increased net incoming solar radiation, then the outgoing energy will have to increase by the same amount.
Lüdecke vs. Tol is easily settled by 31 years of OLR measurement, which shows outgoing energy (OLR) increasing by approximately 2watts/m^2 from 1979 to 2010, proving conclusively that it is increased incoming energy from the sun, and not increased CO2 concentration, that is primarily responsible for the observed 0.35°C increase in global temperature since 1979.
Sun “1”; CO2 “0”
Lüdecke “1”; Tol “0”
Norm Kalmanovitch, do you have a reference for the increase in OLR?
This has been gone through before. It is only in the IR window region and is a direct consequence of surface warming. The rest of the IR decreases to compensate due to increasing GHGs. It is, of course, impossible for total OLR to be increasing over a long period.
“A paper published today in the Journal of Geophysical Research notes observations of the solar energy at the European earth surface significantly increased ~ 3.4 W/m2 per decade during the period 1973-1998. That would be a total of 8.5 W/m2 over the 25 year period. ”
Wild/BSRN has 0.5W/m2/year increases
More sunshine = more OLR
Remember that TSI is tied to solar cycles with an approximate 11 year period so it is dangerous to make assumptions about incoming solar radiation based on short time frames.
The data that you need is the long term trend that would relate the warming since the Little Ice Age to increased energy from the sun.
On a day-to-day basis the solar flux can vary by over 3W/m^2, so some sort of reasonable average is necessary to determine a pattern.
“It is, of course, impossible for total OLR to be increasing over a long period.”
That’s a false statement. It is for sure possible for OLR to be increasing over long periods of time. I think you must have omitted an implied assumption.
OLR responds to forcing by the solar flux. It can only increase in response to changes in net solar flux, but in the last 50-60 years, the change has been small, and in both directions, while the temperature rose.
Bright sunshine is up by a significant amount: 7-8% in the UK from 1929 on. We are talking 7W/m^2 or more. That’s 3 times the amount CO2 supposedly causes.
I know warmenizers stick their fingers in their ears and close their eyes and hum whenever anyone points this out, but it is tiresome.
It would certainly make things worse if the brightening and the CO2 GHG effect occurred together and compounded the warming. I think that is one of the contributing factors to some of the CAGW theories.
1. temperature change has also been slight.
2. net insolation at the surface of earth (for example, modulation by clouds) has not been well-studied, as far as I can tell. Some other people on this thread have been claiming that there are net increases in surface insolation. We shall have to follow up their claims.
MattStat, “It is, of course, impossible for total OLR to be increasing over a long period.
That’s a false statement. It is for sure possible for OLR to be increasing over long periods of time. I think you must have omitted an implied assumption.”
The total OLR change would be small and of course associated with the solar cycle and albedo changes, Ein ~ Eout. Not a lot of information in that. What is interesting is that regional changes in OLR for longer periods are possible and tend to follow internal natural variability. When more OLR is measured in the Arctic, that is an indication of more surface cooling, because the OLR has a more open window to space. The source of the OLR still has to migrate to the high northern latitudes, via oscillations.
What is more interesting is that the Antarctic is very stable with OLR because of its stable temperature and low atmospheric moisture as measured at the TOA but very chaotic in terms of OLR at the surface. In my opinion, it is one of the keys to understanding changes in ocean heat content.
There is a graph of OLR at http://www.climate4you.com/
Click on the heading Global Temperatures third from the top on the left side.
In the list of contents fifth from the bottom is Outgoing Longwave Radiation Global and several figures down is a plot of OLR since 1974 and further down is a plot of NCDC temperature and OLR since 1979.
There is also a link to the datasource to download the data.
These are not point values but continuous data from 1979 to 2011.
The OLR shows the annual 10W/m^2 swing in response to the annual seasonal variation in absolute global temperature, and the 37 month moving average definitely shows a trend of increasing OLR of approximately 2W/m^2 from 1979 to 2010.
If you think this is inaccurate, compare the energy balance of Trenberth 1997 and the revision in 2008. OLR changes from 235W/m^2 in the 1997 version to 238.5W/m^2 in the 2008 revision. Albedo also changes from 107W/m^2 in the 1997 version to 101.9W/m^2 in the 2008 version, indicating that none of these numbers were actually based on measured data. The OLR measurements, regardless of how inaccurate you say they are, are daily measurements taken over a 31 year period, and when this many values are smoothed with a 37 month moving average, the 2W/m^2 trend is far more valid than numbers estimated by Trenberth for his energy balance diagram.
Unless you know what you are talking about don’t criticize data that you have obviously never seen or even been aware of.
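For readers following the 37-month moving average point, here is a small sketch on synthetic monthly data (the numbers are illustrative assumptions, not the actual OLR series): the averaging window spans just over three annual cycles, so the seasonal swing cancels almost exactly and only the slow trend survives.

```python
# Sketch of a 37-month centered moving average on SYNTHETIC monthly
# data: a ~10 W/m^2 peak-to-peak seasonal cycle plus a 2 W/m^2 trend.
import numpy as np

months = np.arange(31 * 12)                       # 1979-2010, monthly
seasonal = 5.0 * np.sin(2 * np.pi * months / 12)  # annual cycle
trend = 2.0 * months / len(months)                # 2 W/m^2 over the record
series = 235.0 + seasonal + trend

window = 37
kernel = np.ones(window) / window
smooth = np.convolve(series, kernel, mode="valid")  # 37-month mean

# The smoother kills the seasonal cycle; most of the imposed trend
# remains (the window trims 18 months off each end of the record).
print(round(smooth[-1] - smooth[0], 1))
```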
Norm, “Unless you know what you are talking about don’t criticize data that you have obviously never seen or even been aware of.”
I can be full of surprises. First, with the temperature and pressure measurements it is much easier to calculate net flux values between atmospheric layers. Direct measurements are hampered by Minimum Local Emissivity Variance, MLEV, primarily due to clouds and water vapor near saturation. TOA flux is fairly simple to measure and accurate, with a fairly solid total emissivity of ~0.61-0.64. Surface emissivity can vary from 0.825-ish to 0.71-ish. The ishes make life difficult.
Since TOA flux should remain fairly constant, that E in ~ E out thing, it is not all that helpful without resolution of very small changes in spectral lines. We live on the surface, which is where we would really like to know how things are turning out.
If you look at the site you linked, the Antarctic shows a yellow spot. If you can find a link to an Antarctic Energy budget paper, which I have unfortunately misplaced, there are some rather odd flux measurements around that yellow spot. I will see if I can dig that up for you, but it is very interesting.
I did not see on the site you linked a “direct” measurement of surface flux averaged globally with error bars, which I mentioned would be a bit of a challenge. Calculated values, I can do, fairly accurately BTW.
Norm, this is one paper I am going through. http://gmao.gsfc.nasa.gov/pubs/docs/Cullather446.pdf
There are a few paywalled papers on MLEV I haven’t gotten to see so far. One paper assumes the errors are due to one thing, the next due to another. I haven’t seen anything conclusive, but since the Antarctic is not playing by the rules, something odd is going on.
Got it. Thanks.
Norm, if you read the caption, the wavelengths they use are in the window region, and they use a method to extrapolate to give a total. However, the window region behaves differently from the other areas when GHGs change, and since they refer to methods from 1979, they are not taking that into account.
Norm, with a 0.35K increase in surface temperature, the average OLR flux value would increase by 2Wm-2. The instruments used to measure that 2Wm-2 are accurate to approximately +/-7Wm-2 with known humidity correction. This Dr. Curry person noted that the Arctic flux values appeared to be off a bit, prompting other researchers to determine that flux values in the Arctic were off by 1 RU, equivalent to approximately 20K. I would love to see the statistical methods used to confirm 2Wm-2 with potential inaccuracies in excess of 80Wm-2.
1) “This is an important question: Why has the world warmed?”
There are two meanings to the word warmed: one is an increase in temperature and the other is an increase in heat content.
Which meaning do you have in mind?
2) “Having removed the trend from their data, LLE cannot answer the question: What caused the warming?”
Is this the question they are posing? I believe the question they pose is: is the second half of the 20th century as ‘wobbly’ as the first half? Is the temperature record as ‘wobbly’ all the way throughout? Exactly how you define ‘wobbliness’ isn’t really important. What is important is the likelihood of observing the 7th wave.
3) “Unfortunately, fluctuation analysis does not work on trending variables”
Then surely this system of analysis cannot be used on any dataset that consists of average values? ‘Average’ temperature is about as close to detrended as can be, consisting of a worldwide average of (Tmax+Tmin)/2.
Why, mathematically, can’t detrended data be used? Have you a simulation that shows it is nonsense? A mathematical proof?
4) “Oddly, Lüdecke omits carbon dioxide”
This is surely what we in the biological field call complete bollocks. One does not, a priori, indulge in witchcraft. The warming of the Earth could be due to a huge number of different processes.
If you want to test the CO2 effect, then altitude may be the way to do it.
We know that things cool faster and reach a lower Tmin at altitude. The higher you go, the faster the temperature drops at dusk.
However, CO2 is claimed to warm the Earth’s surface by reradiation of IR, a radiative process that is a function of the atmosphere. The atmosphere is very deep; the difference between sea level and 1,000 feet is trivial. Therefore, as CO2 increases there should be a bigger impact on the INCREASE in Tmin at altitude, when compared with sea level, whereas the effect of height above sea level on Tmax would be much smaller.
Has anyone found that the rise in Tmin at altitude over the last 60 years is greater than the rise at sea level?
Warming before 1940 accounts for 70% of the warming that took place after the Little Ice Age, which ended in 1850. However, only 15% of the greenhouse gases that global warming alarmists ascribe to human emissions came before 1940.
Real world observation tells us that there essentially has been no significant global warming in the US since the 1940s. Globally, what warming we have seen had to be ferreted out of the temperature records from the coldest and most inhospitable regions on Earth—e.g., the dry air of the Arctic or Siberia where going from a -50 °C to a -40 °C at one small spot on the globe is extrapolated across tens of thousands of miles and then branded as global warming.
For global warming after 1940 it is pretty clear that it must be from solar activity, because solar activity was inordinately high. And now we are in a period where the sun is anomalously quiet and we see a global cooling trend.
Actually, no, the Sun is not anomalously quiet. It’s very active at present.
I think you are referring to the solar minimum between the previous active phase and the current active phase, which was a little longer than usual, but is now finished.
Are we all missing something here? It looks like 1930 and heading down–>
Chris is right. The sun is very active at present.
Compared to what the maunder minimum?
Compared to you saying “are we missing something here.” Yes, you were.
Chris said the sun is very active now, and the linked chart shows he’s right. Apparently, you weren’t aware the sun is very active now, and asked if you were missing something.
So as the canopy fails to deploy and the parachutist plummets to Earth like a roman candle, you’re saying, e.g., yeah, but he’s very active at present and I predict a speedy recovery?
Wagathon, my apologies; I had thought you were looking at a shorter time scale, corresponding to the scales on which people have alleged a “pause” in warming.
Yes, the solar maximums have been reducing in recent decades. Although the Sun is certainly ramping up in activity at present, and is now quite active and putting out some huge storms, this is active by comparison with the solar minimum that occurs between the regular maximums. We are currently heading up to a solar maximum.
From your graph, I now see that you are looking on a much longer scale, and I agree that the solar maximum we are heading for is likely to be somewhat less active than the norm for a solar maximum over the last hundred years or so.
However, this really only underlines the conventional science of solar forcings. The activity of the Sun has an effect, but the variations that exist are smaller in magnitude than the anthropogenic forcings. After all… your graph shows that the mean activity peaked around 1958, and then almost up to a secondary peak around 1988, and has been falling since then. Temperatures on that scale, however, have been increasing strongly. So your graph is actually part of a strong body of evidence that solar activity cannot possibly be driving the warming trend observed in recent decades.
Good post. Polite and to the point.
Are you suggesting that past trends in TSI, for example (which many point to as the cause of global warming after the Little Ice Age, underscoring that it’s the sun, stupid), are irrelevant to climate compared to human production of CO2 on Earth?
No, Wagathon, of course I am not saying solar effects are irrelevant. I am saying that they cannot possibly be the driver for the increasing temperatures of recent decades.
It may have had a contributory role in warming as we came out of the little ice age around the late nineteenth and early twentieth century; this is a serious proposition being considered. But solar activity certainly cannot explain the warming in the latter part of the twentieth century, because the changes are in the wrong direction. The role of solar activity here is primarily a small cooling effect, corresponding to the fall in activity shown in your graph.
Climate is not driven just by one factor. Observing that solar activity cannot be driving the warming measured over recent decades is not the same as saying it has no relevance at all.
“After all… your graph shows that the mean activity peaked around 1958, and then almost up to a secondary peak around 1988, and has been falling since then. Temperatures on that scale, however, have been increasing strongly. So your graph is actually part of a strong body of evidence that solar activity cannot possibly be driving the warming trend observed in recent decades.”
I think your word “strongly” is a stretch. All you have ruled out is that the lag of the observable hypothetical solar effect can’t be less than 10 years. In pro-AGW circles the lagged effect of CO2 is called “thermal inertia” or “in the pipeline”. The concepts apply equally in principle to solar forcing, though the exact lags and exact forcing effects are (as I evaluate the literature) poorly estimated and confounded.
I am happy to leave that as a peripheral point of disagreement. If anything “strongly” understates just how impossible is the notion of solar variation being the driver for heating over recent decades.
There’s been a lot of work done on this, and on fingerprinting of the various forcings involved. The issue of “pipelines” is with the ocean coming into equilibrium. The fact that the ocean is lagging the surface and still heating up means that the heating drive is still going strong; and that is precisely why it can’t be the Sun driving things upward at present.
Solar variations do have an effect; but it is physically nonsense to think that effect is the driver of warming in recent decades. It’s not a serious notion, it makes no sense at all of any of the measurements of lags, the direct measurements of solar variation, or the profile of temperature changes in the atmosphere. It’s just not a serious contender at all.
The sun is not the driver of the recent warming. That is somewhat true. The TSI change is not sufficient to cause such change. The solar minimum, if prolonged sufficiently, can produce more than expected cooling, because it makes a larger change in the ratio of atmospheric to surface solar absorption. A unique and unexpected situation, not in the least obvious until the recent discovery of the larger than expected shifts in side lobe energies of the weak solar spectrum.
While the UV information got most of the press, the near infrared component should not be neglected.
“The significance of the correlation between the GDO, PDO, and global temperature is that once this connection has been made, climatic changes during the past century can be understood, and the pattern of glacial and climatic fluctuations over the past millennia can be reconstructed. These patterns can then be used to project climatic changes in the future. Using the pattern established for the past several hundred years, in 1998 I projected the temperature curve for the past century into the next century and came up with curve ‘A’ in Figure 5 as an approximation of what might be in store for the world if the pattern of past climate changes continued. Ironically, that prediction was made in the warmest year of the past three decades and at the acme of the 1977-1998 warm period. At that time, the projected curved indicated global cooling beginning about 2005 ± 3-5 years until about 2030, then renewed warming from about 2030 to about 2060 (unrelated to CO2—just continuation of the natural cycle), then another cool period from about 2060 to about 2090. This was admittedly an approximation, but it was radically different from the 1° F per decade warming called for by the IPCC. Because the prediction was so different from the IPCC prediction, time would obviously show which projection was ultimately correct.
“Now a decade later, the global climate has not warmed 1° F as forecast by the IPCC but has cooled slightly until 2007-08 when global temperatures turned sharply downward. In 2008, NASA satellite imagery (Figure 6) confirmed that the Pacific Ocean had switched from the warm mode it had been in since 1977 to its cool mode, similar to that of the 1945-1977 global cooling period. The shift strongly suggests that the next several decades will be cooler, not warmer as predicted by the IPCC.” (Don J. Easterbrook, Global Cooling Is Here! Evidence For Predicting Global Cooling For The Next Three Decades — Figure 4. Correspondence of the GDO, PDO, and global temperature variations)
“The real danger in spending trillions of dollars trying to reduce atmospheric CO2 is that little will be left to deal with the very real problems engendered by global cooling.
“CONCLUSIONS — Global warming (i.e, the warming since 1977) is over. The minute increase of anthropogenic CO2 in the atmosphere (0.008%) was not the cause of the warming—it was a continuation of natural cycles that occurred over the past 500 years.
“The PDO cool mode has replaced the warm mode in the Pacific Ocean, virtually assuring us of about 30 years of global cooling, perhaps much deeper than the global cooling from about 1945 to 1977. Just how much cooler the global climate will be during this cool cycle is uncertain. Recent solar changes suggest that it could be fairly severe, perhaps more like the 1880 to 1915 cool cycle than the more moderate 1945-1977 cool cycle. A more drastic cooling, similar to that during the Dalton and Maunder minimums, could plunge the Earth into another Little Ice Age, but only time will tell if that is likely.” (Ibid.)
Dallas, I’m not taking time to look into that, but it has the directions correct, so I don’t see any immediate problem, and am happy to leave it at that.
Wagathon, your graph showed that TSI has been falling somewhat since somewhere around the mid twentieth century; which is also the period with strongest warming of the instrument record. That’s empirical data we have for now.
Your latest comment appears to be proposing that TSI is going to drop sharply in the future and bring severe cooling. The merits of that prediction are not really relevant to the discussion of this thread, so I’m happy to leave the prediction unchallenged. Let’s get back to the consideration and analysis of existing temperature measurements.
Cheers — Chris
Not so fast
“… there isn’t a single piece of scientific evidence (meaning hard facts, not mere factoids or a concocted ‘consensus’) linking the anthropogenic carbon emissions to the atmospheric temperatures… In 2007, an unforeseen combination of weak solar activity and the La Niña phenomenon (a cooling of the Eastern Pacific Ocean surface waters) caused a sudden drop of the world average temperature by no less than 0.7oC, de facto ‘canceling’ in a single year all the warming allegedly registered since 1870 – the pretext for all this fuss about global warming.” (Geraldo Luís Lino, Global Warming: Facts and Factoids, Global Research, June 5, 2008)
“… instead of being scared to death about warming, climate-concerned people should rather think again. Without pressing the panic button, it doesn’t need an actual ice age; a mere drop of 1-2oC in the average temperatures (many scientists who study the cosmic-solar influence on the climate expect a cooling for the next decades) would spell a lot of troubles for the world agriculture.” (Ibid.)
OK, there was a bit there relevant to temperature measurements.
Your reference cites a newspaper article (“Sorry to ruin the fun, but an ice age cometh”, The Australian, April 2008) which focuses on the extended solar minimum that I mentioned earlier, and which is now well and truly over as the Sun moves into the active phase of the cycle.
From the newspaper article:
As it happens, Jan 2007 and Jan 2008 are the hottest and the coldest months respectively in the GISS dataset for the 21st century. That’s cherry picking.
Even when the article was written the monthly anomalies were back up again. There’s a lot of variation short term. 2008 ended up being the coolest year of the 21st century (though still handily hotter than anything of the 20th century except 1998). 2009 was right up to the warming trend again and 2010 equaled 2005 for the hottest year on record (GISS numbers). (2011 will be back down a bit.)
So the warning you quote is out of date, and is now falsified. Temperatures have recovered as expected.
Falsified you say?
I would have thought you could simply eye-ball trends in solar activity–e.g., like what I think most of us can see taking place since 1978:
No one can falsify the null hypothesis that all global warming can be explained by natural causes.
To believe otherwise is the reason why scientists in Japan liken belief in AGW to a belief in astrology. Have a read–What was learned:
Among a number of other things, notable features identified by the authors were high temperatures in the late tenth, early thirteenth, and twentieth centuries and a prolonged cooling from ~1350 to 1700, or as they describe it: “warmth during medieval and recent times, and cold in between.” Also of great interest, they report that the coldest decade of the record was the 1810s, and that even though the record extended all the way through 2004, the warmest decade of the record was the 1940s. In addition, they observed that “warm summers seem to coincide with periods of high solar activity, and cold summers vice versa.” Finally, they report that comparing their newest temperature record with other regional- and large-scale reconstructions “reveals similar decadal to longer-term variability.”
Buntgen, U., Frank, D.C., Nievergelt, D. and Esper, J. 2006. Summer temperature variations in the European Alps, A.D. 755-2004. Journal of Climate 19: 5606-5623.
I appreciate this post and thread because I hope to learn something from it by gaining a better understanding of the relationship between the concepts of autocorrelation and persistence on one hand and the mechanisms of climate physics on the other. I have a reasonable grasp of the latter, but the relationship to the statistical models leaves me with unanswered questions.
I’d like to suggest a couple of premises, based on the physics, as a starting point for seeking further explanation.
1) Climate surface temperature alterations, whether unforced or forced, generate a counteracting restorative tendency based mainly on the Stefan-Boltzmann law. A rise in temperature, for example, generates a cooling effect via increased escape of heat to space. On the other hand, a persistent forcing such as mediated by increased atmospheric CO2 tends to create a warming influence over an extended time interval. If it is warming during one year, the forcing is likely to mediate warming in subsequent years even though the climate is resisting the warming to some extent in accordance with Stefan-Boltzmann.
2) If a warming is observed over a particular interval, are we more likely to see it followed by a cooling or a further warming? In the case of a persistent forcing, further warming is likely when the observations are made over multiple years. However, the climate is also subject to short term unforced fluctuations most often due to heat redistribution between the oceans and the surface, so that gain of heat by one of these compartments occurs through loss of heat (cooling) from the other. ENSO is the best characterized example. Fluctuations of this type are typically characterized by dominance of the restorative element (warming mediating a cooling), but to observe this requires short intervals measured in months or at most a year or two.
The essence of the above, as I understand it, is that the correlation of one change with another, including both its magnitude and direction, is highly dependent on the observational timescale.
Given these premises as the physics, how can we interpret the analyses such as those described in the papers under discussion so as to discern whether we are seeing trends or unforced fluctuations such as from ENSO, when it is likely both are operating, but over different timescales? Are the methods in the papers adequate for that task? Is there a danger of seeing the two phenomena as competing for a greater or smaller share of observed temperature change when the different timescales imply that each dominates within its own timescale? On a more technical note, how is the Hurst exponent invoked in the papers useful in making these distinctions?
Perhaps I should add what I see as a third premise to the above. 3) Persistence of temperature effects must have a physical cause. For example, warming in multiple consecutive years can’t occur for no reason at all, but requires the operation of a physical mechanism to sustain the warming. Positive climate feedbacks can do this to a limited extent, but based on Stefan-Boltzmann, and confirmed by observations (e.g.,from ENSO), this will not induce a prolonged persistence, because the long term tendency is always to counteract a warming with a cooling and vice versa.
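Premise (1) can be put in numbers. A minimal sketch, where the only assumption is the standard effective emission temperature of about 255 K: differentiating the Stefan-Boltzmann law F = σT⁴ gives the restoring strength dF/dT = 4σT³.

```python
# Planck-response sketch for premise (1): differentiate F = sigma*T^4
# to get the restoring strength dF/dT = 4*sigma*T^3.
sigma = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0           # K, effective emission temperature (assumption)

lam = 4 * sigma * T_eff ** 3
print(round(lam, 2))    # ~3.76 W/m^2 of extra emission per K of warming

# So a perturbation of, say, 0.35 K raises outgoing emission by
print(round(lam * 0.35, 2))   # ~1.32 W/m^2, opposing the warming
```

This is the counteracting tendency Fred describes: every degree of warming buys roughly 3.8 W/m² of extra escape to space, which is why unforced excursions tend to decay rather than persist.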
There are many simple systems that we can understand and quantify en masse, but for which we cannot predict behavior at the individual level.
We take a cylindrical can and half fill it with a material like sand or sugar. We place it on its side on a board attached to a stepper motor. We then start to slowly incline the board. At some angle, the can will roll down the incline. We could perform this experiment thousands of times and we would get an average incline and a distribution. However, we could not say, on the n+1th run of the experiment, at what angle the can would roll.
The can full of sand is a very simple system. Any examination of rainfall patterns shows huge variability in rainfall, some episodic, some seemingly at random. The circulation pattern of clouds changes over time, rather slowly as measured by human lifespans, but look at the Clovis people.
Anyone who can think such a complex system is well described is naive, to say the least. Someone who has faith in the cartoonish equilibrium box models of fluxes in a non-rotating world is delusional.
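The tilted-can point can be made concrete with a toy Monte Carlo (the normal distribution and its parameters here are assumptions purely for illustration; the real distribution would be whatever the bench experiment produced):

```python
# Toy Monte Carlo of the tilted-can experiment: the ensemble statistics
# (mean tipping angle and spread) are stable and knowable, yet trial
# n+1 remains a single unpredictable draw. Distribution is ASSUMED.
import numpy as np

rng = np.random.default_rng(1)
tip_angles = rng.normal(loc=23.0, scale=2.5, size=10_000)  # degrees

print(round(tip_angles.mean(), 1))   # ensemble mean: stable
print(round(tip_angles.std(), 1))    # ensemble spread: stable
next_trial = rng.normal(23.0, 2.5)   # the next single trial: a new draw
print(round(next_trial, 1))
```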
The simple box models are just tools. Dozens of simple box models are better tools; hundreds, even better. Properly considered, the Kimoto box, corrected for the other glaring error, is simple and elegant. It is always good to have a rough idea to compare to the more complex models.
Sure Dallas, models, even equilibrium approximations to steady states, give you some nice information. However, it is all in the estimations. If you are within 10% of reality, you have done very well indeed. The idea that one could estimate an increase of 4 W/m2 against a background of 240 W/m2 is patent nonsense.
I don’t disagree, Doc, it is a very interesting puzzle. Just thinking of the scales involved is amazing. Before the solar UV surprise, a solar TSI change might cause a 0.1 degree change. With an expected 4.5 degrees of warming, that would be insignificant. Now, with a 0.8 to 1.2 degree change due to CO2 doubling, 0.1 is pretty significant. The models estimate something like 10% changes due to natural internal variability. Of 4.5, that would be small, 0.45 degrees, and would average out. Now, with a potentially much greater solar impact, that is not something to be overlooked.
So nailing a number with any model is unlikely. But the temperature data is pretty good except for the poles. Physics is still physics. Just kick in a little more intense thermo and fluid dynamics and we get closer. That is, until something changes. That is the nature of a chaotic system: every new start can produce a different result.
So while not perfect, those simple boxes can go through a lot more new starts than a complex model that takes months to run. Simple is better when you need fast results. Right now, exciting stuff seems to be happening, so I want fast.
To elaborate one bit further based on points made in the earlier thread, persistent surface warming over multiple years can in theory reflect either a forcing mechanism or a prolonged type of internal climate mode involving redistribution of ocean heat. In the latter case, the surface and ocean will change in opposite directions – if one warms, the other will lose heat – whereas a forcing mechanism will increase the heat content and temperature of both. Data from the last half of the twentieth century demonstrate the predominance of a forcing mechanism and exclude a major contribution to long term persistence from a low frequency internal fluctuation.
In the latter case, the surface and ocean will change in opposite directions- if one warms, the other will lose heat – whereas a forcing mechanism will increase the heat content and temperature of both.
That is why the sparsity of the temperature recordings is so critical. (1) It matters whether the Medieval Warm Period was general or confined, and that’s why there is such a strong effort underway to collect proxies of all kinds from all over the world. (2) The Greenland Ice Core data show temperature fluctuations with a period of about 1,000 years, and declining peak temperatures, but there are disputes about how general that was. And there are many more examples of geographic heterogeneity.
Data from the last half of the twentieth century demonstrate the predominance of a forcing mechanism and exclude a major contribution to long term persistence from a low frequency internal fluctuation.
That is the claim that is disputed by Lüdecke et al. That claim is disconfirmed by Girma’s graphs. The evidence does not sing in harmony on this point. The U.S. does not show an overall trend in the 20th century, and the same is true of some other large geographic regions.
I haven’t seen any graphs inconsistent with a 50+ year progressive increase in ocean heat content, associated with the warming of about 0.6 C after 1950. The ocean data rather conclusively exclude a natural internal fluctuation as the cause of more than a very minor fraction of the warming. Without those data (ignored by Ludecke), the issue would be more uncertain. Here, however, it’s important to distinguish short term fluctuations from the longer trends. On short timescales, one would indeed expect to find that surface temperature and ocean heat are not necessarily in sync, because these intervals involve short term fluctuations superimposed on the long term trend. It is also well understood that a trend imposed by increasing greenhouse gases is not the only forced climate phenomenon – changes in anthropogenic and volcanic aerosols have also contributed in both an increasing and declining fashion, particularly between 1950 and the late 1970s, and solar variations have also operated, particularly in the earlier part of the twentieth century. Many factors, not all operating in the same direction, have therefore contributed to trends, but the relationship between any of these and ocean heat content must still reflect the basic principle that an increase in the latter in concert with surface warming reflects an imposed perturbation that causes the climate system to persist in accruing more energy than it gives up to space – the First Law tells us it can’t be due to an internal fluctuation.
U.S. temperature trends are distinct from global trends, for which the U.S. is a very small fraction, and which are dominated by the oceans.
Matt – You continue to frustrate me, because the area in which I can learn most from you involves the questions I asked earlier, and not climate physics. You haven’t really discussed how the Hurst exponent could be used in the manner of Ludecke et al to distinguish between forced trends and internal fluctuations while remaining consistent with the principles of thermodynamics. I have the sense that it can’t, at least as they attempted it, but I would welcome input on this.
I haven’t seen any graphs inconsistent with a 50+ year progressive increase in ocean heat content, associated with the warming of about 0.6 C after 1950.
Is that different from the 50 years that went before?
the First Law tells us it can’t be due to an internal fluctuation..
Unless there is a natural cycle in cloud cover reflecting/admitting light, or some other mechanism not yet well studied.
You haven’t really discussed how the Hurst exponent could be used in the manner of Ludecke et al to distinguish between forced trends and internal fluctuations while remaining consistent with the principles of thermodynamics.
My discussion was simple: it does not provide any information relevant to the distinction that you want.
I believe that your basic frustration stems from your unwillingness to accept that there might be important mechanisms (cloud cover, hydrologic cycle) that are not well enough known to justify precise mathematical/physical claims. Let me repeat something that Doc Martyn wrote above: If you are within 10% of reality, you have done very well indeed. The idea that one could estimate an increase of 4 W/m2 from a background of 240 W/m2 is patent nonsense.
For whatever reason, you have decided everything unknown is too tiny to matter.
Hence my earlier dialogue:
Fred: mercury expands as it warms.
Matt: that thermometer is not accurate enough for clinical use.
Matt, you frustrate me. Why can’t you understand that mercury expands when it warms? To do otherwise would be to violate the laws of thermodynamics. It’s pure physics. You shouldn’t criticize physics when you know so little of it. And on and on.
Well, what can I say? The thermometer is still not accurate enough for clinical use. It is, in fact, so inaccurate that you cannot even tell by using it whether the fever is increasing or decreasing.
I have had these discussions before:
Biologist: here is how the drug works, etc, much detail.
Matt: the control group and the active treatment group are nearly the same.
Biologist: you don’t understand because you don’t know how the drug works, how it is absorbed and metabolized, how it stimulates the production of xyz-esterase, and down-regulates the uvw-binding site.
Matt: what can I tell you? The symptoms are the same in both groups. There is no evidence that the drug works in the target population. No evidence at all, just random variation in the symptoms in the two groups. We can’t sell it because it doesn’t work.
“Is that different from the 50 years that went before?”
We don’t know. That’s why I focused on the Ludecke et al claims for 1956-2005, where we have ocean heat content data. The latter exclude a more than minor contribution to surface warming from internal fluctuations, because we know from observations that those fluctuations are rapidly countered by climate responses and can’t lead to an accumulation of 10^23 joules of ocean heat – for that not to be true would require an astronomically high climate sensitivity of a “runaway” type.
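As a back-of-envelope check of the magnitudes involved, the 10^23 joules over 50 years quoted above can be converted into an implied mean planetary energy imbalance. The numbers below (year length, Earth surface area) are standard values; the result is only a rough order-of-magnitude estimate, not a published figure.

```python
# Back-of-envelope check: what sustained energy imbalance does
# ~1e23 J of ocean heat gain over 50 years imply, averaged over
# the Earth's whole surface?
OCEAN_HEAT_GAIN_J = 1e23          # figure quoted in the comment above
YEARS = 50
SECONDS = YEARS * 365.25 * 86400  # ~1.58e9 s
EARTH_SURFACE_M2 = 5.1e14         # total surface area of the Earth

imbalance = OCEAN_HEAT_GAIN_J / (SECONDS * EARTH_SURFACE_M2)
print(f"implied mean imbalance: {imbalance:.2f} W/m^2")  # roughly 0.12 W/m^2
```

A tenth of a watt per square metre sustained for half a century is small next to the ~240 W/m2 background flux, which is why it takes the integrated ocean heat record, rather than any single year's budget, to detect it.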
For earlier, we don’t have the data, but since Ludecke et al are clearly wrong about the later interval, and since we have evidence that total forcings for pre-1950 were roughly similar in magnitude to those post-1950, it’s likely they were wrong about the early interval as well, even though greenhouse gases were less important then and volcanic and solar effects proportionately more important. Even so, it’s still possible that natural unforced internal climate fluctuations might have been somewhat more important pre-1950, and I don’t know how we could exclude that possibility.
From a statistical model perspective, I think the ocean heat data are critical because without them, we could put more credence in surface temperature fluctuations that happened to have long runs in a warming direction even in the absence of a forced component. The ocean data makes that impossible.
Fred, maybe I missed it on an earlier thread, but I don’t think you have explained how you can rule out other “natural” forcings or feedbacks, such as solar forcings or clouds, or changes in albedo. The ice ages resulted from “natural” variations and involved far larger temperature variations than the 0.6K of the last 50 years.
David – The Ludecke et al paper purported to exclude a deterministic trend by claiming that their data could be better explained by unforced fluctuations. That was completely wrong, and the data (at least after 1950) require a long term forced trend. Once that is clear, the question of which forcings were responsible was not something they addressed, but we have data from other sources on the principal post-1950 forcings operating in a warming direction. Of these, anthropogenic greenhouse gases represented considerably more forcing than the combination of solar forcing and anthropogenic black carbon aerosols. Industrial aerosols of the sulfate type contributed a net cooling influence over that interval, and can be ignored in apportioning warming effects in terms of percentage contribution.
You’ll have to forgive me for not retrieving the links to the multiple comments I’ve made in several recent posts providing more quantitative data. However, based on transient climate sensitivity estimates, we can estimate that of the 0.6 C warming since 1950, anthropogenic greenhouse gases could have contributed between about 0.44 C and 0.77 C, with the other forcing agents contributing much less, and with the industrial aerosols bringing everything down sufficiently so as to be consistent with the observed 0.6 C increase.
I didn’t bring this up in specifically criticizing the Ludecke et al paper, but it is an additional reason why their attribution of most warming between 1956 and 2005 to natural fluctuations is completely untenable.
Matt: “Is that different from the 50 years that went before?”
Fred: We don’t know.
On that we agree. I claim that is the skeptical position on anthropogenic greenhouse gas induced global warming. Paraphrasing Lüdecke et al, the warming of the late 20th century was not unusual.
David Young, I feel obliged to warn you Fred Moolten is giving you bad information when he says:
I questioned, then disputed, Moolten’s representation in an exchange in an earlier thread. He never attempted to rebut anything I said, so as it stands, I cannot see any basis for his position. For you, or anyone else who wants to check the validity of Moolten’s claims, it is easy to do even if you don’t want to read the full papers. Just read the conclusions of both papers, where you will find comments like:
The authors clearly suggest a forced natural fluctuation, as you do in your response to Moolten. Moolten is simply (and presumably inadvertently) misrepresenting the papers.
The temperature is variable. This is an observation, so it presumably is consistent with the laws of thermodynamics.
Why is it variable? Is it due to an instantaneous response to variable forcings, or are there delays in the system? If it is due to delays, this will produce trends, even if the long term energy balance were zero. The question is how much of the observed short term change in temperature is due to this effect and how much is due to energy balance? My feeling is that the former has been underestimated.
Richard Saumarez – I agree that short term fluctuations can appear as short term trends even if not externally forced, but the authors purported to explain a long term trend as due to these fluctuations. They state, “a conventional examination of a temperature record cannot tell us whether an observed change in temperature is natural or an external trend”. However, the ocean heat data show that the long term trend, at least from 1956-2005, was entirely or almost entirely externally forced. (They further engage in the logical error of implying that the “external trend” they are trying to reject must be anthropogenic as opposed to “natural”. That’s clearly false – external trends can be natural – but the evidence that the trend was mainly anthropogenic has been derived from sources not addressed in the paper.)
It’s also noteworthy that the two papers by the group appear to be mutually contradictory. In the LL paper, as noted above, they concluded that rather than an external trend, a “natural fluctuation” was responsible for most observed warming. This is wrong, as explained above.
However, as mentioned by Bob Droege earlier, the LU paper concludes the opposite, stating “The detrended fluctuation analysis (DFA2) evaluated – for the records selected here – very small natural probabilities for both centennial events. Therefore, the probability that they are the result of deterministic trends is high,” – i.e., what was predominantly “natural” in LL had a very small probability of being “natural” in LU.
To me, this implies some confusion in the authors’ minds as to whether they are trying to distinguish “external trends” from “natural fluctuations”, as opposed to distinguishing external trends due to anthropogenic greenhouse gases from external trends due to natural forcings such as solar changes or other influences.
Fred Moolten says:
This is only true if you go with the incorrect interpretation Moolten has claimed the authors use, one which I have previously disputed. If you instead go with the interpretation I offered, there is no appearance of contradiction. Personally, I think this is strong evidence Moolten’s interpretation is wrong.
You don’t lose gracefully, Brandon.
Readers can visit the papers themselves to judge this.
Fred Moolten, you don’t behave honestly in discussions. Ooh, look at that. Even I can throw around derogatory remarks which contribute nothing. Oh wait, I’m not like you. I can’t do that while choosing to ignore the issues being discussed.
You see, I’d be happy to have readers visit the papers to judge for themselves. While they’re at it, they could also read the comment where I explained why your interpretation was wrong, the comment you never disputed. Instead, you have simply ignored what I said in it.
But by all means, ignore the points I raised and instead declare victory and say I’m a poor loser. I’m sure you’ll convince everyone who already agrees with you that I’m… something. As for everyone else, I think they’ll see right through you.
“changes in anthropogenic and volcanic aerosols have also contributed in both an increasing and declining fashion, particularly between 1950 and the late 1970s,”
Everyone says aerosols cooled the climate between 1950 and 1970.
What is that claim based on? Has anyone measured the level of aerosols? Has anyone measured the cooling effect of aerosols?
I think that the quoted phrase is based on belief, rumor and wild guesses, not on data.
Jacob – the evidence comes from multiple sources. One informative review summarizing part of the evidence is at Global Dimming and Brightening.
Jacob: Data on the amount and quality of the fuel burned, and the equipment used to burn that fuel, is quite good. Data on past aerosols is therefore reliable.
Energy transport changes the weather. Changing the weather changes the albedo. You can have exactly the same forcing and entirely different results should the transport of energy change. If a change in the transport of energy creates weather patterns with consistently less albedo, you can have surface and ocean warming without a change of forcing.
Steven – see above and also the earlier threads on the papers by Ludecke et al.
Fred, I am not arguing if it happened or not. I am arguing it is possible.
I don’t think that the Lüdecke et al papers have any very meaningful implications for the mechanisms (processes, causal effects, etc.) of climate change.
Other kinds of modeling might (spectral analyses, vector autoregressive models), but not this. About the only thing you can get from this study that I can think of is already known: changes can’t be too fast or too slow.
I will be interested to read any accounts of how I might be wrong about this.
Thanks. I just saw this after posting my previous comment. It answers some of my questions.
I know the feeling.
I’ve noticed, both here and in an earlier thread, that when the ocean heat content change was cited to show that internal fluctuations couldn’t explain most 1956-2005 surface warming without violating the First Law of thermodynamics, some comments were made to suggest that perhaps we were overlooking a mechanism that would allow the ocean to gain 10^23 joules of heat based simply on climate responses to some type of random walk fluctuations that happened to evolve in the warming direction.
I’ve done my best to explain why I see that as impossible in light of how we know the climate responds to fluctuations (see, e.g., Spencer/Braswell, Lindzen/Choi, Dessler for ENSO related phenomena). However, because by definition, what we overlook is something we’re unaware of, an explanation relating these unforced fluctuations to the huge heat accumulation should be evaluated by anyone who thinks it’s possible.
What needs to be done is to postulate some internal fluctuation of a type known to occur, and then show the calculations of how that can “walk its way” to the observed changes in ocean heat. The critical element of this will be feedbacks, since the Stefan-Boltzmann response is to restore a balance rather than perpetuate or exaggerate an imbalance. The other feedbacks will have to be postulated at sufficient strength to lead to the observed result, and the climate sensitivity parameter that results from this postulation should be cited. I am willing to predict that nothing even remotely resembling the observed data can be generated by any parameters and starting points compatible with climate reality, but this might be wrong, and so if anyone wants to try it, they should show their work.
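A toy version of the calculation being requested can be sketched in a few lines. It assumes only a linearized restoring response (standing in for the Stefan-Boltzmann feedback) driven by white-noise forcing, and compares it with an unrestored random walk; all parameter values are illustrative, not fitted to any climate data.

```python
import random

def simulate(steps, restoring, noise_sd=0.1, rng=None):
    """Toy annual-anomaly model: x[t+1] = x[t]*(1 - restoring) + noise.

    'restoring' stands in for the Planck/Stefan-Boltzmann response that
    pulls an anomaly back toward balance; restoring=0 is a pure random walk.
    All parameter values are illustrative, not fitted to climate data.
    """
    rng = rng or random.Random(42)
    x = 0.0
    path = []
    for _ in range(steps):
        x = x * (1.0 - restoring) + rng.gauss(0.0, noise_sd)
        path.append(x)
    return path

walk = simulate(5000, restoring=0.0)
damped = simulate(5000, restoring=0.1)

# The undamped walk wanders without bound (excursions grow ~ sqrt(t));
# the damped series stays near zero, so an internal fluctuation with a
# restoring feedback cannot "walk" a large anomaly in one direction
# indefinitely.
print(max(abs(v) for v in walk), max(abs(v) for v in damped))
```

Anyone arguing for an unforced explanation would have to replace the restoring term with feedbacks strong enough to let the anomaly keep growing, and then justify the climate sensitivity that postulation implies.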
What needs to be done is to postulate some internal fluctuation of a type known to occur, and then show the calculations of how that can “walk its way” to the observed changes in ocean heat.
Start with changes in cloud cover. The more the earth warms, the faster and earlier the clouds form in the forenoon and the more sun they block subsequently. This is a slight paraphrase of a hypothesis formulated by Willis Eschenbach and partially supported by his analyses presented at WattsUpWithThat and linked here by him. The hypothesis is probably testable with the extant TAO data to which he also linked. These fluctuations (cloud formation and afternoon rainfall) are readily observable in many places; it remains to show that the induced changes in energy flow are about right. It only requires that CO2/warming cause an additional 2% reduction of insolation for most of the afternoon most days. I hope to make this the subject of my presentation at the Joint Statistical Meetings next July/August in San Diego.
The hypothetical equilibrium response of the earth’s mean temperature is about 1% of the current mean temperature. A 1% increase in the impediment to insolation is about the right size to block it from occurring.
Matt – Your specific example works in the wrong direction – it involves a cooling phenomenon (more cloud albedo) in response to warming. If you want to undertake the challenge I proposed and demonstrate how, starting with known fluctuations that are unforced, we can add 10^23 joules of heat to the ocean over 50 years in concert with surface warming, you should do the calculations as I suggested above. I expect you will find it impossible to get from the starting point to the observed data.
Fred: Matt – Your specific example works in the wrong direction – it involves a cooling phenomenon (more cloud albedo) in response to warming.
Exactly so. The proposed mechanism damps the temperature increase; it is possible that the mechanism would prevent the maximum temperature from surpassing some bound that it is near to already.
If you want to undertake the challenge I proposed and demonstrate how, starting with known fluctuations that are unforced,
That again illustrates something I claimed of you: that you will not entertain the hypothesis that something you don’t already know all about might matter.
Let me repeat something from above: Matt: “Is that different from the 50 years that went before?”
Fred: We don’t know.
I agree that we don’t know, and I repeat my assertion: that is the skeptical position on AGW.
I think they should look at distributions of historical temperatures over time. This is a typical curve from the Vostok Ice Core data:
You can estimate how fast or how slow the trends are in comparison to this curve by eyeballing, or else use the exceedance probabilities that Ludecke mentions in his paper. You therefore get an idea of how likely this change is with respect to historical data just by reading off the cumulative probability.
Notice the double sided exponential and how well that fits the data. The double-sided exponential or Laplace is equivalent to a normal distribution with maximum entropy uncertainty in the diffusional random walk coefficient. Compare this against the DFA chart I have just a little further down in this thread. The sqrt(time) excursions are maintained with a Laplace distribution, BTW.
Clear out play for MJ.
The statistical mechanism is as follows: An unusually warm year is likely to be followed by another unusually warm year. With persistence, the sequence of unusually warm years can last for decades — and may even look like an upward trend if there happen to be a few random shocks in a row.
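The mechanism just described can be illustrated with a minimal AR(1) simulation; the persistence value of 0.9 and the noise level are arbitrary choices for illustration, not estimates from the temperature record.

```python
import random

random.seed(1)
phi = 0.9          # year-to-year persistence (illustrative value)
temps = [0.0]
for _ in range(2000):
    temps.append(phi * temps[-1] + random.gauss(0.0, 0.1))

# Longest unbroken run of "unusually warm" (above-zero) anomalies:
longest = run = 0
for t in temps:
    run = run + 1 if t > 0 else 0
    longest = max(longest, run)
print(f"longest warm run: {longest} consecutive years")
```

With persistence this strong, multi-decade warm runs appear routinely in a series with no trend at all, which is why the eye (and a naive trend fit) can be fooled.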
AFAIK, a redistribution of heat from ocean to atmosphere is the only explanation of such internal variability.
If we had good observations of the heat content of the ocean, we could easily dismiss or confirm this hypothesis.
“AFAIK, a redistribution of heat from ocean to atmosphere is the only explanation of such internal variability.”
A possibility is a change in the distribution of solar energy, to more atmosphere and less ocean, at normal time constants. The greater than expected UV change with solar minima is a possible mechanism. I know that sounds far-fetched, but it appears legitimate if the CO2 impact at the surface is less than approximately 1.2K.
Assuming the amount and location of clouds and snow and sea ice isn’t affected by the redistribution of energy thus changing the Earth’s albedo, I suppose the only place the energy could come from is the oceans. Why would we make such an assumption?
If we had good observations of the heat content of the ocean, we could easily dismiss or confirm this hypothesis.
We do, at least for 1956-2005, and it shows the fluctuation hypothesis to be untenable. Without the ocean heat data, the hypothesis might be doubtful, but it could not be conclusively excluded. The ocean heat data exclude it conclusively.
That’s progress. It puts a binding constraint on the magnitude of the internal (or unforced) variability of the temperature of the atmosphere.
That would be an interesting calculation. Sometimes the calculations to bound the size of an effect clarify and focus the debate.
Richard- for more on this, see Heat Uptake and Internal Variability, and also Global Ocean Heat Content.
Do the ocean heat data also exclude “the missing heat is hiding in the oceans” excuse? Some famous CS are probably not as knowledgeable about this stuff as you are.
Warming of Global Abyssal and Deep Southern Ocean Waters between the 1990s and 2000s: Contributions to Global Heat and Sea Level Rise Budgets*
How well can we derive Global Ocean Indicators from Argo data?
Any opinion on this post from the Blackboard?
“Conclusions, Observations, Cautions
The satellite sea level data shows no evidence of acceleration in the rate of ice melt over the past 18 years, and the observed reduction in the rate of sea level rise since ~2003 is consistent with a much reduced rate of ocean heat accumulation. The relatively short period of the satellite record means that one should be cautious about drawing too many conclusions from the data; the measured trend of the last 18 years could be influenced significantly by unaccounted factors.”
Are the oceans warming, Fred?
Don – JCH has posted some informative links on heat in the deep ocean not ordinarily measured by standard methods. If there is extra heat there, that would add to the evidence already present in the upper ocean and make the authors’ arguments even less credible than they already are.
Regarding the Blackboard comments, they seem to be an accurate summary of recent (post-2003) ocean heat changes, but these ups and downs are not unusual in the long term record.
I don’t find LL’s arguments very interesting. What I am getting at is the argument that natural variation is an implausible explanation for the recent warming, so CO2 must be it. CO2 emissions keep going up, but the consensus climate scientists seem to be very concerned by a warming hiatus. Is the control knob broken, or is the warming hiatus that these learned climate scientists are worried about imaginary? If it is not imaginary, what is causing it? Natural variability? And I keep asking for some significant evidence that the warming that we have seen recently is outside the bounds of natural variability. Dr. Tol and his minion, telescope, have ignored these questions. Is it me, Fred?
Do the ocean heat data also exclude “the missing heat is hiding in the oceans” excuse? Some famous CS are probably not as knowledgeable about this stuff as you are.
Heat that is detected/observed below 700 meters is not missing. No missing heat is included in anything other than a data category called, of all things:
When they find heat, they take it out of missing heat and put it into OHC. If you follow Pielke Sr’s blog, you can watch him make the calculation.
Maybe Trenberth et al. don’t read Pielke. See my post to Fred re the Blackboard post on sea level rise. Shouldn’t increased deep ocean warming be showing up in recent sea level changes?
“We do, at least for 1956-2005, and it shows the fluctuation hypothesis to be untenable. ….. The ocean heat data exclude it conclusively.”
I’d be interested in references.
Also earlier I asked Richard Tol for references to back up his “atmospheric greenhouse gases are a statistically adequate explanation of the observed warming”.
I just want to read the stuff and make up my own mind.
HAS- – My response to Richard Tol above links to a couple of relevant sources.
Saw those, but given the short time frames and apparent shifts in behaviour even within these (see publications by Knox and Douglass), using this as the basis to claim “ocean heat data exclude it conclusively” seemed a long bow.
BTW I assume you track stochastic climate modelling, and in particular how short-term variability might impact on long-term trends.
HAS – The timeframes are very long (e.g., 50 years) and the quantities of heat enormous – hence the conclusive refutation.
I believe the Knox and Douglass paper’s invalidity has been documented in this blog on several occasions by Pekka Pirila, but its short timeframe makes it irrelevant to the longer interval we’re considering.
I think you’re right in stating that short term variability can affect long term trends, but in this instance, the ocean heat data demonstrate that internal climate fluctuations can at most account for a very minor fraction of the long term warming between 1956 and 2005 – almost all must come from external forcings. Of these, we know that anthropogenic ghgs contribute the major fraction. This would be true even if the internal fluctuations were potentially longer than “short term”.
I did participate in that discussion, and would say it is also a long bow to say Pekka Pirila dealt with the issue of variability in the record (and, for that matter, variability in the spatial distribution of the OHC changes).
I remain unconvinced that we can say with certainty what is happening with the oceans in terms of their role in the global heat budget.
1960 to 2000 –
#Least squares trend line; slope = 0.0194615 per year
1999.25 to 2010.25 (last 11 years, 132 months) –
#Least squares trend line; slope = 0.0315885 per year
That was a reply to Richard S above, and for some reason it dropped to the bottom. He specified 1960 to 2000, and the last 11 years.
So he picked them, not me.
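For readers who want to check such figures, the “slope = … per year” lines come from an ordinary least-squares fit. The sketch below reproduces the computation on synthetic monthly data with a known trend; the actual temperature series is not reproduced here, so the numbers are illustrative only.

```python
# Ordinary least-squares trend, the same computation behind the
# "slope = ... per year" lines quoted above. The data here are
# synthetic (a known trend plus noise), not the real temperature record.
import random

random.seed(3)
true_slope = 0.02           # degrees per year, illustrative
times = [1960 + m / 12 for m in range(480)]          # 40 years, monthly
temps = [true_slope * (t - 1960) + random.gauss(0, 0.1) for t in times]

n = len(times)
mean_t = sum(times) / n
mean_x = sum(temps) / n
slope = sum((t - mean_t) * (x - mean_x) for t, x in zip(times, temps)) \
        / sum((t - mean_t) ** 2 for t in times)
print(f"Least squares trend line; slope = {slope:.4f} per year")
```

With forty years of monthly data the fitted slope recovers the true trend very closely; over an 11-year window the noise term matters far more, which is one reason short-interval slopes bounce around.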
We’ll talk when BEST completes the data for the last two months and blows gigantic holes through arguments that were based upon the woefully incomplete versions of them that exist now.
Oh … the last two months are magic… sure. Tell that to La Nina.
Or are you suggesting giving them two more months to manipulate the data will “fix” things?
I have asked this many times of posters quoting “trends” using BEST data:
Have you actually looked at the data?
I mean the individual readings. It is rubbish, absolutely riddled with errors, and you want to use it to argue your case.
Looks like this is about over. I will sum up:
After reminding the readers of the gratuitous public berating he had given to his gracious hostess, Dr. Tol proceeded to justify the berating by briefly explaining why he believes the offending papers should have been ignored. He then answered a few questions in a cursory manner, ignored several others, and seems to have left the building. This is the climate science to which we have become accustomed.
This is what Detrended Fluctuation Analysis looks like when applied to the Vostok Ice Core:
I actually didn’t detrend it, because over the entire range the temperature doesn’t shift. This is over 3000 points’ worth of data interpolated into 100 year intervals. If you know about the Vostok data, it shows occasional large fluctuations in temperature over long periods of time. These correspond to the interglacial swings. What the DFA is showing is that most of the excursions do trend over time in a close to random walk fashion (power law = 0.5) up to a certain time, at which point the interglacial periods show what appear to be very long range temporal correlations. The thinking is that these correspond to Milankovitch cycles.
The next graph is what the autocorrelation function (ACF) looks like for Vostok in 1500 year increments. The fine structure in the ACF matches that in the DFA at the high end. These features again correspond to long-range correlations.
I think what Tol is saying is absolutely right. The DFA approach can detect the natural random walk but it doesn’t do much for isolating any of the occasional critical forcing that may occur. In historical terms, this seems to occur rarely so it gets buried in some of the natural fluctuations. For the present day data, Ludecke and co-authors are screwing up the analysis by removing the trend and trying to concentrate on the fluctuations. They really should be indicating the value of the “a” term that falls out when they detrend.
Bottom line is that we always need a model to describe what is happening and the statistics can only get us so far. Cool stuff and thanks to Tol for chiming in.
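For anyone wanting to experiment with the method itself, here is a bare-bones first-order DFA in Python. It is a sketch of the general technique only (Ludecke et al. use the second-order variant, DFA2), and the white-noise check simply confirms the expected scaling exponent of roughly 0.5 for uncorrelated data.

```python
import math
import random

def dfa_alpha(series, window_sizes):
    """Minimal first-order detrended fluctuation analysis (DFA1).

    Returns the scaling exponent alpha: ~0.5 for uncorrelated noise,
    >0.5 for persistent (long-memory) series. A bare-bones sketch,
    not the DFA2 variant used by Ludecke et al.
    """
    # 1. Integrate the mean-subtracted series (the "profile").
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for x in series:
        total += x - mean
        profile.append(total)

    log_n, log_f = [], []
    for n in window_sizes:
        # 2. Split the profile into non-overlapping windows of length n,
        #    detrend each with a least-squares line, collect residuals.
        sq_sum, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            mt, ms = (n - 1) / 2, sum(seg) / n
            denom = sum((ti - mt) ** 2 for ti in t)
            slope = sum((ti - mt) * (s - ms) for ti, s in zip(t, seg)) / denom
            for ti, s in zip(t, seg):
                resid = s - (ms + slope * (ti - mt))
                sq_sum += resid ** 2
                count += 1
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sq_sum / count))  # log of RMS fluctuation

    # 3. alpha is the slope of log F(n) versus log n.
    mn = sum(log_n) / len(log_n)
    mf = sum(log_f) / len(log_f)
    return sum((a - mn) * (b - mf) for a, b in zip(log_n, log_f)) \
           / sum((a - mn) ** 2 for a in log_n)

random.seed(7)
white = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128, 256])
print(f"DFA exponent for white noise: {alpha:.2f}")  # alpha ~ 0.5 expected
```

Running the same routine on a persistent series (or on the Vostok data discussed above) should push alpha well above 0.5, which is the signature Ludecke et al. are relying on.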
How about you fine-tune your ice core data analysis down to a resolution of about 30 years and show us that temperature excursions, such as we have seen in the last 60 years, did not occur before we started poisoning the air with that nasty CO2? Show us how implausible natural variability is as an explanation for whatever amount of warming is alleged to have occurred recently. Dr. Tol left the building before he got a chance to clear that up. You can do it. Right? Or would you prefer to use a couple of trees found somewhere on some peninsula, or an upside-down sediment?
Poor baby, can’t do the math or understand any sophisticated stochastic arguments, so goes off and whines.
And from me, the hits keep coming.
I know enough math to make obscene amounts of money. I also know enough not to waste my time on esoteric crap like the Vostok ice core data. Why don’t you select 3 or 4 thousand 30-year chunks of data and explain what it tells you about natural variation. Is the recent warming that you clowns are so hysterical about really unprecedented? Maybe Tol will help you.
Typical: attack the poster, don’t answer the question.
Are you talking to me or Lord Monfort?
If it’s me, well, I don’t see you lifting a finger to do any analysis.
Perhaps you are too busy making “obscene amounts of money” like his royal highness.
I apologize for spending the night in bed.
Just don’t let it happen again. Why don’t you apologize to Judith?
Heh. nice riposte.
I’ve enjoyed reading your post and the subsequent discussions, thanks for posting.
I’ve found that if you lie in bed with a laptop on your chest and fall asleep you can end up with nasty burns. It’s a slow burn so you won’t feel anything. Trust me.
I can attest to this, though I only got burned on a part of my leg.
I think the more important question is not if the world has warmed but if it has warmed in a way that poses any more of a significant challenge than the climate of the past ~1000 years.
Something is wrong with the above links.
Both links lead to the same file.
http://www.math.canterbury.ac.nz/~m.reale/pub/Reaetal2011.pdf I assume
Richard. I support everything you have said about the Ludecke, Link and Ewert paper but consider that your attribution of ulterior motives to Judith borders on libel. The very act of posting any comment on the internet in the many forms that it takes should always be made with the libel laws firmly in mind.
Here are the global mean surface temperature data from NASA and the Hadley Centre:
This data shows the global mean temperature PATTERN has not changed since records began 160 years ago.
This single pattern shows a long-term global warming rate of 0.06 deg C per decade and an oscillation of 0.5 deg C every 30 years.
This oscillation is due to ocean cycles as described in the following paper:
Based on the above pattern, global cooling is expected until about the 2030s. The data shows early evidence of this cooling as shown in the following graph:
According to the data, there has not been any change in the global mean temperature pattern with the increase in human emissions of carbon dioxide since records began 160 years ago.
There is no evidence of Anthropogenic Global Warming.
(see Figure 4)
Girma, that first graph is needlessly complicated and has misled you into thinking “the global mean temperature PATTERN has not changed since record begun 160 years ago.”
Keep it simple, and you are less likely to be fooled by the data. Start by looking for your 30-year pattern in the HadCrut series beginning with 1950. You have five 30-year periods ending with 1999. Based on the repeated cycles you see in these periods, you would expect to see a temperature decline starting with 2000, but the data doesn’t show a decline. This is a break in the pattern.
In my previous post, make that “starting with 1850.”
For some reason, the data does not show any oscillation in the global mean temperature in the period from 1850 to 1880. During this period, there was still the long-term warming of 0.06 deg C per decade.
The Oscillation started about the 1880s.
Phil Jones has mentioned that the data in the late 19th century are less reliable.
Well, starting your OLS with 1980, as shown in the linked chart, still leaves you with a break in the pattern beginning with the year 2000.
Note that unlike in my previous chart where I didn’t overlap the 30-year periods, I overlapped each period by one year like you did. Not that that makes any difference.
One of the criticisms Richard S.J. Tol offers strikes me as rather interesting, and I’m hoping to learn more about it:
Unfortunately, the link he provides is of no help for me. Not only is there a paywall, I’m at a loss as to how the link supports his claim. The title of the article I get when I follow it is:
This doesn’t seem likely to address the issue he raised, especially when one reads the abstract:
I see nothing in this which suggests a discussion of the point Tol raised. Perhaps it is a wrong link, or perhaps the issue simply isn’t referenced in the abstract. Either way, I cannot see the basis for Tol’s remark. If removing the seasonal cycle can introduce problems, this is something which deserves a lot of attention. From what I’ve seen, it is done in most analyses of the modern temperature record. For example, BEST did it (though there are apparently other issues with this step of their analysis).
Could anyone provide me some useful references?
The reference is Eugen Slutsky (1927), “The Summation of Random Causes as a Source of Cyclical Processes”, Problems of Economic Conditions. That paper is in Russian. It is not online, but the 1937 translation is: http://www.jstor.org/pss/1907241. The linked article is often seen as the best summary.
Make that http://www.jstor.org/pss/1907241
Thanks for the links guys. It’ll take me a while to get through them, but in the meantime, I have a question. The article being discussed is from 1927, so while it can provide a foundation for analyzing the issue in question, it doesn’t give much practical help. Do you know if there has been any attempt to look into the effect of removing seasonal cycles in the modern temperature record? If so, it would probably be an easier source to use for considering this issue. Also, the portion of the blog post I quoted says:
What detail did they not provide? The adjustment they made for seasonal cycles seems fairly well-described to me, so I’m not sure what else would be wanted.
Sorry. They do describe this:
From the monthly data, the seasonal effects have to be eliminated. At this, for each calendar month the mean value of the whole record is subtracted from the data and divided by the seasonal standard deviation yielding the normalized record Ti, i=1,…,N that will be applied in the further analysis.
That is, they use the sample characteristics to remove, by subtraction, the seasonal cycle in both mean and standard deviation.
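The step they describe is mechanical enough to sketch directly. Here is a toy version in Python, with names of my own choosing and a synthetic monthly record standing in for the data:

```python
import numpy as np

def deseasonalize(monthly, period=12):
    """For each calendar month, subtract that month's long-run mean and
    divide by that month's standard deviation (the LLE pre-processing
    step as quoted above)."""
    x = np.asarray(monthly, dtype=float)
    out = np.empty_like(x)
    for m in range(period):
        vals = x[m::period]            # all Januaries, all Februaries, ...
        out[m::period] = (vals - vals.mean()) / vals.std()
    return out

# synthetic monthly record: a seasonal cycle plus noise
rng = np.random.default_rng(1)
t = np.arange(50 * 12)
temps = 10 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 1.0, t.size)
norm = deseasonalize(temps)
# by construction every calendar month of `norm` now has zero mean
# and unit variance
```

The output is the normalized record T_i the papers describe: anomalies expressed in units of each month’s own variability.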
Here’s a paper that says you should not do that: http://www.sciencedirect.com/science/article/pii/0304407694900868
Richard S.J. Tol, thanks for the link, and for acknowledging LLE described their process for removing seasonal cycles. Unfortunately, I’m not sure the link you provided actually supports your position. The first line from it says:
Do we actually know global temperature records have a seasonal unit root? I know there has been debate over whether or not global temperatures as a whole have a unit root, but I haven’t been able to find any real discussion of whether or not there is a seasonal unit root. If there is no seasonal unit root in global temperatures, the paper you linked to says nothing to indicate removing seasonal cycles shouldn’t be done.
Even if there is a seasonal unit root, that paper makes it clear the effect of removing a seasonal cycle depends on many things, including the sample size. There’s nothing in it which “says you should not” remove seasonal cycles. Indeed, the abstract states:
Your source simply does not say what you claim it says. It does raise the possibility (if there are seasonal unit roots in global temperatures) that removing seasonal cycles could have an effect on later analysis, but it does not say one should not remove seasonal cycles. If you can provide a more meaningful source, I’d be happy to look at it, but as it stands, your point rests on very shaky grounds.
Of course, if your point is correct, the same criticism ought to be applied to many other pieces of work, as most (if not all) modern temperature reconstructions remove seasonal cycles. Given this, the criticism seems meaningless. If they did what everybody does, how much can we criticize them for it?
The paper is about integrated series. Persistent series share many of its characteristics. It is hard to see how remove the seasonal cycle in the standard deviation would not affect the covariance structure.
Richard S.J. Tol, while I appreciate you may find it “hard to see how remove (sic) the seasonal cycle in the standard deviation would not affect the covariance structure,” that does not somehow qualify as proof, or even evidence, for your position. You provided a source, claiming it said something. It did not say what you claimed it said. I asked you to clarify this, but you chose not to. You’ve been shown your representation of your source was fallacious, yet you did nothing to correct it. I do not understand this.
Your position may be right, but you’ve simply provided no evidence of it. At this point, all anyone has to go on is your hand-waving. If you don’t have the time or capability to provide evidence for your position at this point, that’s fine. You can just say so. If you do, people will know to take your position as your belief, not as something they should know to be true. This will necessarily modify their impression of what you said, but that’s what should happen. Things should not be presented as fact if they cannot be supported with more than just belief.
On the other hand, if you can provide evidence for your claim, that would be great. If you’re right about your criticism, it’s something people should know (and not just in relation to these two papers). But you have to provide evidence for your claim, not just something that sort of sounds like it could be applied to your claim.
Brandon: If you can’t do this by intuition, you should do the math
The (long range) correlation coefficient is normalized by the standard deviation. Pre-normalization changes the standard deviation and thus the correlation coefficient.
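Tol’s point here can at least be illustrated numerically. The following toy check is my own construction, not from either paper: it generates an AR(1) series whose calendar months have deliberately different variances, then compares the lag-1 autocorrelation before and after dividing each month by its own sample standard deviation.

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(2)
n = 12 * 500

# persistent underlying process: AR(1) with coefficient 0.7
z = np.zeros(n)
for i in range(1, n):
    z[i] = 0.7 * z[i - 1] + rng.standard_normal()

# exaggerated seasonal heteroskedasticity: first half of the year
# much quieter than the second
month_sd = np.where(np.arange(12) < 6, 0.3, 2.0)
raw = z * month_sd[np.arange(n) % 12]

# LLE-style step: divide each calendar month by its own sample std
norm = raw.copy()
for m in range(12):
    norm[m::12] /= norm[m::12].std()

r_raw = lag1_autocorr(raw)
r_norm = lag1_autocorr(norm)
# r_raw and r_norm differ: the per-month normalization changes the
# correlation structure of the series, which is Tol's objection
```

Whether the effect is this large in real temperature records is a separate question; the toy only shows that the mechanism is real.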
Richard S.J. Tol, if you cannot provide a source, you should just say so. It does nobody any good for you to provide a source while misrepresenting it, then refuse to admit what you did while telling them to “do the math.” You could perhaps argue your position is so self-evident you don’t need to provide a source, but that doesn’t allow you to ignore the fact you misrepresented the source you did provide. Changing your story is fine, but you have to acknowledge that you are changing it.
If you refuse to address such an obvious error in what you’ve said, I cannot be expected to simply “do the math” and agree with you. If I cannot trust you about your sources, I cannot trust you about your math. I am willing to discuss a mathematical representation of your argument, but not until I have some reason to believe what you say will actually have some connection to reality.
As an aside, I didn’t realize it before because I was distracted by some of your behavioral issues, but what you’re saying doesn’t appear to make sense. You’ve said in two of your comments:
As far as I can tell, the papers never claim to remove anything from the standard deviation. You quoted the authors as saying:
This only refers to division by the standard deviation. There is nothing in it which says the standard deviation is altered. In fact, the papers give some detail about how they generate the standard deviations they use, and as far as I can see, nothing they say supports your claim. This makes it appear you not only misrepresented the source you provided to support your position, but also that your entire position is based upon misrepresenting the papers you seek to criticize.
Perhaps I’ve been wrongfooted somewhere, but if so, you’re going to need to actually explain it, because as far as I can tell, nothing you’ve been saying to me has been trustworthy.
Brandon: Just do the math.
Richard S.J. Tol, please either respond to what I’ve said, or don’t respond at all. I am quite tired of having to post lengthy explanations as to why what you say seems to be wrong when you put no effort into addressing anything I say. At this point, your behavior is indistinguishable from trolling.
You began this part of the fork by misrepresenting a source and claiming the authors of these papers:
As I’ve already indicated, there is no indication they subtracted the seasonal cycle from the standard deviation. The text says they normalized the data by dividing by the standard deviation, and that is nothing like what you claimed. How exactly am I to “do the math” when what you said appears to simply be something you’ve made up?
If you cannot be bothered to take the time to explain why what I say is wrong, please don’t bother responding to me at all. Doing so is extremely rude, contributes nothing to the conversation, and quite frankly, is pathetic.
Removal of seasonal trends is filtering a signal. It’s discussed rather obliquely in “Statistical Digital Signal Processing and Modeling” by Monson Hayes in chapters 7 & 8 and also in “Random Data Analysis and Measurement Procedures” 2nd ed. by Bendat and Piersol, chapter 9.
If I may say so, there seems to be a slight disconnect between the way that statisticians and “signal processers” view time series. This problem is intimately connected to the phase spectrum of the signals.
Thanks for the references Richard Saumarez, though unfortunately I have access to neither at the moment. Since you’re more familiar with the subject, and out of curiosity, could you say whether you think that sort of filter is likely to make a significant difference for things like what we’re looking at in this topic?
By the way, it isn’t really important, but there is no ‘c’ in my name.
Which is why I’m glad you’re here. Not only do I understand your jargon better than the jargon of the statisticians (even when you’re talking about basically the same thing), but I think there are some subtleties in DSP that actually make a difference. Series analysis is all DSP people (and machines) do. It’s much more narrowly focused on the problem at hand.
I do apologize. My name regularly gets misspelt!
Actually I haven’t really got the time to do this, which I know is ducking the issue. I will give it some thought as I’ve become intrigued by how to predict trends from the ACF, which is quite a problem but the two issues are inter-related. Our poster would tell me to “do the maths” but it is quite subtle. My old prof used to say that signal processing is full of “traps for young players”, which I certainly verified. Off the cuff, I haven’t a clue, but I am fairly sure that it is calculable.
If I may say so, there seems to be a slight disconnect between the way that statisticians and “signal processers” view time series.
That is for sure.
Richard Saumarez, not a problem. People get the spelling of my name wrong all the time. I’ve even had people do it while looking at an ID with my name on it (airport employees are terrible about that). I don’t get bothered by it.
As for “ducking the issue,” don’t worry about it. We all have time constraints, and you didn’t bring the issue up, so I don’t consider you obliged to support the point. Besides, I was only interested in the issue because Richard S.J. Tol criticized the authors for removing the seasonal cycle, saying it was known to affect analyses. I always see the seasonal cycle removed when people look at temperature records, so I was curious why that would be the case. The idea that removing the seasonal cycle could affect later calculations makes some intuitive sense, but I find it hard to understand how the authors could deserve criticism for it when it is what everybody does.
I’d love to know how much impact it actually has, but in the meantime, I’m going to assume it is too small to meaningfully affect the authors’ conclusions, and thus it was a very weak criticism. If you (or anyone else) can give me reason to believe it is not that small, I’ll reconsider the issue.
I think p415/6 of the Barnett article may prove helpful
Peter Davies, thanks, but I don’t think it does. Toward the end of page 415, it states:
Removing a seasonal cycle from a series is not taking a moving average (or moving sum/difference). It’s simply readjusting baselines. I may be missing something, but that’s why I was hoping for a more practical example.
The crucial point is that pre-filtering not only removes the seasonal cycle (in this case) but also changes the correlation structure.
Richard Tol, you write “There seems to be a fair amount of agreement that the atmospheric greenhouse gases are a statistically adequate explanation of the observed warming (with one important exception: http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf). There is the important caveat that no one quite knows the extent of natural variability (see http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoas/1300715170), so it is hard to test that.”
Every time I read this sort of statement, it reminds me of Tom Lehrer’s Vatican Rag. And, of course, the “important caveat” makes the statement about agreement just plain wrong.
The point is that CAGW is still merely a hypothesis; an idea with no observed data to support it. It ought to be up to the proponents of CAGW to produce the observed data to support this hypothesis. This has never happened. What has happened is that the proponents of CAGW have turned the null hypothesis upside down. It is not necessary to provide measured data to show that CAGW is right; (sarc on) the use of non-validated models has already done this (sarc off). So, it is up to us skeptics to produce the observed data to show that CAGW is wrong.
When anyone tries to do this, then we get the sort of criticism that Dr. Tol produces for these two papers. I suspect his criticism is correct. But it ought to be irrelevant. He ought to be producing the data to show that a CO2 signal exists in the temperature/time graph. No-one has ever done this.
However, Girma, with his very simple, straightforward analysis of the 160 years of temperature data that we have, shows clearly that no CO2 signal exists. No-one has challenged his analysis. So CAGW remains a hypothesis, with no observed data to support it.
I would hope that one day Dr. Curry will give us a thread on the Spencer/Dessler controversy; which really does address the issue as to whether there is any observed IR satellite data to support CAGW.
Jim Cripwell writes:
“The point is that CAGW is still merely a hypothesis; an idea with no observed data to support it. It ought to be up to the proponents of CAGW to produce the observed data to support this hypothesis. This has never happened. What has happened is that the proponents of CAGW have turned the null hypothesis upside down. It is not necessary to provide measured data to show that CAGW is right; (sarc on) the use of non-validated models has already done this (sarc off). So, it is up to us skeptics to produce the observed data to show that CAGW is wrong.”
Very well said. In my humble opinion, Tol’s entire effort over the last few days has been to reinforce the CAGW practice of turning the null hypothesis upside down. His approach is to render the phrase “natural variation” ambiguous, treat natural variation (which is the null hypothesis) as one hypothesis among many, and demand evidence that natural variation has caused the perceived rise in temperature. Of course that last bit is a category mistake because the null hypothesis, natural variation, cannot be treated as a causal hypothesis at all.
The null hypothesis is that the range of data that we have collected over the years establishes the limits in which nature has in fact operated. Given that null hypothesis, any hypothesis which identifies a cause or causes for temperature rise must show that the cause has produced a temperature rise that exceeds what can be expected from natural variation. If the causal hypothesis produces no such results then it is to be dismissed as an unnecessary explanation of a natural phenomenon. In brief, the burden of proof is on the CAGW people and they have failed to show that any temperature record exceeds what can be expected from natural variation.
“No-one has challenged his analysis.”
You cannot as a regular reader of Climate etc. be completely unaware of the repeated technical challenges Girma’s “analyses” have failed to meet here at least.
Girma has posted the same claims literally tens of thousands of times across the blogosphere over the course of years, as a simple Google search will confirm.
His claims have been refuted completely hundreds of distinct times, often by well-known and highly regarded technical experts in analysis.
Girma’s “analysis” is not straightforward, and it is not correct on any of the major datasets we have, when compared to much stronger correlations of non-periodic functions.
Here, using exactly Girma’s datapoints (removing the meaningless under-17-year span) we get http://www.woodfortrees.org/plot/hadcrut3vgl/mean:13/mean:11/plot/hadcrut3vgl/from:1850/trend/plot/hadcrut3vgl/from:1880/trend/plot/hadcrut3vgl/from:1910/trend/plot/hadcrut3vgl/from:1940/trend/plot/hadcrut3vgl/from:1970/trend — which looks vaguely exponential, not sinusoidal.
Here — http://www.woodfortrees.org/plot/best/mean:48/mean:72/isolate:144/plot/best/mean:42.7/mean:64/isolate:128 — we see if there is a period, it is nothing like the 30- or 60- year signal claimed.
Why would you introduce clear fabrication into a serious technical thread?
Girma has posted the same claims literally tens of thousands of times across the blogosphere over the course of years, as a simple Google search will confirm.
What Girma needs is a quantile regression, regressing the 99% points, 95% points, 75% points, median, 25%, 5% and 1% points vs time. It is possible to test whether straight lines are sufficient, vs. a requirement for curvature. From the graphs I have seen, I think the result would confirm his claim. That’s in part because tamino has fit a straight line with autocorrelated residual to the same data, and he has shown that, with autocorrelated residuals, there is little evidence for departure from a linear trend. Caveat: tamino has done many analyses, and I cherry-picked this result.
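For anyone wanting to try the suggestion: quantile regression of a line amounts to minimizing the pinball loss. A small sketch on invented data follows (statsmodels has a ready-made QuantReg, but a bare scipy version shows the idea; nothing here is HadCRUT):

```python
import numpy as np
from scipy.optimize import minimize

def quantile_line(t, y, q):
    """Fit y ~ a + b*t by minimizing the pinball (quantile) loss."""
    def loss(p):
        r = y - (p[0] + p[1] * t)
        return np.mean(np.where(r >= 0, q * r, (q - 1) * r))
    slope0, icept0 = np.polyfit(t, y, 1)       # OLS warm start
    res = minimize(loss, x0=[icept0, slope0], method="Nelder-Mead")
    return res.x                               # (intercept, slope)

# invented stand-in for a temperature anomaly record:
# a 0.006 deg/yr linear trend plus i.i.d. noise
rng = np.random.default_rng(3)
t = np.arange(160, dtype=float)
y = 0.006 * t + rng.normal(0.0, 0.15, t.size)

slopes = {q: quantile_line(t, y, q)[1] for q in (0.05, 0.5, 0.95)}
# for a straight-line trend, the 5%, 50%, and 95% quantile slopes
# should all agree; curvature would show up as diverging slopes
```

The test proposed above would then be whether the fitted quantile lines need curvature terms, or whether straight lines with a common slope suffice.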
It’s been done. Even here, in fact on a thread many months ago.
The fit would be pretty good if the claim was of a linear or polynomial correlation. It’s just poorer than alternate hypotheses that _do_ have meaningful mechanical explanations, and has no correlation to the mechanical causes Girma claims are there, and claims periodicity, which has a much stricter condition for accepting a claim of fit.
Google such terms as “Dr. Girma Orssengo” (he goes by “selti1” sometimes), or “favourite Deltoid troll”, or “blog science genius”, or “Handbook in Denialism”, or “just another agenda pusher” if you wish.
Girma’s arguments amount to nothing that can be called valid by competent analysis, after apparent years of being polished and reworked all over the internet.
“just another agenda pusher” if you wish.
I am a persistent pusher of my truth=>http://bit.ly/ocY95R
Bart R: It’s been done.
Girma has done curve-fitting by eyeball. There is no way to conduct a statistical test of a pattern detected a posteriori. Ludecke et al showed that things like Girma has discovered can arise frequently by chance. So there is no reason to believe that he is correct. On the other hand, eyeballing has revealed consistencies that were missed by statistical tests.
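The "arise frequently by chance" point is easy to demonstrate with a toy Monte Carlo. Here AR(1) is used as a crude stand-in for a persistent natural-variability process; the numbers (phi, noise level, drift threshold) are illustrative only, not calibrated to any dataset:

```python
import numpy as np

def ar1_series(n, phi, sd, rng):
    """AR(1) process: a crude stand-in for persistent natural variability."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sd)
    return x

rng = np.random.default_rng(4)
n_years, n_trials = 160, 2000
t = np.arange(n_years)

count = 0
for _ in range(n_trials):
    x = ar1_series(n_years, phi=0.8, sd=0.1, rng=rng)
    slope = np.polyfit(t, x, 1)[0]
    if abs(slope) * n_years > 0.2:   # total drift over the record > 0.2
        count += 1
frac = count / n_trials
# a sizeable fraction of purely stochastic series shows a "trend"
# of this size, so an eyeballed pattern is weak evidence by itself
```

The fraction is far from negligible, which is why a pattern spotted after the fact cannot be tested on the same data that suggested it.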
His claim that the main trend is a constant linear increase since the late 1800s was supported by tamino’s analysis, which accepted the null hypothesis that the early 21st century did not differ from the 20th century.
Possible hypothesis tests of his idea can come only after about 20 more years.
This field is full of ideas that have not been adequately tested. His idea is one of them — well, a pair of ideas, actually.
Girma formerly presented his case as a rising sine curve. Another response to the claim, by Ron Broberg at http://rhinohide.wordpress.com/2011/01/17/lines-sines-and-curve-fitting-9-girma/, discusses a regression analysis. Not quantile regression, but ..
It looked pretty good, except for the three issues I’ve recited, and other objections raised by others.
To show a trigonometric fit by regression generally doesn’t convince until at least three periods — 180 years, in this case — are available. (Is that what you meant by another 20 years, on the 160 year dataset?)
I agree, every serious hypothesis deserves a reliable assessment. However, Girma’s hypothesis fails on every basis to be considered serious, other than, “well, it draws the eye.”
And that Girma repeats it over, and over, and over and over.
70 years of deceleration of global warming rate in the 20th century!
(Do you believe that?)
Well, you finally figured out what a pause looks like.
That is better Girma! Think of predictions as wagers. There is no money in the push. What are the best odds, over or under? Then research the small factors that bump the odds one way or the other.
There is no predictive skill if you aren’t willing to push all in.
Oh, forgot one thing. Cherry picking is allowed in wagering.
Context matters, if you have enough data for context.
Yet you cropped meaningful, available data.
That’s a no-no, unless you had a good reason.
Did you, Girma?
Darn you Bart R, yours is much prettier than mine.
And yet, I picked up more about graphing from you in that one example than I’ve gotten out of Girma all year.
Here’s a treat your techniques allowed me to compose:
Starting in 1885, AMO follows a strong 60-year oscillatory trend. Dunno why. Maybe it’s always been there, but I doubt it.
PDO on the other hand follows a completely different ~46-year oscillatory trend starting around 1917.
The interference of the two trends and AGW due to CO2 rise contributes briefly to an apparent but very short-lived 90-year global temperature signal simulating a 60-year wavelength superimposed on a rising linear trend.
The 90-year trend ought to have stopped, of course, after the first 70 years (in 1980) if it were only and all ocean oscillations — it ought to have dropped again then.
Indeed, right now we expect an extreme of a four-point drop due to oscillations at nadir, sunspots, and aerosols.
We have to wonder at the cause of the pause in cooling which Girma’s hypothesis, and the oscillations-absent-AGW, both predict.
Richard, please explain how Girma’s analysis of 160 years of temperature data show that no CO2 signal exists.
Jim Cripwell writes:
He ought to be producing the data to show that a CO2 signal exists in the temperature/time graph. No-one has ever done this.
Are you serious? (But of course you are…)
If you can look at those graphs and not see the correlation, then you need help. The fact that you assume that no one has ever done a basic correlation analysis is almost amusing.
Temperature drives, CO2 sits in the back. CO2 is quasi-condensable.
Your words marginalize.
Positive feedback rules, CO2 is nearly forever.
Good morning WHT.
CO2 is ~forever, but not in the atmosphere. Atmospheric CO2 is determined by climatic factors, not by fluxes.
If positive feedback ruled, temperature would have soared out of the stable cycle of the past ten thousand years a long time ago. The temperature of earth is stable. Negative Feedback Rules.
Herman A (Alex) Pope
Time scale rules.
CO2 is ~forever in the carbon cycle compared to our lifespan, which has unforeseeable effects on the atmosphere.
On 100,000 years?
Negative feedback ruled once upon a time.
Since we’re 110 ppmv above the level of the past multiple million years, Negative feedback no longer rules, and we have no idea if it ever will again. Unless you know more than you’ve been telling up to now.
I just received an email from Horst Ludecke, he has been reading the two threads, he is preparing a response to Tol’s issues and also some of the issues raised in the comments.
How wonderful! More of a platform for EIKE/CFACT, more time wasted. Bravo, Dr Curry.
We could always talk about sea ice. Perhaps your time would be better spent preparing a guest post than it is complaining about our topic?
My time is definitely better spent, as Dr Curry will not answer my questions.
But maybe I could write a guest post about EIKE? There’s plenty of material out there.
You have a huge chip on your shoulder regarding EIKE. I never heard of them until now. It sounds like a political argument. Using the same logic I suppose you argue Germany didn’t invent the jet plane since I assume you think poorly of their politics also.
I believe you when you say you never heard of them, but look, Steven, either EIKE is a trustworthy and credible source with regards to climate science, or it isn’t. I have known them for a couple of years now, and many times I have seen them putting out things that were wrong and obviously politically motivated given their connections to other dubious think tanks such as CFACT.
At what point am I allowed to discard them as a source that has anything useful to say? Shouldn’t I tell people that EIKE isn’t trustworthy as a source before they give their disinformation a platform?
This doesn’t necessarily have to do with politics (I’m not directly interested in politics). I don’t care about EIKE’s politics, as long as they are doing their best to be honest and transparent. They are not. They simply are not. Don’t take my word for it, look for examples.
In fact, you have one right under your nose here. If that was Dr Curry’s goal, then kudos to her. We can take EIKE off the list of useful and trustworthy sources.
Climategate email: The fact is that in doing so the rules of IPCC have been softened to the point that in this way the IPCC is not any more an assessment of published science (which is its proclaimed goal) but production of results. The softened condition that the models themself have to be published does not even apply because the Japanese model for example is very different from the published one which gave results not even close to the actual outlier version (in the old dataset the CCC model was the outlier). Essentially, I feel that at this point there are very little rules and almost anything goes.
It is the IPCC that has the disinformation platform.
There is no logic that allows you to move from
1. Party A has not been trustworthy in the past
2. Party A is wrong scientifically about this present matter
None. No logic can get you from 1 to 2.
It is logically possible for a liar to tell the truth. Now, you might think it is a waste of time to check whether the liar is lying again, but if you want to conclude #2, you have to do more work than simply pointing to past sins.
Otherwise, I’d stop reading Mann and Jones. I thought Mann’s last paper was good and the presentation Jones gave at AGU was compelling. For me, past behavior increases my LEVEL of investigation. It does not suspend the laws of logic. If I want to say they are wrong, I have to prove it in the present circumstance.
Judith willing, why don’t you write a guest post concerning your area of expertise (presumably sea ice) rather than continually complain about EIKE. I know nothing of them and consequently I will be interested to see what they say for themselves.
Yes, I would be most interested in a guest post from Neven on the arctic sea ice
I agree. And if I word it like this:
1. Party A has not been trustworthy 95% of the times in the past
2. There is a 95% chance that Party A is wrong scientifically about this present matter
Is that logical?
3. I only have 24 hours in a day. It takes me a lot of time to make sense (due to a lack of experience, knowledge, intelligence, competence, etc) of papers like these. Should I invest this time yet again?
What can be dismissed and ignored out of hand, and when? Or in other words: Why is this latest crap from EIKE/CFACT/GWPF/GMI/CEI/ATI/SPPI/everydebunkedfakeskepticoutthere getting so much attention?
The reason I started believing AGW could be serious is that I saw so many skeptics distort, repeat canards, play word games, never correct errors, never apologize…
Which skeptic can I trust? Mind you, this doesn’t mean I automatically trust Al Gore (I don’t) or everything the IPCC says (I don’t).
But it’s probably best if I follow your advice and focus my attention on the Arctic sea ice. This is starting to look like that final scene in The Devil in Miss Jones.
No, Neven, that is not logical. It is inductive reasoning. Jesus Christ.
You can note that if 95% of their past statements were lies, then chances are that the current statement is a lie. But you don’t prove it is false by noting this. From your view (and mine), “it’s not worth my time checking it.” However, this argument has no force with someone who wants to believe the next lie. It has no rhetorical force for that audience. For them the effort is worth it because they want the lie to be true. For you, it’s not worth the time.
It’s only fair to give them the chance to respond.
It’s your time, Richard. I can only decide how I choose to waste mine.
I agree it’s a waste of time, but procedural fairness is a great good.
I guess you’re right. Now that it has come this far…
You continually complain that EIKE is not trustworthy. It is of course up to Judith what she publishes, but an article ‘Who can we trust?’ might be illuminating, provided it was even-handed. In it you can discuss EIKE’s alleged failings as well as those of the IPCC, and of individual major contributors such as Dr Mann and Dr Hansen.
If you lay out your information it gives us all the opportunity to make up our own mind, which seems to me to be a large part of Judith’s rationale for her blog.
Actually I am planning a “trust” post, based upon Onora O’Neill’s work
I hope the response includes an explanation for the use of the uncorrected temperature records for Hohenpeissenberg. The well documented problems with that temperature record (and perhaps the others as well) make me wonder about any result based on those data.
JayTee, I’m not personally acquainted with the issue you raise, but I can say I am not impressed by the data quality of the papers. In fact, I think data selection issues are far more problematic than anything Richard S.J. Tol brought up in his post. Unfortunately, they’ve received almost no attention.
I want to comment about something which has been discussed in several different spots on this blog, most recently in a fork started here. I am repeating it because the point deserves a great deal of emphasis.
By using detrended fluctuation analysis (DFA) the authors did not “eliminate from their analysis the very thing in which they are interested” as Richard S.J. Tol claims. DFA is used to generate the “Hurst coefficient” used in later calculations (exceedance probabilities) by the authors. Those later calculations do not use detrended data. They simply need the Hurst coefficient in order to be able to calculate probabilities.
Not only is what he says not what the authors did, it is not something you can even do with DFA. DFA is a fairly complicated method, but all you need to know is it is not used to modify data series for use in further calculations. Because of this, Tol’s criticism of the authors’ use of DFA is completely bogus. It is entirely based upon him making things up.
This point cannot be stressed enough.
so please enlighten us how the trend was reintroduced after elimination
If I understand the point, it is not that the trend is reintroduced. It is that a trend (or the appearance of a trend) is shown to be a plausible consequence of long-term memory in the system that produces the observed time series. That is, given an inferred Hurst coefficient, it follows (or is claimed to follow) that a system with that behaviour is able to show extended excursions that have the appearance of a trend on the scale of several decades.
I’m not making any comment here on the validity of the analysis. I am merely trying to repeat the point being made by Brandon as far as I can understand it.
Chris Ho-Stuart, your description is mostly accurate (though I’m not sure why you call the Hurst coefficient inferred). However, it doesn’t address the issue Richard S.J. Tol raised. He wants to know how the authors could use DFA to generate a value, yet still be able to do a different analysis on the original series.
You know, the whole “copy” thing. You do an analysis on one copy of the data. You do another analysis on another copy of the data…
Richard S.J. Tol, no? Your request is just silly. You don’t have to reintroduce anything if you just go back to the original data. The authors did two separate calculations, DFA and exceedance probability. Both used the same data series. The latter simply uses the Hurst coefficient calculated by the former as an input.
It’s quite easy to use the same data in more than one way.
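The point is mechanical enough to sketch. Here is a minimal Python illustration (my own, not from the papers; the toy series and the `detrend` helper are invented for this purpose): detrending for one calculation returns a new series and leaves the original untouched for any later calculation.

```python
import statistics

def detrend(x):
    """Return a copy of x with its least-squares line removed; x itself is untouched."""
    n = len(x)
    t_mean = (n - 1) / 2
    x_mean = statistics.fmean(x)
    num = sum((t - t_mean) * (v - x_mean) for t, v in enumerate(x))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return [v - (x_mean + slope * (t - t_mean)) for t, v in enumerate(x)]

# A toy trending record: linear rise plus an alternating fluctuation.
series = [0.1 * t + ((-1) ** t) * 0.5 for t in range(50)]

flat = detrend(series)    # calculation 1 (the persistence estimate) works on this copy
original = list(series)   # calculation 2 (the trend comparison) still sees the trend
```

Nothing about the first calculation prevents the second from seeing the original, trending data.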
You say they superimposed the synthetic data on the trend? The paper is not explicit, but I do not think they do that.
If they did, then they conclude that
trend (observed) + fluctuation (observed) ~ trend (observed) + fluctuation (synthetic)
That’s even sillier than what I think they did: They just declare the observed trend to be natural, without any analysis.
No. That is not what I say. That is nothing like what I said. In fact, there is no way anyone could sensibly interpret anything I’ve said as saying that. I have no idea where you came up with this idea, but it is complete and utter garbage.
So, what did you say then? How do you think LLE compute Figure 3?
I said what I said. If there is something in my remarks which is unclear, I’ll answer any specific questions you have about what I’ve said. However, your comments have no actual connection to what DFA does or what I’ve said. As such, I cannot see any reason to repeat myself.
At this point, my only advice is go back and read what I’ve said, or go look at what DFA actually does. Afterward, if you still think you’ve accurately described things, provide explanations with specific details, and I’ll address them as best I can.
Otherwise, all you’re doing is asking me to repeat myself without contributing anything, and that’s pointless.
Brandon: I genuinely do not understand.
LLE use synthetic data to compute the exceedance probabilities in Figure 3.
I think they used the estimated Hurst coefficients to generate the synthetic data.
What do you think they did?
Richard S.J. Tol, I am not going to start discussing Figure 3 when we haven’t even reached an agreement on what DFA does. Please stop trying to change the subject. The current dispute is quite simple, and there is no reason to talk about Figure 3 when trying to resolve it.
If you have any questions about what I’ve actually said, please quote me and explain what clarification you want. However, since I haven’t said a word about Figure 3, please stop talking about it.
You wrote: “Those later calculations do not use detrended data. They simply need the Hurst coefficient in order to be able to calculate probabilities.”
The calculated probabilities are used in Figure 3 of LLE. How do you think LLE calculate these probabilities?
Richard S.J. Tol, I’m not going to go through the entire process they used in those calculations, especially since doing so would require discussing the reference where they got part of the process from. If you really want to know how the calculations are done, feel free to read up about them.
In the meantime, I’ll again repeat my previous comment. If you have any questions about what I’ve actually said, please quote me and explain what clarification you want.
I don’t think this will be resolved without a more detailed explanation from the authors of exactly what they are doing. My interpretation was that they were using the DFA to determine the autocorrelation/persistence/Hurst exponent alpha. Then given this alpha, work out how likely it is that you will see a trend purely as a result of autocorrelation, which they call ‘natural’, using Monte Carlo simulations.
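That interpretation can be sketched as a small Monte Carlo exercise. The following Python illustration is my own, hedged sketch: it uses short-memory AR(1) noise as a stand-in for the long-memory process LLE actually simulate, and every name in it is invented.

```python
import random
import statistics

def linear_trend(x):
    """OLS slope of x against its time index."""
    n = len(x)
    t_mean = (n - 1) / 2
    x_mean = statistics.fmean(x)
    num = sum((t - t_mean) * (v - x_mean) for t, v in enumerate(x))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def simulate_noise(n, phi, rng):
    """AR(1) noise: a short-memory stand-in for the long-memory process in the paper."""
    x, v = [], 0.0
    for _ in range(n):
        v = phi * v + rng.gauss(0, 1)
        x.append(v)
    return x

def exceedance_probability(observed_slope, n, phi, runs=2000, seed=1):
    """Fraction of pure-noise realizations whose fitted trend is at least as steep."""
    rng = random.Random(seed)
    hits = sum(
        abs(linear_trend(simulate_noise(n, phi, rng))) >= abs(observed_slope)
        for _ in range(runs)
    )
    return hits / runs

# A steep observed trend should rarely be matched by weakly persistent noise alone.
p = exceedance_probability(observed_slope=0.05, n=100, phi=0.3)
```

The real analysis generates surrogates from the estimated Hurst exponent rather than from AR(1) noise, but the logic of the exceedance probability, how often does unforced variability alone produce a trend this large, is the same.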
Paul Matthews, I disagree. I don’t think more detail could resolve anything. Richard S.J. Tol simply came up with an “interpretation” which isn’t based upon what the authors actually said, much less what DFA actually does. I have no idea how he came up with it, but it is simply, and obviously, untrue.
For what it’s worth though, the interpretation you came up with is correct.
The GRL paper by Lennartz and Bunde describes the method in more detail. It is exactly as I guessed. Paul Matthews’ interpretation is the same as mine.
Richard S.J. Tol, you just said:
This leaves us in a peculiar position. Just a little while ago, I said Paul Matthews had the correct interpretation. This indicates he came up with the same interpretation I have been advancing this entire time. You now say his interpretation is equivalent to yours. Taken at face value, this would mean our interpretations are the same. Obviously, that cannot be true. Either his interpretation is in line with what I’ve said, or it is in line with what you’ve said. It cannot be both.
Given what he said is basically a summarized version of the second and third paragraphs of my earlier comment, I think it’s clear he is “siding” with me. His interpretation in no way supports the criticism you leveled against the papers discussed in the blog post, and your claim it is equivalent to your interpretation is bollocks.
But by all means, feel free to explain how Paul Matthews said the authors “eliminated from their analysis the very thing in which they are interested.”
Brandon: Sure. We reached a logical impasse.
We agree that the trend was eliminated before the DFA.
The trend was either reintroduced or not in the calculation of the exceedance probabilities.
I think the trend was not reintroduced.
What do you think?
Richard S.J. Tol, your latest comment has you saying something unfortunately unsurprising:
I do not agree with this, at all. One does not eliminate trends before doing DFA. DFA eliminates the trends. Normally I would consider this an inconsequential matter of poor phrasing, but given our disputes consist entirely of you misrepresenting (and presumably misunderstanding) what various people and sources say, I can’t view it as inconsequential. That aside, you then go on to say:
This is, to put it bluntly, an extremely silly comment. Not only is it irrational, it is one I’ve already addressed. If this is the “logical impasse” we’ve reached, the impasse is one created by you making things up and ignoring what I’ve said.
Brandon: Can you kindly stop ducking the question?
Was the trend used in the computation of the exceedance probabilities, or was it not?
Richard S.J. Tol, I have explained to you, repeatedly, why I am not discussing Figure 3. However, that is not what you now accuse me of ducking. Instead, you accuse me of ducking this question:
You’ll note, this is different from your earlier questions of:
It is cheeky to ask me to explain how calculations are done then accuse me of “ducking the question” of whether or not those calculations involve using one particular thing. Refusing to get dragged into a detailed discussion of an irrelevant methodology is a far cry from what you now accuse me of.
Now then, to answer your question. I’m not sure what you mean by “the trend,” but the answer is irrelevant to the creation of Figure 3. Figure 3 was generated purely with synthetic data, so no matter what you meant, the answer would be no. However, the results displayed in Figure 3 are compared to actual data to draw conclusions. For this comparison, the actual (non-detrended) series is used, in direct contradiction to what you’ve claimed.
Brandon: I’m glad we agree on what they did. Your description in your final paragraph is equivalent to my description above.
Richard S.J. Tol, no it is not. Nowhere on this page have you given a description compatible with what I just said. That would be why my last paragraph, the one you say is equivalent to your description above, ended with:
To be clear, the description I gave directly contradicts what you say in your blog post. Simply stating my comment agrees with what you said will not change this. You are just misrepresenting me at this point, as you did with Paul Matthews. It’s ridiculous, and nobody is going to fall for it.
If you genuinely believe what I said is equivalent to what you said, please explain why you believe such. I know you didn’t do this with the comment by Paul Matthews, but here’s another chance for you to actually discuss the issues.
You seem to have misread my post. LLE estimated the Hurst exponent from detrended data. They thus threw away the parameter of interest.
Richard S.J. Tol, you say:
This is nonsensical. Using DFA to find the Hurst coefficient doesn’t somehow mean any “parameter of interest” was thrown away. Nothing gets thrown away by figuring out the Hurst coefficient. All that happens is you find a measure of self-affinity in the data.
More importantly, you state this in response to me disputing your claim that the last paragraph in this comment of mine is equivalent to your description of what the authors did. This is what that paragraph said:
Nothing in this even discusses DFA, much less discusses how using DFA throws away a “parameter of interest.” As such, this latest comment of yours seems to be a complete non-sequitur (in addition to being completely wrong). Rather than explain how my paragraph is equivalent to your description as you should have (and as I suggested you do), you have simply repeated your initial position while ignoring what I’ve said, and claiming I seem to have misread you. This behavior of yours makes no sense at all, unless you are just trying to troll me.
But since you repeated your claim, let me make this as clear as possible. Using DFA does not discard data, and it did not result in the authors throwing away any parameter of interest. You are simply making this up, as you have time and time again.
I was struck by Figure 1 of LLE. It is quite easy to produce similar plots using a system with a distribution of time constants ± some “persistence”, which in the first instance I modelled as a pulse varying anywhere from 30 to 100 years. Interestingly, this does produce quite sharp trends, followed by a plateau, which might be regarded as a fluctuation. Since LLE did this calculation, I would like to have seen distributions of trend duration and magnitude.
Richard S. J. Tol,
so please enlighten us how the trend was reintroduced after elimination
Luke 11:33 “No one lights a lamp and then hides it or puts it under a basket.
Instead, a lamp is placed on a stand, where its light can be seen by all who enter the house.
“I have known Phil for most of my scientific career. He is the antithesis of the secretive, “data destroying” character…”
The original raw data are not lost either. I could reconstruct what we had from some DoE reports we published in the mid-1980s. I would start with the GHCN data. I know that the effort would be a complete waste of time though. I may get around to it some time. As you’ve said, the documentation of what we’ve done is all in the literature.
Would you please give Phil Jones a call, and get this stuff along with the original ‘literature’ together so we all can see.
This is a technical thread.
so please enlighten us how the trend was reintroduced after elimination
The trend(s) emerged in many of the realizations of the simulated random process. It was not “reintroduced”.
The DFA technique is complicated? I really don’t think that is the case. I am not sure it is available directly in statistical tools such as R, so I was able to whip up a quickie version last night. The only bit of ambiguity is whether the least squares minimization for the a_i coefficients is done per L or across all L values simultaneously. I think that logically it should be the latter, which does make it a bit more computationally intensive, but otherwise straightforward.
Like I said, I was able to implement a version and use that to extract the random walk coefficient for the Vostok ice core data spanning several hundred thousand years (HERE). This was about 3300 points worth of data and it placed the resolution at about 100-year intervals. What is neat about the fit is that it goes right through the origin and matches the conventional random walk with a Hurst exponent of 0.5, which essentially says Hurst/fractal behavior is not operational. I can reasonably make the assumption that the random walk will scale down to a yearly resolution, since it does appear to intersect at the origin.
The Vostok data is automatically detrended natural behavior, albeit for one location on the earth, as it starts and ends at close to the same temperature after several hundred thousand years. So I imagine we can extract the random walk hopping rate from my fit and then use that to estimate the probability that natural variability alone generates the present-day warming trends. This would give an exceedance probability as Lüdecke claims, but with a much longer and more ergodic set of data to deal with.
The only step left for me to do is estimate the diffusional hopping rate from the fit. I have arbitrary units on the DFA f(L) function, but I should be able to scale a hopping rate from the fit and then calibrate it against the independent ACF and temperature-change histograms I have computed.
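For readers who want to see the machinery being discussed, here is a bare-bones first-order DFA sketch in Python (my own illustration, not the commenter’s implementation or the papers’ code; the window sizes are arbitrary): integrate the mean-removed series, detrend each window of size L with a least-squares line, and read the exponent off the log-log slope of F(L).

```python
import math
import random

def dfa_alpha(x, window_sizes):
    """First-order DFA: integrate the mean-removed series, detrend each
    window of size L with a least-squares line, and return the slope of
    log F(L) versus log L."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:
        s += v - mean
        profile.append(s)
    log_l, log_f = [], []
    for L in window_sizes:
        sq, count = 0.0, 0
        for w in range(len(profile) // L):
            seg = profile[w * L:(w + 1) * L]
            # Least-squares line within this window.
            t_mean = (L - 1) / 2
            s_mean = sum(seg) / L
            num = sum((t - t_mean) * (v - s_mean) for t, v in enumerate(seg))
            den = sum((t - t_mean) ** 2 for t in range(L))
            slope = num / den
            for t, v in enumerate(seg):
                r = v - (s_mean + slope * (t - t_mean))
                sq += r * r
                count += 1
        log_l.append(math.log(L))
        log_f.append(0.5 * math.log(sq / count))
    # Slope of log F(L) vs log L is the scaling exponent alpha.
    lm = sum(log_l) / len(log_l)
    fm = sum(log_f) / len(log_f)
    num = sum((a - lm) * (b - fm) for a, b in zip(log_l, log_f))
    den = sum((a - lm) ** 2 for a in log_l)
    return num / den

rng = random.Random(42)
white = [rng.gauss(0, 1) for _ in range(4096)]
walk, s = [], 0.0
for v in white:
    s += v
    walk.append(s)

alpha_white = dfa_alpha(white, [8, 16, 32, 64, 128])  # near 0.5: no persistence
alpha_walk = dfa_alpha(walk, [8, 16, 32, 64, 128])    # near 1.5: integrated series
```

On uncorrelated noise the exponent comes out near 0.5 and on an integrated (random walk) series near 1.5, which is the diagnostic the comment relies on.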
What was the purpose of this comment? You didn’t address anything I said, other than to say you don’t think DFA is complicated. It’s fine you think that, though I actually said “fairly complicated,” and I do think that’s an apt description of the process (especially given a non-statistician audience). However, whether or not DFA is “fairly complicated” is pretty much irrelevant to the point of my comment, so I don’t know why you would make this response of yours.
For one, to show how the technique works on a long time-series as Tol suggested.
I like to lean on the law of large numbers.
Tests for long term persistence on the Vostok ice core (and other ice cores and sediment proxies at similar scales) show that the Hurst phenomenon can be found, with H of approx. 0.98, e.g.:
http://itia.ntua.gr/getfile/980/2/documents/2010EGU_ReconHurstPr.pdf (click through to presentation)
Particular care with ice cores is required especially if they are “orbitally tuned”, it distorts the power spectrum and can give misleading results.
OK, what we are trying to show is whether the excursions follow a random walk, so that the envelope over the entire set looks like this:
This is the standard random walk Fickian excursion that you will find in a temporal data set. For times up to 20,000 years that seems to fit the Vostok data. Beyond that, as you can see from the chart I made, long-term correlations show up and the random walk excursions are limited to the 10 degree C swings between the interglacials. That is why the square-root dependence disappears: we are clamped to the interglacial temperature differences at large time intervals, so the dispersive nature of the diffusional random walk disappears and the curve oscillates with the interglacial periods and flattens out.
Someone else can redo this chart as it isn’t that hard to do. I have referenced Koutsoyiannis before and I know he is an expert at fat-tail rainfall effects, but I don’t know exactly what he is doing with this particular analysis. I have noticed in the past that he tends to like to do slide presentations and I rarely find any detailed papers on his work. Maybe I didn’t look hard enough but I could only find presentations and no papers on his hydrology rainfall work, and that is the case with his paleoclimate studies as well.
The climacogram has a number of advantages over DFA as a tool for estimating the Hurst exponent, such as the ability to correct for estimator bias. I can’t remember the paper where it was introduced (unfortunately I don’t think it carried that name originally); I’ll see if I can locate it.
Unfortunately, Dr Koutsoyiannis often ends up putting his latest work in presentations rather than into papers. I suspect this is at least partly because of the obstacles that are put in place to publication, possibly due to the failure to toe the consensus line.
Well I know that Koutsoyiannis is an innovator in extreme value analysis and in fat-tail statistics, and since this subject matter doesn’t always keep the “Normal” statisticians happy, I can see why he may have some problems in publishing.
The following chart is an example of a classical random walk simulation with a diffusion coefficient of 0.0004 Deg^2/Year.
This walks past the boundaries on the positive and negative excursions as time elapses. In terms of the real climate situation, those are likely reflecting boundaries due to feedbacks: S-B cooling on the high end and probably multiple effects on the low end.
IMO, this is the kind of simulation analysis that one would do to figure out exceedance probabilities. One can also do it via analysis and cumulative probabilities.
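A simulation of that kind is easy to mock up. The following is a hedged Python sketch (mine; the boundary value and threshold are illustrative choices, not the commenter’s actual model): let the variance grow as 2Dt, reflect at the boundaries, and count how often a century-long excursion reaches a given size.

```python
import math
import random

def reflecting_walk(n_years, step_sd, bound, rng):
    """Gaussian random walk reflected at +/- bound, a crude stand-in for the
    feedback boundaries described above."""
    x, path = 0.0, []
    for _ in range(n_years):
        x += rng.gauss(0, step_sd)
        if x > bound:
            x = 2 * bound - x
        elif x < -bound:
            x = -2 * bound - x
        path.append(x)
    return path

def exceedance(threshold, n_years=100, d_coeff=0.0004, bound=10.0,
               runs=2000, seed=7):
    """Fraction of walks whose net excursion after n_years reaches the
    threshold.  For 1-D diffusion the variance grows as 2*D*t, so the
    per-year step standard deviation is sqrt(2*D)."""
    rng = random.Random(seed)
    step_sd = math.sqrt(2 * d_coeff)
    hits = sum(
        abs(reflecting_walk(n_years, step_sd, bound, rng)[-1]) >= threshold
        for _ in range(runs)
    )
    return hits / runs

# With D = 0.0004 deg^2/yr, a 0.8 degC century-scale excursion is rare.
p = exceedance(0.8)
```

With this diffusion coefficient the century-scale excursions have a standard deviation of roughly 0.3 degC, so the simulated exceedance probability for a 0.8 degC change comes out small.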
There are two ends of climate science: temperatures and precipitation.
Precipitation is the uncertain end, as it affects temperatures.
Have you looked at the data by latitude rather than as a whole?
Largest mass…Most freshwater…Velocity differences…Salt concentrations….etc?
Not likely, as the velocity mapping was only just generated.
It does make for an interesting read.
I’m still left scratching my head here. In several areas I agree with Dr Tol, but in others I still disagree.
Overall, I think the paper adds relatively little. The attribution stage is just strange, narrowly comparing apples with pears and often not accounting for errors and bias from the estimation process, which most likely dominate the results.
I agree with Dr Tol that there is an interesting and important debate over whether “unforced”, or natural variability is best represented by an autoregressive (Markovian) structure or fractal (Hurst) structure. This paper clearly uses a Hurst structure and as an assumption, I think that is reasonable.
An important aspect of Hurst-Kolmogorov dynamics is that the Hurst exponent is scale-independent (from self-similarity). This is different from autoregressive functions, which are not scale-independent and will yield different results when viewed at different scales (and may indeed require different models – a true AR(1) series at a monthly scale necessarily becomes an ARMA model at the annual scale, along with a change in model parameter). But this is not true of the Hurst exponent, which (if correctly calculated and modelled) should remain consistent from monthly to annual scale. Indeed, Dr Koutsoyiannis has shown just this from satellite, instrumental, ice core and geological proxies for global temperature, all of which share a common Hurst exponent of the order of 0.9-0.98 from a scale of months to millions of years.
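The contrast with an autoregressive model is easy to demonstrate numerically. Here is a Python sketch (my own, with arbitrary parameters): the lag-one autocorrelation of an AR(1) series collapses under aggregation from monthly to annual means, whereas a true Hurst-Kolmogorov process would keep its exponent across scales.

```python
import random

def ar1(n, phi, rng):
    """AR(1) series with unit-variance innovations."""
    x, v = [], 0.0
    for _ in range(n):
        v = phi * v + rng.gauss(0, 1)
        x.append(v)
    return x

def lag1_autocorr(x):
    """Sample lag-one autocorrelation."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

rng = random.Random(3)
monthly = ar1(120_000, phi=0.8, rng=rng)
# Aggregate 12 "months" at a time into annual means.
annual = [sum(monthly[i:i + 12]) / 12 for i in range(0, len(monthly), 12)]

r_monthly = lag1_autocorr(monthly)  # close to the phi of 0.8
r_annual = lag1_autocorr(annual)    # much weaker: AR(1) is not scale-free
```

The aggregated series has markedly weaker lag-one correlation than the monthly one, which is exactly the scale-dependence that distinguishes Markovian from Hurst behaviour.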
A second point is the removal of the seasonal variation. If we are using the Hurst-Kolmogorov dynamics to represent unforced variability, it is essential that this is removed because the seasonal variation is a clear externally forced change. It also does not fit with the Hurst-Kolmogorov dynamics (clearly shown on a log/log power spectrum plot; the Hurst-Kolmogorov floor of natural variability appears as a straight diagonal line across the plot; seasonal variability clearly appears as a narrow spike above this floor). Remember that the Hurst phenomenon here is modelling the internal variability of the climate, as a stationary time series, and is not representing forcings on top of it. So removing the seasonal cycle is essential; with the seasonal cycle in place, the estimates of the Hurst exponent will be heavily skewed by this non-stationary component of the time series. That said, I strongly agree with Dr Tol that the consequences of such an operation must be taken into account in the analysis, and not just ignored. So my view is that some sort of operation like this must be applied, but the exact form and consequences must be understood. I suspect I sit somewhere between Dr Tol and the authors here.
When it comes to testing for the presence of a trend within self-similar data, DFA is not only an appropriate method to use, it is more appropriate than FA for estimating the Hurst exponent. The reason is that any non-stationarities in the data (e.g. the known seasonal cycle or postulated non-stationary trend) can skew the estimate of the Hurst exponent. Therefore, it is quite appropriate that DFA be used, as it was (for example) in Cohn and Lins 2005 GRL paper. These parameters can then be used to model the underlying stationary time series, and determine a probability (e.g. by Monte Carlo modelling, as conducted in C&L) that the trend present in the main series might be there by random chance alone. So I’m confused by Dr Tol’s emphasis on the D in DFA; it is in fact more suitable than FA in the case above. Perhaps I’m missing something else that has been done in this particular paper that makes it inappropriate.
But there is a lot to find wrong in the attribution sections in the papers. As noted, global temperature records (instrumental, satellite) typically have a high Hurst exponent (H~0.9-0.98) when using estimators which correct for bias etc., and it is no surprise that global (or large scale areal) proxy records are of this order also. Local temperature records often have lower values for H, and so it is no surprise to find this disagreement when they compare local records to large scale proxy reconstructions. As noted, other analysis along these lines (e.g. Halley 2009) shows that the proxy records have non-overlapping confidence intervals, i.e. they can’t all be right (or have significantly underestimated confidence intervals). The findings in these papers do not really address fairly obvious and likely problems with the analysis.
Spence_UK, on the issues I’ve examined, I agree with you, and consequently have little to add. However, I can explain the DFA situation to you. You have not missed anything about this particular implementation. You’ve simply (and quite understandably) misunderstood Richard S.J. Tol’s criticism. His position is that using DFA detrends the data, that all the later calculations done in the paper are therefore based upon detrended data, and that this is wrong.
Of course, that position is complete nonsense. That DFA detrends data in no way prevents the authors from using the original data in other calculations.
With trending data, the FA estimate of the Hurst exponent is biased. The DFA estimate is less biased (or perhaps even unbiased; I did not check this). So they correctly use DFA.
However, DFA gives information about the fluctuations around the trend. I would think that the trend itself is the parameter of interest.
OK I think I’m a little clearer on the differences now.
The fluctuations around the trend are relevant for characterising the null hypothesis – i.e. is there a trend or not. This is the focus of my discussion on how DFA is used, e.g. in Cohn and Lins. My interest is whether the criticism you have of DFA applies to other papers that I often rely on.
However, in this paper, they then go on to use the Hurst exponent as a diagnostic in determining why the 20th century is “different”. Which makes no sense, as the point of the Hurst exponent is to estimate the stationary part of the series.
One could make a case for a change in Hurst exponent as being evidence of a non-stationarity, but if this was the case this would also need to be tested against a null hypothesis of no change, which is not done at any point, not to mention addressing the apples-to-pears comparisons being made.
So am I right in thinking that your criticism of DFA in this case is with respect to the attribution aspect of the paper, not the use of such tools in testing the null hypothesis (i.e., is there a significant trend)?
Spence_UK, I disagree with your comment about their process making no sense, but I’m afraid I’m spread a bit too thin to discuss it in any detail. For the moment, all I can do is say it might be useful for you to see where the authors got the process from. The authors are not using a new approach.
(I find it interesting you mention Cohn and Lins and the paper I just linked to has it as a reference.)
I think there might be some confusion, partly because I have split the paper into two parts; one part where they apply the techniques of Lennartz and Bunde to the BEST data, and another part where they attempt to carry out attribution based on the results.
The part of the paper where they largely replicate Lennartz and Bunde’s methods I do not see a problem with. It wouldn’t be my preferred method, but I think the method is okay. It is the second part, where they attempt to perform attribution based on the Hurst exponent, that I am less convinced by. This is something that I do not think is taken from the Lennartz and Bunde paper, but is novel (and, IMO, not good analysis).
I am trying to get clarification from Dr Tol whether his criticism falls along similar divisions. If so, I am okay with it. If he also disagrees with (for example) Cohn and Lins or Lennartz and Bunde then I must be missing something in his claims.
The Lennartz and Bunde paper picks up on a long debate from Cohn and Lins 2005, including Rybski et al 2006, Koutsoyiannis 2006 and Halley 2009.
Spence_UK, I don’t believe there is any confusion about what you mean, but before I get to that, I should point something out. These papers did not use data from BEST. They only used stations from GISS.
With that out of the way, the “attribution” part of these papers is definitely taken from the Lennartz and Bunde paper. Its section four is all about attribution, and its conclusion section specifically discusses the probability of observed trends being anthropogenic. I don’t see how you would conclude the attribution part of the papers being discussed in this blog post are novel.
My apologies. I read the first guest post with lots of references to BEST, but I see that was only with regard to the timing of the paper. Hmm. You are correct, the paper uses the GISS data set. This doesn’t materially change anything I was discussing.
The attribution sections are quite different between the two papers, from my reading. Lennartz and Bunde merely try to apply confidence intervals to any trends that reject the null hypothesis at some level, but do not attempt to link those back to (e.g.) paleo temperature or solar reconstructions, which is where I think the problems arise in the LLE paper. One could argue that the use of the word “anthropogenic” in Lennartz and Bunde is a loaded term and inappropriate in their analysis.
Spence_UK, I agree the difference between them using data from BEST and GISS doesn’t change anything we’re discussing. I just thought it was something worth noting.
In any event, I now see I was confused. Most of the discussion on this page has been limited to the first portions of the papers (the first three sections). Because of this, when you said “attribution,” I thought you meant attributing the warming to anthropogenic influence. That’s what the Lennartz and Bunde paper covered, hence my disagreement with you. Of course, once you start talking about things like solar influence, you’re definitely beyond what L&B covered, but you’re also beyond what Richard S.J. Tol and I have been disagreeing about.
Sorry to have misunderstood you.
The thing I find troubling is I cannot even figure out what data they used.
If they used what I think they used, then I’d say GIGO.
color me unimpressed.
Sure. DFA is a valid statistical method. Like all methods, it should be used appropriately.
However, DFA gives information about the fluctuations around the trend. I would think that the trend itself is the parameter of interest.
Repeating myself from above, they did a hypothesis test, not an estimate of a trend. The result of their hypothesis test was that there is little evidence for the existence of a trend to be estimated.
Richard Tol began his essay above with the following:
“There has been some brouhaha over a guest post by Lüdecke, Link, and Ewert. I think the quality of work is so bad that Judith should never have spotlighted these papers. Publicly berating Judith, I of course drew more attention to their work, and her response added to that.
So why I do think this work is so bad it should be ignored?”
In these few words, Tol dismisses the litany of outrageous moral claims that he made over the preceding two days as he argued that Dr. Curry had made some serious mistakes in posting material by Lüdecke, Link, and Ewert. He seems to believe that a post from him on his criticisms of Lüdecke, Link, and Ewert will set all things right and put God back in His Heaven. In other words, he believes that he can make all sorts of outrageous moral demands on Dr. Curry and the participants in her blog, refuse to explain or defend his moral demands, and remain free of criticism because he offers a post that is critical of the science. I cannot accept his implicit claim that his scientific offerings bring with them redemption from his moral errors.
Tol’s most outrageous claims, in my opinion, are that Dr. Curry speaks authoritatively through her blog and that she made some kind of moral error by “lending” her authority to Lüdecke, Link, and Ewert when she posted their work. In response to my criticisms of the first claim, Tol wrote that “I indeed think that Judith Curry is an authority. I also think that others think the same.” Clearly, Tol does not believe that there is a need to explain why he believes that Dr. Curry acts as an authority through her blog. Apparently, he believes that the fact that he says she is makes it true. We do a disservice to scientists and to science if we permit scientists to remain wholly inarticulate on moral questions. We do a double disservice if we permit them to insist that discussion of science must be the only focus of blogs such as Dr. Curry’s and that any means may be employed by scientists seeking to post so long as they believe that they further the discussion of science.
One purpose of Dr. Curry’s blog is to learn about communications between climate scientists and the general public. I believe that Tol’s example establishes that some scientists believe that they are permitted to dismiss any questions from the public that do not bear in the most direct way on their scientific claims, that they are morally justified in this act of dismissal, and that they have no duty to explain why they are morally justified. I cannot imagine a better definition of unbridled arrogance. I believe that we have learned that some scientists cannot avoid arrogance in their communications with the public. Needless to say, such arrogance does not bode well for climate scientists who believe that their main problem has been a failure to get their message to the public.
Science is amoral, but being truthful is a moral essential to science, and we expect scientists to be truthful to themselves. Richard Tol was being truthful to himself when he took Judith Curry to task for presenting the material by Lüdecke, Link, and Ewert. He believed it was the wrong thing for her to do and he explained why he thought so. Could he have been less abrasive? Certainly, but I don’t think that’s a moral issue.
Being truthful to one’s self won’t get you very far in ethical theory unless you are a romantic of the Byronic sort.
The man is simply incapable of or unwilling to engage in moral discourse except when he is on the offensive against someone like Dr. Curry.
“make all sorts of outrageous moral demands”
It’s really telling that you regard having the basic integrity not to promote dishonest and incompetent analysis as an “outrageous moral demand.” That implies that lying is at the heart of your argument, and the most basic of principles — telling the truth — has been rejected by you. Which is sad.
Please address his claims about Dr. Curry. That is the topic.
I’m glad Dr. Tol gave us his viewpoint. I’m just sorry he accused Dr. Curry of disinformation. Surely Dr. Tol knows that the purpose of her blog is to promote open discussion?
Anyway, I think Dr. Tol now knows that instead of name calling, he does better to use his exceptional brain power to explain why a study is bad. Keep it up, Dr. Tol, in this new direction of yours!
Your morning update:
Famous climate scientist (CS) and telescope minion schooled by real statistician. Major criticism of LL by famous climate scientist in doubt. Famous CS and minion tacitly admit they do not have a clue about what the bounds of natural variability might be. Thus the ruling out of natural variability as the cause of the recent temperature rise must itself be ruled out. Stay tuned. Authors of LL to make appearance. Famous CS may get tired and go back to bed.
Great post! Give us more detail.
Don Monfort, I assume you are referring to Richard S.J. Tol when you say “famous climate scientist,” but who are you referring to when you say “real statistician”? From what I’ve seen, the only person to make any serious effort to dispute Tol’s criticism(s) has been me.
But you’re not calling me a “real statistician,” are you?
You will do until one comes along. It is apparent from his unsupported arguments that the famous climate scientist does not fit the description. Why won’t those guys pay for professional advice? I know they hire lawyers when they want to sue somebody for defamation.
Yikes, you were referring to me. I know I’ve handled statistics better than Richard S.J. Tol on this page, but that doesn’t make me any sort of statistician. It makes me a person who is able to read sentences for what they say. It takes no special knowledge to know Tol’s comments on DFA are completely bogus.
In any event, thanks for the compliment, I guess.
It’s relative Brandon. I am not accusing you of misrepresenting your qualifications. Compared to the average famous CS, you are a real statistician. I often take literary license. Are we OK now?
Don Monfort, I know you weren’t accusing me of anything. I was just surprised as I don’t think I’ve ever been called a “real statistician” before, and I’ve been called lots of things.
By the way, that I was surprised doesn’t mean I was bothered by it. We’ve been “OK” all along.
OK, you are a liar. That’s why nobody here likes you. Well, Josh maybe.
Josh is a swell fellow compared to Robert.
I’m sorry, did calling you out on your lies hurt your widdle feelings?
Better toughen up.
Update on update:
Still no apology for boorish behavior from famous CS.
Not so great post. Try: Still no sign that famous CS is not completely inarticulate on moral questions.
You will have to speak with my editor about that. But I think he prefers to keep it simple.
Still no recognition by boorish (and overly prolific) poster that the blog host explicitly specified that this is a technical thread and that comments like his belong elsewhere.
You are short one string. Your comment is not technical in nature. I am filling in for Josh. Your hall monitor merit badge is in the mail.
Richard Tol is an economist.
“Richard S. J. Tol (born 2 December 1969, Hoorn, the Netherlands) is a Research Professor at the Economic and Social Research Institute, Dublin, Ireland, where he works in the research areas of energy economics and environmental economics. He is the professor of the economics of climate change at the Vrije Universiteit Amsterdam, an adjunct professor at Trinity College Dublin and an associate at Hamburg University and the Hamburg Centre for Marine and Atmospheric Science. He is a member of the Academia Europaea.”
I will continue to refer to him as a famous CS. He is probably as qualified to pontificate on climate science as most of the rest of them.
1969! I have socks older :)
Here is an example of why scientific studies are often misleading:
Consider the use of drug tests to detect cheaters in sports. Suppose the test for steroid use among baseball players is 95 percent accurate — that is, it correctly identifies actual steroid users 95 percent of the time, and misidentifies non-users as users 5 percent of the time.
Suppose an anonymous player tests positive. What is the probability that he really is using steroids? Since the test really is accurate 95 percent of the time, the naïve answer would be that the probability of guilt is 95 percent.
Now suppose, based on previous testing, that experts have established that about 5 percent of professional baseball players use steroids. Now suppose you test 400 players. How many would test positive?
• Out of the 400 players, 20 are users (5 percent) and 380 are not users.
• Of the 20 users, 19 (95 percent) would be identified correctly as users.
• Of the 380 nonusers, 19 (5 percent) would incorrectly be indicated as users.
So if you tested 400 players, 38 would test positive. Of those, 19 would be guilty users and 19 would be innocent nonusers. So if any single player’s test is positive, the chances that he really is a user are 50 percent, since an equal number of users and nonusers test positive.
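The arithmetic above is just Bayes’ theorem applied to the test’s sensitivity and false-positive rate. A small Python sketch (using only the numbers from the example; the function name is my own) lets you replay it with other prevalences:

```python
def positive_predictive_value(prevalence, sensitivity, false_pos_rate):
    """P(actual user | positive test), by Bayes' theorem."""
    p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate
    return prevalence * sensitivity / p_positive

# The example's numbers: 5% prevalence, test 95% accurate both ways.
print(round(positive_predictive_value(0.05, 0.95, 0.05), 3))  # 0.5

# The same answer by counting 400 players, as in the bullets above.
users = round(400 * 0.05)                 # 20 users
true_pos = round(users * 0.95)            # 19 users correctly flagged
false_pos = round((400 - users) * 0.05)   # 19 non-users wrongly flagged
print(true_pos / (true_pos + false_pos))  # 0.5
```

Drop the prevalence to 1 percent and the same 95-percent-accurate test flags an innocent player about five times out of six, which is the point of the example.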
This is a great example, well explained. I’m sending it to my non-statistically minded friends.
But the example does not illustrate “why scientific studies are often misleading” – Rather, it illustrates that “the naïve answer” is sometimes not the correct answer, and that a careless reading of scientific studies can produce scientific garbage.
What it really shows is that framing the question is as important (maybe more important) than getting the correct answer. Where the climate debate (as opposed to climate science) usually goes terribly wrong is when the wrong question (i.e. “has the earth warmed over the past century”) is posed. See: “straw man”.
I think it may illustrate that fred berple has a misconception.
He says: “Suppose an anonymous player tests positive. What is the probability that he really is using steroids?”
He also says: “So if any single player’s test is positive, the chances that he really is a user are 50 percent ….”
An individual player does not have a steroid-using probability. He is either using steroids or not using steroids.
Of course any individual is either using or not.
But unless you happen to be omniscient, the salient issue is what facts we know, given the test results.
We’re in Heisenberg country now.
It could be stated as: “What is the probability that an accusation of steroid use, made against a positive-testing player, would be accurate?”
I’ll have you know I resemble that remark.
That doesn’t make the scientific study misleading; it makes the results easy to misinterpret. That’s not at all the same thing. What you are describing is a basic example of the application of Bayes’ theorem to convert sensitivity and specificity into positive and negative predictive values.
These things are not misleading or even difficult if you know what you are doing. The problem comes when people who are ignorant of the basic tools and concepts of science want to argue about science, but they don’t want to do the work involved to become a knowledgeable self-taught amateur scientist.
This leads to a number of habitual mistakes and errors in judgment, including but not limited to misunderstanding the results of scientific studies, mistaking their own ignorance for a problem with the science, inaccurate and biased evaluations of the science, etc.
It’s a great help in analyzing noisy, chaotic data when you know ahead of time the answer you’re looking for.
There is nothing either noisy or chaotic about the effect of prevalence on pretest probability and thence on the predictive value of a test (or exam finding, or whatever).
You just don’t know what you’re talking about.
What the example shows is that an imperfect test (show me a perfect one) can deliver misleading results if you don’t already know the properties of the sample.
Since you are doing the analysis to uncover the properties of the sample, it seems reasonable that you probably don’t know the properties.
For example, in the case of the baseball players and drug testing, if you don’t know ahead of time how many are using drugs, an imperfect test can deliver a misleading result.
I think the thing that hasn’t been emphasized about the LL paper is that it gets the Hurst coefficient for individual station records, not a global mean. In fact they say that the standard deviation of the global mean is not large enough, which to me means it is not a noisy enough time series (e.g. see how BEST looks quite smooth compared to a typical station record). In a sense their result about the station temperatures meandering more than trending (my interpretation of their conclusion) is not surprising. It is like looking for a global warming signal at one station, which is hard to find because the local and natural variability is much larger than the expected warming signal. This is exactly why no one does it that way; it is obvious by visual inspection, but LL showed it statistically.
Very good point and unfortunately lost in the noise of the discussion. I suppose that is why, even if we have a long-term time series such as the ice core data to do the DFA, the fact that it is a single location means that it won’t average out to the asymptotic central limit theorem result that you would get for a global set.
Will this make the estimates more optimistic or pessimistic for exceedance probabilities? On the fence on this because the item of interest is more conservatively tabulated via approaches such as kriging analysis, yet the paleoclimate data may show fatter tails.
Maybe for longer periods, time-smoothing is a proxy for a spatial average, but is an imperfect one subject to local peculiarities, such as nearby ocean current changes.
Thanks for bringing that up. I noticed that; it’s one reason I was not inclined to spend any more time on it. But the further discussion here has been illuminating on some of the background, so it’s all good.
This is the quote I found puzzling as to why they don’t trust global data as much as station data.
“It is stressed that in general the applied procedures to establish global records from local ones result in unrealisticly small values of the standard deviation σ. This becomes in particular obvious regarding the BEST curve by eye. Therefore, in general, we assume that globally averaged records are not feasible for an autocorrelation analysis.”
Jim D, the authors explain why they feel global records have an unrealistically small standard deviation. Shouldn’t you address the reasoning they give?
Surely a small standard deviation is an advantage when looking for a signal. It results from canceling random local perturbations. This is why I don’t see their point.
Jim D, it is an advantage, but only if the standard deviation is accurate. The authors claim global records have an unrealistically small standard deviation, which means the standard deviation would be inaccurate (if they’re right).
Speculation on their part. I would associate large standard deviations with uncanceled errors and small ones with cancellation of errors.
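That cancellation is easy to demonstrate in a toy model. The sketch below is my own illustration, and it assumes the stations are independent with equal variance, which real, spatially correlated stations are not; under that assumption the averaged record’s standard deviation shrinks by roughly the square root of the station count:

```python
import random
import statistics

random.seed(0)
n_stations, n_years = 100, 200

# Toy model: each "station" is independent unit-variance noise.
stations = [[random.gauss(0.0, 1.0) for _ in range(n_years)]
            for _ in range(n_stations)]

# A "global" record is the plain average across stations at each time step.
global_mean = [sum(s[t] for s in stations) / n_stations
               for t in range(n_years)]

sd_station = statistics.pstdev(stations[0])
sd_global = statistics.pstdev(global_mean)
# For 100 independent stations the averaged record's sd shrinks ~10x.
print(round(sd_station, 2), round(sd_global, 2))
```

Whether the shrinkage in real global records is “realistic” or an averaging artifact is exactly what is in dispute; spatial correlation makes the effective station count, and hence the reduction, smaller than in this toy case.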
“the authors explain why they feel global records have an unrealistically small standard deviation.”
The authors say: “The global temperature increase in the 20th century is mainly evaluated from global temperature records that are established with special algorithms […]. On its own, however, local records show a different picture, since cooling, not warming, is found for a high percentage of temperature stations worldwide. We argue that global records are not a feasible tool to examine global temperatures. First of all, the homogenization and grid box procedures used to establish global records are inevitably arbitrary and, therefore, could cause quite different outcomes when applied to identical sets of local records. Secondly, and of main interest here, establishing global records attenuate the extremes of uncorrelated local records. As a consequence, the standard deviation, which is a decisive attribute in our analysis, becomes unrealistically small in global records.”
Their assertion that the different processings “cause quite different outcomes when applied to identical sets of local records” is not true. There are four estimates of global average land surface temperature now and they all agree closely since about 1900. The standard deviation is remarkably insensitive to the methods applied, even if they are, as claimed, arbitrary.
There are no grounds for claiming that the standard deviation is unrealistically small. Their analysis doesn’t provide them because the evidence they show suggests that they have a very biased sample that is not globally representative – it’s clustered at mid-latitudes in the northern hemisphere.
There’s nothing wrong with looking at local records, but their rationale seems a bit thin. The downside of looking at local records is that they maximise the noise to signal ratio (or the natural to anthro ratio), so the chances of rejecting a sizeable anthropogenic trend by their standards are pretty high.
It looks as if they find a detectable trend in a fair proportion of stations, which one could, if so inclined, take as positive evidence of a signal.
Neb, your assertion that they asserted that:
“the different processings “cause quite different outcomes when applied to identical sets of local records”,
is simply not true. You omitted the word “can”.
sorry, the word you omitted is “could”
Thanks for pointing that out. The conditional version of the sentence is trivially true. One could arbitrarily add a random factor to each series. In practice, when more care is taken, and the records are corrected in a manner that, despite containing arbitrary elements, is motivated by a physical understanding of the problem, the results are remarkably consistent.
One could also point out that using uncorrected series is equally ‘arbitrary’ and has the additional and obvious demerit of being the wrong thing to do.
It’s clear you’ve got Professor Tol back on his heels. Superb job of holding his feet to the fire (probably one too many foot metaphors :-) . I can’t follow much of the technical back and forth, but it’s more than clear he’s not able or willing to answer your questions. Fabulous job.
I really think Brandon should petition JC to do his own guest post.
I’m flattered, but I don’t think there would be anything I could discuss which would be worth a blog post. As it stands, the disagreement between Richard S.J. Tol and me is really quite simple. He leveled a criticism against the authors. I say his criticism is untrue, and that it is based entirely upon him making things up. For all the back-and-forth between us, it is really that simple. Of course, there is much more I could say about his behavior, and the fact that its ridiculousness has prevented any sort of resolution of this dispute, but that wouldn’t be good material for a blog post.
Then again, I guess I could always petition Judith Curry to let me do a guest post on some other subject. Unfortunately, I doubt I could find anything which would really fit on this blog.
He would likely score big points from a high school debating judge, but unfortunately this is the internet, we are not in high school, and I couldn’t care less about proper rules of etiquette for debating.
Superior knowledge and adherence to the proper rules of logic don’t scare you either. Right, telescope?
Just curious – if you can’t follow the technicalities involved, how do you draw such a conclusion?
Aye, and that’s the reason I made my comment. I can see through the debating tactics, and don’t get swayed by something that to me looks like the red-herring strategy of a Chewbacca defense.
“Looky here, Richard S.J. Tol said this here, and then he said this. Ladies and Gentlemen. This does not make sense!”
… and it goes on and on. Who cares, this is the internet and statements typed at the keyboard are often ambiguous the first time through.
To me, Tol has been consistent in his initial argument, picked up some ammo along the way, and now has a pretty bullet-proof critique against the Ludecke papers.
Joshua, I knew someone would ask that even before I pushed the post button. Despite my lack of technical expertise, I can still read English. The man is not answering direct questions. It doesn’t take a rocket scientist, or a statistician, to recognize that.
pokerguy, I could actually post a “simple” explanation of the technical details if it would be useful.
Then do it, and I bet it won’t come out any different than what Tol has already explained. You are just trying to pick up debating style-points, yet no one is counting.
I also thought about that. But having engaged with Brandon in debates in the past, I’m not inclined to make the conclusive judgement that you made without being able to follow the technicalities.
It is possible in debates to claim that one’s questions haven’t been answered, when: (1) the questions have been answered or, (2) the questions are essentially irrelevant, or essentially straw men, etc.
In other words, when did you stop beating your wife?
I can’t evaluate whether that is the case in this debate – but I can notice comments that Brandon makes such as:
Now I have read Brandon making similar statements, numerous times, in other situations where I could clearly discern that they weren’t accurate. And I know that he’s made similar statements towards me that weren’t valid. And I have seen him do this even as he claimed that he wasn’t engaging in counter-productive debating tactics.
So – in the end, because I can’t follow the technical debate I am reluctant to draw conclusions because I don’t always find Brandon’s statements to be valid, nor am I inclined to assume his accusations to be accurate.
So for me, as someone who is neither a rocket scientist nor statistician, the simple ability to read English doesn’t enable me to evaluate the context sufficiently to draw the conclusion that you drew – and I have to wonder whether your conclusions might not be influenced by your confirmation bias. If you’re comfortable with your conclusions, so be it.
Second try – if this is a duplicate, my apologies, and I hope the duplicate can be deleted.
Earlier, I provided evidence to refute the authors’ interpretation that twentieth century temperature data are predominantly due to “natural fluctuations” as opposed to “external trends.” This interpretation is in paper LL. In LU, dealing with earlier climate history, they contradict themselves and exclude natural fluctuations as the main factor, but because their twentieth century interpretation is wrong on the basis of climate data, I thought it worthwhile to recapitulate some of the problems arising from their misconceptions about climate behavior. For this purpose, I’ll quote from their earlier post on their papers, with my commentary interspersed in italics.
“Temperature records are persistent (long-term correlated). This is well known since a warm day is more likely to be followed by another warm day than by a cold day, and vice versa. Short-term persistence of weather states on a time scale of days until several weeks is caused by general weather situations and meteorological blocking situations. However, the causes of long-term persistence over many years and even several decades are largely unknown. Persistence – a purely natural phenomenon – is measured by the HURST exponent α and is explicitly opposed to external trends like the UHI or warming by anthropogenic CO2…”
Persistence is in no way antithetical to externally forced trends, including those mediated by anthropogenic greenhouse gases. In fact, the opposite is somewhat more likely. If the climate warms this year from some unspecified “natural” but unforced fluctuation, it will be subject next year to the cooling influence of the Planck Response. Some persistence via feedbacks is possible, but without postulating impossibly high climate sensitivities irreconcilable with observed climate behavior, the overall effect over time will be an anti-correlation – warming one year leading to cooling in a subsequent year or shortly thereafter. The circumstance under which long term persistence will be most likely involves the operation of an external forcing that maintains a radiative flux imbalance driving a continued warming. Increasing CO2 concentrations are a salient example, although other forcings, including “natural” changes in solar irradiance, can also supply the external influence needed for a long term trend.
“Both autocorrelated real temperature records without external trends and autocorrelated synthetic temperature records that can be generated by special algorithms are denominated as ‘natural’. As the main feature of autocorrelated natural records, extremes arise that seem to be external trends. This poses a fundamental problem because without further effort, an external trend and an apparent ‘trend’ that is caused by persistence are not distinguishable…”
As mentioned above, there is nothing necessarily “apparent” (unreal) about a trend due to persistence.
“Fig. 4: A synthetic purely autocorrelated (natural) record that nevertheless seems to be determined by external trends…. The method of […] that tackles this problem is based on the assumption that an observed real record has the following two constituents: a natural part, which is governed by autocorrelation; and (possibly) an external trend… one has to check whether the value of W is below a defined confidence limit. If this is the case, the observed real record is gauged to be determined by an external trend. Otherwise it is assessed as ‘natural’…”
If I understood the above correctly, the misconception is repeated here except that the synthetic record utilized a “purely autocorrelated” record to generate what the authors claim are spurious trends. Their assumption seems to be that autocorrelation somehow “just happens naturally”, but of course that is false. Whether the phenomenon is persistence in excess of an exponentially declining autocorrelation function or simply a more ordinary autocorrelation, climate physics require some type of persisting physical mechanism to sustain the effect from one year to another, because otherwise, one year’s effects are likely to mediate counteracting effects by the next year, or if not then, within another year or two. In fact, a synthetic simulation of an external trend based on the imposition of autocorrelation is evidence for rather than against an external forcing, because autocorrelation would be minimal in its absence.
Despite the above, I should mention again that apparent surface temperature trends of medium length might in theory occur by chance from an unlikely run of independent natural factors. The conclusive evidence that the paper was wrong was provided by the data showing progressive addition of an enormous quantity of heat to the ocean during the interval when temperatures rose. That appears to exclude anything other than an externally imposed persistent forcing, but I did invite anyone uncertain about this to attempt a reconstruction that would recreate the observed ocean heat changes with parameters consistent with climate behavior.
I would certainly welcome comment from the Hurst exponent/autocorrelation experts.
Got me thinking there Fred. For a given empirical time series, the overall trend is indistinguishable from a natural frequency of that period or longer. So if we have a time series of length L and a trend that extends over L, then the natural frequency that can constitute that trend has that period or longer. So Ludecke removes that trend and then tries to dig it out of the random walk distribution (Brownian or Hurst) and should find that it is a very low probability. If it was a high probability, they wouldn’t have the statistics to prove it because they just removed the main statistical component from the time series!
That is why Tol (and I) believe they are throwing away the vital information, the “a_i” trend coefficient from the DFA analysis. More importantly, they are making things up if they say that DFA identifies the current warming as a high-probability natural variation. This is what they say:
If the trend was a high probability, it would also need huge error bars, since they removed that trend’s natural frequency, making it almost a tautologically futile exercise to prove that it is entirely a natural warming, as per what Tol has stated. The DFA is at best a perturbation exercise and allows one to estimate the small probabilities of exceeding some temperature change.
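Since much of this thread turns on what DFA actually measures, here is a minimal DFA-1 sketch in plain Python (my own illustration, not the authors’ code). For uncorrelated noise the fluctuation exponent alpha comes out near 0.5, while persistent series give alpha above 0.5:

```python
import math
import random

def linfit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def dfa_alpha(x, scales):
    """DFA-1 fluctuation exponent of series x."""
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                       # integrate the mean-removed series
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in scales:
        n_seg = len(profile) // n
        msq = 0.0
        for i in range(n_seg):        # linearly detrend each window
            seg = profile[i * n:(i + 1) * n]
            t = list(range(n))
            b, a = linfit(t, seg)
            msq += sum((v - (a + b * tt)) ** 2
                       for tt, v in zip(t, seg)) / n
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(msq / n_seg)))
    alpha, _ = linfit(log_n, log_f)   # log-log slope of F(n) vs n
    return alpha

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(8192)]
alpha = dfa_alpha(white, [8, 16, 32, 64, 128, 256])
print(round(alpha, 2))  # ~0.5: no persistence in uncorrelated noise
```

Note what the sketch makes explicit: alpha characterizes the fluctuations around the per-window fits; it is not, by itself, an estimate of any deterministic trend.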
Richard Tol, I must leave. Thank you for your time and effort. To me, your criticisms were on point but not dispositive. The paper was in fact better than others that have been published in the peer-reviewed literature and cited in support of AGW. Had the authors submitted the papers to Science instead of E&E, I am sure we would not have heard of them for at least 2 more years. That would have been a mistake.
Which is the opportunity to thank JC.
Of course you make some good points. I struggle just like everyone else when it comes to sorting out matters of ego and bias.
The Change in the Observed Global Temperature is Natural
That is because only a single pattern exists in the global mean temperature data.
This pattern is an overall warming of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years as shown.
The oscillation is due to ocean cycles as shown in Figure 4 of the following paper:
These ocean cycle details have been described as:
Observed global mean temperatures are below IPCC projection for the case emission had been held constant at the 2000 level as shown below:
Observed global mean temperatures match Hansen et al projections for the case climate forcing had ceased to increase after 2000 (Scenario C) as shown below:
Observed temperatures do not show the greenhouse fingerprints of hot spots in the mid-troposphere as shown below:
There is no evidence of AGW in the global temperature data.
Here you go.
BEST exaggerates the slope but also increases the confidence, which clarifies the relationships and allows us to conclude with more precision the clear AGW signal post 1970 correlated with CO2.
Note also the incredibly reliable sunspot inverse relationship until some perturbation around 1940 began to cause the relationship to degenerate and fishtail.
This is a signature of disruption of an ergodic pattern in a Chaotic system under external forcing.
The inverse-sunspot relationship re-asserts itself briefly until about 1960, predictably (by Chaos Theory) exhibiting period-doubling and then turbulent breakdown, entirely replaced by the CO2-driven rise.
Note, however, the small amplitude accounted for by the solar signal: sunspots account for about an order of magnitude less than the size of the global temperature swing of the last half century. This implies that oscillatory (as the 2005 paper you referred to suggests), aleatory, and ontic variations also contribute, though only a small amount, to the variation.
The pre-1960 rise may have been dominated by UHI and changing land use (as rural-urban is a bigger change than urban-urban++), aerosol and albedo changes and the like, but CO2 and its feedbacks mathematically must break through as the cumulative major effect, and 1970 appears to have been when that happened.
You want a test you can do right now that will confirm what I say?
Look in the eye of anyone with no melanin in their iris.
Is it blue or green?
What causes that? It’s exactly the same physics as is observed in GHGs.
Blue eyes prove AGW.
Your graph only proves the eye can be fooled by any seductive curve in chronic delusional misidentification syndrome, an insidious pathological condition.
Land is only 25% of the globe. Don’t ignore the oceans. I won’t look at BEST data until it includes them.
A weather station is an incredibly small thing. The total surface area covered by only the thermometers of under 40,000 stations, at about 1 cm each, is vanishingly small, whether you include buoys or not.
Yet we must regard the measurements from this infinitesimal area (indeed, because it is so small, it is more useful) as representative of the whole, a principle applied throughout all measurement in the world in every field, as confirmed by established statistical methods; methods in which, as far as I can tell, you repeatedly demonstrate no interest, no understanding, and no use.
We know land-only measurement introduces a bias. We can estimate the degree of that bias, its rate, its direction, its weight in estimating the actual combination of SST and BEST, its impact on CI. We understand why mechanically oceans lag in expressing heating while the ice packs melt and deep mixing increases marginally, compared to land — in exactly the same way we understand why the quasi-60-year ‘period’ you propose does not actually match the major ocean oscillations.
(To remind you, as once you tacitly acknowledged it, the fact of the shift of the length of your ‘period’ between 40-ish and 70-ish years itself disqualifies periodic fit entirely.)
Looking at the SST datasets, we see no cause to believe BEST does anything but emphasize the post-1970 trend, while making the post 1950 CI so significantly improved as to allow us to reliably draw conclusions from global 13-year trends on BEST, where trends under 17 years on all other global datasets are, in isolation, meaningless.
I don’t ignore the oceans. I use BEST to simplify the model to allow me to better resolve underlying variables.
There is none so blind as he who chooses not to look at BEST data.
All that said, the above exercise is just a demonstration, and not intended as argument that what I say is validation of AGW. (That is better done with other analyses, I think.) It is just meant to show how entirely lacking what you say is.
Bart R, why not do this quantile regression MattStat is suggesting?
Also, is there a reason you cannot do the same analysis on WfT using BEST’s gissin’ cousin: Gistemp? A. Lacis seems to think adding oceans to those so proficient on land is not going to change the result.
There you go. Your dataset of choice. Only affirms the effect again.
Will you stop spreading these invalid analyses of yours that only obfuscate?
What will it take for that to happen?
Time will not, as you say, tell; by Bayes’ Theorem we would need to wait potentially generations; your cooling wager, win or lose, means nothing.
When will you stop this pathological behavior?
..is there a reason you cannot do the same analysis on WfT using BEST’s gissin’ cousin: Gistemp?
No reason at all. Feel free to do so yourself. You’ll get about the same result, though the offset and scale factor for Mauna Loa may need tweaking.
I favor BEST in this case because it’s a longer record that demonstrates inverse correlation on the Hale cycle, and the land/sea differences are immaterial to the narrative presented with the graph.
Graphs are just pictures that go with stories, there is a large aspect of fiction to all of them. Sometimes, the fiction is just a story. Sometimes it means what it looks like.
A. Lacis seems to think adding oceans to those so proficient on land is not going to change the result.
Done correctly, I’m sure this could be correct, except of course for the properties that will change depending on the judgements made by the compositor. I’m unwilling to do so myself; WfT doesn’t have a utility for combining curves in any event, and if the land/sea difference is important, both can be plotted on the same graph anyway.
Bart R, why not do this quantile regression MattStat is suggesting?
Joe’s car is having engine trouble. Joe goes to Dr. Bill, his surgeon friend, and asks, “Doc, put my engine into your MRI so I can see what’s going on inside.”
Dr. Bill says, “Joe, your engine is magnetic, it’d break the MRI and all that would come out is random noise.”
Joe’s reply, “Doc, let’s not worry about breaking the MRI; so long as there’s something coming out, there’s a chance it might be right!”
Girma’s point may be right.
No one in the world is qualified to say Girma’s _belief_ is wrong.
Incredibly unlikely, given the evidence, but wrong? Not possible to say, so far as I know.
Girma’s _methodology_? That’s wrong. It’s manifoldly wrong. It’s technically incorrect seven ways from Sunday. It fails as graphical analysis, it fails as statistical analysis, it fails on Bayesian principles. It fails on logic. It fails on proper treatment of evidence. The numbers don’t even add up.
Sure, bits and pieces of Girma’s presentation are correct.
That’s hardly surprising, as they’re generally widely borrowed from others’ presentations (usually unrelated to Girma’s claims) without attribution.
Which I believe fails on academic ethics, especially when combined with Girma’s habit of recycling past presentations that have been reviewed and criticised and found seriously flawed, without reference to this shoddy provenance of his work.
These last three threads in Cliff Notes format:
Our statistical massaging of sparse, incomplete, uncertain data is better than your statistical massaging of sparse, incomplete, uncertain data.
No it isn’t.
Yes it is.
It’s like I walked into a biker bar to find the bikers doing a bar fight lip-sync style. They look tough. They talk, well, prissy. They never land a punch.
Walked back outside and the sign says:
The Growling Poodle:
where statistician bikers meet
And the bikes are all Vespas.
How are the last three threads substantively different from the run-of-the-mill Climate Etc. thread?
Brandon S. wrote:
I’m sure many of us would appreciate that greatly. Please do! Generous of you to take the time…
Seconded. I would absolutely love to have the simple version.
And by the way, Brandon, I’ve made a mental note never to cross swords with you unless I am very sure of my facts! :-)
I posted an explanation here. I tried to keep it as simple as I could, but let me know if I need to clarify anything. Often, what seems “simple” to one person isn’t simple to someone else.
Those who keep complaining that non-technical comments are appearing on a technical thread need to read the set-up of Dr. Tol’s post. This is an editorial. A technical discussion would have started here: “There are two papers.” This diatribe is more a critique of free discussion than it is of the two papers, which according to the good Dr. should be ignored. He will probably end up wishing that he had ignored them.
I imagine he already is.
You’re making it hard on those of us who aren’t retirees with too much time on our hands. But by all means, let me help you:
JC note: This is a technical thread that will be moderated for relevance. Make your general comments on EIKE or disinformation brouhaha on one of the previous threads.
Not that I’m surprised that you don’t care what the blog host (who also has a day job and not enough time to play ‘hall monitor’) has explicitly asked.
You need a more fulfilling hobby.
I am not retired. I just have a lot of money. I hire people to do the work.
It seems that all you have to do is to try to dog me. But thank you for the publicity. Try to make your next post on topic. So far you are an o-fer.
Frankly, I think that the statisticians are making an ungodly mess of the entire global temperature measurement issue. To answer the question, “what is happening to the climate of the earth”, all you need is a small representative collection of pristine thermometers well distributed about the earth. No place on earth is going to have its own temperature for very long. Convection will see to that. Then simply take that small number of pristine thermometers (50 – 100) and compute a simple average anomaly. That would be far more trustworthy than all the irrelevant garbage, manipulation, and endless adjustment that the statisticians want to do – namely, methods that can prove anything anyone wants, depending on their agenda. Frankly, I think the world is sick and tired of the statisticians and their little egotistical squabbles.
Well said. I cannot imagine that by statistically analyzing temperature data, the complex climate thermodynamics can be sorted out.
You can get rid of the statisticians but you can’t get rid of the random variability. In no time at all you’ll have a passel of infantile statisticians.
“You can get rid of the statisticians but you can’t get rid of the random variability.”
Really it is not random but follows the laws of physics that we do not have at this time and the existing science has failed. Consequently, statisticians have good reason to get involved in the climate issue, because of both its importance and the desperation of the situation. There is no way that the causes of global warming can be figured out by statistical analysis of surface temperature data.
Even the surface temperature cannot be figured out by statisticians. Only good thermometers can give it to us. Statisticians can’t make good thermometers out of bad. They can apply, “corrections”, and “adjustment” based on assumptions and preferences. But regarding global temperature, they can’t help.
Nabil Swedan: Really it is not random but follows the laws of physics that we do not have at this time and the existing science has failed.
Empirically, that is a distinction that cannot be maintained. You paraphrase Einstein: “God does not throw dice.” But we cannot tell from present evidence whether God does or does not throw dice; that was pointed out to him. All we know is that all research produces random variation, and the variation frequently is describable by “distributions”.
There is no way that the causes of global warming can be figured out by statistical analysis of surface temperature data.
These papers and this thread are not about causes, but merely whether an observed and so-called “trend” can arise when there is no trend.
“These papers and this thread are not about causes, but merely whether an observed and so-called “trend” can arise when there is no trend.”
The answer to that question is “yes”. Statistically significant trends (95% confidence levels) happen 5% of the time when there is no trend.
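That 5% false-positive rate is easy to check by simulation. A minimal sketch in Python (the trial count, series length, and seed are arbitrary choices of mine): fit an OLS trend line to pure white noise many times and count how often the slope comes out “significant” at the 95% level.

```python
import numpy as np

def false_positive_rate(n_trials=2000, n_points=100, seed=0):
    """Fraction of pure-noise series whose fitted trend is 'significant' (p < 0.05)."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_points, dtype=float)
    tc = t - t.mean()
    sxx = np.sum(tc ** 2)
    crit = 1.984  # two-sided 5% critical t-value for 98 degrees of freedom
    hits = 0
    for _ in range(n_trials):
        y = rng.standard_normal(n_points)       # no trend by construction
        slope = np.sum(tc * y) / sxx            # OLS slope estimate
        resid = (y - y.mean()) - slope * tc
        se = np.sqrt(np.sum(resid ** 2) / (n_points - 2) / sxx)
        if abs(slope / se) > crit:
            hits += 1
    return hits / n_trials
```

Running this lands near 0.05, exactly as stated above: roughly one in twenty trendless series produces a “statistically significant trend”.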
The squabbles between statisticians arise from the dearth of pristine station records, which prevents reliable, straightforward determinations of regional temperature variability throughout much of the globe. My colleagues and I have attempted to compile a world-wide index of temperature variability using only unadjusted, intact records from nominally “pristine” stations, affected as little as possible by changing environments. Unfortunately, truly pristine station records prove to be virtually unavailable outside the USA, Scandinavia and Australia. Furthermore, there are no intact century-long records of any reliability in many regions of the world, leaving much to be desired vis-à-vis geographic coverage.
Nevertheless, stark differences show up between our cursory index, based on simple area-weighted averaging, and the unproven statistical syntheses produced by GISS, NOAA, and BEST. Those differences are largely in the very low-frequency components, with the synthetic indices invariably showing far less decline in the post-war years through the mid-1970s and a much greater 20th century trend overall. That sort of systematic discrepancy points to UHI and/or record-splicing methods as the culprit in the syntheses.
I understand what you are saying John. But I’ll take sparse data over manipulated data at any time. For example, you say that you can only find those kinds of thermometers in the USA, Australia, and Scandinavia. Fine! I can’t believe that those countries would have trends that are significantly different from the global trend. Maybe they would for a few years. But when looking at a century, I would want a very good physical explanation for why Scandinavia should have a different trend than, say Africa. Unless the thermometers that you have are completely unrepresentative in terms of latitude, altitude, and vegetative coverage, you should get a meaningful number.
John: “That sort of systematic discrepancy points to UHI and/or record-splicing methods as the culprit in the syntheses.”
Regarding UHI, I did this back of the napkin calculation:
Let’s start with the results from Imhoff’s paper.
“Globally averaged, the daytime UHI amplitude for all settlements is 2.6 °C in summer and 1.4 °C in winter. Globally, the average summer daytime UHI is 4.7 °C for settlements larger than 500 km2 compared with 2.5 °C for settlements smaller than 50 km2 and larger than 10 km2.”
This tells us that UHI contamination for all “settlements” is (2.6 C summer + 1.4 C winter) / 2 = 2.0 C. So, what qualifies as a settlement? At the end of that quote he says:
“for settlements smaller than 50 km2 and larger than 10 km2.”
So we know that a settlement would have to be larger than 10 km2. What does that mean in terms of population? Zhang gives the example of Lynchburg, VA, which has a population of 70,000 and an area of 29 km2. So if Lynchburg has 70,000 in 29 km2, then I’m going to assume that Imhoff’s 10 km2 will have fewer than 50,000 people. And I’m giving away quite a bit of slop here.
Now, turning to the BEST UHI study, they say this:
“Urban areas are heavily overrepresented in the siting of temperature stations: less than 1% of the globe is urban but 27% of the Global Historical Climatology Network Monthly (GHCN-M) stations are located in cities with a population greater than 50,000.”
From this we can conclude that Imhoff’s settlements have, at a minimum, 27% of the GHCN thermometers. Let’s say that all the rest of the thermometers have no UHI effect. As Spencer’s study shows, this is not going to be the case, since the effect starts at very low population densities. But I’m again giving away the slop here. So, if that 27% has 2 C of UHI contamination, then that averages out to 0.54 C of UHI contamination for every thermometer globally, as a minimum. Considering Spencer’s study about the low-density UHI effect, and the fact that Imhoff’s paper likely meant more than 27% of all stations, that number could be as high as 1 C, in my mind.
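The napkin arithmetic above, spelled out (the input numbers are the ones quoted from Imhoff and BEST; treating the remaining 73% of stations as having zero UHI is the deliberate slop conceded above):

```python
# Quoted inputs
uhi_summer, uhi_winter = 2.6, 1.4  # Imhoff: global daytime UHI amplitude, deg C
urban_fraction = 0.27              # BEST: share of GHCN-M stations in cities > 50,000

uhi_settlement = (uhi_summer + uhi_winter) / 2          # 2.0 deg C per "settlement"
# Generous assumption: the other 73% of stations have zero UHI contamination
global_contamination = urban_fraction * uhi_settlement  # 0.54 deg C averaged globally
```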
And the NASA UHI article:
Now, in 1850, when many of these reconstructions start, the world population was 1.2 billion. Today it is 7 billion. The fact that there was already some UHI in 1850 means that some of that effect, which is meant to show global temperature change for the study period, will have to be given back. But, regardless, the answer remains that the contamination of the temperature record is very large.
I, too, place much greater reliance on a spatially sparse, but relatively unbiased, sample of validated station records. Such a sample, however, shows very different spectral characteristics in different regions of the world, especially with respect to very-low-frequency components. In other words, there is no such thing as a uniform global trend. In the face of such inconsistencies, the question of truly global variations becomes much more complex than the compilers of “global” indices acknowledge.
Not wading through everything that has been written, I’ll just make the point that statistical analyses that purport to be able to determine “natural” versus “anthropogenic” without providing a physical causative basis are essentially useless.
And this works both ways.
Weather and climate are due to physical interactions between the sun, the solar system, the galaxy(?), the oceans, land masses, atmosphere, and biosphere. Statistical analyses may be useful to discern patterns of cyclical or oscillatory behaviour, but they have no explanatory power as to causation.
What follows is a “simple” explanation of the detrended fluctuation analysis (DFA) controversy. Richard S.J. Tol criticized the authors of two papers for using DFA, claiming in effect, DFA “eliminated the very thing they were interested in.” I dispute this, saying DFA does not lead to the elimination of any data.
To explain, I’ll use the idea of “black boxes.” Rather than discuss what the calculations are, I’ll treat them as processes in which we can see the inputs and outputs, but not the calculations themselves. By doing this, we can analyze Tol’s position without having to worry about all the mathematical details.
Now then, there are two black boxes. The first is the DFA black box. The second is an Exceedance Probability (EP) black box. The first box takes a temperature series, which I’ll call x, as its sole input. The box does what it does to x, and when it’s done, it comes up with a single value called the Hurst coefficient (also called Hurst exponent). That’s all.
With this value, the second black box is used. It takes as its input the same series x, but it also uses the Hurst coefficient generated by the first box. It runs, and it comes up with a value, called probability. This probability is the probability of the observed warming being “natural.” With these descriptions in mind, consider what Tol said in his guest post:
What did the authors eliminate from their analysis? Both black boxes used the same series (x) as input, so DFA didn’t change anything. All it did was produce a value, the Hurst coefficient. It didn’t generate a new series as output, so it couldn’t possibly have modified anything about what was being analyzed.
For a very rough analogy, imagine you are doing an analysis on a series, y. In one stage, you take the average of that series, giving you a single value. You then do a calculation on y in which you use that average as an input. This could be as simple as finding the standard deviation of the series.
Did you throw away any data? No. The single average you calculated clearly contains less information than the full series, but since you then used it in a calculation involving the full series, nothing was discarded. This is basically the same as what the authors did with DFA. With this in mind, consider comments from Tol such as these two:
Do these comments make any sense given the explanation I provided? No. Also, the fact that the word “detrended” appears in the name of DFA does not mean DFA produces a detrended series which the authors then analyzed, though this is the impression Tol gives with the latter comment. DFA does not create a new series to be used. DFA simply calculates that one value, the Hurst coefficient.
Of course, you have only my word this explanation is accurate. Others may tell you it is wrong. If you cannot verify what I’ve said by looking into what the papers did (or what DFA does), allow me to suggest an alternative. Before believing anyone who advances Tol’s position, demand they tell you which input value has data eliminated from it. If they cannot do so, don’t believe them.
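For readers who want to look inside the first black box, here is a bare-bones sketch of one common DFA variant (DFA-1 with non-overlapping windows and a linear detrend in each window). This is an illustration of the general technique, not the authors’ code; real implementations differ in detail. Note that the output is a single exponent, not a new series, consistent with the description above.

```python
import numpy as np

def dfa_exponent(x, scales):
    """DFA-1: return the scaling exponent alpha (Hurst-like) of series x."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())  # integrate the anomalies
    flucts = []
    for n in scales:
        n_seg = len(profile) // n
        segs = profile[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = 0.0
        for seg in segs:
            coef = np.polyfit(t, seg, 1)  # remove the local linear trend
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        flucts.append(np.sqrt(f2 / n_seg))
    # The slope of log F(n) versus log n is the exponent alpha
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

White noise should give alpha near 0.5; persistent (long-memory) series give values between 0.5 and 1, which is what makes the exponent useful as a measure of persistence.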
I’m more conversant with the physics than with the statistical arguments, but I’ll tentatively comment on the latter. I believe that both Brandon and Richard are partially right, and not as much in conflict as implied above.
It is true that in a literal sense, no data were “thrown away”. However, Richard is correct in asserting that the probability that the observed temperature change over time was due to “natural fluctuations”, as opposed to an external trend caused by anthropogenic ghg emissions, can’t be estimated from the DFA process used by the authors. In fact, the magnitude of the trend from which the fluctuations were calculated has a critical effect on this probability, and was ignored by the authors.
In the following quotation from Brandon’s comment, he correctly summarizes the essence of what the authors thought they were doing – but weren’t:
“With this value, the second black box is used. It takes as its input the same series x, but it also uses the Hurst coefficient generated by the first box. It runs, and it comes up with a value, called probability. This probability is the probability of the observed warming being “natural.”
Well, not to put too fine a point on it, that probability is NOT the probability of the observed warming being “natural” or whatever term the authors wish to apply to the fluctuations. Rather, it is, roughly speaking, the probability that such fluctuations could generate runs of temperature change with time that exceed certain observed values, thereby creating the false impression of an external trend (this is the best approximation I can manage without trying to explain Hurst exponents and autocorrelation). However, as anyone familiar with basic statistical principles knows, estimating the probability of an outcome in the case of a “null hypothesis” is very different from estimating the probability that a null hypothesis is true, the null hypothesis in this case being the “natural” or non-anthropogenic mechanism. For the latter, we need to know the prior probabilities of both the null hypothesis and its alternative(s). The information required for this is rarely available, but in this particular case, we can at least state that the magnitude of the observed trend line affects the prior probability of the alternative, and so as Richard states, neglecting it discards relevant information, even if that information can’t directly be translated into a numerical value for prior probability.
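The distinction drawn here, the probability of the data under the null versus the probability of the null given the data, can be made concrete with Bayes’ rule. A toy calculation (all three input numbers are purely illustrative, not estimates for the climate case):

```python
def posterior_null(p_data_given_null, p_data_given_alt, prior_null):
    """P(null | data) via Bayes' rule; it depends on priors the papers never supply."""
    num = p_data_given_null * prior_null
    return num / (num + p_data_given_alt * (1.0 - prior_null))

# The same 5% tail probability under the null yields very different posteriors
# depending on the prior placed on the null hypothesis:
low = posterior_null(0.05, 0.50, prior_null=0.1)   # null unlikely a priori
high = posterior_null(0.05, 0.50, prior_null=0.9)  # null likely a priori
```

With these made-up numbers, the posterior probability of the null comes out near 1% in the first case and near 47% in the second, even though "the data given the null" is 5% in both. That is exactly why P(data | null) alone cannot answer the question the authors posed.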
I won’t repeat here my previous commentary on why the physics preclude the authors’ interpretation, but I’ll add that I see no reason why autocorrelation in a detrended series should contradict a physical mechanism for the observed trend that includes a persistent CO2-mediated forcing. This is particularly true for climate behavior that exhibits internal fluctuations on shorter time scales and on a regional basis, in concert with an external trend on a longer and global timescale.
The above, which I consider tentative, addresses only surface temperature records. In several previous comments, I’ve cited the evidence from ocean heat content that I find irreconcilable with the authors’ conclusions even if their statistical reasoning were more defensible than it appears to me to be.
I should amend my above comment to acknowledge that the observed trends were not “ignored”. However, their magnitude provides evidence for the existence (or non-existence) of a forced response. A strong trend provides affirmative evidence, based on climate physics, and that same response can also show up as autocorrelation in a detrended analysis, simply because a persistent forcing will tend to maintain an elevated temperature from one year to the next, whereas without that forcing the Planck response would tend to mediate a cooling effect following a warmer year. For this reason, autocorrelation in the DFA doesn’t completely negate the significance of a steep upward temperature slope over time.
I’m not going to discuss the merits of the approach used by the authors (at least, not in this fork). This explanation is merely to explain the dispute between Richard S.J. Tol and myself in a more easily understood manner. Whether or not the approach used by the authors could accomplish what they set out to do, Tol’s criticism of the use of detrended fluctuation analysis, and his subsequent responses to me, have been baseless.
With that said, it’s worth noting the method used by the authors was not one they created. They are merely using the methodology designed by Lennartz and Bunde. If one is going to criticize the methodology, this needs to be mentioned.
Fred: Well, not to put too fine a point on it, that probability is NOT the probability of the observed warming being “natural” or whatever term the authors wish to apply to the fluctuations. Rather, it is, roughly speaking, the probability that such fluctuations could generate runs of temperature change with time that exceed certain observed values, thereby creating the false impression of an external trend (this is the best approximation I can manage without trying to explain Hurst exponents and autocorrelation). However, as anyone familiar with basic statistical principles knows, estimating the probability of an outcome in the case of a “null hypothesis” is very different from estimating the probability that a null hypothesis is true, the null hypothesis in this case being the “natural” or non-anthropogenic mechanism. For the latter, we need to know the prior probabilities of both the null hypothesis and its alternative(s).
Since I frequently disagree with you, I must now agree that you have this correct.