by Judith Curry
although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….
The social psychology community is reeling from this news, discussed at the blog Ignorance and Uncertainty:
Tilburg University sacked high-profile social psychologist Diederik Stapel, after he was outed as having faked data in his research. Stapel was director of the Tilburg Institute for Behavioral Economics Research, a successful researcher and fundraiser, and as a colleague expressed it, “the poster boy of Dutch social psychology.” He had more than 100 papers published, some in the flagship journals not just of psychology but of science generally (e.g., Science), and won prestigious awards for his research on social cognition and stereotyping.
Tilburg University Rector Philip Eijlander said that Stapel had admitted to using faked data, apparently after Eijlander confronted him with allegations by graduate student research assistants that his research conduct was fraudulent. The story goes that the assistants had identified evidence of data entry by Stapel via copy-and-paste.
Michael Smithson raises some interesting issues and questions. Regarding means and opportunity:
Let me speak to means and opportunity first. Attempts to more strictly regulate the conduct of scientific research are very unlikely to prevent data fakery, for the simple reason that it’s extremely easy to do in a manner that is extraordinarily difficult to detect. Many of us “fake data” on a regular basis when we run simulations. Indeed, simulating from the posterior distribution is part and parcel of Bayesian statistical inference. It would be (and probably has been) child’s play to add fake cases to one’s data by simulating from the posterior and then jittering them randomly to ensure that the false cases look like real data. Or, if you want to fake data from scratch, there is plenty of freely available code for randomly generating multivariate data with user-chosen probability distributions, means, standard deviations, and correlational structure. So, the means and opportunities are on hand for virtually all of us. They are the very same means that underpin a great deal of (honest) research. It is impossible to prevent data fraud by these means through conventional regulatory mechanisms.
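To make the point concrete, here is a minimal sketch in Python (a toy, not anyone's actual method; every number in it is invented) of how little effort the "means" described above require:

```python
import numpy as np

# Illustrative only: generate 200 fake "cases" with a chosen correlation
# structure, then jitter them so they do not look machine-generated.
# Every number here is invented.
rng = np.random.default_rng(42)

means = [50.0, 100.0]                        # chosen means for two variables
sds = [10.0, 15.0]                           # chosen standard deviations
rho = 0.6                                    # chosen correlation
cov = [[sds[0] ** 2, rho * sds[0] * sds[1]],
       [rho * sds[0] * sds[1], sds[1] ** 2]]

fake = rng.multivariate_normal(means, cov, size=200)
fake += rng.normal(scale=0.5, size=fake.shape)   # small random jitter

print(np.corrcoef(fake.T))                   # recovers roughly the chosen rho
```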
Regarding motive:
Cognitive psychologist E.J. Wagenmakers (as quoted in Andrew Gelman’s thoughtful recent post) is among the few thus far who have addressed possible motivating factors inherent in the present-day research climate. He points out that social psychology has become very competitive, and
“high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself. There is a slippery slope here though; although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….”
I would add to E.J.’s observations the following points.
First, social psychology journals exhibit a strong bias towards publishing only studies that have achieved a statistically significant result. This bias is widely believed in by researchers and their students. The obvious temptation arising from this is to ease an inconclusive finding into being conclusive by adding more “favorable” cases or making some of the unfavorable ones more favorable.
Second, the addiction in psychology to hypothesis-testing over parameter estimation amounts to an insistence that every study yield a conclusion or decision: Did the null hypothesis get rejected? The obvious remedy for this is to develop a publication climate that does not insist that each and every study be “conclusive,” but instead emphasizes the importance of a cumulative science built on multiple independent studies, careful parameter estimates and multiple tests of candidate theories. This adds an ethical and motivational rationale to calls for a shift to Bayesian methods in psychology.
Third, journal editors and reviewers routinely insist on more than one study to an article. On the surface, this looks like what I’ve just asked for, a healthy insistence on independent replication. It isn’t, for two reasons. First, even if the multiple studies are replications, they aren’t independent because they come from the same authors and/or lab. Genuinely independent replicated studies would be published in separate papers written by non-overlapping sets of authors from separate labs. However, genuinely independent replication earns no kudos and therefore is uncommon.
The second reason is that journal editors don’t merely insist on study replications, they also favor studies that come up with “consistent” rather than “inconsistent” findings (i.e., privileging “successful” replications over “failed” replications). By insisting on multiple studies that reproduce the original findings, journal editors are tempting researchers into corner-cutting or outright fraud in the name of ensuring that that first study’s findings actually get replicated. E.J.’s observation that surprising hypotheses are unlikely to be supported by data goes double (squared, actually) when it comes to replication—Support for a surprising hypothesis may occur once in a while, but it is unlikely to occur twice in a row. Again, remedies are obvious: Develop a publication climate which encourages or even insists on independent replication, that treats well-conducted “failed” replications identically to well-conducted “successful” ones, and which does not privilege “replications” from the same authors or lab of the original study.
Most researchers face the pressures and motivations described above, but few cheat. So personality factors may also exert an influence, along with circumstances specific to those of us who give in to the temptations of cheating. Nevertheless if we want to prevent more Stapels, we’ll get farther by changing the research culture and its motivational effects than we will by exhorting researchers to be good or lecturing them about ethical principles of which they’re already well aware. And we’ll get much farther than we would in a futile attempt to place the collection and entry of every single datum under surveillance by some Stasi-for-scientists.
JC comments: I find this case and Smithson’s comments interesting and of relevance to climate science for several reasons. The research culture and motivational factors in the field of social psychology have arguably contributed to rewarding behaviors that are not in the best interests of scientific progress, in the same way that I have argued that the IPCC and the culture of funding, journal publication, and recognition by professional societies have not always acted in the best interests of scientific progress in the climate field.
I was particularly struck by the “data torturing” concept. Consider a chemistry experiment conducted in a controlled laboratory environment, where the raw data is used for the analysis, with fairly clear procedures for determining the uncertainty of the measurement. Testing hypotheses using climate data is much more challenging from the perspective of the actual data. In climate science, uncertainty associated with observations can arise from systematic and random instrumental errors, inherent randomness, and errors in analysis of the space-time variations associated with inadequate sampling. Applications of climate data in hypothesis testing may require the elimination of either a trend or high-frequency “noise.” Hence for any substantive application, climate data needs to be “tortured” in some way, in the sense of applying some sort of manipulations to the data and making some assumptions. The problem occurs when the data is “tortured” to produce a desired “confession.” Seemingly objective manipulations of the data can inadvertently produce “confessions” beyond what is objectively obtained in the original data set.
Documented manipulations of the data can be reproduced if the data and metadata are available, and sufficient information (preferably code) is provided so that independent groups can evaluate the objectivity and technical implementation of the method used in the analysis. It is because of the complexity of the climate system and the inherent inadequacy of any measurement system that complex data manipulation methods are used. It is essential that we better understand the limitations of the methods and how to assess the uncertainty that they introduce into the analysis.
Wrt torturing data, here is an oldie but a goodie:
http://wmbriggs.com/blog/?p=195
No problem, I have heard that stated before. If you filter data you are throwing away information. But there are certain obvious things you can do. For example, if you know that there is likely a seasonal influence of one year and you have monthly data, then it is okay to invoke a 12 month sample mean.
So instead of looking at this:
http://woodfortrees.org/plot/hadsst2gl/from:1960/normalise/plot/esrl-co2/from:1960/derivative/normalise
You get to look at this:
http://woodfortrees.org/plot/hadsst2gl/from:1960/normalise/plot/esrl-co2/from:1960/mean:12/derivative/normalise
So you notice that the derivative of CO2 concentration with respect to time seems to match the global temperature variations. The next obvious thing to do is to run a cross-correlation to check for obvious lags, and you don’t see any. Then try a Proportional model on the actual CO2 and see how much the variance is reduced by running a Proportional-Derivative model.
http://1.bp.blogspot.com/-S5PeFWheTHg/Tm70q–ifhI/AAAAAAAAAgU/YOy_x1SMqFU/s1600/CO2_PD_P_Models.gif
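For readers who want to follow along, a rough sketch of that processing chain (12-month mean, derivative, cross-correlation) might look like the following; the series here are synthetic placeholders, not the HadSST2 or ESRL data:

```python
import numpy as np

# Stand-in monthly series (placeholders, not real data): a drifting
# "temperature" and a trending "CO2" with a 12-month seasonal cycle.
rng = np.random.default_rng(0)
t = np.arange(600)
temp = 0.01 * rng.standard_normal(600).cumsum()
co2 = 320 + 0.12 * t + 3.0 * np.sin(2 * np.pi * t / 12) + 0.5 * rng.standard_normal(600)

# 12-month running mean removes the seasonal cycle before differentiating.
co2_smooth = np.convolve(co2, np.ones(12) / 12.0, mode="valid")
dco2 = np.diff(co2_smooth)
temp_aligned = temp[-len(dco2):]

def normalise(x):
    return (x - x.mean()) / x.std()

def xcorr(a, b, max_lag=24):
    """Normalised cross-correlation at lags 0..max_lag (b shifted against a)."""
    a, b = normalise(a), normalise(b)
    n = len(a)
    return np.array([np.dot(a[: n - k], b[k:]) / (n - k) for k in range(max_lag + 1)])

cc = xcorr(temp_aligned, dco2)
print("peak lag (months):", cc.argmax(), " peak correlation:", round(cc.max(), 2))
```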
Then you wonder what the likelihood is of having a zero-lag, high correlation agreement between temperature and a substance theorized to have an effect on global temperature. Seems small, likely less than 1:25 compared to completely randomized data.
You next wonder if the fluctuating carbon fossil fuel emissions somehow relate to the fluctuating CO2 measurements.
http://img30.imageshack.us/img30/6048/co2ccemissions.gif
Not quite as striking, but there is still zero lag between the two time series. I also did a long term trend of FF emissions against CO2 and found agreement there by applying a convolution of historical FF data prior to 1900 with a fat-tailed impulse response function.
http://mobjectivist.blogspot.com/2010/05/how-shock-model-analysis-relates-to-co2.html
I used a fat-tail impulse response because that matches diffusivity into long term carbon stores: http://theoilconundrum.blogspot.com/2011/09/missing-carbon.html. I found that I needed to parameterize the CO2 background up to 294 ppm for the model to fit really well.
http://4.bp.blogspot.com/_csV48ElUsZQ/S-DKxjdTE7I/AAAAAAAAARc/aN08E8jlvCg/s1600/ml-carbon-fit-294.gif
So next you wonder what the likelihood is of finding agreement of a time-series correlation between CO2 and Temperature, and of a time-series correlation between FF Carbon emissions and CO2, over the last 50 years. Conservatively I would think it would be 1:25 x 1:25, or 1:625, for the agreement to be just coincidental. And I also wonder, if it is indeed an agreement, how dominating is the effect?
I am just as curious as everyone else as to what this all means, but I am not going to wait for a statistician to come around and do it for me. Instead, I retrieved the data and did the signal processing myself, and hope for one of the stat auditors to emerge and tell me what I did wrong. Maybe it is the premature normalization, maybe I shouldn’t have averaged over 12 months? Would that have made any difference in the final interpretation?
Almost defines data torture.
We know that CO2 varies with temperature – http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf
Here is an experiment by little Mary Freeman. She found that CO2 was highly correlated with temperature. http://www.colorado.edu/eeb/courses/1230jbasey/abstracts%202006/13.htm
You haven’t really explained the cross correlation of atmospheric CO2 with anthropogenic emissions. Monthly values should really be used here given the usual response times of the systems – and it seems annual values were used? So you have what is assumed to be a peak correlation of 0.4 at 0 years. So this explains 16% of the variance?
Sounds a bit high to me – and is probably the data smoothing.
Thanks for the references. That NOAA version of the data appears to the eye even more correlated to the temperature. I will do the actual cross-correlation with the NOAA data and see how it differs.
That said, say we don’t care about the causality. So if CO2 varies with temperature in this predictable way then aren’t we verifying the hockey stick rise in temperature? CO2 then becomes a perfect proxy for temperature records and we don’t have to worry about the kriging interpretation of spatial temperature records anymore (which is much of what McIntyre and the auditors complain about). All we need to do is look at these incredibly sensitive CO2 records from sites like Mauna Loa, and from the mixing of CO2 we have a better interpretation of what is going on.
Not much we can do about that. I have been studying peak oil topics for several years, and the best we get in terms of records for oil and fossil fuels is the yearly amounts.
Bottom line, I am just trying to understand what is happening from a systems perspective. I have put in a lot of work over the last few years understanding oil depletion, and think I can provide a fresh interpretation that the climate scientists may have been missing. We’ll see how far I can go with it.
This is what the cross-correlation looks like between the yearly d[CO2] data that you referenced from the NOAA site and the hadcrut3 Temperature data.
http://img546.imageshack.us/img546/2292/hadsst2gldco2noaa.png
I notice how much of the fine structure disappears because of a wider window but the strong correlation at 0 lag is still there.
And this is what the correlation looks like with a Proportional-Derivative model of [CO2] against Temperature.
http://img411.imageshack.us/img411/5791/pdco2t.png

This has zero lag and a strong correlation of 0.9.

The model is

$$\Delta T = k[CO_2] + B \frac{d[CO_2]}{dt}$$

The first term is a Proportional term and the second is the Derivative term. I chose the coefficients to minimize the variance between the measured Temperature data and the model for [CO2]. In engineering this is a common formulation for a family of feedback control algorithms called PID control (the I stands for integral). The question is what is controlling what.
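A minimal sketch of that kind of fit, assuming annual [CO2] and temperature series (the arrays below are synthetic placeholders, and the least-squares setup is illustrative rather than WHT’s actual code):

```python
import numpy as np

# Placeholder annual series (not real data): co2 in ppm, temp anomaly in deg C.
rng = np.random.default_rng(1)
years = np.arange(1960, 2011)
co2 = 315 + 1.5 * (years - 1960) + rng.normal(scale=0.3, size=years.size)
temp = 0.005 * (co2 - co2[0]) + 0.05 * np.gradient(co2) + rng.normal(scale=0.05, size=years.size)

# Fit  dT = k*[CO2] + B*d[CO2]/dt  by ordinary least squares,
# i.e. choose k and B to minimise the residual variance.
dco2 = np.gradient(co2)                        # finite-difference derivative
X = np.column_stack([co2 - co2.mean(), dco2 - dco2.mean()])
y = temp - temp.mean()
(k, B), *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ np.array([k, B])
print(f"k = {k:.4f} degC/ppm, B = {B:.3f} degC/(ppm/yr), "
      f"residual/total variance = {resid.var() / y.var():.2f}")
```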
When I was working with vacuum deposition systems we used PID controllers to control the heat of our furnaces. The difference is that in that situation the roles are reversed, with the process variable being a temperature reading off a thermocouple and the forcing function being the power supplied to a heating coil, computed as a PID combination of T. So it is intuitive for me to immediately think that the [CO2] is the error signal, yet that gives a very strong derivative factor which essentially amplifies the effect. The only way to get a damping factor is by assuming that Temperature is the error signal and then we use a Proportional and an Integral term to model the [CO2] response. Which would then give a similar form and likely an equally good fit.
It is really a question of causality, and the controls people have a couple of terms for this (I know because I got grilled on it for my qualifiers). There is the aspect of Controllability and that of Observability.
So it gets to the issue of two points of view:
1. The people that think that CO2 is driving the temperature changes have to assume that nature is executing a Proportional/Derivative Controller on observing the [CO2] concentration over time.
2. The people that think that temperature is driving the CO2 changes have to assume that nature is executing a Proportional/Integral Controller on observing the temperature change over time, and the CO2 is simply a side effect.
What people miss is that it can be potentially a combination of the two effects. Nothing says that we can’t model something more sophisticated like this:
$$c\,\Delta T + M \int \Delta T\,dt = k[CO_2] + B \frac{d[CO_2]}{dt}$$
What makes your approach difficult is that, according to the present mainstream views (as I have understood them), CO2 is driving the temperature in PI fashion with an effective delay of a couple of years, while the temperature is driving shorter-term fluctuations of CO2 over one or two years, but in a reverting manner, because no persistent CO2 storages of a size that would allow for effects extending over several years are involved in this process.
Hi WHT,
You think you have linear functions in a feedback loop, and that is how you model it. If the data is up to it, you could try forming the complete lagged covariance set. Its matrix is

C(t) = [ A(0)A(t)  A(0)B(t)
         B(0)A(t)  B(0)B(t) ]

where t = time step.
I think I really do mean covariance not correlation.
Then convert to the impulse response function matrix R(t).
As I recall using the inverse matrix of C(0) = C(0)^-1
R(t) = C(t) x C(0)^-1
I think it is that way round.
In theory (the fluctuation-dissipation theorem) it is sort of hey presto! You have the response function as initiated by A or by B. Which I think is all you need to know.
If not, it might show up some additional complexity or that the data is simply not up to it.
So if you haven’t tried that yet, check it out.
Alex
Oops
Correction:
As I recall using the inverse matrix of C(0) = C(0)^-1
(was silly)
As I recall, using the inverse matrix of C(0) i.e. C(0)^-1
(is not quite so silly)
Alex
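For anyone who wants to try Alex’s recipe, here is a rough sketch of the lagged-covariance and response-function calculation; the two series are synthetic stand-ins, and the three-step lag built into B is purely for illustration:

```python
import numpy as np

# Placeholder anomaly series A and B; in practice these would be the two
# detrended observables whose causal ordering is in question.
rng = np.random.default_rng(2)
n = 500
A = rng.standard_normal(n)
for i in range(1, n):
    A[i] += 0.8 * A[i - 1]                 # simple AR(1), so A has some memory
B = 0.6 * np.roll(A, 3) + 0.5 * rng.standard_normal(n)   # B roughly lags A by 3 steps
A -= A.mean()
B -= B.mean()

def lagged_cov(a, b, t):
    """<a(0) b(t)>: covariance of a with b shifted forward by t steps."""
    return np.mean(a * b) if t == 0 else np.mean(a[:-t] * b[t:])

def C(t):
    """2x2 lagged covariance matrix at lag t."""
    return np.array([[lagged_cov(A, A, t), lagged_cov(A, B, t)],
                     [lagged_cov(B, A, t), lagged_cov(B, B, t)]])

# Response-function matrix R(t) = C(t) C(0)^-1, as in Alex's recipe.
C0_inv = np.linalg.inv(C(0))
for t in range(6):
    print(t, np.round(C(t) @ C0_inv, 2))
```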
So temperature causes changes in CO2 and the changes in anthropogenic CO2 explain (perhaps optimistically) 16% of the atmospheric variability. So what other factors independently cause temperature variability?
‘Change in SOI accounts for 72% of the variance in GTTA for the 29-year-long MSU record and 68% of the variance in GTTA for the longer 50-year RATPAC record. Because El Niño Southern Oscillation is known to exercise a particularly strong influence in the tropics, we also compared the SOI with tropical temperature anomalies between 20S and 20N. The results showed that SOI accounted for 81% of the variance in tropospheric temperature anomalies in the tropics. Overall the results suggest that the Southern Oscillation exercises a consistently dominant influence on mean global temperature, with a maximum effect in the tropics, except for periods when equatorial volcanism causes ad hoc cooling.’
http://i1114.photobucket.com/albums/k538/Chief_Hydrologist/maclean-2009-Fig4.gif
‘During El Niño, the warming of the tropical eastern Pacific and associated changes in the Walker circulation, atmospheric stability, and winds lead to decreases in stratocumulus clouds, increased solar radiation at the surface, and an enhanced warming.’ http://www.cgd.ucar.edu/cas/Staff/Fasullo/refs/Trenberth2010etalGRL.pdf
The decadal changes in TOA flux and ocean heat content look something like this.
http://i1114.photobucket.com/albums/k538/Chief_Hydrologist/Wong2006figure7.gif
Source – Wong, T., B. A. Wielicki, R. B. Lee III, G. L. Smith, K. A. Bush, and J. A. Willis (2006), Reexamination of the observed decadal variability of the Earth Radiation Budget Experiment using altitude-corrected ERBE/ERBS nonscanner WFOV data, J. Clim., 19, 4028–4040, doi:10.1175/JCLI3838.1
There is little doubt that CO2 drives temperature, but this is certainly minor on interannual to decadal timescales. There are other more important drivers on these scales and probably longer. These include clouds, ice, dust and ocean thermohaline circulation.
Here for instance is an 11,000 year ENSO proxy – http://i1114.photobucket.com/albums/k538/Chief_Hydrologist/ENSO11000.gif – the red shift indicates El Niño.
The point is that it is not a simple system involving only CO2 and temperature – whichever way causality occurs, or both ways, which is the logical situation.
I am still playing around with first-order perturbations so will keep that in mind if I decide to go deeper into the analysis.
From the best cross-correlation fit, the perturbation is either around (1) 3.5 ppm change per degree change in a year or (2) 0.3 degree change per ppm change in a year.
(1) makes sense as a Temperature forcing effect as the magnitude doesn’t seem too outrageous and would work as a perturbation playing a minor effect on the 100 ppm change in CO2 that we have observed in the last 100 years.
(2) seems very strong in the other direction as a CO2 forcing effect. You can understand this: if we simply made a 100 ppm change in CO2, then we would see a 30 degree change in temperature, which is pretty ridiculous, unless this is a real quick transient effect as the CO2 quickly disperses to generate less of a GHG effect.
Perhaps this explains why the dCO2 versus Temperature data has been largely ignored. Even though the evidence is pretty compelling, it really doesn’t further the argument on either side. On the one side interpretation #1 is pretty small and on the other side interpretation #2 is too large, so #1 may be operational.
One thing I do think it helps with though is providing a good proxy for differential temperature measurements. There is this baseline increase of Temperature (or CO2), and accurate dCO2 measurements can predict at least some of the changes we will see beyond this baseline.
Also, and this is far out, but if #2 is indeed operational, it may give credence to the theory that we may be seeing the modulation of global temperatures over the last 10 years because of a plateauing in oil production. We will no longer see huge excursions in fossil fuel use as it gets too valuable to squander, and so the big transient temperature changes from the baseline no longer occur. That is just a working hypothesis.
I still think that understanding the dCO2 against Temperature will aid in making sense of what is going on. As a piece in the jigsaw puzzle it seems very important although it manifests itself only as a second order effect on the overall trend in temperature.
What really outs the global warming alarmists as deceivers and not simply unconscious incompetents is the absolute loss of the ‘official’ raw data upon which the AGW True Believers’ faked snapshot of the world rests. The original data has gone missing. The best examples of the missing data can be seen in the foi2009.pdf CRUgate disclosures and the information contained in the ‘Harry Read Me’ file. But, it doesn’t stop there. NASA dropped the number of ‘approved’ temperature stations altogether.
And on top of all this corruption, manipulation and incompetence is the fact that the process is wholly unscientific at the outset. The locations and numbers of ‘approved’ temperature stations are in no way representative of the entire surface of the Earth. That and the fact that the oceans have been cooling, and it is oceans that cover most of the Earth’s surface, have made a joke out of climate science.
Tortured data are the root of many problems today.
Since we know Earth’s heat source is as unsteady [1-3] as our present social and economic systems, we urgently need to:
A. Acknowledge benefits of the 1971 Kissinger/Mao meeting:
_a.) Nationalism and racism were reduced,
_b.) World peace was enhanced, and
_c.) Nuclear war was avoided.
B. Avoid retaliation, and
C. Work together to restore:
_a.) Integrity to government science, and
_b.) Citizens’ control over government.
1. “Sunspot 1283 bristling with flares,” PhysOrg.com (7 Sept 2011) http://www.physorg.com/news/2011-09-sunspot-bristling-flares-x18-m67.html
2. “Star blasts planet with X-rays,” PhysOrg.com (13 Sept 2011) http://www.physorg.com/news/2011-09-star-blasts-planet-x-rays.htm
3. “The Sun-weather relationship is becoming increasingly important,” The GWPF Observatory (14 Sept 2011) http://www.thegwpf.org/the-observatory/3868-the-sun-weather-relationship-is-becoming-increasingly-important.html
The CRU data may be AWOL, but NCDC data is readily available. The raw bits are difficult to digest, but I worked up 1900-2009 using a 1×1 degree grid keeping only sectors with the full 110 years of data. http://justdata.wordpress.com Temperatures are “land only”.
Is that really a fact ?
An example of data torturing from the IPCC SPM:
“The linear warming trend over the last 50 years is nearly twice that for the last 100 years”.
This was objected to by a government reviewer (having been inserted into the report after the scientific review) but the objection was overruled.
Somebody please explain to the average warmist blogger that Judith is not planning to accuse any of her colleagues of crimes against humanity.
PS no data has been tortured in writing this comment
Regarding climate science, I think we can assume that most published results are not tortured, although it is probable that more are than the field would like to admit. However, it depends on exactly what kind of science is being done. When you do an experiment that is not logistically difficult to replicate, others in your field can (and if it’s worthwhile, will) do so. This is the check on fields like molecular biology. If your lab can do it, then a hundred other labs can as well. How often is this true in climate science?
As an aside, I am familiar with a case in which a graduate student (non-climate science) came up with no statistical significance on her data – and thus no paper for her dissertation. Her advisor asked a post-doc for statistical help, and he advised running the data through logistic regression instead of ANOVA. It worked – the results under regression showed statistically significant differences. The work was submitted and accepted. And no one reading the paper would ever know that THIS data had been tortured under pressure to publish. It was an insignificant paper in a scientific backwater subject, so no one would care, either. This experience makes me skeptical of ALL published science – I know how things work behind the scenes.
I have also seen very similar instances with graduate students who are writing their dissertations.
It reminds me of when I talk to graduate students in the life sciences about their experiments; sometimes they describe their experiments as either “working” or “not working,” or being useful or not useful, or being a success or a failure, contingent upon whether their results proved their hypotheses. They look at me cross-eyed when I ask them whether or not even results that don’t prove their hypotheses should be considered useful.
Anecdotally, I find that phenomenon most prevalent with international graduate students (in particular with Asian students). They sometimes feel it is their personal responsibility to provide data that prove the hypotheses of their professors, and that they have personally let their professors down if they don’t provide such results from their experiments. It can be a real problem – sometimes resulting in fairly significant psychological harm from the emotional stress of letting down their professors. And sometimes professors exploit that attitude on the part of international graduate students to get those students to work harder than what should reasonably be expected (and what might typically be expected from American graduate students).
Why are you stereotyping Asians?
I assume that you’re mocking political correctness, but in case you aren’t:
I spoke anecdotally, from rather extensive experience.
In fact, there are data that back up my anecdotal experiences – such as data on different cultural attitudes towards authority.
Obviously, distinctions between individuals are greater than between different groups, but that doesn’t mean that you can’t make valid generalizations about how culture of origin affects the attitudes of graduate students.
If you don’t think that some broad, cultural generalizations are valid with respect to how cultural differences manifest among graduate students in American academia, I would suggest that you haven’t worked with a diverse group of graduate students in American academia.
If you have any data that disprove the validity of my generalizations, I’d love to see them.
Joshua:
You may well be right, I have no first hand experience working in such a context. However, I have conducted global surveys for many years, and invariably Asian respondents treat rating or response scales in a far more considered way than their US and European counterparts. This generally led to lower halo effects and more complete use of the scales. My conclusion is that precision and accuracy are also more pronounced cultural norms among Asian respondents. (Note that background and discipline also play a role. Geert Hofstede did a lot of work in this area.)
The scientific method requires first a hypothesis (including a null hypothesis), with the data to be collected and the statistical testing determined before embarking on the experiment. Determining your statistical testing beforehand (and asking a statistician about the form of the data and the tests to apply) is de rigueur.
Applying enough statistical testing to a set of data post hoc invariably will eventually generate a statistically “significant” result. It’s perfectly legitimate to undertake an exploratory study like this so long as one recognises the limited validity of one’s findings.
It’s of course useful heuristically to collect data and massage it statistically and use the correlations thus elicited to generate hypotheses, which can then be tested by a properly designed experiment. The ensuing results and the probabilities pertaining to them are far likelier to be valid.
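A quick simulation makes the post hoc point vivid (a toy, assuming scipy is available; the sample sizes and the choice of test are arbitrary):

```python
import numpy as np
from scipy import stats

# Run 20 independent t-tests on pure noise; on average about one of them
# comes out "significant" at p < 0.05, which is the whole danger of
# unplanned, post hoc testing.
rng = np.random.default_rng(3)
false_positives = 0
for _ in range(20):
    a = rng.standard_normal(30)
    b = rng.standard_normal(30)            # same distribution: the null is true
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05
print(false_positives, "of 20 null comparisons reached p < 0.05")
```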
We need to be careful here. Different tests have different sensitivities. It is perfectly possible for a statistical test with a low sensitivity to fail to find statistical significance while a more sensitive test may quite legitimately find significance. I’m not saying that this happened in the case you quote because I don’t understand the details. But equally without understanding the details it is not obvious that there is a problem.
“MarkB | September 15, 2011 at 1:28 pm | Reply
Regarding climate science, I think we can assume that most published results are not tortured”
Unless there has been very careful experimental design to eliminate cognitive bias, that assumption is likely incorrect. We ALL torture the data every day, without being aware we are doing it.
Scientists are no different. We believe we are without bias, while the opposite is the case. Human beings are not capable of acting without bias, because it is subconscious. Intellectually we know we have it, but we can’t consciously control it. Thus it must be eliminated by experimental design.
How many climate researchers use double blind methods for example, when collecting and measuring samples or compiling results? I’d hazard a guess the answer is none.
So, it should be no surprise if the results confirm your expectations. Your subconscious is working to make sure they do, inserting small errors that you will not catch, that will skew the result in the direction you desire. And you should not be surprised when you lose the original data accidentally. Your subconscious knows there are errors that would best remain hidden, even if you don’t.
MarkB,
Only the data and results that do not agree with the consensus are tortured.
Data that does agree with the consensus, even if inaccurate or false, is given a nice party and plenty of gifts.
So, you’re disagreeing that “torture works,” Judith? I suspect that many of you denizens feel differently, depending on the context.
How about “enhanced interrogation of data?” Does that work?
I’ve tried waterboarding data before, but that just tends to make the papers soggy. And with electronic data … let’s just say, don’t try this at home unless you’re heavily insured against electrical fires. ;-)
Again, remedies are obvious: Develop a publication climate which encourages or even insists on independent replication, that treats well-conducted “failed” replications identically to well-conducted “successful” ones, and which does not privilege “replications” from the same authors or lab of the original study.
Indeed…can you imagine Jones and Mann giggling about McIntyre’s inability to replicate a result under such a regime?
Then fund it. giggle.
2.5 billion dollars a year isn’t enough? Tears.
http://www.climatesciencewatch.org/2010/02/02/president-obama%E2%80%99s-fy2011-budget-has-21-funding-increase-for-usgcrp-climate-science-research/
And we need extra funding to: pay scientists to attempt replication? pay those on review panels to fund worthy replication projects? pay the journal editors to consider replication studies?
I’m not sure why you’d jump straight to funding when the remedy suggested was behavioral.
Being somewhat older than 13, I’ll dispense with the giggle.
Fund it or not. Politically, there isn’t going to be any change in the US until climate science starts replicating studies. The stakes are too high, the investment too small to do otherwise. The alarmists should try to make their case with responsible science.
Of course, the alternative is to try to succeed with the tactics that Algore and Trenberth are trying. giggle. bigger giggle. ROTFLMAO.
be careful what you wish for Eli. In 2012 if the republicans take charge they just might decide to fund observation instead of modelling. doh!
The medical world has moved this way with the Cochrane Collaboration on health care studies. Negative findings on trials registered with the data base actually get to see the light of day!
You don’t need to torture data to get the results you want. You just have to leave the disagreeable info out of your original collection. Just find the stuff you want and don’t find the stuff you don’t want.
I give you Climate Science.
Andrew
I give you Climate Skepticism. By far the worst torturing of data I’ve seen is done by them, when they fit straight lines or exponentials to data when the theory predicts that those will give a dreadful fit. They do it anyway then turn a blind eye to the bad fit.
That is true. It takes a higher level of sophistication to create data to fit your agenda than to just misinterpret data.
http://www.nytimes.com/2009/01/22/science/earth/22climate.html
Neat picture too.
Vaughan, I know you are feeling tortured by HADCRUT3 going down over the last 10 years and sea level going down …. but imagine how skeptics feel when you keep calling them names when it’s YOUR side’s data going down even with the bogus adjustments!
“Torturing the climate data”
See Steve McIntyre’s insightful overview: “How do we “know” that 1998 was the warmest year of the millennium?”
Mine for trends
Ross McKitrick in What is the ‘Hockey Stick’ Debate About? found:
“Hide the decline” – exclude the data
Steve McIntyre in “Yamal and Hide-the-Decline” reviews the developments, especially how excluding data was another major cause of the IPCC’s prominent use of “hockey stick”. See: NW Siberia core count
As seen in the: Yamal Region Chronologies
Caveat emptor – let the buyer beware!
Given that McIntyre and McKitrick’s code mined for hockey sticks. . . :)
I don’t think you want to bring up the problem with Dave Clarke’s (Deep Climate’s) analysis by linking to this. Nor do you want to point to Tamino’s (Grant Foster’s) difficulties with decentered PCA, and oneuniverse’s findings. That’s a very interesting discussion. Even Don Baccus (dhogaza) shows up.
Does climate science need to address the “placebo effect” and implement “double blind” analysis methods such as those required in clinical trials? e.g. by using third party analysts and climate auditors? See: The Power of Mind and the Promise of Placebo
So if you do research the old-fashioned way, you won’t get published, and if you don’t get published enough you can’t be considered for the big jobs, so the most prolific authors are…?
Here is some evidence of accidentally tortured data from the Detection of Global Economic Fluctuations in the Atmospheric CO2 Record thread. Yesterday I found that Max’s emissions data does not match Jon’s Figure 1. Max had apparently worked out the analysis independently of Jon and submitted that in the comment.
This is the overlay of the two (the vertical shift does not matter for cross-correlation):
http://img194.imageshack.us/img194/2985/jonmax.gif
Someone put in a bad data point for 1988 (guess which one is bad). It is enough to influence the cross-correlation, so that Max’s data is much less correlated than Jon’s with CO2.
The moral is that one bad data point is enough to make the time-series cross-correlation meaningless. One can argue that we don’t have many points in the data set, but you have to use what you have.
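A toy illustration of that moral (synthetic numbers, not Max’s or Jon’s series): a single corrupted entry in a short series is enough to wreck the correlation of the year-to-year changes.

```python
import numpy as np

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Two short, genuinely correlated "emissions-like" series (placeholder numbers).
rng = np.random.default_rng(4)
x = np.linspace(5.0, 8.0, 30) + 0.10 * rng.standard_normal(30)
y = x + 0.05 * rng.standard_normal(30)

print("correlation of year-to-year changes:", round(corr(np.diff(x), np.diff(y)), 2))

# Corrupt a single entry (a typo-sized error in one year) and recompute.
y_bad = y.copy()
y_bad[15] += 3.0
print("after one bad point:                ", round(corr(np.diff(x), np.diff(y_bad)), 2))
```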
BTW, the time-series cross-correlation suggests that the causality chain (Carbon emissions => CO2 => Temperature) is fairly compelling with this data set, once you sort it all out. See:
http://theoilconundrum.blogspot.com/2011/09/sensitivity-of-global-temperature-to.html
WebHubTelescope
Interesting evidence.
However, is that due to a false dichotomy of excluding solar/cosmic ray impacts? See David Stockwell’s Solar accumulation theory, especially his phase lag showing a Pi/2 (25% of cycle) lag of surface temperatures behind solar forcing.
Thus there would be two inputs:
Solar/cosmic to clouds to temperature to CO2
and Fossil Fuels to CO2 to Temperature.
Both need to be sorted out in detail to distinguish the effects.
Stockwell’s evidence appears to strongly say that solar/cosmic to clouds to temperature to CO2 is a dominant factor. Conversely, a near zero lag between SST and CO2 strongly says that CO2 is tied to ocean temperatures as a consequence.
I would also be interested in your evaluation of Fred Haynie’s Future climate change where he has a wealth of fascinating CO2 variations and analyses.
Fred Haynie Future climate change
http://www.kidswincom.net/climate.pdf
Thanks David,
That indeed is an interesting analysis. In terms of a poker-hand, a sharp cross-correlation peak at lag=0 usually beats out a broad correlation peak at a finite lag. They also have to put the TSI on a slope line to get the CC to make sense, which means there is still an overall forcing function not accounted for. TSI might be an additional effect and I would try to aggregate it with the FF forcing function in a variational fashion to see what comes out. Perhaps TSI gives some of the slow lagged dynamics and FF gives the sharp and spiky fast dynamics and the overall incline.
One other thing: the fact that they observed Temperature to have a 90 degree phase shift with the TSI cycle means that it is behaving a lot like a time derivative of TSI. This would suggest that Temperature is reacting very sensitively to the rate of change of TSI. What they really should do is just take the time derivative of TSI and overlay that curve on average Temperature. Then they can show a Lag=0 on the derivative and people can ponder what exactly that means.
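A quick numerical check of the 90-degree-shift point (an idealised sinusoidal cycle, not the actual TSI record):

```python
import numpy as np

# A sinusoid shifted by a quarter period is, up to sign and scale, its own time
# derivative; so a 90-degree lag of temperature behind an idealised TSI cycle
# looks just like a derivative response. Toy 11-year cycle, monthly sampling.
t = np.arange(0, 44, 1 / 12.0)                      # four cycle lengths, in years
tsi = np.sin(2 * np.pi * t / 11.0)                  # idealised TSI cycle
temp = np.sin(2 * np.pi * (t - 11.0 / 4) / 11.0)    # same cycle, lagged a quarter period
dtsi = np.gradient(tsi, t)                          # numerical d(TSI)/dt

print("corr(temp, TSI)     =", round(np.corrcoef(temp, tsi)[0, 1], 3))
print("corr(temp, dTSI/dt) =", round(np.corrcoef(temp, dtsi)[0, 1], 3))
```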
Very interesting stuff.
I also like the Feynman quote which I have never seen before:
That’s essentially why you can never give up when you start down this path. Everything has to fit together like a jigsaw puzzle.
WebHubTelescope
Suggest looking at Stockwell’s
Cointegration Primer
Cointegration
“Integration Order”
(Unfortunately, the figures were lost)
Consider if:
Temperature is an integral function (because of heat capacity.)
(Which gives a phase lag from cyclic forcing.)
Radiant forcing via optical depth is proportional to natural log (CO2),
CO2 is proportional to the integral of Fossil Fuels.
That looks like several integral steps difference from temperature to Fossil fuels. Thus the importance of Stockwell’s Solar Accumulation models.
These integration order analyses could provide valuable tools in your analytic methods.
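One hedged way to act on that suggestion is to estimate the order of integration of each series by differencing until a unit-root test rejects; the sketch below uses the ADF test from statsmodels on synthetic stand-in series, so the printed orders are only indicative of the method, not of the real data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def integration_order(x, max_d=3, alpha=0.05):
    """Difference the series until an ADF test rejects a unit root."""
    for d in range(max_d + 1):
        if adfuller(x)[1] < alpha:         # element [1] is the p-value
            return d
        x = np.diff(x)
    return max_d

# Placeholder series mimicking the integral chain sketched above:
# roughly stationary emissions -> cumulative CO2 -> doubly integrated temperature.
rng = np.random.default_rng(5)
emissions = 1.0 + 0.1 * rng.standard_normal(150)
co2_like = emissions.cumsum()
temp_like = 1e-3 * co2_like.cumsum()

for name, series in [("emissions", emissions), ("co2_like", co2_like), ("temp_like", temp_like)]:
    print(name, "-> estimated order of integration:", integration_order(series))
```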
Definitely. Thanks.
I am working with this form at the moment to improve my understanding:

$$c\,\Delta T + M \int \Delta T\,dt = k[CO_2] + B \frac{d[CO_2]}{dt}$$

I have the temperature as an integral function on the left-hand side, but I know that [CO2] has the potential to generate immediate positive feedback, so I have the derivative on the right-hand side as well. This can converge from both directions. It’s interesting to play around with these numbers (and I don’t consider that torture :)
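A minimal sketch of how one might fit that combined form, under the arbitrary normalisation c = 1 and with a cumulative sum standing in for the integral (synthetic placeholder data, not the Mauna Loa or HadCRUT series):

```python
import numpy as np

# Placeholder annual series (not real data), just to exercise the fit.
rng = np.random.default_rng(6)
n = 60
co2 = 315 + 1.5 * np.arange(n) + rng.normal(scale=0.4, size=n)
dT = 0.004 * (co2 - co2[0]) + 0.04 * np.gradient(co2) + rng.normal(scale=0.05, size=n)

# Fit  c*dT + M*int(dT)dt = k*[CO2] + B*d[CO2]/dt  under the normalisation c = 1,
# i.e. regress  dT = a0 - M*cumsum(dT) + k*[CO2] + B*d[CO2]/dt  by least squares.
integral_T = np.cumsum(dT)             # discrete stand-in for the integral term
dco2 = np.gradient(co2)                # finite-difference derivative of CO2
X = np.column_stack([np.ones(n), -integral_T, co2, dco2])
(a0, M, k, B), *_ = np.linalg.lstsq(X, dT, rcond=None)
print(f"M = {M:.4f}, k = {k:.4f} degC/ppm, B = {B:.3f} degC/(ppm/yr)")
```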
WebHubTelescope
Good to know they may be of help.
Re: “I know that [CO2] has the potential to generate immediate positive feedback”
Do we empirically “know” that?
Or could it be that Temperature increase causes an immediate release of CO2 (via the Clausius-Clapeyron equation)
Can we statistically distinguish between d(CO2)/dt and d(ln(CO2))/dt (which differ by a factor of 1/CO2)?
(i.e., I am concerned that with small changes in CO2, a series expansion coupled with noise may provide simplistic results that may be misleading)
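A rough numerical check of that concern (placeholder CO2 curve, not the Mauna Loa record) suggests the two derivative series are nearly indistinguishable statistically over so small a relative range:

```python
import numpy as np

# Over the Mauna Loa era CO2 only spans roughly 315-390 ppm, so the 1/CO2 factor
# separating d[CO2]/dt from d(ln[CO2])/dt varies by only about 20%. Toy check
# with a placeholder CO2 curve:
rng = np.random.default_rng(7)
years = np.arange(1959, 2011)
co2 = 315.0 * np.exp(0.004 * (years - 1959)) + rng.normal(scale=0.3, size=years.size)

d_co2 = np.diff(co2)
d_lnco2 = np.diff(np.log(co2))
print("correlation between the two derivative series:",
      round(np.corrcoef(d_co2, d_lnco2)[0, 1], 4))
```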
Thus I am interested in the potential for Fred Haynie’s evidence to help distinguish some of these issues.
That’s exactly what I was implying, Clausius-Clapeyron or Arrhenius rate law. The positive feedback is then that, because of this release, we will have more GHG, which will then potentially increase temperature further. It is a matter of quantifying the effect. It may be subtle or it may be strong.
According to best fit cross-correlation, the average derivative rate of change is about 0.3 ppm change for every degree change in a month. As a feedback term for Temperature driving CO2 this is pretty small, but if we flip it and say it is 3.3 degrees change for every ppm change of CO2 in a month, it looks very significant. I think that order of magnitude effect more than anything else is what is troubling.
d(ln(CO2))/dt is (dCO2/dt)/CO2 by the chain rule, so using the logarithm or not may be a fairly subtle effect to first order.
Haynie’s stuff is good in so far as he is really looking at deconvolving the seasonal pieces out of the puzzle. I am just not sure if he is going about it in the most efficient way.
WebHubTelescope
Re: “TSI on a slope line to get the CC to make sense which means there is still an overall forcing function not accounted for.”
See: Using the Oceans as a Calorimeter to Quantify the Solar Radiative Forcing, by Nir J. Shaviv
That demonstrated amplification has not yet been incorporated by IPCC.
Global warming theory is the iron pyrite of reason and a giant step backward for science.
A succinct summary of the problem is offered by xkcd
http://xkcd.com/882/
On the other hand, there is the opposite problem. Call it data coddling, the refusal to make the data work very hard. This is not common in actual science but is very common on the fringes, where people are actually eager for a null result.
A fine example is currently up at Lubos’
http://motls.blogspot.com/2011/09/austin-texas-all-daily-trends-since.html
How this relates to the spectacularly odd summer here in Texas and its attribution is left mysterious. It is almost as though Lubos wants to convince me that my lawn isn’t dead and my trees aren’t dying by wielding a noisy graph he scratched together.
I’m not sure there is any rule of thumb for avoiding these sorts of error except for this one: A scientist acting constructively will always bend over backwards to consider the interpretation least favorable to their point of view.
MT: I have read a lot of journal articles and never seen this back-bending done. Maybe I just don’t know what to look for. Do you have an example? How about in the latest issue of Sciencemag, which most of us have access to?
Oh it’s mostly blog science and its ilk. (I said that in the first draft but accidentally deleted it before posting.) Getting null results into the literature is hard enough when they actually mean something.
Oh, wait, I did say it. “This is not common in actual science but is very common on the fringes”
David: MT: You seem to have misunderstood me. You claim the following: “A scientist acting constructively will always bend over backwards to consider the interpretation least favorable to their point of view.”
And in any case, a blog post, whether on Real Climate or a physicist’s blog, is not intended as a summa theologiae of climate science; it would take a brain-dead monkey not to realize that Lubos is claiming the 2011 temperature is an outlier (among other, greater outliers) in an overall downward trend in temperature.
It’s too bad, though, that he didn’t do precip also, because as I recall that is flat to up as well in Texas, which would have further supported his position that summer 2011 is an outlier, not a growing trend influenced by 40, 50, 60 yrs of steadily increasing CO2.
Oh, and I think I found some cases of your “bending over backwards to present adverse results”. The hiding of 20+ yrs of decline in tree-ring temperature signal in different forms and fashions by no less than 3 leading climate researchers. Oh wait…
_Michael
Oh, and burying quite low verification statistics deep in the SI, or just not reporting them at all.

Oh, or brutally attacking a paper which merely added new data to Santer 08, showing that his borderline valid results were then invalidated with 2 more years of data (showing they were never very robust).

I could keep going…
MT: You seem to have misunderstood me. You claim the following: “A scientist acting constructively will always bend over backwards to consider the interpretation least favorable to their point of view.”
I am asking for an example of this bending over backwards, as I have not seen it. Every journal article I have read has the authors presenting their findings in a positive light. I have never seen the “least favorable interpretation” discussed.
David,
Most scientists want to find their errors themselves rather than being caught in an attempt to publish an erroneous paper. There are also people who prefer publishing without such self-criticism. This phase of trying to find the errors is not visible in the papers, because it’s done at an earlier stage.
But you can see something like that in the papers as well. They contain all kinds of reservations, and it’s not at all uncommon that papers state openly where the argument has weaknesses, as the authors feel (correctly) that it’s much better to state that than to claim certainty and be shown by others to err in that.
Trying to get a paper published often appears to involve a careful play of giving, at first sight, an impression of strongly justified and important results while including all the caveats “in fine print.” The reviewers and journals are often ready to accept this kind of bias in the emphasis.
What about the posts at WUWT when there’s an unusual snowstorm, or seals show up in Boston Harbor, or there are unusual levels of snowpack for one or two years?
Would those qualify?
Certainly, brave Sir Robin, you mean to point out that WUWT points out that AGW promoters do things like assign “global warming” as the middle name of hurricanes, or claim that drought will be the case in Australia, until it rains?
hunter –
The example that you cite would most likely fall into the category of coddling data.
How does that change whether or not WUWT’s posts would also qualify?
I mean, you wouldn’t be justifying what happens at WUWT based on a “But, they did it first” rationalization, would you? That would be so unlike you.
Sir Robin,
I am pointing out that you are misrepresenting what WUWT is saying, which is typical of what you depend on.
Or snows returning to Kilimanjaro?
One interesting recent example is when we heard that the sun may be entering a cooler period. Scientists reacted to that with interest, and none tried to deny it could happen. This shows the neutrality of the scientific mainstream when genuine science is shown to them.
You look EXTRA stupid if you deny the quietest sun in 100 years … and yet they tried.
How many solar forecasts were reissued and revised?
Perhaps what Lubos is pointing out is that if you live in Texas long enough, there is a good likelihood of trees and lawns dying from drought?

But that would fly in the face of Dessler’s claim that this drought is caused by CO2, and one cannot face that from the faithful POV, can you?
Did Andrew make such a claim? Link?
Multiple times. On PBS in Houston. I watched it.
He has also stated it in other venues.
It was very annoying.
He didn’t say that CO2 caused the drought, he said that it most likely made it worse than it would otherwise have been – that’s really not the same thing.
I looked at Lubos’s plot and it does have a weird asymmetry according to the season. He doesn’t seem to have much intellectual curiosity for being such a smart guy. If it was me, I would look into that and try to figure it out.
One place to look into would be other examples of the phenomenon. Stephen Wolfram comes to mind. Like Lubos he’s been a physics professor at a distinguished physics department (Caltech vs Harvard), though unlike Lubos he doesn’t need to fund his website with ads. But he’s been trying to reduce physics to cellular automata for over three decades now, and still doesn’t have a compelling cellular-automaton explanation of any of Maxwell’s laws, thermodynamics, quantum mechanics, relativity, or just about anything that would establish a connection between cellular automata and physics. That didn’t stop him from writing a 1200 page book on his theory.
Motl and Wolfram both seem unable to debug their respective theories of how the world works. They have that in common with a great many people, 99.99% of whom however have not been admitted to the faculty of a top-ranked physics department.
Vaughan,
BTW, I liked your RC-circuit analogy describing the thermal mass of ocean versus land from the other day. That is the kind of imaginative stuff that I really appreciate and gets one thinking from a different perspective. You see that analogy and it locks in your brain and then you might be able to use it somewhere else. These are the cross-disciplinary patterns that lead to new insights.
At a time before powerful digital computers became readily available, people built circuits of that kind, only more complex, called them analog computers, and used them both in research and in engineering.

Integrating was easy, but determining the derivative was more difficult. For that, electromechanical components were also used, namely tachometers similar to those used in cars to tell the speed.
The analog computer I used even had quarter square multipliers.
That book wasted an afternoon or two for me.
crap.
MT- Regarding Texas- (as I am living there now also), yes it was a hot summer. Wouldn’t you agree that the data does not show any long term trend of warming in Texas—even though it was a hot summer?
I thought Dessler was the one who stated that the hot summer in TX was further evidence of AGW?
After taking a shot at Perry calling for Texans to pray for rain, he wrote: “I know that climate change does not cause any specific weather event. But I also know that humans have warmed the climate over the past century, and that this warming has almost certainly made the heat wave and drought more extreme than it would otherwise have been.”
http://www.theeagle.com/columnists/Paying-the-price-for-climate-change
So Dessler said squat other than his personal belief based on his knowledge.
Dallas-
How do you know that humans made the heat wave and drought more extreme than it would have been? Please consider that this year’s summer was weather and not climate. Please consider that this summer’s weather was driven by shorter term events that would completely overwhelm the impact of any potential change humans might have made.
I would agree that it is possible that the summer was made hotter or drier due to humanity, but it is not necessarily true. It is also possible that if CO2 were lower, that wind patterns would have been different and TX would have been cooler. You are drawing conclusions with insufficient information—do not be so sure.
I should have written that it is possible that if CO2 were higher, that wind patterns would have been different and the summer might have been cooler—we do not know.
Rob, that is Dessler’s quote, not mine, but I totally agree with your analysis.
Sorry these links all come from one ‘fringe’ site. Hope I didn’t offend Michael T. I honestly tried making these at the NCDC site and couldn’t seem to get it to work, although I have in the past.
http://www.real-science.com/wp-content/uploads/2011/09/chart_16.png
http://www.real-science.com/uncategorized/year-to-date-in-texas
http://www.real-science.com/uncategorized/global-warming-in-texas
_Michael
An interesting piece, Dr Curry.
This sort of behaviour is, although uncommon, not unheard of in science. This is not through any intrinsic problem with scientific methodology, but with those who practice it; i.e. human beings.
Ironically (especially so given my usual ‘drum’ on this forum), increased regulation (or QA) does not and cannot solve this problem. If someone is determined to torture data, they will, and there’s very little you can do to spot it, save performing all the available calculations (AND their alternatives) yourself.
There are step-changes that can be adopted, however, to make this far more unlikely AND easier to spot: it all comes down to the materials and methods section :-)
If someone is submitting a paper/research which has used a specific statistical technique or trick, it must be detailed in the materials and methods section. The stats used should be outlined, but also the other methods considered/used, with the rationale for the final selection/discounting of the other methods covered. Any additional but unused data (from ‘discarded’ statistical methods) should be included in the appendices for reference.
Forcing people to explain the reasons for their particular methodological choice will greatly reduce instances of data torturing. Putting it in the opening section (materials and methods) also puts it front-and-centre for all to see.
Finally, raw data and methods must be included WITH the research or paper submission (it’s technically required now, but let’s be honest, it’s hardly a well-kept rule), or the paper/research is rejected.
Now of course this will not stop those 100% determined to be duplicitous, but it would certainly stop those trying to twist the outcomes of their research to a pre-defined outcome.
Incidentally, one of the first questions I ask at work when presented with a project’s results is ‘why did you analyse it that way?’. It can often be VERY illuminating.
Re-posted for emphasis.
And I will add my own observation that humans comprise the combatants on both sides of the climate debate.
Good to see that you acknowledge that there is a debate to be had, rather than the usual assertion that the ‘Science is Settled’.
Latimer –
In contrast to how my posts are frequently characterized, I have never posted anything that resembles “the science is settled.”
What’s interesting is that my posts are so frequently mischaracterized by people who have no data with which to support their conclusions. And they are frequently people who write posts decrying unsupported conclusions, no less.
I made no judgement about any previous posts from you. But many here and elsewhere are of the opinion I outlined. I felt your good nature and open-mindedness deserved a compliment. That’s all.
And it was also good to see that you have dropped your Witchfinder General role for a moment.
Latimer –
Can you explain this:
in contrast to this:
The only explanation I can come up with is that you were comparing my posts to some unspecified group of people that you are associating with me – without specifying what evidence you use to confirm such an association?
And while you’re at it – maybe you can specify just a bit about who has actually said that the “science is settled?” Just so I can know who I’m being compared to?
If I had been comparing it with writings by yourself, I would have written ‘rather than YOUR usual assertion’
I didn’t. Nuff said.
Add one extra and large caveat to Labmunkey’s comment — if the study is to be used to formulate important policy, it needs to be replicated. If no one replicates any more because there isn’t any grant money for it, the government ought to invest the cash rather than impose the regulatory policy costs without checking. Especially in light of this http://marginalrevolution.com/marginalrevolution/2011/09/how-good-is-published-academic-research.html
The unspoken rule is that at least 50% of the studies published even in top tier academic journals – Science, Nature, Cell, PNAS, etc… – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings.
and
“If so much work that scientists KNOW is likely to be replicated fails to stand up, imagine the percentage when they don’t expect replication.”
Oh, nice post.
This is something that definitely bears repeating.
“… it is practically impossible to replicate or verify Dr. Mann’s work… Could it be that this particular work violates the principles of the scientific method…?”
EXCERPT
“Now, after some independent analysis it seems that all scientists could possibly be misled on some of their issues. Both the National Academy of Sciences and Dr. Wegman’s committee analyzed the hockey stick report by Dr. Mann that has become the poster child for proof of global warming. The committees came to the conclusion that Dr. Mann’s hockey stick report failed verification tests and did not employ proper statistical methods.
“Also, it appears that Dr. Mann is part of a social network… of climate scientists who almost always use the same data sets and review each other’s works. There is a contention that they would dismiss critics who had legitimate concerns, rarely used statistical experts for the data they used in their reports, and make it very difficult for reviewers to obtain background data and analysis.
“These revelations point to the lack of independent peer review and how it is practically impossible to replicate or verify Dr. Mann’s work by those not affiliated with the network of scientists, so we are looking forward to hearing about that work today. Could it be that this particular work violates the principles of the scientific method and should be dismissed until it meets the basic qualifications?
“Could that have been some of what happened to the Ice Age return theory of the 1960s?”
(Excerpt from the prepared statement of Tammy Baldwin, Committee on Energy and Commerce, 109th Congress Hearings, Second Session, July 19 and July 27, 2006)
Does this explain the serious reluctance for many researchers to let anybody else look at their raw data? That they know that it has been tortured within an inch of its life and that some other worker might blow the gaff to the scientific world?
Seems to be a very good explanation of the Hokey Stick and much of the subsequent anti-scientific behaviour of ‘The Team’. Like the priests in the 16th century who didn’t want Tyndale to publish an English Bible rather than Latin, since it would take away much of their power and influence over the gullible public. Being the sole interpreter of the Truth is a very advantageous position. But not so much if others use the same sources and aren’t afraid to come up with a different position.
“… McIntyre & McKitrick (2003, 2005a,b,c,d), the NAS report (2006) and the Wegman report (Wegman et al., 2006) have all independently ascertained that Mann’s PC method produces spurious hockey-stick shapes from a combination of (i) inappropriate centring of the data series, (ii) non-random or biased selection of small data samples, and (iii) inclusion in the calculation of proxies that are known not to reflect a reliable temperature signal (notably, bristle cone pine datasets) …”
“The main inferences that we drew in our letter to parliamentarians of July 21 remain. They are:
“(i) that the magnitude of likely human-caused global climate change cannot be measured, has not yet been shown to have a high risk of being dangerous, and remains under strong dispute amongst equally qualified scientific groups;
“i.e. the science of climate change is far from settled;
“(ii) that the benefits of NZ having signed the Kyoto accord, or of the institution of any other policies intended to avert global climate change (such as a carbon tax), are entirely unclear, and under strong challenge;
“i.e. the economics and likely effectiveness of climate change mitigation measures are far from settled; and
“(iii) that because of the many special interests involved (amongst which number energy and mining companies, environmental consultants, environmental and other NGOs, scientists employed to research climate change, government bureaucrats and departments, local and regional councils, and national politicians), the best and perhaps only way to get dispassionate advice on this vexed issue is to convene a Royal Commission of enquiry.
“i.e. New Zealand’s participation in Kyoto will cost at least $1 billion more than originally estimated; seeking impartial advice as to the benefit seems only wise.
Professor Augie Auer, BSc (Meteorology), MSc (Atmos. Sci.), CCMAMS
(Chair, NZCSC Science Panel), on behalf of:
Rear Admiral (ret.) Jack Welch, CB
(Chairman, NZCSC)
and
Professor Bob Carter, Bsc Hons. (Otago), PhD (Cantab.), Hon. Fellow RSNZ
Associate Professor Chris de Freitas, PhD
Mr. Roger Dewhurst, BSc Hons., M.App.Sc (Eng. Geol. & Hydrogeol.)
Mr. Terry Dunleavy, MBE, FWINZ
Dr. Vincent Gray, PhD, FNZIC
Mr. Warwick Hughes, MSc Hons.
Mr. Bryan Leyland, MSc, FIEE, FIMechE, FIPENZ
Dr. Alan Limmer, PhD, FNZIC
Dr. Bill Lindqvist, BEng. (Otago), BSc Hons. (Econ. Geol.), PhD (Imperial Coll.)
John McLean, BArch (Melbourne)
Mr. Owen McShane, B.Arch., Dip T.P, M.C.P.
Mr. Leighton Smith, Broadcaster, Auckland
Dr. Gerrit J. van der Lingen, PhD (Geology), BSc, MSc, MGSNZ
Dr. Bryce Wilkinson, PhD (Econ.), BSc Hons. (Chemistry), MCom. (Econ.)
Dr Bryce Wilkinson, BSc Hons, MCom, PhD
Dr Len Walker, company director
John McLean, B.Arch
The New Zealand Climate Science Coalition, 11 August 2006: Response to comments by Dr. David Wratt – Letter (Chair, New Zealand Climate Committee, Royal Society of New Zealand)
Here’s what I don’t get. If I were a young Turk just coming out of graduate school I’d look at this global warming as a giant opportunity to make a splash. Another dozen or so phony boloney AGW supportive papers aren’t going to do all that much these days since that’s what everyone’s doing. You’d think there’d be a bunch of these young guys chomping at the bit to punch holes in this thing.
pokerguy,
Historically, the powers that be have to die off before there is a paradigm change.
Andrew
How would they ever get such apostasy or heresy published? Or work again in climatology? It is only those approaching the end of their careers who can afford to put their ‘cojones’ on the line against the Forces of Consensus.
And I seriously doubt whether today’s bright young things would choose this as a subject. Their elders have hardly set them a great example of scientific greatness. Only the mediocre need apply, lest they too easily eclipse the Old Guard.
You have to remember the acculturation behind why people go into eco-studies or aspire to be part of the Washington press corps. It’s unlikely, at a serious level, that people would break tribal ranks.
There are rent-seekers, but the glue that holds it together is left-wing cultural values and self-image.
So when the rain in Maine falls mainly in the Seine, is that a form of extraordinary rendition?
Pokerguy : you’d need a RICH graduate to gamble his/her life against everybody else’s. I don’t think there’s too many rich graduates really.
FWIW, I would imagine that class and income are very positively correlated with levels of post-graduate degree attainment. In fact, I would imagine that it would be hard to find variables that are more strongly positively correlated.
Maybe in the US. Certainly wasn’t true in UK for my contemporaries.
OK so you need to find a graduate whose parents are rich and are perfectly willing to give him/her all the money he/she will need for the rest of his/her life…rather than, say, the usual parents who expect their children to do something with their lives, like having a career and achievements in any area including science.
Seconded.
Here’s torturing data. You can draw a flat line for temperature from 1983 to 1993. Someone in 1993 could have claimed warming had stopped based on that. A similar thing could have been done in the early 80’s. Does this sound familiar? Has it been explained why this one is different? The fact that we didn’t hear such claims back then is because people were generally more sensible about extrapolating variability, I think.
Has it been explained why this one is different?
Er, yes: there were two huge volcanic eruptions, in 1982 (El Chichón) and 1991 (Pinatubo).
Yes – stop torturing the data Jim.
HADCRUT3 says otherwise.
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1983/to:1993/plot/hadcrut3vgl/from:1983/to:1993/trend
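A hedged illustration of the point (entirely synthetic numbers, not the HADCRUT3 series linked above): a steady underlying warming trend plus plausible year-to-year noise produces ten-year windows whose fitted slopes scatter so widely around the true rate that a “warming stopped” reading comes cheap.

```python
# Sketch only: synthetic series with a known 0.017 C/yr trend plus noise,
# then ordinary least-squares trends over every possible ten-year window.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1970, 2011)
temps = 0.017 * (years - 1970) + rng.normal(0.0, 0.15, size=years.size)

def window_trend(y0, y1):
    """OLS slope (deg C per year) over the window [y0, y1]."""
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], temps[mask], 1)[0]

slopes = np.array([window_trend(s, s + 10) for s in range(1970, 2001)])
print("underlying trend        : +0.0170 C/yr (by construction)")
print("full-series fitted trend: %+.4f C/yr" % window_trend(1970, 2010))
print("10-yr window trends     : min %+.4f, max %+.4f C/yr" % (slopes.min(), slopes.max()))
```

The point is not that either reading of the real data is right, only that the choice of window largely decides the headline.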
Dr. Curry,
Do you alert your grad students to the minefield that published results may contain? Can you identify with the grad student well into her dissertation who can’t make some component of her study “work,” who then calls the authors only to find out (assuming they are honest with her) that they fudged a bit in the area in which she is interested? You would hope they’d tell her.
A friend of mine was bombed two-thirds of the way through his Caltech doctorate by a phony paper fundamental to his work. He discovered, by attempting replication, that its results didn’t hold up, but by then the time had been invested and he lateraled out to Stanford B-School.
There should be a very high value placed on write-ups by smart people who know the area showing that some approach cannot be made to work, and hopefully why.
“… the global temperature graph for the past century. Notice how, after rising steadily in the early 20th century, in 1940 the temperature suddenly levels off. No — it goes down! For the next 35 years! If the planet is getting steadily warmer due to Industrial Age greenhouse gases, why did it get cooler when industries began belching out carbon dioxide at full tilt at the start of World War II?
“Now look at the ice in Antarctica: Getting thicker in places!
“Sea level rise? It’s actually dropping around certain islands in the Pacific and Indian oceans.
“There are all these . . . anomalies.
“… computer programs can’t even predict the weather in two weeks, much less 100 years … They sit in this ivory tower, playing around, and they don’t tell us if this is going to be a hot summer coming up. Why not? Because the models are no damn good!”
–From Joel Achenbach’s now famous May 2006 interview with Bill Gray (the “World’s Most Famous Hurricane Expert”)
As if this were not enough, even if the science were validated, there would still be complex software engineering issues associated with the GCMs – namely code and calculation verification. Fortunately, a substantial body of knowledge has been developed for CFD. For an introduction with references to much more information see http://www.variousconsequences.com/2011/09/separation-of-v-and-v.html. Unfortunately, the climate modelers have mostly ignored this established verification technology.
Thanks much for this link, it may be time for another V&V thread
Can I provide a heaping helping of varves from “upside-down Tiljander”, anyone?
The ignorance and uncertainty blog seems to be “Written by michaelsmithson”
“We may quote to one another with a chuckle the words of the Wise Statesman, lies, damn lies, and statistics….”
J.A. Baines on ‘Parliamentary Representation in England illustrated by the Elections of 1892 and 1895’ in the Journal of the Royal Statistical Society, No. 59 (1896)
There is really nothing new in the climate debate.
I agree: Damn power corrupts!
This standoff is explosive, like the Sun [1].
AGW supporters have the political power.
AGW critics have the science advantage.
Let’s not win the battle and lose the war!
Can we stay focused on restoring:
a.) Integrity to government science, and
b.) Citizens’ control over government?
1. “Super-fluidity in the solar interior:
Implications for solar eruptions and climate”,
Journal of Fusion Energy 21, 193-198 (2002)
http://arxiv.org/pdf/astro-ph/0501441
–> There is really nothing new in the climate debate
No one outside the West is going to commit suicide by starving their economies of energy. And, it really doesn’t matter what dead and dying Old Europe does or does not do. Furthermore, the IPCC probably will be defunded in 2012. Please make a note of it.
This is obvious and has been pointed out many times and ignored in climate science. It’s very unnerving to find out, after fighting for the data used in published papers, that the raw data is “gone” and only “adjusted” data is available; and this is before any “torture”.
It is absurd to accept the premise of doom derived from a few tenths of a degree here and a few tenths there and not examine the data yourself. A casual glance at many of the published papers makes obvious the number of statistical tricks used to manufacture a signal.
Any “scientist” should be embarrassed at being associated with the current climate community, especially after having this moron running around and telling children that killer storms are going to come for you because your parents won’t drive an electric car (not that he is a scientist, but he is aided and abetted).
After countless attempts to show the flaws in the data and its interpretation, those that have tried are bullied, ignored, and slandered.
The most amazing thing is that NOW someone “realizes” it.
Ever heard of Climate Audit? Its whole purpose was to show the subject of this post.
Try to tell me what the temperature was to the tenth of a degree 200,000 years ago and I’ll sign you up for a $500,000 mortgage at 0% interest. Later you can pout because you were “tricked” by somebody.
– “On torturing data”
– “Probabilistic estimates of transient climate sensitivity”
– ...social psychology journals exhibit a strong bias towards publishing only studies that have achieved a statistically significant result….
For a physiologist … even a quarter of an unreproducible result is sufficient accomplishment worthy of publication.
The distinction is between that of …
I was going to suggest that a productive thing to do when there is a low signal-to-noise ratio … is to go fish elsewhere … and, by the way, how about looking over in these places? The big picture offers tantalizing possibilities.
Then again I am living in a culture of science where the acceptable credo is “remove the doubt, reveal the deniers” and “peer review makes science credible”.
Peer review accomplishes the following …
1) Guaranteed Readership: It ensures that some other(s) have actually taken the bother to carefully read and consider what you have done and to offer some sincere comments as a result of doing so.
This is a considerable improvement on having no one carefully read and consider your effort, and/or receiving no opinion in response to it.
2) Standardized Presentation: There is something to be said for presenting items in a common way. If each article in a collection gathered into a ‘Journal’ had its own individual layout, typeface and conventions of meaning, it would be somewhat messy and confusing.
In a very big sense … having reviewers make comments about the quality of the ‘science’, errors or oversights in the method, a lack of supporting statistical evidence … is NOT about excellence and credibility in scientific research. … it is about conforming to common group standards of presentation.
Of course peer review and group standards ensure a modicum of completeness and rigor. But is the consumer of such a packaged product to trust that its contents are worthy because it has met minimum standards of durability … a statistical calculation has been made and a specific number is provided … a method is used which is also known to be used by others … certain contextual information is provided … the author cites other authors who are involved with pertinent aspects.
Peer-reviewed articles that I’ve read range from rubbish to wonderful. Regardless of that, I seldom have any easy way of peering behind what is being reported to ascertain its credible worth.
I certainly have been browbeaten and snowed by generous flourishes of impressive and incomprehensible words and claims.
Frankly all that prim and proper style is more apt to mislead than reassure. Spit and polish, shiny and neat, umpteen testimonials.
Who isn’t going to trust the credibility of such a document? Surely not the very same scientist who is going to use that ‘Academy Certified’ authenticity in the course of producing a new and improved alternate tome of certified reliability and correctness.
Style impresses. It does convey achievement in a secondary and residual manner. That’s why considerable expense is made to produce very impressive, trustworthy-looking financial certificates.
People trust the appearance. People know that minimal money, minimal skill and minimal effort have been put into producing that tangible evidence. Just don’t take it further than that ….
The instant any scientist uses the “It’s credible because it’s peer reviewed” argument .. they are saying very, very little more than “It’s credible because it has the Good Housekeeping Seal of Approval.”
Are you, as a creditable scientist, going to buy it because it’s grade-A certified ‘peer reviewed’? Do you automatically assume that everything you read in Nature is credible, important, unassailable, settled science because it passed peer review and was published in Nature?
I guess so.
Only deniers, nutters, shills and riff-raff dare to protest the great god Housekeeping Seal of Approval of ‘peer reviewed’ science.
There are other reasons for peer review ..
3) Peer review serves to be selective about the contents of a Journal.
Fair enough. No problems there …
Pity that the power to shape journal content is more often used to play games of researchmanship and to support “in group” interests and the maximization of grant awards. Don’t hear much talk as to how incredibly political research and science happen to be. That wouldn’t do in the climate of “remove the doubt, reveal the deniers”.
It utterly baffles me how otherwise extremely capable researchers would be so foolish as to defend themselves with the predominantly and deliberately misleading claim that ‘peer review’ makes credible.
I don’t recall hearing that the wonderful benefit of peer review is that it ensures that someone carefully considers and responds to the author’s effort. THAT’S JUST TAKEN FOR GRANTED, ISN’T IT? … whole hordes of readers eager to relish every word of the scientist’s important work.
I apologize. How silly of me.
Don’t hear much mention of where or how credibility IS ENSURED in science. It comes from places such as research reviews, search committees, awards committees and, perhaps most important of all … from peers recognizing and appreciating each other’s work and offering support.
Finally there is one very very crappy side to ‘Peer review’. That doesn’t seem to get mentioned much either, even though it’s spot on this topic.
“On torturing data” Only an idiot, a career climber or someone mesmerized with endless variety of results … would play the pump-the-data-for-all-it’s-worth-and-then-twenty-times-more-again game.
It’s the easiest way there is to win the “Look how many publications I have and how important I am!” game. With ‘peer review’ on your side it makes for an assured win. Learn a demonstrably effective production technique. Automate and mass produce with every conceivable context.
Peer review ruins science because …
ONE) It discourages, impedes and rejects efforts that cannot be easily tailored to the code-of-presentation
TWO) It encourages the work that is specially intended to slip as quickly and easily as possible between the greased rolling pins of the code-of-minimal-mediocrity-in-presentation
Torturing data passes peer review.
Worthwhile science gets eaten alive.
As I talk rubbish I won’t bother with my nonsense places and ways of looking for nonsense
You can now go back to the mantras …
I’ll get lost
Raving,
Problem is, they have no clue as to how the planet works.
The solar system is a thousand times more complex than current science can grasp.
Following temperatures to the exclusion of all other factors is fiction at the height of stupidity, due to economic funding that has generated like-minded zombies. 4.5 billion years is a vast amount of data being ignored for a few hundred years of science that has no value.
Many areas have been missed as they do not fall into established categories of consensus laws which missed the simplest of measurements of difference in circumference sizes and speeds of rotation.
E=mc². There is far more than one type of energy, and speed changes density.
Joe, as a very abbreviated and impersonal acknowledgement of your responses, see the link below. It seems to touch on some of what you are saying.
This topic posted yesterday, was very emotional and generated a handful of substantive realizations for me … and was heavily moderated too …
It has left me somewhat disoriented as to which focus I should head towards next. I appreciate your response and shall reply again when I can reform my wit.
Thank you
http://judithcurry.com/2011/09/15/on-torturing-data/#comment-112884
I’m interested in the transition between timeless and timely situations. Yes there is the physics of elastic coupling. That’s not quite what I mean however. ( raving = elastic … sort of)
From your more recent post …
Inertia of planets is the Achilles heel of physics and science theories + … just attached by pressure…
Financial economist Andrew Smithers, writing in the Financial Times, said
“Data mining is the key technique for nearly all stockbroker economics. There is no claim that cannot be supported by statistics, provided that these are carefully selected. For this purpose, data are usually restricted to a limited period, rather than using the full series available. Statistics, it has been observed, will always confess if tortured sufficiently.”
http://www.smithers.co.uk/news_article.php?id=25&o=20
An example from climate science:
Judith Curry recently presented data at a Boulder conference showing that precipitation in the Pacific NW varied with the PDO. This explains the heavy snowfall in that area in the 1950s and in recent years. She used the full series of data available.
A few years ago, some climate scientists showed snowfall declining from 1950 to the mid-1990s , demonstrating that global warming was reducing snowfall. They used data restricted to a limited period, rather than the available data from 1914-2004, and the data confessed.
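A sketch of the mechanism described above, with entirely invented numbers standing in for the snowfall record: a series dominated by a roughly 60-year PDO-like oscillation “confesses” to a steep decline if you fit only 1950–1995, and to a far smaller trend over the full 1914–2004 span.

```python
# Sketch only: synthetic "snowfall" with a ~60-year oscillation (invented
# phase and amplitude) plus weather noise; compare a restricted-period
# trend with the full-record trend.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1914, 2005)
snow = 100 + 15 * np.sin(2 * np.pi * (years - 1945) / 60.0) + rng.normal(0, 6, years.size)

def trend(y0, y1):
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], snow[m], 1)[0]

print("1950-1995 trend: %+.2f units/yr  (restricted period -- the data 'confess')" % trend(1950, 1995))
print("1914-2004 trend: %+.2f units/yr  (full available record)" % trend(1914, 2004))
```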
Don, that is an excellent example. Part of the reason that many climate scientists limit some of their data to the 1950s onward is their firm belief that CO2 began to prevail as “THE” forcing circa 1950. The period 1910 to 1940 is nearly taboo in climate circles. That is kind of scary to me.
The paleo reconstructions spent some time on the waterboard too. Opting for high-frequency proxies with pretty heavy smoothing and a best-instrumental-fit selection pretty much guarantees a subdued past climate record.
You must believe – remove the doubt, reveal the deniers
Yes we must trust the expert economists. They are the cream and superstars of those who do trend analysis, dynamical systems analysis, and numerical simulations. In private economic forecasting there are no limits on research funds.
Economic forecasters do a wonderful job.
Climate change scientists are losers who couldn’t hack it in the big leagues of economic forecasting.
Sure I am as cynical as stink.
Academics are losers
Failed academics are even bigger losers.
Prove me wrong! :-D
You must believe – remove the doubt, reveal the deniers
You have hit the nail on the head.
You will find our creativity in generating new products is stifled by the restrictions imposed by policies, now that the debt level is too high for investors to take the chance and other countries have very few restrictions.
Highest profits are more important than lives or survival. R&D is being devastated, as there is always some risk.
You must believe – remove the doubt, reveal the deniers
Peer-reviewed research economists win Nobel prizes as ‘experts’ at predicting future economic trends.
Cor blimey! That explains why many of those hot-shot complex-systems physicists of my generation quit research for the sake of producing crackerjack supercomputer models for investment banks.
Hallelujah! Not to worry …
Humanity’s salvation is close at hand
You must believe – remove the doubt, reveal the deniers
So in a post entitled “On Torturing Data” with almost 100 comments, there is no definition at all of what it means to torture data. Everybody just piles on, each with his or her own unspecified definition, and tut-tuts over this noxious practice that, of course, must be curtailed. This kind of discussion is really just literary criticism. It is neither quantitative nor based on some sort of conceptual framework from which we might draw some useful conclusions. It is, rather, an extended metaphor concerning the pitfalls of doing competitive science. What’s the point? Seriously.
If you are saying it is all purely anecdotal postulating, I agree. Some research scientist could win a lottery and would that be news? No, because in a world of billions of people there are always weird statistical outliers. Anecdotes don’t really prove anything quantitatively.
Definition? No. Examples, yes. http://www.nytimes.com/2009/01/22/science/earth/22climate.html This is a NYTimes article on what is the most tortured data in all of climate science. Instead of the 0.2 degrees of warming per decade that completely blew NASA’s estimate of maybe-some-warming out of the water, a follow-up paper by a bunch of non-science types, silly bloggers, showed that the methodology used was a bit too novel.
Even Kevin Trenberth, grand poobah of climate science, said “It is hard to analyze data where there is none”, or words to that effect, in reference to this paper.
The statistical consultant for the Antarctic warming paper is a serial data torturer, er… world-famous climate-science novel-statistics expert.
Definition? That should be obvious: any manipulation or fabrication of data leading to information which is fictitious to whatever degree and using that information to purposely deceive.
The point of this discussion? It’s cathartic!
Bill Kropla
You wrote:
How about starting with “smoothing”?
Max
Bill Kropla
In addition to “smoothing”, how about “cherry-picking”?
Max
If people don’t know what they are doing, we call it “smoothing”.
If they know exactly what they are doing, we call it filtering.
Come on people, we can pick up something like femtowatt signals buried in noise from the Voyager spacecraft, we can extract Doppler signals from fast-moving GPS satellites buried in noise with sophisticated Kalman filters that phase-lock the clock drift and get positional accuracy to the meter, we can pick the license plate number out of a fuzzy camera shot from hundreds of feet away based on maximum entropy spectral analysis and then hunt the perp down, and we can do all this other wondrous stuff. This is completely dependent on modern digital signal processing techniques and advanced filtering algorithms. A filter is simply a way of removing everything you don’t want to see, so that you can pick out the information that you think is important. That is what smoothing does too, but it is simply the tip of the iceberg of what is possible.
Yet we marginalize what some scientists can potentially do by calling it “smoothing”.
I think there are talented scientists and naive scientists and a few troubled scientists, but we can’t really lump everyone together with broad brush strokes. Based on what I have seen done with filtering, we have to give the scientists a chance to see what they can do.
[*] For example: Wolfram and his use of Cellular Automata
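A rough sketch of the smoothing-versus-filtering distinction being drawn here, using invented numbers: a weak sinusoid of known frequency buried in much larger noise is recovered cleanly by correlating against the known reference (a two-line lock-in), while a running mean mostly just blurs everything.

```python
# Sketch only: compare a running-mean "smoother" with a filter matched to a
# known pattern. All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
fs, f0, n = 1000.0, 37.0, 20000                 # sample rate, known signal freq, samples
t = np.arange(n) / fs
signal = 0.05 * np.sin(2 * np.pi * f0 * t)      # weak known-frequency signal
x = signal + rng.normal(0, 1.0, n)              # buried in much larger noise

# "Smoothing": 50-point running mean attenuates signal and noise alike
smooth = np.convolve(x, np.ones(50) / 50, mode="same")

# "Filtering" with the known pattern: correlate against sin/cos references
# (a lock-in amplifier in two lines) to estimate the amplitude at f0
i_ref, q_ref = np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)
amp = 2 * np.hypot(np.mean(x * i_ref), np.mean(x * q_ref))

print("true amplitude       : 0.050")
print("lock-in estimate     : %.3f" % amp)
print("running-mean output  : std %.3f (mostly residual noise; the 37 Hz signal is heavily attenuated)" % smooth.std())
```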
Idiots smooth things over …
Intelligent people filter things out …
Many intelligent people (who know exactly what they are doing) hold a deeply seated belief in a continuous reversible universe and are profoundly mistrustful of Wolfram’s idiotic discrete filtering of reality.
The problem resides in transiting between coherence and incoherence.
Neither linear smoothing nor nonlinear filtering can provide assistance in traveling into and out of discontinuity (.. and confusion)
Why do you think I care about cellular automata?
WHT, I am surprised at the hostility to CAs
The mistrust seems to arise from discrete representation and its implication of irreversibility.
It was a delight to start reading through your comments on fat tailed distributions. Although the maths is a bit much for me, I was reminded of percolation theory, coalescing random walks, convolution, criticality, renormalization and such things which I banged my head against many years ago. A shame Wikipedia wasn’t around back then.
Not sure what to say about your stuff, except that if you are at some considerable disagreement with the IPCC’s estimation then I cringe at their ineptitude or their deliberate misdirection, as the case may be.
Elsewhere you wrote …
There is more than diffusion and percolation. Flow can be everything.
What intrigues me is flow process as it relates to variable velocities and outliers. It would seem possible to sail icebergs through that loophole.
For me the problem is in making a transformation between coherence and incoherence.
Let’s just say that the outliers are everything as a thinning out by reiterated decimation … is but a means of moving out from continuous and across through to discontinuity.
Not trying to be clever with words here … but incoherence and discontinuity can make a mess of jigsaw puzzles.
For sure, convolution, renormalization, criticality and other tools can span that ‘incoherent messiness’. You know more about this than I do. …
Nevertheless incoherence/discontinuity remains as a formidable problem. If an appropriate narrow bandpass filter cannot be constructed, for whatever reason … and thinking of the problem of pattern acquisition here … the filtering is ineffective.
You frequently mention normal distributions and outliers. It reminds me of the Law of large numbers. An idea which I saw recently in CA research involved a Law of small numbers where N was fractional and much less than 1. (Aside: Sorry, but I don’t have a reference. I understand it because I came at the concept by way of my own work.) That has to do with outliers too. In this case N is much less than unity. Not quite sure how to explain what that means or why it is important (significant effectiveness)
The terms incoherent and discontinuity are used interchangeably. It might be said that it has something to do with irrational numbers but that even misleads.
Chaos, strange attractors, infinite dimensions, bifurcations, dynamical systems … all have something to do with it. Trouble is that there isn’t something yet(?) that is practical in transiting from coherent definiteness to wobbly stretchable incoherence and beyond. There is no correspondence.
Formal mathematical ‘measure’ doesn’t apply to incoherent space. It really is about crossing over the boundary to something alluded to in Gödel’s incompleteness theorems.
This is a real problem. It’s the problem that preoccupied von Neumann.
Not only is this incoherence problem ‘real’, it is also very ordinary … and damnable to understand and describe.
In all the examples you cite, there is an a priori known pattern to the signal – without this known pattern, recovering the signal is impossible.
Just to share an anecdote from my own discipline, ancient studies: I’ve heard from a junior scholar in a position to know the truth (and whom I trust absolutely) that a prominent archaeologist, faced with a deadline, essentially sketched out a fabricated city plan for the site he had been digging. The job was convincing enough that one would have to visit the site and study it carefully to realize that large chunks of the published map were not based on reality. So dishonesty among top scholars can go on in just about any discipline. (Not that this will come as a surprise…)
Gil,
From the stories I read, it happens far too often and then becomes a reference for future scholars.
You have no idea how bad the current science is. 80% is pure fabrication. With 4 times still not explored or in massive confusion.
Torture of a different kind;
http://www.youtube.com/watch?v=TzEEgtOFFlM
That bi-partisan, freedom loving, middle of the road U.N. at it again. Like AGW but more under the radar.
If data is good, there is no need to torture.
North Atlantic SST (and its de-trended derivative – the AMO) is a favourite ground to hunt for cycles, or to explain the 20th-century up-slope so as to justify global temperature variability, despite the claim that ‘the nature and origin of the SST/AMO remains uncertain’.
No torture needed, both nature and origin look certain.
http://www.vukcevic.talktalk.net/SST-NAP.htm
Current fashion:
Too much stat inference.
Too much promotion of Bayesian.
Insufficient value placed on data exploration.
Really… more like NO CLUE of Bayesian Statistics.
Week after week, people like Briggs enlighten people about the SIMPLE errors that occur with statistics all the time (e.g. applying averages to averages). For an introduction to basic statistics (HS–college level) Matt Briggs has a very good primer:
http://wmbriggs.com/blog/?page_id=2690
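One of those simple errors, sketched with made-up station readings: the average of two group averages is not the average of the underlying data when the groups differ in size.

```python
# Sketch only: "averages of averages" with unequal group sizes (invented numbers).
import numpy as np

station_a = np.array([10.0, 10.2, 10.1, 9.9, 10.0, 10.3, 10.1, 10.2])  # 8 readings
station_b = np.array([14.0, 14.4])                                      # 2 readings

avg_of_avgs = np.mean([station_a.mean(), station_b.mean()])   # treats both stations equally
true_avg    = np.concatenate([station_a, station_b]).mean()   # weights by reading count

print("average of the two station means: %.2f" % avg_of_avgs)  # ~12.1
print("mean of all ten readings:         %.2f" % true_avg)     # ~10.9
```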
Briggs doesn’t have all the answers …and unfortunately he has written some nonsense about averaging (along with some things that are true). Equipped with a good handle on the base spatiotemporal structures of a system, smoothing operators can be used as POWERFULLY illuminating resonators.
Tip: Don’t look to the abstract assumptions of statisticians for climate answers. Look to practical, sensible data explorers. It’s stat inference based on UNTENABLE assumptions – whether classical OR Bayesian – that’s the SERIOUS problem in a context where there are FUNDAMENTAL spatiotemporal sampling issues of which the MAJORITY of investigators are THOROUGHLY unaware.
The ONLY sensible option is to DROP the untenable assumptions – i.e. do data exploration (which should absolutely NOT be confused with statistical inference based on untenable assumptions). However, as Dr. Curry points out, academic culture FOOLHARDILY DEMANDS tests based on UNTENABLE assumptions.
We already know that most of the p-values are meaningless (whether classical or Bayesian) and yet the wholesale indoctrination of students continues. This is a fundamentally ugly issue facing our society & civilization. Only the few very brightest statistical leaders with extraordinarily rare lucid awareness aren’t hoodwinked, and most of them have a smug grin on their faces, coyly opting to defend their TRIBE (unacceptably selfish) rather than admit the truth to a society & civilization that DEPENDS on them not to mislead innocents with untenable assumptions.
Perhaps Bayesian snake-oil salesmen, some of whom perhaps don’t yet realize the LIMITED utility of their product for climate exploration, will understand an analogy. The Metropolis-Hastings algorithm sat on a shelf collecting dust for 4 decades. The same thing might happen with Le Mouël, Blanter, Shnirman, & Courtillot (2010). Even the people actually taking the time to read it don’t understand it. [Worse: Some diss it from a base of PATENT MISconception.] This indicates a FUNDAMENTAL lack of awareness of the spatiotemporal sampling framework, aliasing, integration across harmonics, and the effect of aggregation criteria on summaries.
Perhaps a larger concern should be the consequences of thus-highlighted deficiencies in our educations systems, including perceptive delays measured in decades, if not centuries.
Paul
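For readers unfamiliar with the aliasing being invoked here, a toy, purely temporal illustration (synthetic numbers, nothing to do with the spatiotemporal framework claimed above): an annual cycle sampled too coarsely masquerades as a bogus multi-year “oscillation”.

```python
# Sketch only: a 1 cycle/yr signal sampled every 0.9 yr (below the Nyquist
# requirement) aliases to a spurious ~9-year period.
import numpy as np

f_true = 1.0          # one cycle per year
dt = 0.9              # samples every 0.9 years
t = np.arange(0, 200, dt)
x = np.sin(2 * np.pi * f_true * t)

freqs = np.fft.rfftfreq(t.size, d=dt)
power = np.abs(np.fft.rfft(x)) ** 2
f_apparent = freqs[np.argmax(power[1:]) + 1]   # skip the zero-frequency bin

print("true frequency     : %.3f cycles/yr" % f_true)
print("apparent frequency : %.3f cycles/yr (period ~%.1f yr)" % (f_apparent, 1 / f_apparent))
```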
Speaking of Briggs, see: Do not smooth time series, you hockey puck!
Makes sense to me. Where are the problems with his cautions?
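The caution usually drawn from that post – don’t smooth before you correlate – is easy to demonstrate with synthetic data: smooth two completely independent noise series and their apparent correlation balloons, along with any “significance” one might naively attach to it.

```python
# Sketch only: correlation between pairs of independent noise series,
# before and after a 10-point running mean is applied to both.
import numpy as np

rng = np.random.default_rng(7)
n, w = 200, 10
kernel = np.ones(w) / w

r_raw, r_smooth = [], []
for _ in range(500):
    a, b = rng.normal(size=n), rng.normal(size=n)          # independent by construction
    r_raw.append(np.corrcoef(a, b)[0, 1])
    r_smooth.append(np.corrcoef(np.convolve(a, kernel, mode="valid"),
                                np.convolve(b, kernel, mode="valid"))[0, 1])

print("mean |r|, raw series     : %.2f" % np.mean(np.abs(r_raw)))     # small, as it should be
print("mean |r|, smoothed series: %.2f" % np.mean(np.abs(r_smooth)))  # noticeably inflated
```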
“Too much promotion of Bayesian.” Possibly. Too much misuse of all statistical methods, more likely. PCA on non-stationary and non-existent time series appears problematic.
Perhaps we can get the Bayesians & the Classicals to take a break from their tiresomely chronic feud for long enough to realize what they have in common in the climate context: patently untenable assumptions. The attack should be on ALL camps that base statistical inference on untenable assumptions. I remind all statisticians who may be reading here that data are meaningless without adequate context; what we have in the climate context is SEVERE sampling design misconceptions. Investigators have jumped straight to patently untenable assumptions without ever having explored the data properly – i.e. with a sound handle on the spatiotemporal framework outlined by EOP (Earth Orientation Parameters), which points CLEARLY to differential spatiotemporal aliasing.
I consider statisticians much like I do politicians. I don’t trust any of them on their own. I can see benefits of Bayesian methods, but they need to be verified against other methods. I need to read up on EOP, but with what little I do know, I am in agreement.
Methods are a dime a dozen and any of them (including Classical & Bayesian) can be applied sensibly, but there’s no method that compensates for basing inference (hypothesis tests & confidence intervals) on untenable assumptions. Regards.
untenable assumption.
not so clear.
A burden I once bore for several years was indoctrination of students. My specialty was introducing the statistical inference paradigm to newcomers. Stat inference is founded on assumptions, some of which are contextually indefensible.
I’m pointing out your untenable assumptions. What you claim is clear, is not so clear. What you claim is untenable, not so untenable.
regards.
Perhaps not clear to all, but the evidence is beyond all shadow of a doubt that rafts of standard mainstream climate inference assumptions are untenable – (same thing happens in other disciplines). We’ve reached harmoniously efficient disagreement. Cheers!
If a study was properly designed and conducted, and fails to reject H0, it is MORE informative than one that does reject it in most cases (especially rejections at the pathetic 95% “Climate Science Gold Standard interpreted as ‘highly likely’ “). Yet its odds of getting published are vanishingly small.
I guess it comes down to “positive results only” being a form of cherry-picking.
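A quick sketch of that “positive results only” filter, with invented numbers: simulate many small studies of a tiny true effect, “publish” only the significant positive ones, and the published record wildly overstates the effect.

```python
# Sketch only: publication bias from a p < 0.05, positive-result-only filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_effect, n_per_study, n_studies = 0.1, 20, 2000

published = []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, n_per_study)
    t, p = stats.ttest_1samp(sample, 0.0)
    if p < 0.05 and sample.mean() > 0:      # only "significant positive" results get in
        published.append(sample.mean())

print("true effect size             : %.2f" % true_effect)
print("fraction of studies published: %.2f" % (len(published) / n_studies))
print("mean published effect size   : %.2f" % np.mean(published))  # several times the truth
```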
Brian,
At least you see some concept of how science in its current form works.
The planet is at least 100 times more complex than what our forefathers (who generated the generalized theories) thought it was.
Missed simply measuring the differences in the circumference sizes to the different speeds in the 24 hour period.
This means any generalized mathematical formula is highly incorrect.
Coherence is not one of your strong points, is it?
I think what he is saying is that we should look to the motivations. For example, dead and dying Old Europe tried through the UN to use the global warming hoax to take down America. Now, the secular, socialist agenda to consolidate power over the productive in America includes keeping Europe afloat even while Leftist ideology is dragging Western civilization down like a stone. Capiche?
“Old Europe” has goofed over lots of policies, such as the common currency, so muddled thinking is far more plausible than a cunning master plan designed to “take down” our main ally!
David,
Stupidity is much more common than cunning plots.
The AGW community demonstrates this daily.
hunter
Re; stupidity and cunning plots
A good example of both is the cartoon character “Wile E. Coyote”.
Another may well be the EU leadership.
Max
The “reply” function seems to be placing posts randomly. Here’s another try:
I see the EU “leadership” and other leftist AGW-pusher-pols as bedazzled by the prospect of nearly unlimited power over the economics and demographics of the planet — in their hands, and very soon! To that end they are prepared to sacrifice any quantity of others and their rights and well-being.
‘Coherence’ isn’t a strong point with climate itself either.
Two choices come to mind …
Cnut or Knud
The king speaks out on the topic of political correctness. There is a problem of coherence.
The extent of this individual’s fabrication or deliberate and significant abuse of research methods is not yet known, but since his public career behaviour has been to publish compulsively and to groom himself to be something of a media celebrity, the personality involved – rather than the research climate, which quite frankly has not by this situation been shown to lead the majority of research scientists in social psychology or in climate science to fabricate their research data – seems to be at least as meaningful to understanding this particular story.
The bad work was almost certainly evident over time, so why did people follow him? Why, indeed. And yet we see it all the time, mostly outside of climate science. He was confident, charming, and able to get people to believe him and support him even though at least some of his research conclusions defied or were not supported by the evidence, and also apparently defied logic and common sense. If reports within the social science community are accurate, he ‘confessed’ immediately and almost playfully, suggesting that he was not ethically troubled. He is cooperating fully yet there have been no public statements of remorse. It is hard not to notice that a real comparison can be made to the personality and behaviour of e.g. Ian Plimer. Too bad some people have so much trouble understanding that an ad hominem argument (i.e., an argument that rejects an individual’s opinion based on the person’s demonstrated inconsistency, insincerity, incompetence or self-serving behaviour) is not always fallacious. :-(
Of course, these kinds of cases aside, there are manifold issues at the institutional level in Academe and especially science research, that are top of the list of skeptical concerns. However, I’m not sure this particular case is instructive since something is so clearly wrong with this guy’s fitness.
At the institutional level, it may be worth considering whether it is easier to ‘fabricate’ data in social psychology than in climate science. The two disciplines have mechanisms to support integrity and good work and to reflect standards, but they are methodologically different and the ethical role of the researcher while equally important is challenged in some uniquely different ways.
cheers
Pekka
You wrote:
This may be true in many cases, but there may be some who think there is a third alternative: publish their errors (without publishing the source codes, etc.), get some buddies to do the peer review and hope no one will challenge the results and conclusions.
These are the ones that need to be watched.
And it would be totally naive to think they are not out there in climate science today (especially after Climategate).
Max
Max,
Why do you think I wrote “most scientists” rather than “scientists”?
Whether climate science is plagued more by these problems than other fields of science that study issues of similar complexity and uncertainty is not at all clear. What is clear is that the problems in climate science have been publicized much more than in most other fields, to the point where false accusations easily outnumber real problems.
I do, however, see very large problems in WG2 type research on the impacts of climate change. The motivation, funding and publishability of research on impacts are all seriously biased. Much of that research is not good science at all, and lives only based on these biases. There is of course also good research on impacts, but in that field the problems are severe.
Pekka
You may be correct in saying that climate science has no greater percentage of “data torturers” out there than other sciences.
I believe the problem comes primarily when there are extremely large sums of money at play and a particular branch of science becomes politicized, as is the case for climate science today.
This opens the door for a corrupted process, which demands scientific results to support a political agenda, as appears to have happened with IPCC, whose sole raison d’être is to determine human-induced climate changes, their impacts on society and suggested mitigation or adaptation strategies.
IOW if there are no potentially serious human-induced climate changes there is no need for IPCC to continue to exist.
This existential problem has contributed to creating the corrupted IPCC process we witness today, to which our host here has alluded in earlier threads.
So, yes, other fields of science may indeed have a similar problem (particularly if large sums of money are involved), but that should not distract us from our concentration on “data torturing” in climate science.
Max
It’s totally clear that IPCC exists only because many people had concluded that climate change may have severe consequences, but IPCC is a special kind of forum of collaboration, not a major organization by itself. Its budget is tiny.
You mention “extremely large sums of money”, but I see that to be true only in some areas, none of which is directly linked to climate science itself. Certainly there’s significant additional funding for climate science, but not at a level even approaching “extremely large sums of money”.
The extent of German funding of solar energy might reach that level (more than 50 billion euros committed so far), but mostly the extremely large sums of money are related to the potential costs of mitigation measures that would mean economic losses to everybody, or to the losses that many businesses fear to face.
The motivation that may have led to exaggeration of the risks is not economic, but either sincere belief in the views or a combination of that and prestige, which many people value even more than money.
Pekka
This is beginning to sound like a legal hair-splitting discussion.
Yes. IPCC’s only reason to exist is the premise that human-induced climate changes could represent a potentially serious threat to humanity. If it cannot demonstrate this, it has no further reason to exist.
Yes. There are extremely large sums of money involved. Climate science/research today involves a few billion dollars annually, but this is just peanuts compared to the trillions of dollars that would be involved in global carbon taxes or cap and trade schemes, not to mention the potential subsidies or grants of taxpayer funds to support “green” industries.
As you mention, these large sums of money are more related to “mitigation” schemes than to climate research, itself (I agree).
No. I do not agree with your statement:
On the part of individual scientists, who have “tortured the data” to get the “desired” result, I’d say you might be correct in saying economic interests were not the prime motivation (rather “prestige” and “belief”), but IMO anytime extremely large sums of money are involved, there will be those who are motivated by the prospect of getting a piece of the action.
Max
Pekka
Back to our exchange.
I believe our host here has referred to the basic problem of “data torturing” as it relates to climate science more eloquently than I just did:
This pretty much sums it up.
Cheers,
Max
It is impossible to read “The Hockey Stick Illusion” and not think about torturing the data until it confesses!
I wonder if research programs in areas like climate change need to go through a public “design phase”, where for example an experimenter would be asked (and expected to answer!) how he/she proposes to detect CO2 warming when the global temperature varies for other reasons.
Perhaps this would have elicited the response that only tropospheric excess warming was diagnostic of CO2, and scientists would never have started watching global temperature data in the same frenzied way that people watch the stock markets!
The “design phase” could also cover such issues as what if any corrections should be applied to raw data, and how this should be documented, and also just how accurate the raw data would be.
Such questions would avoid the descent into confused thinking that seems to characterise climate science.
I feel there are also lessons to be learned from parapsychology research!
Someone reviewing a parapsychology paper would be free to think the unthinkable:
1) Maybe the experimenter was guilty of wishful thinking.
2) Maybe he actually cheated.
3) Maybe he discarded a lot of ‘failed’ experiments, only keeping those that succeeded by chance.
4) An experiment would be considered more convincing if it is done ‘blind’ – so that the experimenter can’t influence the outcome, because he never knows whether he is handling the experimental or control data until the coding of the data is revealed. This approach is also often used in medical trials.
Well if you leave climate change aside, I’d support Obama in preference to Bush any day – he just seems a much safer pair of hands! Does that mean I want to trash America – of course not – I like the place, and will be over in your country shortly!
The Republicans seem a lot more savvy about climate change, but not much else!
It isn’t unpatriotic to vote for another party in an election! If you really believe that, you need to live in a one-party state!
Most voters do live in a one-party state.
From a vantage point in Switzerland, I’d say David is correct.
“Old Europe” (in particular, that majority, which now constitutes the EU) is burdened with “muddled thinking” at present. The “motor” (Germany) is being criticized by the EU leadership for being too small-minded in its thinking by being skeptical of the long-term solvency of Greece.
It doesn’t help that Timothy Geithner comes over here to lecture Europe on how to run things, while his own nation has just lost its top credit rating (after he promised this would never happen) and is going broke while spending like a drunken sailor.
At the same time China is slowly moving in to “help” Europe with its solvency problems. Will the USA be next?
But Europe is not about to destroy the USA.
The climate zealots that used to run the UK might have had such aspirations, but they are gone at the top and will gradually be replaced down the line with more reasonable politicians. IMO it is only a matter of months until the rest of Europe shelves the unilateral carbon reduction plans, as they realize that these will not change our global climate one iota.
Max
and is going broke while spending like a drunken sailor.
THAT IS EXTREMELY UNFAIR…to drunken sailors.
The China syndrome …
Contrary to a desire to couple implications in a pan-global manner, the mess remains in one’s own backyard.
Smoothing
A recorded observed data series is exactly what it is, i.e. “what you see is what you get”.
A 10-year running smoothed data series is by definition a manipulated data series. The “smoothing” does not remove “error”, but is designed to remove “noise”. But who can decide what is “noise” and what is “signal”?
See:
http://wmbriggs.com/blog/?p=195
Cherry-picking
Removing “outliers” from the data series (because they do not fit the hypothesis?) or carefully selecting data series, which help to prove the desired hypothesis, are examples of “cherry-picking”, a practice that is not unknown in climate science.
Max
Smoothing can also remove something that you know exists and want to get rid of. For example you can smooth the alternating current component to get a stable source of direct current. That is the gist of the Mauna Loa CO2 data in that one wants to get rid of the yearly cyclical effect. Otherwise a cross-correlation of d[CO2] with Temperature looks like this:
http://img59.imageshack.us/img59/9666/co2nofilter.png
You might as well get rid of that yearly component beforehand and then you get something like this:
http://img411.imageshack.us/img411/5791/pdco2t.png
The second one is a bit easier to reason about, while the first is “corrupted” by what is commonly referred to as a nuisance term. We have to ignore this because Mauna Loa partly measures global conditions but the nuisance term is due to seasonal latitude locality, while the temperature is strictly a global average.
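A minimal sketch of that de-seasonalising step, with synthetic monthly series rather than the real Mauna Loa and temperature records, and an assumed (invented) link in which the CO2 growth rate tracks temperature: the annual sawtooth swamps the raw cross-correlation, and a 12-month running mean removes it so the underlying relationship shows through.

```python
# Sketch only: synthetic monthly CO2 and temperature; compare corr(d[CO2], T)
# with and without stripping the seasonal cycle first.
import numpy as np

rng = np.random.default_rng(5)
months = np.arange(600)                                   # 50 years of monthly data
temp = 0.3 * np.sin(2 * np.pi * months / 240) + rng.normal(0, 0.1, 600)
growth = 0.12 + 0.05 * temp                               # assumed T-dependent growth rate
co2 = np.cumsum(growth) + 3.0 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.2, 600)

def annual_mean(x):
    """12-month running mean: averages the seasonal cycle away."""
    return np.convolve(x, np.ones(12) / 12, mode="valid")

d_raw = np.diff(co2)                                      # monthly d[CO2], seasonal-dominated
d_filt = np.diff(annual_mean(co2))                        # d[CO2] after de-seasonalising

print("corr(d[CO2], T) raw      : %+.2f" % np.corrcoef(d_raw, temp[1:])[0, 1])
print("corr(d[CO2], T) filtered : %+.2f" % np.corrcoef(d_filt, annual_mean(temp)[1:])[0, 1])
```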
When people start accepting fat-tail statistics for modeling behaviors, they will stop throwing out outliers. I read an interesting paper that said most outliers are thrown out because they don’t fit a Normal distribution, based on people’s preconceived notions that everything has to follow a normal.
This is obvious in the case of climate change skeptics who can’t understand that the adjustment/residence time of CO2 can have a significant fat tail. It’s not exactly cherry-picking, but they can’t see that fat tails are possible, so those values all have to be outliers that should be thrown out.
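A small illustration of that reflex, again with synthetic numbers: draw from a genuinely fat-tailed distribution, then “clean” it by discarding everything beyond three sample standard deviations (as a Normal mindset would suggest), and the trimmed estimate understates the mean, because the handful of discarded points carry a disproportionate share of the total.

```python
# Sketch only: the "remove the outliers" reflex applied to a fat-tailed sample.
import numpy as np

rng = np.random.default_rng(11)
x = rng.pareto(1.5, 100000) + 1.0           # Pareto with tail index 1.5: fat tail, finite mean

mask = np.abs(x - x.mean()) <= 3 * x.std()  # the usual 3-sigma outlier cut
trimmed = x[mask]

print("points discarded as outliers : %d of %d" % ((~mask).sum(), x.size))
print("mean, full sample            : %.2f" % x.mean())      # true mean is 3 for this tail index
print("mean, outliers removed       : %.2f" % trimmed.mean())
```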
Sorry, WHT, you drifted a bit.
Let’s talk about “torturing the data”
“Cherry-picking” is what it is, and the “deniers” certainly do not have the monopoly on this, as you imply.
“Discarding outliers” is a common ploy for defending a paradigm (Kuhn).
“Smoothing” is data manipulation. Using “smoothed” data to prove a point is fooling yourself.
Max
WHT
You talk about the “significant fat tail” of CO2 residence time.
This has not been determined by empirical data based on actual physical observations or reproducible experimentation, so is purely a hypothetical deliberation to start off with.
Whether it is correct or not has very little impact on our climate today or over the next 50 to 100 years.
What is of greater interest is today’s instantaneous rate of removal of CO2 from our climate system.
(And we don’t even know that for sure.)
Max
I have written extensively on fat-tails so I look at outliers as the most significant piece of the equation. For example, in oil discoveries, only a few supergiant reservoirs exist as outliers, and the rest follow a fat-tail distribution without a mean value. This characteristic comes up over and over in natural sciences.
More specifically, it invariably comes up in systems with disorder and where the dimension of the measurement is inverted. For example in diffusional systems, where the velocity is highly variable, the time characteristic will always show a fat tail. The easy way to think about it is that time is the denominator in velocity, and when a PDF in velocity gets reciprocated a fat-tail power law comes out. That is why you see a few supergiants; the velocity at which supergiants grow is disordered in different regions, and the point at which we tap the supergiants is dependent on time. Then, bingo, a fat tail comes out of the distribution and the occasional supergiants start to make sense.
All you have to do is look at the Stefan-Boltzmann Law to see this. The wavenumber distribution is damped exponential, while the wavelength distribution is reciprocal power law.
This is not a hypothetical premise as it describes the way things exist mathematically in nature. It’s really not my fault that no one can put two and two together and just explain it this succinctly.
The problem with the adjustment/residence time is that it comes out of numerical computations, ala IPCC Bern, and since no one believes a simulation it is relegated to the hypothetical.
http://theoilconundrum.blogspot.com/2011/09/missing-carbon.html
Everything has to make sense like a jigsaw puzzle, and the fat-tail is a piece in the puzzle.
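A toy version of the “reciprocate a velocity PDF and a fat tail falls out” argument, with purely illustrative numbers: uniformly distributed velocities have no fat tail, but the corresponding times t = L/v do.

```python
# Sketch only: disordered velocities with a plain uniform PDF give arrival
# times with a 1/t^2 density, i.e. a P(T > t) ~ 1/t survival tail.
import numpy as np

rng = np.random.default_rng(2)
L = 1.0
v = rng.uniform(1e-9, 1.0, 1_000_000)       # velocities: bounded, no fat tail
t = L / v                                    # the same randomness viewed as a time

for threshold in (10, 100, 1000):
    print("P(T > %4d) = %.4f   (a 1/t tail would give %.4f)"
          % (threshold, np.mean(t > threshold), 1.0 / threshold))
```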
WHT
In no way do I want to belittle your work on “fat tails” as regards oil reservoirs. This is very likely a significant part of the calculation on the long-term economic viability of a reservoir.
I was referring to the discussions on the CO2 thread here about the “fat tail” in CO2 residence time in our climate system.
Admittedly, all of the estimates of the long-term CO2 residence time in our climate system are based on hypothetical deliberations, rather than empirical data based on physical observations, so all conclusions should be taken with a large grain of salt.
IPCC doesn’t help us much here, with its estimate of a long-term CO2 residence time of “5 to 200 years”,
But the generally accepted suggestion is that, even if we stop all human CO2 emissions at current levels, we will see atmospheric CO2 concentrations (and hence temperatures) remain essentially constant for decades before starting to decline (or temperature even rise initially, if one accepts the “hidden in the pipeline” postulation of Hansen et al., as IPCC apparently does). And it would take centuries to get back to the “pre-industrial” CO2 level, as a result of a long “tail”.
For me the more practical consideration concerns the instantaneous rate of decay of CO2 in the climate system. If one accepts the “half-life” estimate of 120 years (as has been suggested by one study), one would arrive at a decay rate of around 0.58% of the concentration per year, or around 2 ppmv per year at today’s level.
In effect, this would mean that if the net input of CO2 to the climate system (from wherever it comes) were reduced to 2 ppmv per year, the concentration would stop rising. If it were reduced to less than 2 ppmv, CO2 concentration would begin to diminish.
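As a quick check of that arithmetic, here is a sketch; the roughly 390 ppmv figure for today's concentration is an assumption, not a number stated above:

half_life = 120.0                            # years, the assumed "half life"
decay_rate = 1.0 - 0.5 ** (1.0 / half_life)  # fraction removed per year
concentration = 390.0                        # ppmv, assumed present-day CO2 level

print(round(decay_rate * 100, 2), "% per year")               # ~0.58
print(round(decay_rate * concentration, 1), "ppmv per year")  # ~2.2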
I realize that these are all simplifications based largely on hypothetical deliberations, but I think they are more pertinent to our climate over the next several decades than “fat tail” deliberations.
That was my point, WHT.
Max
The fat tails of probability distributions and the non-exponential nature of the persistence of CO2 in the atmosphere are totally different issues. It's better to use the term only for the former, where it has long been in use.
As with many other phenomena related to climate science, the basic qualitative fact is well known. There is no reasonable basis to doubt the non-exponential nature of the persistence of CO2, but the quantitative details are not known accurately at all. The dynamics of the carbon cycle of the oceans is certainly not known accurately, and all the estimates of the long-term part of the persistence are dominated by those effects, which include both the transfer of carbon between the surface ocean and the deep ocean and the "final" removal of carbon through mineralization.
But, as I just wrote in the other thread referring to a sentence in Padilla et al., does this really matter? Is there really any reason to worry about what's going to happen more than 200 years from now, given that high CO2 releases cannot continue even nearly that long, and that the CO2 concentration will therefore turn to a significant decline much earlier (unless the releases are rapidly reduced to a low level that can be maintained longer)?
“Probability Theory is the Logic of Science”
I derive the fat tail in the time domain from the solution to a Fokker-Planck equation with suitable boundary conditions. This is essentially a master equation describing how probability flows away from a region. It really is just a matter of putting the right conceptual abstraction on the problem domain, and one can see how this falls out. Perhaps it is not the static PDF that you have in mind, but it certainly qualifies as a probability density function over space and time, since the rules of probability have to be obeyed.
I really do think this is the quantitative solution to what you describe as the qualitative reality. A while ago I looked at the IPCC Bern curves and fitted those to a reciprocal sqrt(time) dependence, and I think that there is little doubt that a diffusional boundary layer is the fundamental basis for their simulation results.
(look at the following curves where I overlay empirical fat-tail curves over the IPCC results for impulse response function)
http://4.bp.blogspot.com/_csV48ElUsZQ/S9pVGuYpfAI/AAAAAAAAAQc/GVQ-wzzb2nc/s1600/co2-a-b.gif
As with a lot of scientific results, the modelers didn't care to give the short-hand interpretation that I recall my physics instructors always working out. Since that is the way I was taught to solve problems, I continue to do it that way.
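A sketch of the kind of fit being described: the Bern-style coefficients below are illustrative placeholders, not the published values, and the 1/(1 + sqrt(t/tau)) form is one plausible diffusional profile rather than necessarily the exact one used above.

import numpy as np
from scipy.optimize import curve_fit

# Build a Bern-style impulse response: a constant plus a sum of decaying
# exponentials.  These coefficients are illustrative placeholders only.
t = np.linspace(0.1, 500.0, 1000)  # years after a CO2 pulse
bern_like = (0.2 + 0.3 * np.exp(-t / 2.0)
                 + 0.3 * np.exp(-t / 20.0)
                 + 0.2 * np.exp(-t / 150.0))

# Fit a diffusional (boundary-layer) profile to it.
def sqrt_profile(t, tau):
    return 1.0 / (1.0 + np.sqrt(t / tau))

popt, _ = curve_fit(sqrt_profile, t, bern_like, p0=[10.0])
print("fitted diffusional time constant:", round(float(popt[0]), 1), "years")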
Sixteen (16) years and no warming:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1995/to:2011/plot/hadcrut3vgl/from:1995/to:2011
Final grade: D-
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1995/to:2011/plot/hadcrut3vgl/from:1995/to:2011/trend
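For reference, the "trend" link above shows an ordinary least-squares fit over the chosen window. A minimal sketch of that calculation follows; the monthly series here is a synthetic placeholder standing in for the HadCRUT3 anomalies, not real data.

import numpy as np

# Sketch of the least-squares trend that the woodfortrees "trend" plot shows.
# 'anoms' is a synthetic placeholder for the HadCRUT3 monthly global
# anomalies over 1995-2011 (loading the real series is left out).
rng = np.random.default_rng(1)
years = np.arange(1995, 2011, 1.0 / 12.0)        # decimal years, monthly steps
anoms = 0.4 + rng.normal(0.0, 0.1, years.size)   # flat placeholder plus noise

slope, intercept = np.polyfit(years, anoms, 1)
print(f"OLS trend: {slope * 10:+.3f} C per decade")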
Diogenes and steven mosher
Should be 10.7 years with no warming (January 2001 to today).
If temperature anomalies continue as they are right now, that "no warming" trend will extend back to 1998, so we would have 15 years by the end of 2012.
But, as Santer et al. have told us, we would still need 2 more years (end 2014) for the lack of global warming to become “statistically significant”.
If and when that happens, the “unusually warm” ENSO year 1998 will be cited as a “bad place to start a statistic”, so the 17 years could probably be extended out to 2017.
If there is no warming through 2017, Santer (or someone else) will move the goalpost again IMO.
Max
It’s far more complicated than that. Plus that is not what Santer is arguing.
It's happened before, with Burt's psychological 'work' on twins: total fabrication, but because of who he was no one asked questions about what were clear errors, and this 'research' became the orthodoxy, which led to it being cited a great deal and virtually never questioned. Until, that is, someone from outside the area, who was a statistician, got to work on it. Does this sound familiar to anyone?
Pekka
Agree there is no point worrying about what will occur 200 years from now.
That is why I suggested to WHT that it might be more interesting from a practical standpoint to determine the instantaneous decay rate of CO2 in our atmosphere at today’s conditions than to deliberate what will happen more than 200 years from now.
AFAIK this rate has not been determined, although an upper limit could be today’s difference between human emissions (assuming these are the sole net input to the climate system, which has been questioned) and the measured increase in atmospheric concentration, which would calculate out to around 2 ppmv per year.
As you say, there will very likely be drastic changes in human CO2 emissions over the next century regardless of any mitigation strategies that may or may not be implemented.
Max
Max,
I don't know exactly what you mean by "instantaneous". In the 4-exponential parametrization of Maier-Reimer and Hasselmann, the shortest component has a time constant of 1.9 years and represents 9.8% of the removal. The other three exponentials have the corresponding values (17.3 a / 24.9%), (73.6 a / 32.1%) and (362.9 a / 20.1%), leaving 13.1% to processes so slow that the parametrization treats that fraction as permanent.
These values mean that the decay rate changes rapidly during the first years and that the decay remains essentially non-exponential at all times. Using "instantaneous" for such temporal behavior is not really meaningful.
While I consider 200 years to be clearly too long to have much weight in decision making, and 100 years to be also on the long side, I do think that the rate of decay must be considered at least 50 to 100 years into the future. That cannot be described by one rate of decay.
I haven't checked the original sources, but I think that volcanic events tell us about the persistence over the first 10 years or so, giving evidence for the applicability of the first two terms of the parametrization. I think that there is reasonably strong evidence on the validity of the formula also for somewhat longer periods, but the behavior over periods of 100 years or more must be based almost solely on models of ocean circulation and other related phenomena.
The parametrization is not supposed to represent four separate processes, but it’s only a practical tool for calculating the persistence of CO2 released to the atmosphere.
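The parametrization quoted above can be written down directly. The sketch below transcribes the fractions and time constants from the comment; the "removal rate" column is simply there to illustrate why no single instantaneous rate fits the whole curve.

import numpy as np

# Maier-Reimer and Hasselmann style parametrization as quoted above:
# a "permanent" fraction plus four decaying exponentials.
A0 = 0.131                            # fraction treated as permanent
AMPS = (0.098, 0.249, 0.321, 0.201)   # fractions removed by each term
TAUS = (1.9, 17.3, 73.6, 362.9)       # time constants in years

def airborne_fraction(t):
    """Fraction of a CO2 pulse still airborne t years after release."""
    return A0 + sum(a * np.exp(-t / tau) for a, tau in zip(AMPS, TAUS))

for t in (0.0, 1.0, 10.0, 50.0, 100.0, 500.0):
    f = airborne_fraction(t)
    # effective removal rate -d(ln f)/dt keeps changing with time
    rate = -(airborne_fraction(t + 0.01) - f) / (0.01 * f)
    print(f"t = {t:5.0f} yr   airborne fraction = {f:.3f}   removal rate = {rate:.3%}/yr")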
Warning, Pekka: I got ridiculed by the resident hydro man for offering up this suggestion, i.e. volcanic events.
I still think the long-time behavior is a diffusion-related phenomenon of transport into deep stores. This comes up repeatedly in the natural sciences when there is not a strong force component, such as with dispersion of contaminants, anomalous transport in amorphous semiconductors, etc. The force component is not there because the carbon cycle keeps recycling the material, and we are really looking at diffusion into the deep stores. With diffusion the initial transient looks like an exponential drop, but then it slows down because of the length of time it takes a random walk to get anywhere.
I like this animated GIF from the Fokker-Planck wikipedia page:
http://upload.wikimedia.org/wikipedia/commons/f/f2/FokkerPlanck.gif
You can see the drift component in this one; imagine that not being there.
Interesting that they use that approximation for diffusion. Like I said elsewhere, they might just be empirically fitting a diffusional drop-off.
I found the Maier-Reimer and Hasselmann parameters and compared them to a modified 1/sqrt(time) drop-off, and this is what it looks like:
http://img577.imageshack.us/img577/6595/maierreimerhasselman.gif
Note that they stuck in an unphysical constant to get the tail really fat at long times.
I am really convinced that they simply tried to heuristically model a diffusional drop-off with 4 exponentials and then tried to explain it with different rates of absorption. I am sure this came out of a simulation, and they didn't realize how simply they could have used a 1/sqrt(t) profile.
WHT,
The basic idea of the multi-compartment model is sound. The atmosphere reaches a balance with the most strongly mixed layer of the ocean on a time scale of something like two years, and this is behind the shortest time constant of Maier-Reimer and Hasselmann. The deeper ocean gets its carbon from this first "compartment", and land areas form their own compartments, both directly and indirectly coupled to the atmosphere. For small and modest increases that system can be modeled by a set of coupled linear differential equations, which have as variables the amounts of CO2 in each compartment.
The above-mentioned relationship between that one term and one real mechanism is not exact, but they are pretty certain to be strongly related. The biosphere contributes to that as well, and there are no strict separations between layers of the ocean. Thus there is no exact correspondence, but rather one dominating effect modified by others.
The longer time constants are really only fits to a non-exponential curve without any similarly clear dominant mechanisms. For the time scale of tens of years turbulent mixing in a thicker layer of the oceans is important, but so are biological processes both on land and in the oceans. For even longer time scales of centuries up to 1000 years or so the mixing with deep ocean both through thermohaline circulation and through sinking and dissolution of marine biota must be the most important factor. On the scale of several thousands of years sedimentation to ocean floor starts to dominate.
Your approach makes sense in that it emphasizes the non-separability of time scales. We do not have well-defined separate compartments; a model with a few compartments is only a discrete approximation of a reality with a continuous spectrum of time scales. Even so, the question is more about the complexity of the Earth system than about probability distributions. The long tail is due to the fact that the ocean cannot absorb all the CO2 even in full equilibrium, together with the long time scales of first reaching all parts of the ocean and then of sedimentation. The effect is most definitely present in the best deterministic description of the processes. This is the reason for my dislike of using the same words "fat tail" as for probability distributions.
Yes, I agree entirely, and that is the reason that a series of multi-compartment models reduces exactly to a Fokker-Planck diffusion solution. I have shown this before, but here is a calculation with 50 of these layers concatenated:
http://img534.imageshack.us/img534/9016/co250stages.gif
This is worked out for CO2 diffusion into deeper sequestering sites but it is essentially the same argument for heat diffusion, as the heat equation is just another form of Fokker-Planck. I chose 50 slabs because you can see how the continuum can play out.
And I just consider this as describing the random-walk conundrum: a random walker can get deeper and deeper, but it will take longer and longer to reach that depth, according to a long-tail (or fat-tail) formulation.
Describing it as long-tail vs fat-tail is a matter of taste. Perhaps I can see your well-reasoned point if we consider that the integral of the 1/sqrt(t) over all time will not integrate to 1, yet the spatio-temporal density function has to integrate to 1. Therefore, a long-tail does describe the temporal behavior better.
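A toy version of that concatenated-layer picture (a sketch of my own, not the calculation behind the linked plot): N identical well-mixed boxes in a chain, a unit CO2 pulse starting in the first box, integrated as the coupled linear equations Pekka describes. At intermediate times the first-box fraction falls off roughly like 1/sqrt(t) before levelling off at the 1/N equilibrium share.

import numpy as np
from scipy.linalg import expm

# Chain of N identical well-mixed compartments; box 0 is the "atmosphere",
# exchange rate k between neighbours.  This is the discrete analogue of a
# diffusion (Fokker-Planck) problem with a reflecting boundary.
N, k = 50, 1.0
M = np.zeros((N, N))
for i in range(N):
    if i > 0:
        M[i, i - 1] += k
        M[i, i] -= k
    if i < N - 1:
        M[i, i + 1] += k
        M[i, i] -= k

x0 = np.zeros(N)
x0[0] = 1.0  # unit pulse in the atmosphere box

for t in (1.0, 10.0, 100.0, 1000.0):
    x = expm(M * t) @ x0
    # At intermediate times the airborne fraction tracks the diffusional
    # 1/sqrt(pi*k*t) law; at very long times it levels off near 1/N = 0.02.
    print(f"t = {t:6.0f}   airborne = {x[0]:.3f}   1/sqrt(pi*t) = {1.0 / np.sqrt(np.pi * t):.3f}")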
Webby,
The lack of empiricism is what gets you into trouble. The volcanic emissions certainly seem unobservable against other emissions. Volcanoes also cause cooling – which drives variability in CO2:
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_data_mlo_anngr.pdf
It leads to error where processes are not adequately constrained – and these processes seem wide open.
Cheers
I see the EU “leadership” and other leftist AGW-pusher-pols as bedazzled by the prospect of nearly unlimited power over the economics and demographics of the planet — in their hands, and very soon! To that end they are prepared to sacrifice any quantity of others and their rights and well-being.
When the answer disagrees with one’s theory, it’s torture.
When the answer agrees with one's theory, it's treatment.
Comparing what some psychologist has been sacked for to real science ... is an interesting treatment.
Actually, it’s “Wile E. Coyote”. But I’ve never heard what the “E.” stands for.