by Nicholas Lewis
Re: Data inconsistencies in Forest, Stone and Sokolov (2006) GRL paper 2005GL023977 ‘Estimated PDFs of climate system properties including natural and anthropogenic forcings’
In recent years one of the most important methods of estimating probability distributions for key properties of the climate system has been comparison of observations with multiple model simulations, run at varying settings for climate parameters. Usually such studies are formulated in Bayesian terms and involve ‘optimal fingerprints’. In particular, equilibrium climate sensitivity (S), effective vertical deep ocean diffusivity (Kv) and total aerosol forcing (Faer) have been estimated in this way. Although such methods estimate climate system properties indirectly, the models concerned, unlike AOGCMs, have adjustable parameters controlling those properties that, at least in principle, are calibrated in terms of those properties and which enable the entire parameter space to be explored.
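The inference framework described above can be sketched numerically. This is an illustrative toy, not the Forest et al. code: the grid ranges, the uniform prior and the placeholder misfit values are all my own assumptions.

```python
import numpy as np

# Illustrative sketch of grid-based Bayesian estimation of climate
# parameters (grid ranges and placeholder misfits are assumptions).
S_grid = np.linspace(0.5, 10.0, 96)        # climate sensitivity S (K)
Kv_grid = np.linspace(0.0, 8.0, 40)        # deep-ocean diffusivity Kv

# Placeholder: misfit (r^2) of each (S, Kv) model run against the
# observations; a real study computes this from model/observation data.
rng = np.random.default_rng(0)
r2 = rng.uniform(0.0, 50.0, size=(S_grid.size, Kv_grid.size))

likelihood = np.exp(-0.5 * r2)             # Gaussian-error likelihood
posterior = likelihood / likelihood.sum()  # uniform prior over (S, Kv)

# Marginal PDF for S: sum out Kv, then normalise as a density.
pdf_S = posterior.sum(axis=1)
pdf_S /= pdf_S.sum() * (S_grid[1] - S_grid[0])
```

Marginalising over the other parameters, rather than fixing them, is what makes the joint estimation discussed below matter: the shape of the marginal PDF for S depends on how well the data constrain Kv and Faer.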
In the IPCC’s Fourth Assessment Report (AR4), an appendix to WGI Chapter 9, ‘Understanding and attributing climate change’, was devoted to these methods, which provided six of the chapter’s eight estimated probability density functions (PDFs) for S inferred from observed changes in climate. Estimates of climate properties derived from those studies have been widely cited and used as an input to other climate science work. The PDFs for S were set out in Figure 9.20 of AR4 WG1, reproduced below.
The results of Forest 2006 and its predecessor study Forest 2002 are particularly important since, unlike all other studies utilising model simulations, they were based on direct comparisons thereof with a wide range of instrumental data observations – surface, upper air and deep-ocean temperature changes – and they provided simultaneous estimates for Kv and Faer as well as S. Jointly estimating Kv and Faer together with S is important, as it avoids dependence on existing very uncertain estimates of those parameters. Reflecting their importance, the IPCC featured both Forest studies in Figure 9.20. The Forest 2006 PDF has a strong peak which is in line with the IPCC’s central estimate of S = 3, but the PDF is poorly constrained at high S.
I have been trying for over a year, without success, to obtain from Dr Forest the data used in Forest 2006. However, I have been able to obtain without any difficulty the data used in two related studies that were stated to be based on the Forest 2006 data. It appears that Dr Forest only provided pre-processed data for use in those studies, which is understandable as the raw model dataset is very large.
Unfortunately, Dr Forest reports that the raw model data is now lost. Worse, the sets of pre-processed model data that he provided for use in the two related studies, while both apparently deriving from the same set of model simulation runs, were very different. One dataset appears to correspond to what was actually used in Forest 2006, although I have only been able to approximate the Forest 2006 results using it. In the absence of computer code and related ancillary data, replication of the Forest 2006 results is problematical. However, that dataset is compatible, when using the surface, upper air and deep-ocean data in combination, with a central estimate for climate sensitivity close to S = 3, in line with the Forest 2006 results.
The other set of data, however, supports a central estimate of S = 1, with a well constrained PDF.
I have written the below letter to the editor-in-chief of the journal in which Forest 2006 was published, seeking his assistance in resolving this mystery. Until and unless Dr Forest demonstrates that the model data used in Forest 2006 was correctly processed from the raw model simulation run data, I cannot see that much confidence can be placed in the validity of the Forest 2006 results. The difficulty is that, with the raw model data lost, there is no simple way of proving which version of the processed model data, if either, is correct. However, so far as I can see, the evidence points to the CSF 2005 version of the key surface temperature model data, at least, being the correct one. If I am right, then correct processing of the data used in Forest 2006 would lead to the conclusion that equilibrium climate sensitivity (to a doubling of CO2 in the atmosphere) is close to 1°C, not 3°C, implying that likely future warming has been grossly overestimated by the IPCC.
This sad state of affairs would not have arisen if Dr Forest had been required to place all the data and computer code used for the study in a public archive at the time of publication. Imposition by journals of such a requirement, and its enforcement, is in my view an important step in restoring trust in climate science amongst people who base their beliefs on empirical, verifiable, evidence.
************
Dr E Calais
Editor-in-Chief, Geophysical Research Letters, American Geophysical Union
June 23, 2012
Re: Data inconsistencies in Forest, Stone and Sokolov (2006) GRL paper 2005GL023977 ‘Estimated PDFs of climate system properties including natural and anthropogenic forcings’
Dear Dr Calais:
I contacted you last summer about the failure by Dr Chris Forest to provide requested data and computer code used in Forest 2006, an observationally constrained study of key climate parameters. My primary interest in Forest 2006 is the statistical inference methods used, which I consider to be flawed. I have still not received any data or code from Dr Forest, despite repeated promises to make data available. However, the issues I raise in this letter are more serious than simple failure to provide materials; they concern apparent alteration of data.
In summary, I have copies of datasets used in two studies related to Forest 2006, both of which should contain the same temperature data as used in Forest 2006 (save for the deep-ocean observational data). Only one of the datasets is consistent with Forest 2006. The other dataset was used in a 2005 study (CSF 2005 – detailed in paragraph 1 below), certain results of which relating to the key surface temperature ‘diagnostic’ were relied upon in Forest 2006 to support a critical parameter choice. Dr Forest has stated that the surface diagnostic data used in Forest 2006 was identical to that used in CSF 2005. However, the CSF 2005 surface diagnostic model data that I have cannot be the same as the data used in Forest 2006 and, moreover, it points to a much lower estimate of climate sensitivity than that given in the Forest 2006 results.
I would ask you to investigate thoroughly my concerns. If Dr Forest is unable to demonstrate, using data, code and other information that is made publicly available, that the changes between the CSF 2005 and the Forest 2006 model data were made to correct processing errors, and did accurately do so, then I would ask you to consider whether Forest 2006 should be withdrawn by GRL.
In addition, I ask you to require Dr Forest to provide copies of the following data and computer code to me without further delay:
- all the processed MIT model, observational and AOGCM control-run data used in Forest 2006 for the computation of the likelihood function from each diagnostic;
- all code and ancillary data used to generate that data from the raw data; and
- all code and ancillary data used to generate the likelihood functions from the processed data.
I set out details of my concerns, and the evidence supporting them, in the numbered paragraphs below.
1. I have obtained two sets of processed data, matching in form that used in Forest et al. 2006, which Dr Forest provided to his co-authors for use in two related studies: Curry C, Sansó B and Forest C, Inference for climate system properties, 2005, AMS Tech. Rep. ams2005-13, Am. Stat. Assoc. (CSF 2005), cited in Forest 2006; and Sansó B, Forest C and Zantedeschi D, Inferring Climate System Properties Using a Computer Model, Bayes. Anal., 2008, 3, 1–62 (SFZ 2008). The values of the data that Dr Forest provided for these two studies differ substantially, and lead to completely different central estimates of climate sensitivity: 1 K and 3 K respectively.
2. Forest et al. 2006 compares observations of multiple surface, upper air and deep-ocean temperature changes with simulations thereof by the MIT 2D climate model run at many climate parameter settings. Internal climate covariability affecting the variables in each of these three ‘diagnostics’ is estimated using AOGCM long-control-run data. The surface diagnostic (mean temperatures for four latitude bands averaged over each of the five decades 1946–55 to 1986–95) is most informative as to how likely each climate parameter combination is. The raw MIT model, observational and AOGCM control run data was processed to match the specifications of the three diagnostics before the statistical inference methods were applied. Dr Forest states that the raw MIT model data has been lost, making it impossible to replicate fully, and so verify, the study.
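A minimal sketch of the misfit calculation that paragraph 2 describes, assuming a diagnostic flattened to a vector (e.g. 4 latitude bands × 5 decades = 20 elements for the surface diagnostic). The function name and array shapes are my own, not Dr Forest's code:

```python
import numpy as np

def fingerprint_r2(obs, model, control_runs, kappa):
    """Weighted model-observation misfit r^2 in the optimal-fingerprint
    style: the covariance of internal variability is estimated from
    AOGCM control-run segments and inverted using only the leading
    kappa eigenvectors (the truncation Forest 2006 calls kappa_sfc)."""
    # Covariance of internal variability, estimated from control-run
    # segments (rows = segments, columns = diagnostic elements).
    C = np.cov(control_runs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)
    # eigh returns eigenvalues in ascending order; keep the kappa
    # largest modes (patterns of unforced variability).
    idx = np.argsort(eigvals)[::-1][:kappa]
    lam, V = eigvals[idx], eigvecs[:, idx]
    # Project the model-observation misfit onto the retained modes
    # and whiten by the corresponding variances.
    d = obs - model
    z = V.T @ d
    return float(np.sum(z**2 / lam))
```

Because the inverse covariance is built from only the retained modes, the resulting r² (and hence the likelihood of each parameter setting) can be quite sensitive to the choice of kappa, which is the issue paragraph 6 returns to.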
3. A preprint version of Forest 2006 was published in September 2005 as MIT Joint Program Report No. 126. Apart from in a small number of places, in particular where references were made to use of an older deep-ocean observational dataset (Levitus et al 2000 rather than 2005), the MIT report version was almost identically worded to the final version. By mistake, in Figures S1a and S1b of the GRL Auxiliary Material, Dr Forest included the graphs from the MIT Report version, showing very different probability densities (PDFs) for climate sensitivity than those in Figure 2 of the main text of Forest 2006 in GRL. I was surprised that simply changing the ocean dataset had such a major impact on the shape of these PDFs, but this may reflect problems with the surface diagnostic dataset used, as discussed below.
4. Bruno Sansó has provided me with a copy of the tar file archive, dated 23 May 2006, of processed data for the surface and upper air diagnostics, which Dr Forest sent him for use in SFZ 2008. The latter paper states that the data it used is the same as that in Forest 2006. Bruno Sansó also sent me, separately, a copy of the deep-ocean diagnostic model, observational and control data used in SFZ 2008. I am inclined to believe that the SFZ 2008 surface and upper air diagnostic data was that used in Forest 2006. Its surface model data matches Forest 2006 Figure 1(a). Using the SFZ 2008 tar file archive data in combination with the deep-ocean diagnostic model and control-run data used in SFZ 2008, and a deep-ocean diagnostic observational trend calculated from the Levitus et al 2005 dataset, I can produce broadly similar climate parameter PDFs to those in the Forest 2006 main results (Figure 2: GSOLSV, κsfc = 16, uniform prior), with a peak climate sensitivity around S = 3. However, doing so necessitates what I regard as an overoptimistic assumption as to uncertainty in the deep-ocean diagnostic observational trend, and I have been unable even approximately to match the GSOLSV, κsfc = 14 climate sensitivity PDF in Figure 2 of Forest 2006. The lack of a better match could conceivably relate to different treatment resulting from undisclosed methodological choices or ancillary data used by Dr Forest. Alternatively, the deep-ocean model data that was used in Forest 2006 may differ from that used in SFZ 2008, which matches the deep-ocean data used in the CSF 2005 study, provided at an earlier date than was the SFZ 2008 surface and upper air data.
5. Charles Curry has made available the data that, a few months before submitting Forest 2006 to GRL, Dr Forest provided for use in CSF 2005. I can reproduce exactly the numbers in Figure 1 of CSF 2005 using this data. The CSF 2005 surface and upper air diagnostic observational data is almost identical to that provided for SFZ 2008, but the CSF 2005 model data is quite different, as is the control data. It is impossible for the CSF 2005 data to have produced the results in Forest 2006, for two reasons:
a) the CSF 2005 surface control data covariance matrix is virtually singular, indicating that the raw data from the (GFDL) AOGCM control run had been misprocessed. Its use leads to extremely poorly constrained, essentially uninformative, climate sensitivity PDFs;
b) when used with the HadCM2-derived surface control data covariance matrix from the SFZ 2008 data, which I have largely been able to agree to raw data from the HadCM2 AOGCM control run (which data Dr Forest has confirmed was used for the Forest 2006 main results), the CSF 2005 surface model and observational data produces, irrespective of which upper air and deep-ocean dataset is used, a strongly peaked PDF for climate sensitivity, centred close to S = 1, not S = 3 as per Forest 2006.
Moreover, the relevant CSF 2005 model data is inconsistent with the surface model data graph in Forest 2006 Figure 1(a).
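The near-singularity alleged in 5(a) is straightforward to test for, given the control data. The check below runs on synthetic data, not the actual CSF 2005 matrix; it also illustrates one generic way such a defect can arise (too few independent control-run segments relative to the number of diagnostic elements):

```python
import numpy as np

def covariance_condition(control_runs):
    """Condition number of the control-run covariance matrix; a huge
    value (smallest eigenvalue at machine precision) flags the kind of
    near-singularity reported for the CSF 2005 surface control data."""
    C = np.cov(control_runs, rowvar=False)
    eigvals = np.linalg.eigvalsh(C)  # ascending order
    # Guard against a numerically zero or negative smallest eigenvalue.
    return eigvals[-1] / max(eigvals[0], np.finfo(float).tiny)

# A covariance estimated from fewer segments than diagnostic elements
# is necessarily rank-deficient, hence effectively singular:
rng = np.random.default_rng(1)
few = rng.normal(size=(10, 20))    # 10 segments, 20 elements: singular
many = rng.normal(size=(500, 20))  # ample segments: well conditioned
```

Here `covariance_condition(few)` is astronomically large while `covariance_condition(many)` is of order unity; a matrix of the first kind cannot be meaningfully inverted, which is consistent with the information-less PDFs described above.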
6. CSF 2005 concentrated on the problem of selecting the number of eigenvectors to retain when estimating inverse covariance matrices from the AOGCM control run data. The study was cited in Forest 2006 as supporting, based on the Forest 2006 data, the selection of the number of eigenvectors retained (κsfc = 16) when estimating the AOGCM control data covariance matrix for the surface diagnostic. Forest 2006 makes clear that its results are highly sensitive to this choice, stating: ‘the method for truncating the number of retained eigenvectors (i.e., patterns of unforced variability) is critical’ and ‘In a separate work on Bayesian selection criteria, Curry et al. [2005] using our data find that a break occurs at κsfc = 16 and thus we select this as an appropriate cut-off’.
7. It is clearly implied, and necessarily so, that CSF 2005 used the same surface model, observational and control data as Forest 2006. And indeed Dr Forest has recently confirmed that the surface model and control-run temperature data used in Forest 2006 was the same as that used in CSF 2005. I enquired of Dr Forest as follows:
However, I note that the GRL paper contains the same statement as the MIT Report preprint about the Curry, Sansó and Forest 2005 work finding, using your data, that a break occurs at k_sfc = 16 and thus you selected that as an appropriate cut-off. It would not seem correct for that statement to have been made in the GRL version if you had changed significantly the processed data used in the surface diagnostic after the work carried out by Curry, even assuming there was a valid reason for altering the data.
Dr Forest replied:
The Curry et al. paper examined the posteriors separately for the surface temperature data, the ocean data, and the upper air data and never estimated a posterior using all three diagnostics. So the results from the Curry et al study is valid for the surface data diagnostics as given in the Forest et al. (2006, GRL) study.
8. In response to this I enquired further:
But I think you are saying in your email that the version of the analyzed model data that was used in the Curry, Sansó and Forest 2005 paper was the same as that used in both the pre-print MIT Report 126 and the final GRL 2006 versions of the above study, at least for the key surface diagnostic that I was asking about – otherwise the results of the Curry 2005 paper would not be directly applicable to the surface diagnostic used in the GRL 2006 study. Have I understood you correctly?
Dr Forest replied:
Yes, the Curry et al. 2005 paper used the same surface data in their results and therefore, it is directly applicable to the 2006 study.
9. Dr Forest’s statement that the surface diagnostic data used in CSF 2005 was the same as in Forest 2006 is demonstrably incorrect, for the reasons given in paragraph 5. Comparing the CSF 2005 data with the SFZ 2008 data – which data does appear consistent with that used in Forest 2006 – the two sets of observational data are nearly identical, but both the model and the control run data are very different. It seems clear that the problem with the processing of the CSF 2005 surface control data was discovered and rectified, since neither the GFDL nor the HadCM2 surface control data matrices in the SFZ 2008 data are near singular. Moreover, analysis of the model data, discussed below, suggests that, assuming one dataset has been processed correctly, it is the CSF 2005 rather than the SFZ 2008 data that is valid.
10. Although, having available only the decadally averaged data for four latitude bands used in the surface diagnostic, I have been unable to identify an exact relationship between the datasets, the SFZ 2008 surface data is evidently derived from the same 499 MIT model runs as is the Curry data. Regressing model noise patterns extracted from the SFZ 2008 surface model data on those from the CSF 2005 data produces high r2 statistics. Furthermore, the regression coefficients on the data for the same and earlier decades suggest that the SFZ 2008 surface data is delayed by several years relative to CSF 2005 data. Since, per Forest 2006, the MIT model runs only extend to 1995, model data incorporating a delay could not have matched the timing of the observational surface diagnostic data.
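The lag check described in paragraph 10 can be illustrated with a toy regression. The function and the synthetic series below are hypothetical stand-ins for the actual diagnostic data; the point is only the diagnostic logic, namely that a dominant coefficient on the earlier-decade regressor indicates a delay:

```python
import numpy as np

def lag_diagnostic(series_a, series_b):
    """Regress decadal series_a on the same-decade and previous-decade
    values of series_b; if the previous-decade coefficient dominates,
    series_a appears delayed relative to series_b (a hypothetical
    illustration of the check described above, not Dr Forest's code)."""
    y = series_a[1:]
    X = np.column_stack([series_b[1:],    # same decade
                         series_b[:-1],   # previous decade
                         np.ones(len(y))])  # intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    same, lagged = beta[0], beta[1]
    return same, lagged
```

On a synthetic series where one dataset is simply a one-step-delayed copy of the other, the lagged coefficient dominates, which is the pattern reported for the SFZ 2008 surface data relative to the CSF 2005 data.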
11. The finding of an apparent delay in the SFZ 2008 data corresponds with that data showing, as it does, a lower temperature rise in each of the later decades than does the CSF 2005 data, for any given climate sensitivity. Accordingly, the SFZ 2008 data requires a much higher climate sensitivity to match the rise in observed temperatures. Indeed, for over three-quarters of the ocean diffusivity range, the SFZ 2008 model data matches the five decades of observational data as well at the maximum climate sensitivity of S = 10 used in Forest 2006 as it does at S = 3, which is very odd. That finding, supported by Forest 2006 Fig.S.7, means that the SFZ 2008 surface model data on their own provide very little discrimination against high climate sensitivity, unlike the CSF 2005 data. The graph appended to this letter illustrates this point. The CSF 2005 model data produce an extremely well constrained sensitivity PDF, centred around S = 1, while the SFZ 2008 model data produce a PDF that is completely unconstrained at high S. It appears that the other diagnostics, particularly the deep-ocean diagnostic, constrained the final Forest 2006 PDF for S, disguising the failure of the surface diagnostic to do so.
12. The (less influential) upper air model data used in SFZ 2008 also differs from the corresponding CSF 2005 data, although evidently being derived from the same model runs. The pattern of changes between the CSF 2005 and SFZ 2008 upper air model data is complex and varies by pressure level and latitude. The changes appear to make the upper side of the final PDF for S worse constrained.
13. I made Dr Forest aware over a week ago that I had the CSF 2005 and SFZ 2008 processed datasets, and invited him to provide within seven days a satisfactory explanation of the changes made to the data, and evidence that adequately substantiates his apparently incorrect statements. I implied that such evidence would consist of copies of all the processed data used in Forest 2006 for the computation of the error r2 statistic produced by each diagnostic; a copy of all computer code used for subsequent computation and interpolation; and the code used to generate both the CSF 2005 and the Forest 2006 processed MIT model, observational and AOGCM control-run data from the raw data, including all ancillary data used.
I have received no response from Dr Forest.
I look forward to hearing from you as to the action that you are taking in this matter, and its outcome.
Yours sincerely
Nicholas Lewis
Biosketch. Nic Lewis’ academic background is mathematics, with a minor in physics, at Cambridge University (UK). His career has been outside academia. Two or three years ago, he returned to his original scientific and mathematical interests and, being interested in the controversy surrounding AGW, started to learn about climate science. He is co-author of the paper that rebutted the Steig et al. Antarctic temperature reconstruction (Ryan O’Donnell, Nicholas Lewis, Steve McIntyre and Jeff Condon, 2011, Improved methods for PCA-based reconstructions: case study using the Steig et al. (2009) Antarctic temperature reconstruction, Journal of Climate).
Nic’s previous guest posts at Climate Etc:
- The IPCC’s alteration of Forster & Gregory’s model independent climate sensitivity results
- Climate sensitivity follow up
Note: Nic’s open letter to the IPCC led to the IPCC issuing an Erratum regarding what prior distribution had been used for the Gregory 2002 climate sensitivity PDF in Figure 9.20 of AR4 WG1.
JC comment: I have been discussing this issue with Nic over the past two weeks. Particularly based upon his past track record of careful investigation, I take seriously any such issue that Nic raises. Forest et al. (2006) has been an important paper, cited over 100 times and included in the IPCC AR4. Observationally constrained studies along the lines of Forest et al. 2006, correctly designed and carried out, may be able to provide much tighter estimates of climate sensitivity than has previously looked possible, particularly with another 16 years’ temperature data now available since that used by Forest et al. 2006. Given the importance of the topic of sensitivity, we need to make sure that the paper has done what it says it has done, is sound methodologically, and that it has been interpreted correctly by the IPCC.
This particular situation raises some thorny issues that are of particular interest, especially in light of the recent report on Open Science from the Royal Society:
i) ideally someone would audit Nic’s audit, perhaps Nic can make the relevant information that he has available. How should the editor deal with this situation?
ii) what should the authors be responsible for providing in terms of the documentation? Obviously the observed and model data, but what about the code? Or is sufficient documentation in the supplementary information adequate?
iii) assuming for the sake of argument that there is a serious error in the paper: should a paper be withdrawn from a journal, after it has already been heavily cited?
iv) GRL has a policy that it does not accept comments on published papers; rather, if an author has something substantive to say they should submit a stand-alone paper. Personally, I think the Comment function is preferable to withdrawing a paper that has already been heavily cited, with the Comment linked electronically to the original article.
v) what should the ‘statute of limitations’ be for authors in terms of keeping all of the documentation, model versions, etc. for possible future auditing? While digital media makes this relatively easy, I note that when I moved from Colorado to Georgia, I got rid of all of the carefully accumulated documentation for the papers written in the first two decades of my career (9 track tapes, books of green and white computer paper print out, etc.), although the main datasets were publicly archived in various places. But this is obviously suboptimal. Who is responsible for archiving this information? Some of these issues are discussed in the Royal Society Report.
vi) other issues that you can think of?
Uh uh … and it’s down the memory hole. ‘Case of the Missing Data,’ Episode two er three … four?
Beth, Geophysical Research Letters is published by the American Geophysical Union (AGU).
Dr. Peter Gleick was chair of AGU’s Task Force on Scientific Ethics until 16 Feb 2012.
http://www.agu.org/news/press/pr_archives/2012/2012-11.shtml
AGU perhaps coined the term AGW, but AGW has nothing, absolutely nothing, to do with Earth’s global climate.
AGW is only the latest in a long stream of deceptive models of reality used to promote a 1945 decision to Unite Nations to protect world leaders and society from possible destruction by “nuclear fires”.
The rest of the story: http://omanuel.wordpress.com/about/#comment-142
With kind regards,
Oliver K. Manuel
Former NASA Principal
Investigator for Apollo
http://www.omatumr.com
I highly recommend those not having the pleasure of introduction to Dr Manuel’s writings follow the link: omanuel.wordpress.com/about/#content-142 (The rest of the story), as well as his other insightful work.
A real eye-opener to the history behind the ‘save-the-planet’ movement from someone present from the beginning. I had no idea of all the details until I read your account….Thank you Dr Manuel.
Thanks, Hugh K.
My experiences and those of my research mentor, the late Professor Paul Kazuo Kuroda, span about seventy-six years (2012-1936 = 76 yrs) and include events on both sides of the Second World War.
The paper by Forest et al. (2006) and the editorial handling by the editor of Geophysical Research Letters, a journal published by the American Geophysical Union (AGU), are not unusual for the period of time between the destruction of Hiroshima and Nagasaki in August 1945 and the release of Climategate emails and documents in November 2009.
As mentioned above, the AGW model of global warming is only the latest in a long stream of deceptive models of reality used to promote a 1945 decision to
a.) Unite Nations
b.) End nationalism and racism
c.) Avoid destruction by “nuclear fire”.
Beginning in 1946, these noble goals started to be implemented by less than noble means – rewarding research that agrees with
d.) Official models of reality, rather than
e.) Precise observations of reality.
http://omanuel.wordpress.com/about/#comment-142
During World War II, George Orwell wrote “Animal Farm: A Fairy Tale” to reflect events in the USSR under Stalin leading up to the Second World War.
After World War II ended, George Orwell wrote “1984” to reflect futuristic dangers if a totalitarian government controls information and uses electronic surveillance to monitor individuals.
“Animal Farm: A Fairy Tale” tells of the woes when homo sapiens themselves took control of government in the manner of communism under Stalin.
“1984” predicts the futuristic woes ahead if leaders domesticate homo sapiens using the techniques that successfully domesticated other animals.
Our job now, Hugh K, is to restore contact with reality and control of government to homo sapiens that were domesticated from 1945 until 2009, when the game plan was exposed by
a.) Intentional deceit in Climategate emails and documents
http://joannenova.com.au/global-warming/climategate-30-year-timeline/
b.) Justification of deceit by leaders of nations and sciences
The battle to restore contact with reality and control of government to homo sapiens will be as formidable as historic battles against the forces of darkness, described in
a.) The Bhagavad Gita (Chapter 1) on the field of Kurukshetra
b.) The Bible (1 Samuel, Chapter 17) in the Kingdom of Judah.
What you say is true re Gleik’s FORMER position. Note the adjective. Gleik is NOT the AGU. Nor does he represent the AGU views on ethics.
Thanks for the clarification. You are right, Peter.
Dr. Peter Gleick is no longer chair of AGU’s Task Force on Scientific Ethics, as he was in Feb 2012 when documents were deceptively taken from the Heartland Institute:
http://www.agu.org/news/press/pr_archives/2012/2012-11.shtml
I know from personal experience at the 1976 AGU meeting [2] and from my research mentor’s experience at the 1956 AGU meeting [1] that AGU has long had a problem with scientific ethics, or lack of the same.
[1] The paper Kuroda presented at the 1956 AGU Meeting, describing “nuclear fires” from nuclear chain reactions in the early Earth, was blocked from publication through normal channels.
It was later published as two, one-page reports [P.K. Kuroda, “On the nuclear physical stability of the uranium minerals,” J. Chem. Physics 25, 781 (1956); “On the infinite multiplication constant and the age of the uranium minerals,” J. Chem. Physics 25, 1256 (1956)] and confirmed in the Oklo uranium mine by scientists working for the French Atomic Energy Commission in 1972.
http://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor
[2] The presentation of the paper that Dr. Sabu and I submitted for the 1976 AGU meeting (and to Science in January 1976), describing the “origin of our elements in a supernova explosion of the Sun”, was switched to a later time than the published schedule of speakers, after we arrived at the AGU meeting in April.
The manuscript we submitted to Science was rejected in May, about the time Science published a news article on the paper that was inserted into the AGU program, without abstract, ahead of our presentation.
We resubmitted our paper to Science in June 1976, revised it on 30 Aug 1976, and it was published on 14 Jan 1977 [“Strange xenon, extinct super-heavy elements, and the solar neutrino puzzle”, Science 195, 208-209 (1977)].
http://www.omatumr.com/archive/StrangeXenon.pdf
Subsequent observations and measurements confirmed the validity of our paper in Science, most recently after a lengthy delay in getting NASA to release isotope data from the Galileo probe of Jupiter:
The latest findings of fresh supernova debris in the meteorite minerals that first condensed in the early solar system, . . .
http://www.foxnews.com/scitech/2012/06/27/16-fireball-meteorite-reveals-new-ancient-mineral/?intcmp=features
http://www.universetoday.com/96000/new-mineral-found-in-meteorite-is-from-solar-systems-beginnings/#ixzz1z6UYt1r0
http://ammin.geoscienceworld.org/content/97/7/1219.full?ijkey=G2n1UMXmu7r4.&keytype=ref&siteid=gsammin
Do not promote the far-fetched suggestion that the material came from a nearby supernova that just happened to explode at the birth of the solar system. All primordial helium (He) in the early solar system came from the supernova, as we pointed out at the 1976 AGU meeting:
http://www.omatumr.com/Data/1975Data.htm
Code should probably be included. I recall a section of scientific papers called methodology. I don’t recall ever being taught there was a section called undocumented data manipulation.
Steven, postmodern science, at least post-1945 government-funded science, included an unwritten section on data manipulation to hide energy (E) stored as mass (m) in cores of atoms, planets, stars and galaxies [“Neutron repulsion,” The Apeiron Journal 19, 123-150 (2012)]: http://redshift.vif.com/JournalFiles/V19NO2pdf/V19N2MAN.pdf
http://omanuel.wordpress.com/about/#comment-142
It’s amazing how bad at keeping relevant records climatologists seem to be.
Nobody would think that they are working on ‘the most important problem humanity has ever faced’ given their disregard for keeping any sort of audit trail or ‘professional standards’ records. Perhaps they really think that it’s not that important, or that they aren’t trying to act professionally.
My local vet keeps better records on my deceased cat than Dr. Forest seems to for a paper that is cited over 100 times in the academic literature and in the IPCC report that influences policy around the world. Though a delightful companion, I hardly think that Tiddles’s medical history is more important than the future of humanity.
But Dr. Forest apparently does. It only reinforces my opinion that climatology is far too important to be left to the arcane conventions and amateur working practices of climatologists. Get yourselves sorted out!
The difference being that the vet is required by law to maintain records for a specified period. Perhaps, had the vet accidentally killed Tiddles, he/she would have been much happier to dispose of the records in a more timely manner.
Not in UK Law that I can see. For some animals the registered keeper is obliged to keep the records, not the vet. But ‘my’ vet does so anyway because he is a professional guy and wants to provide a good service to his patients and their owners. So does my family solicitor (attorney), family doctor, computer repair shop, pharmacist etc. They all keep records of their work.
I wonder why climatologists find this simple concept so hard to grasp?
In the US it is a state law and varies by state. I don’t know what my state law is for dental records, I believe it is 5 years, since I just keep them for 10 years as a matter of principle. People are often identified by dental records in cases of serious deformity. After 10 years they aren’t as likely to be of much help. I have no idea why climatologists have so much trouble hanging on to records. Perhaps they killed the cat?
Even my mechanic keeps the records on all of the cars that he maintains.
Latimer,
people don’t keep records if they don’t expect to be questioned on those records. What this shows is that climate “science” has been a bit of a joke subject where anyone can say or do whatever they like without anyone (inside) objecting.
What has happened is that the failure of proper scientific scrutiny within the “profession” has been replaced by post-publishing scrutiny by sceptics. But the real fault lies with the climate “scientists” for having such appalling standards that no one ever seems to care whether what is stated is based on real data.
They even kept records back in the old days, just to prove the point?:o)
Mat 5:18 For verily I say unto you, Till heaven and earth pass, one jot or one tittle shall in no wise pass from the law, till all be fulfilled.
We will all be able to see who has the better system in the end.
Latimer Alder, No one cares about your cat. No one cares about the goldfinches in your garden. No one cares where you went on your pushbike. If your plan is to defeat science by adopting every stereotype of the tea-sipping “Keeping Up Appearances” twit, good luck. A fitting caricature.
Keep up the good work, Latimer. When the brains trust is annoyed, you’re doing something right. There are many of us that enjoy your wit!
@web hub telescope
You heartless uncaring unfeeling brute! How can you not care about the late Tiddles? Nor about the beautiful goldfinches? Still there this morning, still eating their nyjer seed from dawn till dusk. And a lovely family of nuthatches (Mum, Dad, 2 kids) alongside bashing away at the bark of the chestnut tree.
Because I was led to believe all those who are so scared of a wee bit of temperature rise did so because you are morally superior to me in the ‘environmental’ and ‘we love nature’ and ‘sustainability’ stakes? That while I was concerned with the day to day matters of making a living and feeding the family and keeping warm in the winter ..and maybe getting some time to enjoy life… all that grubby mundane day-to-day stuff, you were thinking great thoughts about ‘The Future of Humanity’, ‘Saving the Planet’ and all that important stuff.
And that is why you allow yourselves to be seduced by such dodgy ‘science’ and such dodgy arguments into a global frightfest about not very much at all. It didn’t matter that the arguments weren’t sound or watertight, nor that the scientists had the ability and amorality of the average 12 year old… the nobility of the cause far outweighed such minor considerations. It was not only ‘Good’ of itself – or so you believed – but it gave you carte blanche to go around condemning and damning and denouncing all those whom you disliked. The end justified the means, bigtime.
But now we see the truth. You don’t care about the real observable touchable Nature at all. Not the one with the goldfinches and Tiddles and stinging nettles and rattlesnakes and alligators. But about some abstract idealised ‘Nature’. Maybe a ‘Nature’ that has no humans (bar the morally superior ones mentioned above). A ‘Nature’ that has only ever existed in the utopian/dystopian minds of political theorists and the weaker Romantic poets. One that could never be visited in the past and never can be in the future.
But certainly one without the goldfinches..
‘Nobody cares about your goldfinches’. Sums up climate alarmism in one sentence. Thanks Webbie.
WebHubTelescope | June 26, 2012 at 12:36 am | Reply
“Latimer Alder, No one cares about your cat. No one cares about the goldfinches in your garden. No one cares where you went on your pushbike.”
Ignorant asshat.
Web, no one cares about your opinion on LA either.
Counterexample: I care.
Thank you Scott!
Its nice to know that not everybody is a heartless brute like that nasty uncultured Mr Telescope.
Miaow, Purr
Sockpuppets need to be listened to, as they treat their audience with the contempt that it apparently deserves. Why else would Latimer adopt these sockpuppet identities unless he held this readership in complete disdain?
Now it makes sense. Don’t mind me as I am just piling on.
I do not ‘hold the readership here in complete disdain’. Just a very select few. See if you can guess who qualifies?
So it’s essentially a mutual attitude that you share with the climate scientists who sent those “nasty” emails.
Don’t flatter yourself Webbie. You ain’t no Steve McIntyre.
Thank goodness for that. Who would want to emulate a corporate bozo like him?
New study forecasts sharp increase in world oil production capacity, and risk of price collapse
June 27, 2012 By James Smith
(Phys.org) — Oil production capacity is surging in the United States and several other countries at such a fast pace that global oil output capacity is likely to grow by nearly 20 percent by 2020, which could prompt a plunge or even a collapse in oil prices, according to a new study by a researcher at the Harvard Kennedy School.
The findings by Leonardo Maugeri, a former oil industry executive who is now a fellow in the Geopolitics of Energy Project in the Kennedy School’s Belfer Center for Science and International Affairs, are based on an original field-by-field analysis of the world’s major oil formations and exploration projects.
Contrary to some predictions that world oil production has peaked or will soon do so, Maugeri projects that output should grow from the current 93 million barrels per day to 110 million barrels per day by 2020, the biggest jump in any decade since the 1980s. What’s more, this increase represents less than 40 percent of the new oil production under development globally: more than 60 percent of the new production will likely reach the market after 2020.
http://phys.org/news/2012-06-sharp-world-oil-production-capacity.html
Shakespeare said it very well in Hamlet:
“Something is rotten in the State of Denmark”
It does not seem to matter from which angle we look at the issue of climate sensitivity; it always seems to come out that the science is built on quicksand. There does not seem to be any sound basis on which we can rely on any numeric value that has been ascribed to this vital quantity.
I have been trying to have a discussion with Steven Mosher on another thread, and it is, to say the least, slow going. From what I can see, Steven is writing unsound science, but it is difficult to get a reply out of him.
We can never measure no-feedback climate sensitivity, but no-one seems to think that this is important. There are no direct measurements of total climate sensitivity, but this does not seem to deter people from claiming that some value or other is beyond dispute. There are the usual claims of indisputable numbers based on unsound science and non-validated computer models, and we skeptics are supposed to keep our mouths shut and believe the opinion of the experts. Sorry, I cannot go along with this.
One of these years the scientific community is going to wake up to the fact that there is no sound scientific basis on which anyone can claim that CAGW exists, simply because there is no basis whatsoever on which to base any estimate of climate sensitivity. One just wonders how much longer it is going to take before this happens.
As far as Mosher is concerned, the issue is straightforward: more (anthropogenic) CO2 = more warming. All the other variables are epiphenomenal.
Jim Cripwell | June 25, 2012 at 7:12 am said: ”From what I can see, Steven is writing unsound science, but it is difficult to get a reply out of him”
Jim, you have to understand that Mosher is here exclusively to interpret the ”Green Das Kapital”. If you are interested in CO2 molestation, he will get into details; but if you are after some truth… he suffers from ”Truth Phobia” – he goes under his rock.
Lots of Fake Skeptics are also praying for catastrophes to happen, so that they don’t appear so stupid for ”having been duped by the equally dishonest Warmist”. Not only does AGW not exist; no GLOBAL warmings exist. Warmings are localized; under the laws of physics, if some part of the planet gets warmer than normal, another part INSTANTLY gets colder than normal – it’s called ”extreme weather / climate”. Yes, climate is the weather; global warmings / global coolings are inside people’s heads, not outside.
What a nice way to say data is still being ‘lost’ and code withheld.
That homework-eater dog must be very fat by now.
Perhaps the homework-eating dog’s vet has the records instead of Dr Forest?
Butterfingers, again.
The dog ate my data.
“Show your workings”. Was taught that at school.
Alternatively, we could just take Dr. Forest’s word for it…
In the Net age, there can be no excuse for not having all relevant data / code archived somewhere. What’s to be afraid of – that someone might find an error? Isn’t that the point of science?
Peer reviewers and editors should take part of the responsibility to ensure that everything exists to defend & explain a paper before it is accepted. They have equal responsibility for the result.
Peer review seems to be failing, as it was left to the authors of the Gergis paper to discover their error shortly after publication, and, er, very shortly before SteveM and his denizens pointed it out to them. :-)
I wouldn’t dare suggest that Dr Forest is less than honest because I don’t know the man, but one possible reason for data not being archived in the Net Age is not that someone might find an error but that someone might find a deliberate one!
There is something increasingly smelly, as Jim Cripwell has pointed out, and given the apparent speed with which cAGW appears to be becoming passé after Rio, one does tend to wonder if a lot of these papers were designed to serve a purpose that is no longer of any importance to The Cause.
‘Nuff said, since the Forest study is the only one based on actual physical observations (which are now “lost”).
But, of course, we can simply look at the physical record since 1850.
CO2 went up from ~290 ppmv (ice core data) to 391 ppmv (Mauna Loa)
Temperature rose by 0.7 degC.
Anthropogenic forcing was between 7% (IPCC) and 50% (several solar studies) of total forcing over this period.
Other anthropogenic forcing factors beside CO2 (other GHGs, aerosols, etc.) cancelled one another out.
On this basis we arrive at an observed 2xCO2 temperature response of between 0.8 and 1.4 degC (close to the first results of Dr. Forest).
We can now argue about whether the GH warming has reached “equilibrium” over the past 150 years or whether there is still some GH warming “hidden in the pipeline”, but IMO that is like arguing about how many angels can dance on the head of a pin.
Max
Typo error
Should read
Non anthropogenic forcing was between 7% (IPCC) and 50% (several solar studies) of total forcing…(etc.)
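Max’s back-of-envelope arithmetic above can be checked in a few lines. A minimal sketch, assuming (as the comment does) 0.7 degC of observed warming, CO2 rising from 290 to 391 ppmv, and an anthropogenic share of total forcing of 50% to 93%:

```python
import math

dT = 0.7                  # observed warming since 1850, deg C
c0, c1 = 290.0, 391.0     # CO2 in ppmv (ice cores -> Mauna Loa)

# Fraction of one CO2 doubling realised over the period (~0.43)
doublings = math.log(c1 / c0) / math.log(2)

# Anthropogenic share of total forcing: 50% to 93%
# (i.e. a non-anthropogenic share of 50% down to 7%)
for anthro in (0.50, 0.93):
    response = dT * anthro / doublings
    print(f"2xCO2 response: {response:.2f} degC")
```

With these inputs the range comes out at roughly 0.8–1.5 degC; the comment quotes 0.8–1.4, so the lower bound matches and the upper end is in the same ballpark. The sketch deliberately ignores lags ("warming in the pipeline") and non-CO2 anthropogenic forcings, exactly the simplifications the surrounding comments debate.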
Max, you write “Non-anthropogenic forcing was between 7% (IPCC) and 50% (several solar studies) of total forcing over this period.”
And herein lies the problem. These are estimates, and what they are based on is unclear, since we do not understand all the natural drivers of climate. Non-anthropogenic forcings could easily be 100% of the total forcing. We just don’t know. So your estimate is just some sort of meaningless maximum value.
The issue here, as I keep pointing out, is that there is no CO2 signal in any temperature/time graph from data of the 20th and 21st centuries. Zero, nada, zilch. If there were such a signal that could be proven to be caused by adding CO2 to the atmosphere, it would be trivial to measure total climate sensitivity. Since there is no CO2 signal, there is no measurement of total climate sensitivity.
So it is simply not possible to determine if any of the hypothetical values which the proponents of CAGW keep on quoting, have any validity whatsoever. What little data we have indicates that the total climate sensitivity of additional CO2 is indistinguishable from zero.
Jim Cripwell
I cannot argue with your logic. The entire premise of a “CO2 signal” is based on hypothetical deliberations.
Since 1850, CO2 levels rose, as did the “globally and annually averaged land and sea surface temperature anomaly” (for what it’s worth), but nobody knows whether or not the increase in CO2 had anything whatsoever to do with the warming.
The fact that we have seen NO warming (actually slight COOLING) over the past 15 years, despite the fact that CO2 at Mauna Loa has increased from 365 to 391 ppmv, raises serious questions in my mind regarding the validity of the “CO2 signal”.
The fact that the warming from ~1910 to ~1940 was statistically indistinguishable from the warming from ~1970 to ~2000, even though CO2 levels increased much more slowly over the first period, raises even more doubts.
And the fact that it cooled from ~1940 to ~1970, despite an increase in CO2 emissions, raises even more doubts.
I have simply shown that – even using the premise of a past anthropogenic GHG signal as estimated by IPCC or several independent solar studies – the observed 2xCO2 temperature response is only around 1 deg C (i.e. nothing to worry about).
Max
Agree with Max.
Max, you write “I have simply shown that – even using the premise of a past anthropogenic GHG signal as estimated by IPCC or several independent solar studies – the observed 2xCO2 temperature response is only around 1 deg C (i.e. nothing to worry about).”
You and I are in basic agreement. The difference between us is that you seem to be prepared to accept the possibility that adding CO2 to the atmosphere may be causing some sort of warming that is too small to worry about. I am trying to make the point that what little evidence we have indicates that the real value of total climate sensitivity is indistinguishable from zero.
Jim
“real value of total climate sensitivity is indistinguishable from zero.”
Sensitivity is the change in temperature per change in watts.
It’s not zero. If it were zero, then a dimmer sun would not cause cooling and a brighter sun would not cause warming.
http://en.wikipedia.org/wiki/Climate_sensitivity
Steven, you write “Sensitivity is the change in temperature per change in watts.”
Not quite. Sensitivity is a change in temperature for a change of CO2 concentration. No-one has yet measured what the change in forcing is for a change of CO2 concentration, so we have no idea what its numeric value is. I don’t believe in hypothetical estimations from non-validated models. I never said that the climate sensitivity of CO2 was zero; I said it was indistinguishable from zero, i.e. it is positive but has such a low numeric value that this number is indistinguishable from zero.
I have no interest in theoretical estimations. I rely on hard, measured data. I pointed out to you on another thread that there is no CO2 signal that can be proven to be caused by additional CO2 in the atmosphere, in any temperature/time record that comes from the data of the 20th and 21st centuries. You wrote very insultingly to me about my science, claiming that this is not correct. I rebutted what you wrote, and this statement about no CO2 signal is completely correct. I would be grateful if you would either rebut my rebuttal in turn, or apologise to me.
So, since there is no CO2 signal in the temperature/time graph, it follows that the climate sensitivity of CO2 is indistinguishable from zero. QED.
Don’t try to talk Mosher out of his idiosyncratic definition of climate sensitivity. It just is, and that’s the story.
Jim
“Not quite. Sensitivity is a change in temperature for a change of CO2 concentration. ”
WRONG. Sensitivity is a system parameter. It is defined as the change in temperature due to a change in RF forcing.
Now, people speak about the sensitivity to a doubling of CO2 from a given level. A doubling of CO2 from pre-industrial levels gives you 3.7 watts of forcing.
Sensitivity can be defined either way and the definitions are closely related as doubling of CO2 corresponds to 3.7 W/m^2. This is one of those results that atmospheric physics can tell rather accurately and reliably.
here PE
“Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing.”
http://www.sciencebits.com/OnClimateSensitivity
What is climate sensitivity?
The equilibrium climate sensitivity refers to the equilibrium change in average global surface air temperature following a unit change in the radiative forcing. This sensitivity, denoted here as λ, therefore has units of °K / (W/m2).
Now, your confusion comes in from people substituting CO2 concentrations:
doubling CO2 leads to a change in RF forcing of 3.7 watts.
So the sensitivity to doubling CO2 really consists of two claims:
1. CO2 doubling == 3.7 watts
2. the sensitivity to a change of 3.7 watts
Two different claims.
The first claim is about the additional FORCING (in watts) that one can expect from a doubling of CO2. The second claim is about the system sensitivity to ANY change in RF forcing. So if the sun goes up by one watt, what is the change in temperature? What is the sensitivity to changes in RF forcing?
CO2 in ppm => change in forcing (watts)
change in watts => change in temperature (C)
The first relates to the effect of increased CO2 leading to increased forcing. The second (sensitivity) relates to the system’s sensitivity to changes in radiative forcing.
So, you will see people RE-EXPRESS these two equations as one:
“Instead of the above definition of λ, the global climate sensitivity can also be expressed as the temperature change ΔTx2, following a doubling of the atmospheric CO2 content. Such an increase is equivalent to a radiative forcing of 3.8 W/m2. Hence, ΔTx2=3.8 W/m2 λ. ”
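The two claims can be kept separate in code. A minimal sketch, using the standard simplified expression F = 5.35·ln(C/C0) W/m2 for CO2 forcing (a textbook approximation, not something stated in the comment itself) and a purely hypothetical λ value:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2: F = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Claim 1: doubling CO2 adds ~3.7 W/m^2 of forcing, regardless of sensitivity.
f_2x = co2_forcing(560.0)          # ~3.71 W/m^2 for 280 -> 560 ppm

# Claim 2: the system warms lam degrees C per W/m^2 of ANY forcing
# (lam = 0.8 is a hypothetical value chosen for illustration only).
lam = 0.8
dT_2x = lam * f_2x                 # ~3.0 degC for a doubling

print(f"F(2xCO2) = {f_2x:.2f} W/m^2, dT(2xCO2) = {dT_2x:.1f} degC")
```

Applying the same λ to a 1 W/m2 change in solar forcing gives 0.8 degC, which is the point being made: the sensitivity parameter is defined against any forcing change, while the 3.7 W/m2 is a separate claim about what doubling CO2 does.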
Here is more, from Forster and Gregory:
“There are many definitions of climate sensitivity in the literature. While the most quoted sensitivity is the equilibrium warming for 2×CO2, this is not necessarily the most useful, as differences in the CO2 radiative forcing can be confused with differences in climate response, and ideally one would like to know the climate response to any forcing mechanism. In this work, we use a standard linear definition of climate sensitivity, which we state here, and go on to show how it can be derived from observational data.”
The problem, as I explained, is that people have (for political reasons) conflated two issues into one metric in order to popularize an idea and make CO2 a focus. Sensitivity as it is laid out in the math is the system response in C to a change in forcing in watts.
So, for example, when you estimate the sensitivity from volcanic forcing. Even Lindzen, you know him, understands that there are two issues:
1. the climate sensitivity to changes in radiative forcing of any kind
2. the additional forcing that doubling CO2 creates
http://eaps.mit.edu/faculty/lindzen/184_Volcano.pdf
You could argue that sensitivity was 0.5 and that CO2 supplied no additional radiative forcing, but sensitivity cannot be zero.
Sensitivity to doubling CO2 can be zero IFF doubling CO2 supplies ZERO WATTS, but sensitivity (the change in C due to a change in watts) cannot be zero.
Steven, you write: “Climate sensitivity is a measure of how responsive the temperature of the climate system is to a change in the radiative forcing.”
OK, have it your way; I could not care less. The fact is that there is no CO2 signal in the temperature/time graphs, so we know that, whatever the definition of sensitivity is, adding CO2 to the atmosphere has a negligible effect on global temperatures. That is all that matters.
And you have not apologized for insulting me.
“Ok have it your way, I could not care less. The fact is that there is no CO2 signal in the temperature/time graphs, so we know that, whatever the definition of sensitivity is, adding CO2 to the atmosphere has a negligible effect on global temperatures. That is all that matters.
And you have not apologized for insulting me.”
######################
There can be no CO2 signal in a temperature time series. That is a temperature series.
The temperature (C) is the result of ALL FORCING. That forcing includes forcing from CO2, forcing from methane, forcing from the sun, forcing from aerosols, INTERNAL FORCING, feedbacks, etc. You cannot look at the temperature series and see one component when many components go into the final forcing. All you can see is temperature, because that is what it is. You are seeing the effect.
Suppose I told you that your foot speed was the result of many variables:
1. genetics
2. the length of your thigh
3. the headwind
4. the size of your foot
5. the shoes you are wearing
6. the composition of the track
7. the animal chasing you
You show me that your time in the 100 meter run is 11 seconds.
Guess what? I won’t be able to find the size of your foot (one of the causes) in that time series. I won’t be able to see the size-of-foot signal.
With enough time and controlled experimentation we could probably estimate the exact contribution of foot size. But we know that if your foot is 2 inches long, you’ve got problems, and if it’s 3 feet long, you’ve got problems.
Simple mechanics tells us that. We know that if you have a tailwind you will run faster. If you have a headwind you will run slower. Simple physics tells us that. What you have with the historical temperature of the earth is one foot race: an uncontrolled experiment.
Basic physics tells us that certain things will matter: the strength of the sun, the opacity of the atmosphere, the amount of land versus oceans, the amount of GHGs. Basic physics tells us that more GHGs will lead to a warmer surface, all other things being equal. Looking at one experiment (the one time series of earth) and drawing a conclusion, as you try to, is sometimes instructive…
What you can do is try various explanations.
Here is a simple example that explains the temperature as a response to forcings (including CO2):
http://rankexploits.com/musings/2008/lumpy-vs-model-e/
Or you could look at CO2 versus temperature:
http://www.skepticalscience.com/The-CO2-Temperature-correlation-over-the-20th-Century.html
In neither of those do you “see” the CO2 signal in the temperature.
You can’t “see” a signal of ppm in a temperature series.
You can see how they relate. You can see how they correlate.
You can see how much variance one explains. But fundamentally we knew over 100 years ago that increasing CO2 would cause an increase in warming. That is, we knew it before we even saw the current warming. We could lose the entire temperature record and still know that CO2 causes warming. The same way we can know that a headwind will slow you down in a race. Your running or not running will never change that fact.
Steven Mosher | June 25, 2012 at 4:28 pm |
“What is climate sensitivity?
The equilibrium climate sensitivity refers to the equilibrium change in average global surface air temperature following a unit change in the radiative forcing. This sensitivity, denoted here as λ, therefore has units of °K / (W/m2).
Now, your confusion comes in from people substituting CO2 concentrations:
doubling CO2 leads to a change in RF forcing of 3.7 watts”
The assumed [or unmentioned] presumption is that this refers to a model.
Both sensitivity and forcing are supposed to equate to a watt of processed sunlight.
Sort of like throwing everything in a blender, giving so many cups of watts.
So it should be mentioned that the term relates to something made by a blender [old socks, tangerines, and cotton candy].
Jim Cripwell and Steven Mosher,
I get the impression of you two standing on the beach arguing whether the sky is dark blue or light blue, with one facing east and the other west.
Steven, you write
@@@
Suppose I told you that your foot speed was the result of many variables
1. genetics
2. the length of your thigh
3. the headwind
4. the size of your foot
5. the shoes you are wearing
6. the composition of the track
7. the animal chasing you
@@@
Your analogy has no bearing on the situation with respect to CAGW. What we have is a variable, the amount of CO2 in the atmosphere, which is continuously increasing. Supposedly, this causes global temperatures to increase. However, there are all sorts of other factors which affect global temperatures, in many different ways. These other factors represent the noise in the system.
What we are trying to determine is how much the added CO2 increases temperature. We know what sort of pattern existed before we started to put more and more CO2 into the atmosphere. We know the pattern that has occurred since we started to add lots of CO2. The question is, are the patterns different? If the patterns are different, then we can conclude that the difference MIGHT be due to the additional CO2. But unless the patterns are different, then we know that the CO2 is having no effect. The fact of the matter is that there has been no change in the pattern since we have had decent measurements.
Unless you can show me that the temperature/time pattern has changed, CO2 can be having no effect. There is the additional problem, supposing there had been a change of pattern, of proving that the change was due to the CO2. How anyone could prove that I have no idea.
But this brings me to my main objection to the whole issue of whether CAGW is real. The fact of the matter is that the way the atmosphere works is so complex that physics cannot tell us what happens when we add CO2 to the atmosphere. And it is the claim by the proponents of CAGW that physics can tell us what happens that I object to strongly.
So, show me how the temperature/time pattern has changed since CAGW is supposed to have been in effect, and then let us discuss this further.
Steven Mosher wrote: “WRONG. sensitivity is a system parameter. It is defined as the change in temperature due to a change in RF forcing.” I find that a very odd statement.
Tav =(Tmax+Tmin)/2
Tmax occurs just after noon, when light flux, on global average, is about 1,200 W/m2; of course back-radiation is a component of the total influx, and without CO2 call it about 150 W/m2, and CO2 gives what, about 4 W/m2?
Just before dawn we record our lowest temperature; the influx is a function of atmospheric temperature, the conversion of gaseous water to liquid water, and CO2 photon recycling.
Now climate sensitivity can’t be ‘A’ number. The effect of a putative CO2/water photonic IR recycling system is going to depend on local Tmax/Tmin, the IR albedo and the amount of water vapor in the air. Oceans and deserts will have completely different sensitivities; and as Gavin Schmidt admits, the Antarctic high-altitude desert will see no greenhouse effect at all.
So if the sun dims, you will say that we have no idea the direction of the temperature, all due to complexity. That’s called CripLogic. If it dims completely, Cripwell still needs to see the signature of the sun on the temperature record to become convinced.
Tip of the hat to Mosh for the dimming sun argument.
If the wavelength makes a difference, then you could have different sensitivities for solar and for CO2.
lolwot
You write:
or more logically:
Although we are quite certain that our planet’s climate has changed in the past, we do not know enough about the causes and effects of climate changes in our planet’s geological past to be able to conclude anything meaningful about natural or anthropogenic forcings, negative or positive feedbacks as well as their potential future effects on our climate.
Max
Latimer,
You ask: “Please remind us where we should be looking for ‘empirical data to support AGW’ ”
Skepticalscience is always a good place to start.
http://www.skepticalscience.com/empirical-evidence-for-global-warming.htm
Hope this helps!
Web, it is so simple it is painful :) You have a latent engine with 240 Wm-2 minimum average energy required, that is 50% efficient: 120 Wm-2 work, which is maintaining the lapse rate, and 120 Wm-2 wasted, radiated to space. With 240 Wm-2 constantly supplied, only 120 Wm-2 is needed for the work; all other energy is wasted or stored. 120+240=360 Wm-2 – great, right? Only one minor issue with the max entropy model: 35.4 Wm-2 of energy that doesn’t play, the atmospheric-window PITA. 394.4 Wm-2 is the average design surface radiant energy.
The fire box is the tropics; you can feed in more energy or less, but the specific volume of the atmosphere limits the boiler pressure, which limits the maximum boiler temperature. You can use steam tables, S-B relationships, anything you like, but the tropical SST temperatures are limited by the properties of water and the density of the atmosphere. If the max SST is limited, the maximum rate of diffusion is also limited. You cannot exceed 50% efficiency in an open system.
@lolwot
You are remarkably good at confusing people’s theories with actual observations. If you were a gambler, you would be the ideal mark for a scam.
Some people have an expectation that pH might be headed in the way you describe. Others think otherwise.
But it is the absolute essence of science that you test the theory against what actually happens. So far, there is no evidence whatsoever that pH has changed at all anywhere, let alone to the ‘dramatic’ effect that somebody expects. You can plot out scary ‘expectations’ until you’re blue in the face, but that ain’t science, that is fortune telling unless you can produce some experimental evidence to back it up. Ocean pH is a hugely variable quantity already…and there are no consistent time series measurements that allow us to tell anything at all about whether it is changing.
And, jfi, there are plenty of places around the globe where the pH is already ‘naturally’ below 7.8 – river mouths, for example. There does not seem to be any great harm to the environment in those places, like with a minuscule change in GAT. I really cannot see any reason to worry if other places joined them. But as there is no evidence that they are changing at all, that is maybe a bit of a hypothetical argument.
manacker | June 25, 2012 at 9:22 am told the common lie: ”but nobody knows whether or not the increase in CO2 had anything whatsoever to do with the warming”
manacker, there hasn’t been any ”warming”!!! Warming is inside your head, not in the environment. It’s one thing to tell a lie that there was any ”global warming”; the reality is another thing. On 99.9999999% of the planet’s surface areas nobody is monitoring; temperatures change every 10-15 minutes… Loaded comments such as yours (”with the warming”) are bigger, more harmful / destructive lies than just a white lie down in the pub.
If you were asked, on a witness stand, under oath, could you personally prove that there was any warming, or are you just repeating a liar’s lies?! Think about that.
Agree with Jim.
“We do not understand all the natural drivers of climate.”
To pretend that we do is pseudo-science, sorry.
Hmm, yet he seems to know that sensitivity is zero. An impossibility.
But never mind that; you agree with Jim.
What Jim said was that the “real value … was indistinguishable from zero…”, which is a far cry from saying that it is zero. In fact, it is infinitely different. Hasn’t anyone done some experimentation on different mixes of CO2 in a flask at standard pressure, with an IR source and a thermometer? There are lots of “just so” efforts on YouTube, but none that looks carefully controlled and thought out.
Duster, you can experiment with CO2 in a chamber until the cows come home and it will never give you a clue about climate sensitivity. In fact, for home insulation you can add a perfect radiant barrier to a R-2 wall and all you get is an increase to R-3, that has been done. The problem is that sensitivity to “forcing” is nearly meaningless. You can do a sensitivity using each and every spectral line that means something, sensitivity to spectral sections means something, but without specifying what the forcing is – what the radiant AND thermal properties of the forcing and the forced are- and where the forcing is in relationship to the object being forced – ya got nothing.
When you try to figure out what the sensitivity of the oceans is with respect to atmospheric forcing, you are really up the creek without a paddle. The estimated diffusion rate from the skin layer down is from 0.04 up to 4 cm/sec. The average rate of the deep ocean current is around 10 cm/sec. The gulf stream current is around 250 cm per sec. The rate of evaporation at 21 C is about 88 Joules per second, which averaged over a large enough area would be about 88 Wm-2. Increase the surface temperature by one degree and the rate goes up to about 92 Wm-2. Add 1 meter per second to the average wind speed or convection rate and all the warming disappears. And generally, adding energy to a system increases motion in the direction of the greatest energy loss.
Fun little problem :)
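The evaporation scaling in the comment above can be sanity-checked in a few lines, assuming the ~88 W/m^2 latent-heat baseline quoted there and the standard Magnus approximation for saturation vapor pressure (both are assumptions for illustration, not measurements):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# Illustrative baseline from the comment above: ~88 W/m^2 of latent heat
# flux at a 21 C surface (an assumed figure, not a measured constant).
base_flux = 88.0
e0 = saturation_vapor_pressure_hpa(21.0)
e1 = saturation_vapor_pressure_hpa(22.0)

# Saturation vapor pressure rises ~6-7% per kelvin near 21 C; scaling the
# baseline flux by that ratio lands close to the ~92 W/m^2 quoted above.
fractional_increase = e1 / e0 - 1.0
scaled_flux = base_flux * (1.0 + fractional_increase)

print(f"fractional increase per K: {fractional_increase:.3f}")
print(f"scaled latent heat flux:   {scaled_flux:.1f} W/m^2")
```

So the one-degree jump in the comment is at least internally consistent with Clausius-Clapeyron-type scaling, whatever one makes of the rest of the argument.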
mosh
It’s not that “sensitivity is zero”.
It’s just that we do not know what the sensitivity really is, because we do not understand all the many factors (natural as well as man-made) that influence our planet’s climate.
As a result, the argument “we can only explain past warming if we include anthropogenic forcing” is an “argument from ignorance”, not an “argument from evidence”.
If, as Jim Cripwell writes, we cannot even show that the temperature/time patterns of the past changed significantly when human CO2 emissions increased, we have no empirical basis to support the notion of any “sensitivity” at all – only theory.
This is not simply an abstract deliberation, mosh, it is a real dilemma for a high sensitivity assumption/conclusion and the resulting CAGW premise.
And, while you may believe that a high sensitivity has been validated by model simulations, there are many parts of the past record, which speak strongly against this premise – and you have so far been unable to cite the specific empirical evidence to support this hypothesis.
Jim Cripwell can correct me if I have misunderstood, but that is my understanding of the real dilemma here.
Max
mosh
At the risk of repetition, let me reword what I believe is a major part of Jim Cripwell’s source of skepticism regarding a significant “sensitivity” estimate.
IPCC tells us:
a) Our models cannot explain the early 20th century warming period
b) We know that the (statistically indistinguishable) late 20th century warming was caused principally by human-caused forcing.
c) How do we know this?
d) Because our models cannot explain it any other way.
A dilemma.
Max
Captain doesn’t understand that phase transitions that remove heat will return that heat on the reverse transition.
But where we talk about the overall energy imbalance causing the persistent rise of temperature, the heat that goes into melting snow and ice does defer the temperature rise.
So the approximately 1 W/m^2 going into the ocean accounts for the bulk of roughly 1/3 of the 3.7 W/m^2 doubling forcing realized so far, with the remainder going into melting ice and raising land and atmosphere temperatures.
The ranges of diffusion are real. In the ocean, one can have relatively fast eddy diffusion coefficients, both vertical and horizontal, and then also have slower conductive diffusion coefficients. The mix of these diffusion coefficients gives rise to a dispersive rise in temperature that has subtle differences from the rise expected with a single-valued diffusion coefficient. I have documented a nice way to solve these kinds of problems.
BTW, diffusion coefficients don’t have dimensions of cm/sec, they have dimensions of cm^2/sec. That’s what separates people that know how to work the problems from people that don’t. Sorry.
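The dispersive-mixing point can be sketched with the textbook semi-infinite half-space solution for a step surface warming (the diffusivities, depth and time below are made-up illustrative values, not the documented solver mentioned above):

```python
import math

def warming_fraction(depth_cm, time_s, D_cm2_per_s):
    """Fraction of a step surface warming felt at depth_cm after time_s,
    for a semi-infinite half-space with constant diffusivity D (cm^2/s)."""
    return math.erfc(depth_cm / (2.0 * math.sqrt(D_cm2_per_s * time_s)))

depth = 100.0 * 100.0   # 100 m expressed in cm
year = 3.156e7          # seconds per year
t = 30 * year           # 30 years of sustained surface forcing

slow = warming_fraction(depth, t, 0.01)   # slow, near-conductive diffusivity
fast = warming_fraction(depth, t, 1.0)    # fast, eddy-scale diffusivity

# A 50/50 mix of the two pathways: the early response is dominated by the
# fast component but keeps a long slow tail -- the "dispersive" behavior,
# distinct from any single-valued diffusion coefficient.
mixed = 0.5 * slow + 0.5 * fast
print(slow, fast, mixed)
```

Note the units: D enters as cm^2/s, exactly as the dimensional point above requires, and the argument of erfc is dimensionless.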
Max you write “Jim Cripwell can correct me if I have misunderstood, but that is my understanding of the real dilemma here.”
Absolutely correct. There is no empirical data to support CAGW. Zero, nada, zilch.
well apart from all that empirical data like ice cores, etc
He didn’t say it was zero. And zero sensitivity (to CO2) is not impossible.
lolwot, the ice core records show that climate shifts from warming to cooling at maximum CO2 and from cooling to warming at minimum CO2.
I for one don’t understand at all the basis for Jim’s claim “There is no empirical data to support CAGW.”
On the contrary, it’s a simple fact that there’s very much empirical data to support AGW (and perhaps also CAGW, although I don’t know what the C really means). There’s also some data to contradict AGW. All data put together provides the overall evidence. It’s legitimate to discuss the strength of the overall evidence, but claiming that there’s no empirical data in support of AGW is just false.
the ice core records show that climate can change a lot despite comparatively low forcings, meaning that climate is not governed by negative feedbacks that keep climate sensitivity low.
The learning point of that is “Introduce a large forcing from elevated CO2 at your peril”
@pekka
Please remind us where we should be looking for ’empirical data to support AGW’. Tx
Pekka
You say that there is empirical evidence for CAGW (i.e. a 2xCO2 temperature response of 2 degC or more).
Where is this?
Thanks.
Max
Pekka you write “It’s legitimate to discuss the strength of the overall evidence, but claiming that there’s no empirical data in support of AGW is just false.”
You are absolutely correct. AGW is real. Adding CO2 to the atmosphere will cause warming. The question is how much warming? And that is where the C comes in, which you don’t seem to understand. Increasing CO2 levels causes warming that is so small that it cannot be measured, and we don’t need to worry. It is only if the warming is catastrophic that we need to be concerned; that is what the C is.
As to lolwot’s comments on paleo data, this is just ludicrous. The paleo data has all sorts of problems with what it actually means. But we have excellent data from the 20th and 21st centuries which show no change in the way global temperatures have risen prior to and post the alleged start of CAGW. But we are supposed to disregard this excellent data, because the dubious paleo data can possibly be interpreted to mean that CAGW might just be real.
Give me a break!
Web, I ran across this paper recently. Perhaps it will be helpful to you in the distribution of your observations. Granted it doesn’t try to determine the entire vertical profile but it is a rather impressive start I thought.
http://www.agu.org/journals/jc/jc1205/2011JC007382/body.shtml
There are hundreds or perhaps thousands of papers that connect some empirical data to AGW. They all use empirical evidence at some level. All those papers do exist; nobody can deny that.
Jim Cripwell writes: “But we have excellent data from the 20th and 21st centuries which show no change in the way global tempeatures have risen prior to and post the alleged start of CAGW.”
Yes.
Excellent data like the Mauna Loa CO2 record which supports CAGW
http://www.woodfortrees.org/plot/esrl-co2
Excellent data like HadCRUT3 which supports CAGW
http://www.woodfortrees.org/plot/hadcrut3vgl
Excellent data like GISTEMP which supports CAGW
http://www.woodfortrees.org/plot/gistemp
Thanks for reminding us of this excellent data.
Latimer,
You ask: “Please remind us where we should be looking for ‘empirical data to support AGW’ ”
Skepticalscience is always a good place to start.
http://www.skepticalscience.com/empirical-evidence-for-global-warming.htm
Hope this helps!
PS Sorry for posting it twice. It ended up in the wrong place first time.
Once again, the aggressively stupid makes a command appearance. By the logic of the stupid, the daily temperature shifts from warming to cooling at maximum daylight and it shifts from cooling to warming at minimum daylight. Therefore, the stupid says the sun is responsible for cooling.
The stupid, it hurts.
Grade: F-
@pekka
Ooohhhh…..nice body swerve. But no cigar.
‘There are hundreds or perhaps thousands papers that connect some empirical data to AGW. They all use empirical evidence at some level’
What do you mean, ‘connect some empirical data to AGW’? Papers that go so far as to say ‘we note that this data is not inconsistent with AGW theory’? Which is a pretty meaningless statement. You might as well say ‘we note that this data is not inconsistent with biblical or koranic teaching’, or ‘we note that this data is not inconsistent with Manchester United winning the Premiership again next season’.
It doesn’t matter if such weasel words have been written once only or a zillion times by a gazillion different authors. They are not empirical data showing AGW.
Or the sceptical science link that Peter Martin (tempterrain) pointed me to.
Its argument boils down to: A is occurring. There is a theory that A can cause B. Therefore, if we see B, that is proof that the theory is right.
Which would be a pretty weak circumstantial (not empirical) case at the best of times. But it is made worse by the existence of circumstances where we know that B occurred without A, and where A occurred without B happening.
Whatever the argument here, it is not empirical data of AGW. It is data showing GW.
Got anything better?
“Captain doesn’t understand that phase transitions that remove heat will return that heat on the reverse transition.”
Webster obviously doesn’t understand that where phase transitions take place is important. Most evaporation takes place where the phase transition energy is lost to the upper atmosphere; most fusion takes place where the transition energy is retained. Where things happen is just as important as what happens :)
lolwot, you write “Thanks for reminding us of this excellent data.”
You provide references to two graphs showing a rise in global temperatures as a function of time. I agree they are correct. Now provide the proof, or the reference that proves, that this temperature rise was caused by adding CO2 to the atmosphere. That the rise noted is in any way different from similar rises that have been observed since records began about 150 years ago. That somehow, these rises are unusual.
When I have seen those references, let us talk again.
But it doesn’t magically reverse the direction of the heat flow!
The stupid, it does hurt.
This has always been a discussion of subtle effects and possibility of large impacts. One can say that small changes in temperature at an average temperature near 300K are insignificant. This is true on an absolute scale as 2+/-1 around 300K is small, yet the fact that water shows a phase transition at 273K means that much of the subtle effects need to be compared against differences relative to 0C and not 300K.
The non-GHG temperature of the surface is estimated at 255K or -18C, and with atmospheric GHGs that rises to 288K or +15C. The melting-point phase transition just happens to be close to the median temperature. Suddenly you realize that a couple of degrees is significant.
Furthermore, rate laws with high activation energies magnify the effects of small temperature changes. The huge heat capacities and significant latent heats of transformation can mask any changes at the moment and come back and bite us hard later.
So the earth may warm or it may cool. None of these skeptics want any funding going to atmospheric research because they think they can do it better themselves?
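The 255K figure above is the standard Stefan-Boltzmann effective temperature; a quick check, using commonly quoted round values for the solar constant and albedo (assumed inputs, not derived here):

```python
# Rough no-greenhouse effective temperature from Stefan-Boltzmann balance:
# absorbed sunlight, averaged over the sphere, equals sigma * T^4.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2 (assumed round value)
ALBEDO = 0.3         # planetary albedo (assumed round value)

absorbed = S0 * (1.0 - ALBEDO) / 4.0   # ~238 W m^-2 averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25     # ~255 K, i.e. about -18 C

print(f"absorbed flux: {absorbed:.0f} W/m^2, effective T: {T_eff:.0f} K")
```

The ~33 K gap between this and the observed ~288 K surface temperature is the conventional measure of the total greenhouse effect referred to above.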
@webbie
‘the fact that water shows a phase transition at 273K means that much of the subtle effects need to be compared against differences relative to 0C and not 300K.’
Why? Please provide a few concrete examples to persuade me that you are right. The difference between 10C and 20C is not 100% but (293/283)-1 = 3.5%
the data shows a sharp change in climate
this is of course the basis of CAGW
“By the logic of the stupid, the daily temperature shifts from warming to cooling at maximum daylight and it shifts from cooling to warming at minimum daylight. Therefore, the stupid says the sun is responsible for cooling.”
Yes, that is by the logic of the stupid, I thought you would understand simple logic.
Sometimes I think there’s hope for believers, but then I’m reminded that it’s easier for a camel to pass through the eye of a needle.
I’ll try again. Allegedly, there’s a correlation between CO2 and temperature in the ice core records. I accept that and I think everybody does, although there might be disagreements about accuracy. Now if there’s a correlation, it must be so that in general global climate shifts from warming to cooling at maximum CO2 “forcing” and vice versa. That’s all, I am not claiming anything else here. You seem not to like the point.
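The turning-point argument can be made concrete with a toy model in which CO2 lags temperature by an arbitrary phase (a purely illustrative value, not calibrated to the ice cores): at the CO2 maximum, temperature is already falling.

```python
import math

LAG = 0.5  # CO2 lags temperature by this phase (radians); arbitrary toy value

def temperature(t):
    return math.sin(t)

def co2(t):
    return math.sin(t - LAG)   # same cycle, delayed

# CO2 peaks at t = pi/2 + LAG; check the temperature trend there
# with a centered finite difference.
t_co2_max = math.pi / 2 + LAG
dt = 1e-6
temp_trend = (temperature(t_co2_max + dt) - temperature(t_co2_max - dt)) / (2 * dt)

print(temp_trend)   # negative: temperature is already falling at the CO2 maximum
```

This only shows that the observed sign flip at the extremum is what any lagged oscillation produces; it says nothing by itself about causation in either direction.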
@lolwot
‘the data shows a sharp change in climate’
It does? It got warmer from 288.1K to 288.8K in about 100 years? Wow! You consider that to be a ‘sharp change’? Looks like a very, very stable system to me.
Edim, I don’t think they can understand simple logic. They find one thing they understand and quit thinking, as if that is the only answer that matters; they have no curiosity or tenacity to solve the whole problem.
Web nearly finds a limit, 273K, but forgets that salt changes that, so there is a range of -1.9C to 0C that has to be considered. Then he neglects to look for an upper limit, just assuming 373K or 100C is that limit. But the force of gravity limits the atmospheric pressure, so the upper limit is well below 100C. If the oceans were small, that limit would be 37C, but with the big ocean and the -1.9C to 0C thanks to seasons, 32C is the limit. The reason engineers are skeptics is that they could actually build and control a steam engine :)
@lolwot
‘this is of course the basis of CAGW’
Not much of a basis, is it? If we hadn’t gone to all the trouble of collecting zillions of datapoints from around the world, losing most of them and ‘adjusting’ the rest, then slicing and dicing them to sixteen decimal places, we’d probably never have noticed anything at all unusual.
Are there any truly ‘unusual’ features of the weather/climate that would have tipped us off?
Sea level… nope, carries on slowly going up as it always has done.
Storms and hurricanes – nope. If anything going down, not up.
Ocean pH – no measurements of any change.
Growing seasons – much the same.
Ocean currents – they jes’ keep rollin’ along
Ice cover – pretty stable.
The only unusual feature (and I’d imagine that there is a very strong correlation between belief in CAGW and this measure) is:
Employment in the ‘climate’ industry – exponential growth
Funny Old World innit?
lolwot, you write “the data shows a sharp change in climate
this is of course the basis of CAGW”
Wow! This is supposed to be a scientific discussion. Just take my word for it, says lolwot, the data shows what I claim it shows. I have seen no evidence whatsoever that the graphs that you have presented prove that CO2 causes the observed rise in temperature.
Please, lolwot, could we discuss this issue in a scientific manner. I don’t think our hostess provided this excellent blog so that you can post this sort of drivel.
The excellent global surface temperature data shows about 0.8C warming in the past 100 years. That is a lot compared to how much the Earth typically changes in temperature over such a time frame. What we have seen so far is unusual warming, if it continues it may even become unique.
At the same time we see CO2 levels shooting up at staggering rates unknown to us in Earth’s history due to human activity. Ocean pH levels are dropping at a correspondingly staggering rate.
Such large and sharp change, with further change in tow, is the basis for CAGW.
@lolwot
‘Ocean pH levels dropping a corresponding staggering rate’
Are they indeed?
I’ve been looking for the data that confirms this oft-held mythical belief for three years now and I ain’t found it yet. The best I’ve found is a total of about 100 measurements taken off Hawaii in two separate six year spells. Not even a continuous series using the same instrumentation. And even they don’t really show anything ‘statistically significant’.
Perhaps you can end my quest by pasting a link to more, more reliable data? Thanks.
Hi, Mosh. I like your analogy with the runner. You said:
“We know that if you have a tail wind you will run faster. If you have a head wind you will run slower. Simple physics tells us that.”
This highlights a general problem with linear thinking in the complexity of the global climate. With linear thinking, a headwind does make you run slower. But a headwind could also cool you more efficiently, giving you more energy, and allowing you to run faster. This is non-linear thinking, and shows that, if the two forces balance out, then the “sensitivity” to a headwind could be zero (within a certain range).
This is (I think) one of Jim C.’s points, and shows how sensitivity in the climate could be indistinguishable from zero.
The Hawaii measurements are bridged by calculations showing the expected decline in pH as a result of elevated atmospheric CO2 levels.
Here’s the significance of what we are doing with CO2 and ocean pH in context of the last few hundred thousand years:
http://www.windows2universe.org/earth/images/OA_big_sm.jpg
very large and fast changes outside of natural variation due to man are taking place. This is the basis of CAGW.
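For scale, a deliberately crude estimate: in the unbuffered pure-water limit, [H+] scales roughly as the square root of pCO2, so the pH change from 280 to 400 ppm comes out near -0.08, the same order as the ~0.1 unit drop usually quoted for the (buffered) surface ocean. The ppm values are the usual round numbers, and real seawater carbonate chemistry is considerably more involved than this sketch:

```python
import math

# Toy estimate of the pH change from rising CO2, in the unbuffered
# pure-water limit where [H+] scales as sqrt(pCO2). Real seawater is
# buffered, so this is an order-of-magnitude sketch, not ocean chemistry.
pco2_preindustrial = 280.0   # ppm, assumed round value
pco2_modern = 400.0          # ppm, assumed round value

# pH = -log10[H+], and [H+] ~ sqrt(pCO2) here, hence the factor of 0.5.
delta_ph = -0.5 * math.log10(pco2_modern / pco2_preindustrial)
print(f"approximate pH change: {delta_ph:.3f}")
```

Nothing in this little calculation settles the measurement dispute above; it only shows the expected magnitude is a tenth of a unit, not whole units.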
@lolwot
(Sorry – also posted elsewhere…wordpress seems to be having trouble again)
‘The hawaii measurements are bridged by calculations showing the expected decline in pH as a result of elevated atmospheric CO2 levels’
Surely even you can see the application of circular reasoning in that remark.
Your theory says ‘you expect’ a decline in pH. When there are no measurements, therefore, you insert your ‘expected’ data. And lo! In a masterpiece of circularity, the inserted data shows the effect you wished to prove.
Wow…I mean wow!
Do you really think that such a piece of twisted logic is going to get past the average teenager, let alone the Climate Etc audience?
BTW – your ‘very frightening’ – but completely anonymous and uncredited graph supposedly of the pH of the last few hundred thousand years doesn’t even match the Hawaii data. It purports to show a recent (last 10K years) pH plummet from 8.2 to 7.8. But the Hawaii figures are still at 8.2.
Got anything better? The pH stuff you have presented so far is, quite frankly, bollocks.
“Surely even you can see the application of circular reasoning in that remark.”
Sorry, I didn’t explain it enough. As far as I am aware they are calculating pH from two observed quantities. I.e., they are not calculating it from some theory that pH should drop. It does match that expectation though.
“BTW – your ‘very frightening’ – but completely anonymous and uncredited graph supposedly of the pH of the last few hundred thousand years doesn’t even match the Hawaii data. It purports to show a recent (last 10K years) pH plummet from 8.2 to 7.8. But the Hawaii figures are still at 8.2.”
That graph is what is expected; I wasn’t presenting it as measurements (apart from the atmospheric CO2, of course). The end point goes up to 2100 (hasn’t happened yet), after all.
I am pointing out that this is where pH and CO2 are expected to be heading – far outside typical natural ranges in a very short amount of time.
Latimer,
I don’t think you’ve read the information I gave you on the skeptical science link. Or if you have, you just haven’t understood it. Try again and come back with a question or an argument which shows you have at least some scientific comprehension.
There is no ‘proof’ in the mathematical sense, but there is a wealth of evidence that increasing CO2 has caused temperatures to rise and will cause temperatures to rise even further unless CO2 levels are brought under some control. You can argue, like Judith, that the exact amount of warming is uncertain. She says it could be anything between 1 and 6 deg C for a doubling of CO2 levels. Of course, scientifically we’ll not know for sure until we try the experiment.
However, from an engineering viewpoint it is a dangerous experiment to try. You wouldn’t appreciate Boeing doing experiments on their aircraft while you were a passenger, I’m sure!
I know it’s a waste of time trying to convince you of all this. You obviously don’t have an open mind. Your objections to the scientific case are based on your political ideology, which means that you’re unlikely to ever change.
@tempterrain
Yes I did read the ‘sceptical science’ article and yes I did understand it.
It makes a fairly weak circumstantial case for there being a causative effect from CO2 rising and temperature increase. And I have absolutely no objection to anybody presenting that argument.
But there is no ’empirical data’ of this causative effect. As you say, the crucial experiment has not been done.
It is mendacious to claim that there is empirical data to show it, when there is not. Please refrain from doing so.
You also state:
‘Your objections to the scientific case are based on your political ideology which means that you’re unlikely to ever change’
Later in this thread you state something similar about Nic Lewis.
‘I’d say his motives are ideologically based and he started off deciding to do what he could to discredit the scientific case.’
I think you seriously need to rethink that part of your argument. The key thing about science, surely, is that it is determined by the evidence independent of faith or belief or ideology.
And I think that your constant focus on others ‘political ideology’ tells us more about your own motives than you imagine.
If you look at the temperature over the past ten thousand years and the CO2 over the past ten thousand years, CO2 rose and temperature went up and down with no correlation with CO2 for the past seven thousand years. The temperature response to CO2 is 0% plus or minus 0%. That is what the data shows us.
Judith,
Interesting how, whenever anyone wishes to review data sets, they suddenly disappear???
In physical evidence science, everything is ALWAYS out in the open to scrutinize at will. But this is ignored by mainstream scientists, as tradition must be protected over truth, evidence and planetary understanding.
Did you realize that the velocity and distance at 85 degrees latitude, compared to the equator’s velocity and distance, amount to a 12-day difference in a single planetary rotation?
This means that the measurement of distance at the equator for 24 hours and the measurement at 85 degrees latitude have different time periods due to the immense velocity difference. If the equator had the same velocity, we would be having 6 days of sunlight and 6 days of darkness… wonder what that would do for temperature data and the environment?
I find it interesting that we who work up North see six months of dark and six months of light, thanks Joe.
This is very interesting, not the missing data or that stuff, but the ocean surface temperature looks to be tightly constrained by the specific volume of the atmosphere and the freezing points of water. With that tight of a constraint, diffusion would also be limited into the deep ocean. Kind of an interesting shoe to drop about now.
I clicked on the link provided for the paper and noted this:
This upper bound implies that many AOGCMs mix heat into the deep ocean (below the mixed layer) too efficiently. …
I had read that before in Hansen’s recent paper on earth’s energy imbalance. So I looked again and Forest (2006) is cited where he discusses that.
Yep, diffusion from the surface is overwhelmed by evaporation between 30 and 32 C. Unless gravity changes or water is not allowed to freeze, that is the limit. The Dead Sea, with no ice, is limited to around 37C.
So you are agreeing with the data losin’, code withholdin’ scoundrel?
JCH, speaking of lost data once again, where did PJ dump his stuff?
No really, 30 to 32C is the high limit. -1.9 to 0C is the low limit in the oceans. The average is fixed between 294 and 294.4K. If you look at AQUA, where the peak solar is 40Wm-2 above average and the valley solar is 40Wm-2 below average, the average surface temperature ranges between 294 and 294.4K. 30 to 32C is the relief temperature at atmospheric pressure; the SST is regulated by the properties of salt water. In the Dead Sea, where there is no ice and no thermocline with a temperature fixed to the range between the surface temperature and the freezing point of ice, it is higher. I would say they should have included another layer in their model that considered the thermal impact of the Earth’s heat engine, the water heat cycle.
http://en.wikipedia.org/wiki/File:Pack_ice_slow.gif
If you assume an infinite heat sink for the ocean, instead of recognizing it is a thermal reservoir for the heat engine, you get incorrect results. They should have consulted an engineer and a statistician :)
BTW, the 1998 super El Niño is a perfect example of the relief valve opening. The peak temperature of the 1998 event would be the maximum temperature, and that can’t last long because water vapor is a negative feedback above that temperature. Pretty neat engine, almost idiot proof.
RE :
Pretty neat engine, almost idiot proof
Which is probably a good thing considering the idiot mechanics who want to mess with it.
BTW, For the radiation fans, they could look at the portion of the solar spectrum absorbed by water vapor that also penetrates below 10 meters in the ocean. I think that they will find that with a long enough path length, atmospheric water vapor tends to regulate the energy absorbed at and around the ocean thermocline layer at about 100 meters. That would kinda help explain the inexplicable solar prolonged minimum impact. Increase atmospheric water vapor, you increase the impact. Climate is a fun puzzle.
We used to have Fred Moolten to explain these things for us and reassure us that despite all appearances, Dr Forest’s results should continue to terrify us. How times change. De haut, en bas.
“Climate Science where Quality is Job uhhh …… quality? Did you say quality? We don’t need no stinkin’ quality! This is science, What does quality have to do with it? The only quality we need ’round here is the ability to get more grants. You want quality, you’re gonna have to pay double, maybe triple. The people are lucky great men like us even bother to share our exalted thoughts at all. We can’t be bothered with stupid details about quality. Give us our money and shut the heck up.”
Nic Lewis’ academic background is mathematics, with a minor in physics, at Cambridge University (UK). His career has been outside academia.
Another Citizen Scientist.
when a researcher loses the raw data, the study is void.
Latimer writes : “Nobody would think that they are working on ‘the most important a problem humanity has ever faced’ given their disregard for keeping any sort of audit trail or ‘professional standards’ records.”
Boy, is that a good point. And yet of course, one gets the feeling that it’s worse than simple carelessness, but an intentional disappearing of data that would quite likely discredit the work.
The CAGW edifice is infested with worms and will soon come tumbling down. It’s all rather shocking, to be honest. But I’ve always been somewhat naive.
It is all pretty funny. The impact of CO2 as a GHG ranges from around 6 to 30%; natural forcing ranges from around 7 to 50%. Without being able to properly narrow those ranges of uncertainty, CO2 has to be the problem. The uncertainty of how much impact is a cause for alarm.
I agree with that completely, there is cause to be alarmed with the quality of climate science.
And yet of course, one gets the feeling that it’s worse than simple carelessness, but an intentional disappearing of data that would quite likely discredit the work.
=================
A very important paper, cited by 100 other papers, published 6 years ago, and after a year of asking, the author now claims the data is lost.
Give us all a friggin break. Seriously. Who in their right mind would not save the research material for one of the single most important papers of their career?
Unless they knew that the release of the data would harm their career more than saying it was lost. This suggests strongly that the authors are aware of a problem and are seeking to cover it up, or they are singularly incompetent, or both.
In any case, it means there is serious reason to suspect that the results cannot be reproduced and the paper should be withdrawn. All papers that rely on this paper need to be withdrawn as well.
The reason for this is simple. Such an action will force the scientific community to clean up its act.
Until a paper can be reproduced there should be a high degree of risk in citing it in your research, because nothing says that the paper is valid. Certainly not peer review.
If you cite a paper that has not been reproduced and validated, and the paper is later found to be wrong, then you should pay a price for having used the paper without verification in your own paper.
In that fashion science will quickly clean house and require papers to have data and method made available sufficient for independent replication of the result by a HOSTILE examiner.
“I made Dr Forest aware over a week ago that I had the CSF 2005 and SFZ 2008 processed datasets, and invited him to provide within seven days a satisfactory explanation of the changes made to the data, and evidence that adequately substantiates his apparently incorrect statements………. I have received no response from Dr Forest.” – Nicholas Lewis
Why is Nic a pompous arse??
I invite him to provide me with a satisfactory explanation for this within seven days.
Lol.
Nic Lewis a “pompous arse”? I must be missing something. It is the good Dr Forest and GRL who would appear to be the arrogant ones. Raw data “lost”, no further information forthcoming, and stony silence. Simple as that, and the paper stands. Pompous? Indeed.
That the best rebuttal you can come up with? “He’s a pompous ass?”
Wow.
Michael
Interesting how you personalize the issue.
Nic has been waiting a year. Some people here might speculate on
why the long wait as the deadline approaches.
I invite you to answer why you bring up personalities and avoid the fact of the year-long wait for data.
S. Mosher – the letter to the journal is dated June 23, 2012. He says he gave Forest seven days to respond to a specific set of demands more than a week ago.
Has the journal editor known about this situation for the entire year, or is this letter his first notification?
“Has the journal editor known about this situation for the entire year, or is this letter his first notification?”
Dear Dr Calais:
I contacted you last summer about the failure by Dr Chris Forest to provide requested data and computer code used in Forest 2006, an observationally constrained study of key climate parameters
################################################
Yeah, I saw that later, and then ran off to the grocery store, the pharmacy, etc..
hehe, I figured you just looked quickly at the date. It’s what I did the first time. I’m sure there is more to the story. Wish these kinds of things didn’t happen.
Making arbitrary demands so one can then make an issue of others’ failure to comply with those arbitrary demands is, admittedly, a clever tactic.
If there is some law that states Forest must reply to Nic Lewis in 7 days, I’d like someone to show it to me.
If I’d received a request worded like that, I’d be putting it at the very bottom of my ‘To Do’ list.
Michael,
You know, Michael, when I read your comments I can’t help but get the impression they’re all written by someone with undescended testicles. Grow up, will ya!
Ain’t projection grand!
Michael,
“Ain’t projection grand!”
Dunno, Michael. I mean, like, never having scored any of that “projection” good-stuff, myself, I have no idea whether its “grand” or not–that’s your bag, man.
And, oh by the way, Michael, just where do you dig up words like “grand”, anyway? I mean, like, when I read your last comment I got, like, this instantaneous, creep-out flash of you right there in-my-face chirping out a cracked-voice “grand!”. I mean, like, you know, Michael, you couldn’t find a more uptight, doofus, old-fogey term than “grand”, and then to get smacked with it in that twitty, squeaky, weenie, “Delinquent Teenager” little voice of yours. Really weirdo bad vibes, dude. Jeez.
It’s not mike’s fault, I blame the education system.
Seeing as Nic has waited a year for data without response, the seven days deadline seems like a final request, along the lines of “I have waited a year, I am going to press in one week unless I receive something new”
Michael,
One would think that one pompous ass would know the answer to this question. Perhaps it is Nic Lewis not resembling you that is the source of your questioning.
Seems like an opportune time to remind ourselves of the usual standards expected in climatology – as explained by Prof. Phil Jones of UEA/CRU to the British Parliament (*).
‘His most startling observation came when he was asked how often scientists reviewing his papers for probity before publication asked to see details of his raw data, methodology and computer codes. “They’ve never asked,” he said’
‘Jones’s general defence was that anything people didn’t like – the strong-arm tactics to silence critics, the cold-shouldering of freedom of information requests, the economy with data sharing – were all “standard practice” among climate scientists.’
From
http://www.guardian.co.uk/environment/cif-green/2010/mar/01/phil-jones-commons-emails-inquiry
by experienced and widely respected commentator Fred Pearce.
*He has published over 200 papers in a long climatological career and some of his e-mails became public in the Climategate release. Hence his summoning to give evidence. Jones is responsible for the widely-used HadCRUT datasets.
Same for McIntyre. As a reviewer he asked and was told that no one had ever asked before.
See also the original hockey stick. The scientific poster boy for all of climate science. Plastered all over the IPCC’s work. Overturned everything previously believed about the temperature history of the last 1000 years. But no one ever bothered to look at the flawed work before embracing it. Careless, no — reckless bunch. Reckless with their reputations, reckless with the science, reckless with the public purse, reckless with the public’s lives.
I wouldn’t trust them to be able to clean a dog kennel. My teenagers are more conscientious about keeping their rooms clean. Which are a mess.
It’s by no means limited to climatology — I was recently speaking with a very good astrophysicist (one of those time/space theoreticians) who said that he often doesn’t check the math, let alone the raw data, when doing peer review, and knows that others don’t, either. Basically, they read through the article to see if it all makes sense, and if nothing suggests a calculation error they don’t investigate. (Also, I’m told that journal editors often don’t give reviewers sufficient time to really dig into articles, so it’s not merely a matter of laziness.)
How to make Money in Climate Science
1. find a major unanswered question.
2. question top scientists to see what they will accept as an answer
3. cherry-pick data and methods to arrive at that answer
4. re-label this technique “training” – it makes it sound intelligent.
5. publish the result.
The results will seem correct to fellow scientists, especially those at the top, so they won’t bother to check the math. Everyone will be impressed you have answered the hard question. More so because you will have proven their best guess correct. You will advance in your career in science. Fame and fortune will follow.
If anyone does question the results:
6. lose the data and methods
Many academic libraries are starting digital repositories. See http://www.jiscinfonet.ac.uk/infokits/repositories/ as a starting point. It would be good practice for all academic researchers to begin using them not only for archiving final versions of publications and datasets, but also the nuts and bolts of work in progress.
There’s a tiger behind Gate #3 and a beautiful woman behind Gate #1.
===============
With it a tossup who will eat you out of house and home first.
Give the beautiful woman the house and be done with it … take the tiger for a walk.
When it comes to history…the first impulse of the Climatists of warming alarmism was to ignore history. Then the Climatists believed they could use statistics to predict the future. And then they did but then they lost the data. Climatism has been the greatest display of ignorance modern man has ever witnessed. Does anyone believe that in case of emergency we should call a Climatist?
http://evilincandescentbulb.wordpress.com/2012/06/24/1887/
Tch. First McKitrick looking bad when Mosher investigates for fifteen minutes, now Forster after Nic Lewis persists for almost two years.
It’ll be interesting to watch these both through to a conclusion.
Err; Forest, not Forster. Dyslexic day already?
Yad dylxsice?
KO!
Bart can be forgiven, Nic’s first shot was related to Forster (& Gregory) though not AT them.
Ya Bart, that case is going to get even weirder. I’ve given Ross the population density using the AR5-approved dataset (HYDE 3.1) for every one of his grid cells. It’s up to him to decide if he wants to reprocess and see if it makes a difference. If I’m allowed to review the SOD (I was a FOD reviewer), what the heck can I say if they cite his paper? The review process is not about auditing the science. That’s one reason why I think WG1 should be a living document; cutoff dates lead to all sorts of nasty politics. Puts me in a very strange situation. Shrugs.
Steven Mosher | June 25, 2012 at 7:20 pm |
I’m sure Dr. McKitrick will do his best to accommodate. He himself was in a not dissimilar situation wrt BEST; I admit to mercilessly jabbing at him while he was exposed.
My own bias toward people who audit, look into raw data, do the math for themselves with evenhanded skepticism, review, and make real and meaningful contributions despite the wickedly improvised circumstances of much of the data (as you’ve come across with some aspects of the Canadian weather data, and must appreciate the frustration more than most) tends to increase my regard for your efforts in this.
My inkling is that reprocessing may even improve McKitrick’s results. After all, the hypothesis he puts forward isn’t completely implausible, at first glance. He could end up thanking you.
Yes, while looking at population growth I did find that it explained some of the variance in temperature trends, so a finer grid of data may actually help his case. Personally I just like to use the best data and see where the chips fall, knowing that nothing I could find would overturn radiative physics. Arrg, don’t get me started on Env Canada and sunshine; I just regained my peace of mind.
Steven Mosher | June 26, 2012 at 12:10 am |
“I just like to use the best data and see where the chips fall.”
I’m closer to “I just like to use the data best and see where the chips fall.”
(Note lowercase.) ;)
Which is what makes cases like this more frustrating. It’s entirely plausible that Forest’s work is no worse than McKitrick’s. It’s even probable. But with the problems Nic Lewis is reporting, we all must reduce our confidence in the body of knowledge by just that much more, absent a full and detailed accounting.
And so simple to avoid. Cradle to grave public cloud data repository for all observations. What could be simpler?
@bart r
I agree entirely. Open data throughout is the only way to go…however inconvenient it is for the ‘scientists’. Making ‘their’ data available to all is just the minimum price of entry to the game
But also with the caveat that it is not physically possible for my ‘confidence in the body of knowledge’ to fall any further.
The unprofessional attitude and behaviour of the climatologists convinced me it was mostly crap a long time ago.
“Knowing that nothing I could find would overturn radiative physics.” Nor will you find anything that will overturn basic thermodynamics. Radiative physics and thermodynamics are joined at the scientific hip.
The outside-in radiative approach will meet the inside-out thermodynamic approach, if they don’t – one is wrong. Thermo right now is imposing a stronger limit thanks to the physical properties of water. Those properties are constrained by the latent heats of fusion and evaporation, barometric pressure, and the impurities of the water. CO2 does not change gravity, has a small impact on the temperatures of freezing and evaporation and cannot create energy, so solve the basic thermo and then you can go nuts playing with the radiant, outside of the moisture envelope. Then they will meet in the middle.
Why am I not surprised to discover that Chris E Forest works in the same building (Walker) as Michael E Mann at Pennsylvania State University? And if I guess correctly, on the same floor (rooms 507 and 523 respectively).
An alumnus tells me that it is easy to identify both their offices. Just follow the sound of the shredders running 24/7. It is colloquially known as ‘The Enron Floor’.
Speaking of Forest, this is interesting: “He is an IPCC AR5 Lead Author on Chapter 9, “Evaluation of Climate Models” in Working Group 1.” from his bio at http://ploneprod.met.psu.edu/people/cef13/.
Forest works for Mann:
http://www.essc.psu.edu/essc_web/people/index.html
Poor Dr. Mann. He must be the unluckiest academic in history.
Whenever there is a scandal about missing data or FoI refusals or incompletely described methods or almost anything else in climatology it is always one of his close associates who is involved. What terrible coincidences.
Tomfop: “We used to have Fred Moolten to explain these things for us and reassure us that despite all appearances, Dr Forest’s results should continue to terrify us. How times change. De haut, en bas.”
I’ve been wondering where old Fred went. Anyone know?
He was checking in on Lucia’s until he noticed that a lower sensitivity started making sense. Something about ocean layers and e^(-t/RC). Way over my head.
He got tired of people pointing out the flaws in his logic and now posts at sites that agree with his thoughts?
The ‘ocean acidification’ that he was so worried about finally got him and he has quite properly been neutralised (chemist’s joke :– )
Alternatively he bored even himself witless with his long rambling sentences that began with the long list of reading and judicious assessment (but free of facts or anything concrete) of not only the topic in hand by of all vaguely related topics and then told us that maybe he hadn’t really got an opinion but led you to believe that he was possibly running out of puff and coming to the end of the first sentence when lo! he would get a second wind, remind us once again of all the reading he claims to have done and at that point we would all start to think that an early death from ennui might be preferable to yet more torture by vacuity before he wandered off into a fourth subordinate clause………….
Beautifully done Latimer. That’s a fine piece of writing. Not kidding. You have a satirist’s gift. Laughed out loud when I came to “when lo!”
Thanks Dad!
We lost Fred, but now we have Fan. I don’t think that’s progress.
Fan seems a lot like the idiot Robert?
Fan’s a lot better natured than Robert. You can’t ever get him rattled. I know this from previous history at another blog. I don’t recall Robert ever using Fan’s signature smilies.
True, but I seem to recall that Robert turned over a new leaf at the start of year, sounding both a bit nicer and a bit more sensible. I won’t hold my breath waiting for fan to sound sensible.
Agreed. That ain’t happening.
Fan is more concise. For this relief much thanks.
No comparison. Fred knows the science far better than Fan. you’re right, it’s not progress.
They have one thing in common; they both throw citations in to the middle of a blog discussion which sends the discussion off into the weeds. But Fred’s citations usually had something to do with the topic.
Thank you, Nic Lewis, for a careful, McIntyre-quality audit of Forest et al.’s 2006 work.
It’s always gob-smacking how much weak and poor-quality work is used to support the CAGW hypothesis — and how many billions of dollars have been wasted to “mitigate” what appears to be a largely-imaginary problem. What a surprise that the politics of the scientists involved almost always favor such “solutions”. I’m currently rereading some of Feynman’s classic remarks on this topic (“The Pleasure of Finding Things Out”), and it is truly remarkable how much self-deception and wishful thinking goes on in your field, Dr. Curry. Good luck on the house-cleaning!
Peter D. Tillman
Professional Geologist, advanced-amateur climatologist
Thanks!
Nic Lewis
Nic Lewis
Are you going to submit a paper on this? As Judith points out, this would be in line with GRL’s policy, so if you seriously think you might be on to something here then this is probably the most effective way to pursue it.
You could even try submitting it to one of the new open-review journals instead of GRL itself, e.g. one of those published by the EGU, so we can see the reviews, and anyone who wants to will be able to comment.
Deadline for papers to be submitted to be eligible for citation in the IPCC AR5 Working Group 1 report is July 31st….
Cheers
Richard
Richard
I hope to submit a paper by 31 July, but it will be tight. I fear that the paper will be too long for GRL. It will concentrate on serious flaws in the Forest 2006 statistical inference method. The existence of two competing datasets complicates matters, but that is too bad.
Nic
And GRL will reject the paper because without Forest’s data, you have no basis to criticize Forest’s conclusions.
Great idea, submit a paper before July, then it can be considered for inclusion in IPCC AR5 by the authors of Ch 9, such as … C Forest.
I hope Dr Forest clears this up; otherwise he will be included in the sequel to the “Delinquent Teenager” by Donna Laframboise, which would drill down below the immature IPCC and look at the childish behavior of a small but influential group of climate scientists.
In other scientific fields, authors have been required to withdraw papers because they could not produce their original ‘lab notes’ –> the daily handwritten diary of the procedures and results for their study.
Not even being able to locate the original data, and the existence of two different data sets purported to be identical, raises so many issues that, if true, should call for a written withdrawal from the authors that includes an explanation of the problems.
If Forest, Stone and Sokolov wish to save face, in the instance that the original data cannot be located and shared with the journal editor and study auditors, they could simply re-do the paper, with the same data sets extended to present time, and replicate their own results, publishing a new paper. This approach allows them to validate their previous work, while getting another 5 years of data into the calculations showing the world that their methods and results were and are correct. Plus, they are guaranteed that the new paper will be published (after review). Of course, this time, they would have to archive all the data sets, the methods, and the codes in keeping with the Royal Academy report.
If they are unwilling to do so, then something smells fishy.
In fact, it would not be unreasonable (perhaps unlikely, though) that Forest, Stone and Sokolov could ask Bruno Sansó, Charles Curry, and D Zantedeschi to join them in the new paper to sort it all out.
Kip Hansen:
In fact, there is 16 more years’ data available, since, rather surprisingly, Forest 2006 used data ending in 1995.
Unfortunately, simply reproducing Forest 2006 using updated data, and ensuring that it was correctly processed, would not produce valid PDFs for the climate parameters, particularly climate sensitivity, since the statistical inference method is faulty.
Charles Curry and David Zantedeschi have left climate science for greener (?) pastures. It is unfortunate that many of the best young researchers leave – particularly those with statistical expertise, which is in very short supply in climate science.
Perhaps they left because the warming stopped. It is likewise suspicious that Forest, working in 2005, used data ending in 1995.
Nic,
Yes, if I understand your view — you think that the method/model used in Forest 2006 was faulty, therefore the results are faulty.
Forest probably thinks he was/is right and might like to re-do his paper with the additional data to prove that he was correct and justify his model and methods. He has every right, even an obligation, to attempt to validate his own earlier work when it is called into question. This would be his job.
If the statistical inference model he used is obviously and definitively faulty (and not just a matter of differing scientific opinions) – regardless of the missing original data – then a paper should be able to be produced and published refuting the paper and its findings on that basis alone. Guess that would be your job.
Kip,
Indeed so.
Interesting that. Pre-1995 raw data “lost” and data after 1995 not included. Wonder why? While a good number of observers have focused on 1998 as the beginning of global temps leveling off, Lindzen in particular has argued that statistically speaking the turning point was in fact 1995.
Kip, you have to sign your lab book pages and, every week or so, explain what you have done to someone and have them countersigned.
There is no approved electronic data store so we have to, no joke, print out our graphs/spreadsheets and paste (NO TAPE) them into our books.
What is “raw model data”? Isn’t all data from models, by definition, cooked?
P.E.
Yes, but some model data are less cooked than others. At least with a model like the MIT one used in Forest 2006 one can (if the descriptions of it are correct) set the key climate sensitivity, effective ocean diffusivity and aerosol forcing levels independently and with some confidence (I’m not the person to ask how much) that the simulated results reflect those settings. As I understand it, with full-scale AOGCMs it is not practicable even to set ocean diffusivity to zero, for example. (I am willing to be contradicted on this by any climate modelling expert.)
There are obviously attractions in very simple climate models like 1D EBMs (energy balance models). But they don’t seem to provide informative enough data to enable a well constrained estimate of climate sensitivity to be obtained. Maybe there is a middle ground.
Nic
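The constraint problem described above, that simple models with a few adjustable parameters can match the same observations in more than one way, can be sketched with a toy zero-dimensional energy balance calculation. All values below are illustrative assumptions, not parameters of the MIT model or of any published study:

```python
# Toy zero-dimensional energy-balance sketch (illustrative assumptions only).
F_2X = 3.7  # W/m^2, canonical radiative forcing for doubled CO2

def equilibrium_warming(forcing_wm2, sensitivity_k):
    """Equilibrium warming: dT = forcing / lambda, with lambda = F_2X / S."""
    feedback = F_2X / sensitivity_k          # climate feedback parameter, W/m^2/K
    return forcing_wm2 / feedback

def transient_warming(forcing_wm2, sensitivity_k, kappa):
    """Transient warming with ocean heat uptake efficiency kappa (W/m^2/K)."""
    feedback = F_2X / sensitivity_k
    return forcing_wm2 / (feedback + kappa)

# At equilibrium, doubled CO2 with S = 3 gives 3 K by construction.
print(equilibrium_warming(F_2X, 3.0))        # ~3.0 K

# The rub: very different (S, kappa) pairs give almost the same transient
# warming, so surface data alone cannot separate them.
print(transient_warming(F_2X, 3.0, 0.70))    # ~1.91 K
print(transient_warming(F_2X, 6.0, 1.32))    # ~1.91 K as well
```

The near-degeneracy in the last two lines is the toy analogue of why studies of this kind need ocean as well as surface data, and why jointly estimating Kv and Faer alongside S matters.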
Very nice piece of work.
Mind you, if you want to see cooked data you need look no further than the historic temperature data sets. They have been systematically adjusted by the likes of Phil Jones, who received a substantial EU grant several years ago under the ‘IMPROVE’ project to look at historic data sets. The temperatures were found to be ‘warm biased’ so were adjusted to match the model expectations.
The painstaking work involved can be found in ‘Improved understanding of past climatic variability from Early Daily European Instrumental Sources.’
Over the five years or so that I have been writing articles based on historical records I have come to suspect that the 30 year period commencing 1710 was around as warm as today.
tonyb
Tony.
IMPROVE consists of one station in crutem4
http://www.cru.uea.ac.uk/cru/data/temperature/crutem4/station-data.htm
Mosh
I see Jones is quoted seven times in the references within your link. I seem to remember a conversation with Nick Stokes a few weeks ago when he said the IMPROVE data was merely available and researchers had the option to use it. Interesting to see it was actually used in crutem4.
It’s very good detective work, but after reading the book twice (a real labour of love, as it’s very heavy going) this supposed warm bias in the temperature record concerns me, and the way in which it is adjusted is somewhat subjective.
tonyb
Mosh
I was intrigued by this from crutem4
“Mali, D.R. Congo plus a few others. Series given to CRU by various academic visitors.”
This relates to 60 stations. It all seems a bit casual
tonyb
Yes Tony.
I am doing some work creating a new series for a particular region, and the first step was to compare the CRUTEM4 data with the sources that I have. Interestingly, in the region I am looking at, the CRUTEM series has early-year adjustments that raise temperatures by 1C or more in summer months.
I know that people love to find the place where the early record is cooled; funny that I should find one where it is warmed.
Ideally the surfacetemperatures.org project will build a first-class, all-inclusive repository with complete traceability. We know, however, that some monthly series can never have their daily underlying records recovered.
Tony, it would be interesting to see your documentary reconstruction for, say, 1700 to 1850. Sources and methodology of course.
Mosh
Are you aware of the Mannheim Palatine series of records?
tonyb
Nope.
Mosh
I give you the opportunity of giving me a one word answer and you take it.
There is a series of records from a network of stations that predates GISS by 200 years. One was created by the Royal Society, but the most famous was the network created by the Mannheim Palatine. These used standardised records, methodology and instruments. I got the data from the Met Office, and they are the historic records that Phil Jones and his colleagues are systematically working through; they appeared in CRUTEM4.
If you are interested I will send you some of the Met Office PDFs. I hope to write an article on these historic networks shortly.
I have not specifically looked at 1700 to 1850, but as for methodology and sources, I have quoted this to you three times; I reproduce this from a forthcoming article:
“Those interested in learning something of the nature of historical climatology and how material is compiled, might find this comprehensive article on the subject interesting.
http://www.st-andrews.ac.uk/~rjsw/papers/Brazdil-etal-2005.pdf
When sufficient data becomes available – as in part 1 of ‘The Long Slow Thaw’ – ‘anecdotal’ information is translated into data following the methods detailed by Van Engelen, J Buisman and F Unsen of the Royal Met Office De Bilt and described in the book ‘History and Climate.’ The back-up material to carry this out for ‘The Long Slow Thaw’ was contained in ‘supplementary information’ and used in conjunction with the numerous references from that study.
Basically, nesting is awkward and you might not see this reply, so if I see you hanging around I’ll post it again.
tonyb
Hi Steven and Tony
Tony the linked paper sounds very interesting.
Steven thanks for the reply on the WUWT (re spectrum, I’ve just seen it)
There are two reconstructions I came across regarding the period you mentioned: one is for the NAO, one for rainfall (from cave stalagmites).
Links are here (I was using data to check my AMO-R reconstruction) at bottom of the page http://www.vukcevic.talktalk.net/AMO-R2.htm
My sous chef advises that over-cooking reduces the nutritional value.
And my attorney advises that cooking the books is very bad for the credibility…
My accountant tells me it’s the only way to get ahead (but he’s in prison now).
Data? Cooked? Hmm…
Just thinkin’ out loud–you know, like, ambush-interview format. Latimer all lean and mean and all buffed up to the max–everything that wimp-out, wannabe, Gordon Ramsay foodie-guy wishes he were but is not. A blubbering, poor-baby, world-famous climate scientist. Latimer’s non-stop, snide-remark patter as he tears open desk drawers and picks over their contents. A climactic, “what do we have here??!!!” moment when Latimer discovers a cache of souvenir photos from Rio and holds them up to the camera. Maybe call it “Show the Data, Sucker!–and I Mean NOW!!!”
I dunno, but I kinda think we’re talkin’ some potentially really great reality-tv here.
Very funny.
http://www.backblaze.com/
Unlimited encrypted online backup of one computer: $95 for two years. I’m at 150 GB and rising in my backup set.
In losing the data, rather than using statistics to quantify the unknown, I think what we are seeing is making unknown what was quantified. If not “very likely” it is at least “likely” — as Dr. Forest would like to describe matters of uncertainty — that in losing the data he is resorting to the old concept of Don’t Do As CRU Says, Do As CRU Did. So, in addition to CRUgate we now have AR4gate too.
Allow me to cast my vote in favor of a requirement to provide code.
My example of a leading researcher maintaining the standard of providing turnkey code and data with all his research results is David Donoho, a former MacArthur fellow and von Neumann prize recipient.
I suspect the reason most research scientists don’t make code available is that a quick look would reveal how unprofessional their code is. Too bad. Real scientists will eventually recognize that if code is an essential part of their research, it is in their interest as well as that of science generally that code be at least clean enough that they will not be embarrassed to release it.
(Understood that GCMs are much more complicated than Donoho’s code; all the more reason they should be seen by many pairs of eyes.)
Not that I want to ride my hobby horse around again, but a lot of this type of ‘missing’ or ‘lost’ or ‘absent’ data could be rectified if authors were subject to a standard for how data is preserved for auditing. In other words, there needs to be data and method traceability. Nic is finding he cannot trace where data came from. Nic is not able to get data or code… traceability to it has been ‘lost’. Of course this raises red flags and confidence in the end product is lowered.
Some have argued that academia cannot be constrained by ‘quality control’ measures… academia needs its freedom from the restraint of having to meet certain requirements, such as proper traceability. I say BS. If you want your work to be taken seriously, it has to be held up against tough scrutiny.
You know you can get a patent on just about anything if you word it right, but it isn’t a patent until a court of law has upheld it, and it’s not enforceable by the owner of the patent until such a verdict has been reached.
You might have some of my comments in mind when you write what some have argued, but I do actually agree with you. Weaknesses in data handling will reduce credibility. My point was that this is the natural consequence and nothing more is really needed.
Pekka, we have discussed this before, and maybe I distort your point of view a bit. But in a case like this, it has far-reaching consequences. This article has been extensively cited. If Nic cannot get answers to his requests, where does this leave everything else that touches this paper? We don’t settle for this in so many other technical fields, so why do we settle for it here? We see the damage it does, yet we don’t want to address it. If we identify ‘quality’ issues with the process, we should address them. Perhaps there is no framework to work with. What will it take for one to be created? Where is the idea of continuous improvement in the process of publishing scientific literature? I don’t see it yet, and it’s too bad because it can be done by just copying systems others use routinely.
The rule of thumb for thinking about remedies for sloppy data archiving should be to classify the context.
Context 1: Michael Polanyi’s “Republic of Science” regime (http://www.missouriwestern.edu/orgs/polanyi/mp-repsc.htm), where pure researchers compete to make discoveries that will be cited by others. In that context, every researcher and journal properly internalizes the community costs and benefits of reporting method details, archiving data etc. In the Republic of Science context, each investigator decides how much to believe/cite/build upon the work of other investigators based on how much he thinks it will help him make his own citable discoveries. He’s betting his own research future and success when he decides to give credibility to someone else’s work. Norms about how much detail to report, how much data to archive, how much code to make available, etc. will evolve to maximize the community’s rate of discovery without any need for prescriptive regulation.
(This conclusion holds as long as funding for research is either not a binding constraint or available on the basis of one’s track record in making discoveries. If outside funding is necessary and the funding sources have interests other than maximizing discovery, the conclusions above may not hold.)
Context 2: Policy-relevant science, where findings’ primary importance is in directing regulations, subsidies, or taxes at third parties not in the research community. Now the incentives do not line up so neatly. A researcher can become important and famous by providing support to one or more extramural agendas, separate from the quality of his work. Investigators may choose to cite or build upon the work of others not based on the projected contribution to their own future discoveries but instead on the attention and support they will get by affiliating with a given policy position.
In addition, even if external agendas do not bias researchers’ decisions about what to publish and what to cite, the level of quality and traceability of data, methods, and code that maximizes community discovery rates is generally less than the one that maximizes the policy usefulness of research. In pure research, I don’t need to believe that a colleague’s work is bulletproof in order to justify gambling on trying to build on it. Another researcher who publishes a little bit faster with less scrupulous care may be more useful to my research than a slower, more careful worker. But in a policy context, that standard is the wrong one. Instead, before substantial costs are incurred on third parties, the supporting research must be shown to have been done right.
Great analysis Steve. The climate folks are using the Republic standard in a politicized policy context, which calls for a higher standard, so there is a mismatch. It is painfully simple, with painful being the operative word.
Steve, perhaps classification of context would be important for the standardization of how data is archived, and the classifications you offer seem good to me. But there should be a standardization. The details would be worked out by the science community who will be using it. This is how standards are written in other technical fields, by the community of the users of the standard. In this case the community would include not only those who write and publish the literature, but those who seek to reproduce it as well.
John,
The fundamental problem is in giving too much weight to a single study. Whether there are explicit errors in the Forest et al study or not, there remains a lot of uncertainty in the validity of the methods. What’s called a pdf in this case is not a real objective pdf for the climate sensitivity; it is a numerical result for a conditional probability, obtained from a certain set of data by making many questionable assumptions and applying certain mathematical methods. That’s all fine as long as the significance of the paper is understood correctly.
That there are questions about the technical correctness of the analysis is a further point that reduces a bit the proper weight the results should be given.
The difference between my attitudes and those of many others is that I seem to be basically even more skeptical of every scientific study than most. I do accept that as a practically unavoidable fact. Doing and reporting research better is very important and takes off a sizable chunk of my doubts but the fundamental skepticism is usually not removed by that.
Applying this to the Forest et al paper, a careful auditing of the whole study, where the best effort is also made to report all potential caveats and to estimate their significance, would certainly add to the value of the results. It would, however, not be likely to change my view fundamentally in the positive case that nothing unexpected is observed. Whether such an auditing would be a worthwhile use of resources capable of doing it is an open question. Auditing all papers so carefully would certainly involve a waste of effort.
By the above I don't mean that it's right for the original authors to be sloppy. Certainly not. Every scientist has the duty of checking carefully everything that goes into a publication, and members of a research group should cross-check what they have done. Not doing so will in most cases also be bad for the career, so there is a selfish motivation for doing it as well.
My point is again that any single paper may be wrong for very many reasons (known and unknown uncertainties). That must be accepted and understood. The scientific process will ultimately correct errors or forget the erroneous results. That may take time, and the process may be sped up with extra effort in quality control, but maximal QC is not the optimal use of resources in science. It may also discourage some important work where QC is particularly difficult.
Pekka Perila,
True. But if important and highly influential papers like Forster and Gregory (2006) http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/ and Forest et al. (2006) are incorrectly replotted or simply wrong, can we rely on the IPCC's conclusions about climate sensitivity?
What are the policy implications? I’d argue we should not be implementing costly and economically damaging carbon pricing policies (like Australia’s CO2 tax and ETS) on the basis of such papers.
Peter,
Policy decisions are invariably based on uncertain information, because they are made to influence the future and the future is always uncertain. Thus basing climate policy decisions on uncertain information is not fundamentally different from any other policy decision.
In a perfect world policy decisions would be based on the best (or all) available information; the decision makers would understand that information and would also understand the nature of the uncertainties involved. They would join that understanding to ethical principles and make wise and responsible decisions. The real decision-making process is not ideal, but it has some remote similarity to that.
For climate policy we should not ask for proofs but for the best available information, whatever its quality. In my personal view the best available information contains sufficient evidence of potentially very harmful consequences of CO2 emissions to raise the issue for serious consideration. The whole spectrum of empirical and theoretical results on the likely and possible values of climate sensitivity forms one part of the best available information. All of that supports the conclusion that the outcome may involve severe damages, while leaving it uncertain whether those will really be serious even in the long term.
Unfortunately this uncertain piece of knowledge remains perhaps the best known of all the important pieces; the other important factors are mostly even less well known. This leads to the concept of "The Precautionary Principle". To me that principle is true but extremely difficult to apply correctly, and easy to use to justify all kinds of stupidities.
The difficulty of discussing all the equally important factors on an equal basis has led to the present situation, where political shortcuts are made following more general political attitudes and preferences. When we cannot argue about climate policy in a valid way, we use various pieces of knowledge to strengthen our own belief that we have been right all along and that we should not change our attitudes based on the threat of damaging global warming.
Pekka,
With respect, that answer is hand-waving. It is the sort of spin a government gives for implementing policies based on its ideological beliefs but without proper cost-benefit analysis or proper risk analysis.
The government should not implement policies that will cost $ trillions on such flimsy information, IMO. This comment by Nullius in Verba applies: http://www.collide-a-scape.com/2012/06/05/conservatives-who-think-seriously-about-the-planet/#comment-111418
I find Pekka's argument to be quite deep. What are being treated as objective PDFs are in fact not such. This problem is far deeper than any weakness in deriving specific functions.
As for cost-benefit analysis, it is meaningless when the estimated benefits range from zero to saving humanity. As EPA has demonstrated repeatedly, it is a game played with funny numbers.
Whatever we think, governments implement policies based on their ideological beliefs, and the opposition opposes based on its beliefs.
I am equally disturbed by the unwillingness of all sides to discuss the difficult questions. Accepting that severe outcomes cannot be fully excluded is not at all sufficient for determining wise policies. I believe firmly that severe outcomes cannot be excluded. Thus I don't accept the other simple alternative either, that based on the belief that the whole issue is just a hoax.
You may say that this is hand-waving, but who can really present something better?
Pekka, I appreciate your answer and agree on the whole with what you say. I want to be clear that I don't advocate QC measures to make sure any given paper is 'correct' or not at the time of publication. I advocate QC measures to ensure that other researchers, auditors and interested parties can take the original data, methods etc. and try to reproduce the work at a later time, if someone chooses to do so. There should be some level of standardization that ensures the auditing process can happen relatively unfettered. It should not be a struggle: the struggle to obtain the information necessary to reproduce a result is what causes doubt and, ultimately, suspicion about the reported results. QC measures would not be implemented to ensure 'correctness' of published work (that is partially the job of auditing); rather, they would be implemented to ensure others could attempt to reproduce the work, thus reducing uncertainty about a result. QC measures cannot be used to make sure a methodology used or a conclusion drawn is the correct one; the process you describe above is the mechanism that ultimately decides that.
Pekka,
I am very keen to discuss this very important issue. It really is the crux of policy decisions.
But where can we discuss it? We need a place where we can discuss in sequence, not in a tree structure, and especially not one that stops at four levels. I’m going to bed now, and I won’t be able to find your reply by morning. What about here: http://www.collide-a-scape.com/2012/06/05/conservatives-who-think-seriously-about-the-planet/
Pekka: I think the contextual distinction I made upthread–between pure research under Polanyi’s “Republic of Science” conditions and policy-relevant research–is relevant here. The standard you propound here is fine for the pure research context, where each investigator weighs the validity of each paper he sees solely in pursuit of his own race to make discoveries. Then we don’t need to worry about anybody’s incentives for QC, whether publishing author, journal, or reader.
But in policy-relevant contexts, that incentive structure doesn’t line up properly. First, researchers have myriad incentives to behave insincerely in both publication and assessment of papers. Second, even without these biases, the level of QC and auditability that maximizes discovery is not the level that maximizes policy usefulness. Policy-useful research requires much more robustness and engineering-style bound-setting than does pure curiosity/fame-driven research. Hence, policy-relevant research has a higher (though still finite) optimal level of QC, and that level may not be engaged in voluntarily unless external funders require it.
Note that economic statistics, for all their flaws, are produced by professional statistical agencies that have generally been considered non-political and neutral. Studies presented to the FDA for drug approval are produced by actors with suspect motives, but they are generated in far more auditable form and are picked over very carefully. Compare to meteorological records and climate data, and you can see that policy makers really don't take climate research that seriously. If they did, they'd apply the stricter standards of policy-relevant research, as they do for economic data and for new drug approvals.
Isn’t this what Steve Mosher has been calling for?
Yes, among other things. It happens to be an area of interest for me as well; I work with quality systems every day.
Yes. If the paper was written with sweave for example it would be reproducible. The paper would be an executable document.
There is no reason to have a paper over here and code and data over there. Executable documents.
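Steven Mosher's "executable document" idea can be sketched in a few lines of Python (Sweave does the equivalent for R embedded in LaTeX). The data and wording below are invented for illustration; the point is only that the reported numbers are computed by the same file that contains the prose, so the text and the analysis can never silently drift apart:

```python
# Toy "executable document": analysis and prose live in one file, and the
# reported numbers are computed rather than typed in by hand.
data = [14.1, 14.3, 14.2, 14.5, 14.4]  # hypothetical observations

mean = sum(data) / len(data)
spread = max(data) - min(data)

# The "paper" is generated from the computed results.
paper = f"""We analysed {len(data)} observations.
The mean was {mean:.2f} with a range of {spread:.2f}.
"""
print(paper)
```

Re-running the file regenerates both the results and the text, which is the reproducibility property Mosher is arguing for.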
Could the data still be available on an MIT backup server ?
Assuming that the data was not purposely destroyed to hide evidence of scientific fraud.
Manfred: “Could the data still be available on an MIT backup server ?”
I wondered about this. Anyone from MIT reading this thread care to comment?
It's hard to believe any academic institution does not provide space to back up everything. The institutions I know of automate the process. You'd have to believe somewhere like MIT was on top of that.
http://ist.mit.edu/backup
How do you lose a mathematical model? A process as complex as climate would require many pages of mathematics to adequately describe it. Because the task is multi-disciplinary, several people would be authors, so many copies would be produced. Programming the model is a separate process and should not be undertaken until the mathematical model has been approved by all authors and the project management. So how do you lose all that paper and all those computer records? I would advise Dr Curry not to ask for computer programs, as programmers have different ways of programming the same mathematical model, but if the original model has been lost, she might have to. Of course programmers can make mistakes too, but that is a separate issue. In fact I wonder how UN-managed projects work: the UN tries to work according to democratic principles, but science works best with a highly elitist structure, so there is a disconnect.
Climate Sensitivity (CS) Estimate
The IPCC's claim of 0.2 deg C/decade warming for the next two decades is wrong because the global mean temperature trend is cyclic, as shown => http://bit.ly/MkdC0k
Observed CS / observed secular trend = IPCC CS / IPCC trend
Observed CS / 0.08 = 3 / 0.2
Observed CS = 0.08 * 3 / 0.2 = 1.2 deg C for a doubling of CO2.
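The arithmetic above can be written out explicitly. This is only a sketch of the commenter's own scaling argument; the assumption that sensitivity scales linearly with the observed decadal trend is the commenter's, not an established method, and the variable names are hypothetical:

```python
# Commenter's proportional scaling: assume observed_CS / observed_trend
# equals IPCC_CS / IPCC_trend (a linear-scaling assumption).
ipcc_cs = 3.0          # deg C per CO2 doubling (IPCC central estimate)
ipcc_trend = 0.2       # deg C per decade (IPCC projected trend)
observed_trend = 0.08  # deg C per decade (commenter's claimed secular trend)

observed_cs = observed_trend * ipcc_cs / ipcc_trend
print(round(observed_cs, 2))  # 1.2
```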
Considering what this simple engineer has come to understand about non-linear system dynamics (h/t Hagen, Wojick, Curry, MTU EE grad classes, and too many others to list), it seems unlikely that a calculated sensitivity from models or empirical data can tell us much about what will happen next.
What we should be doing is determining what moves the system into the attractor that we call a glaciation, and determining if it is even possible to force the system into a warmer attractor given the existing boundary conditions and constraints.
Well, what is the answer?
“…………..can tell us much about what will happen next.”
Typo……..meant “can NOT tell us much…………..” Kinda changed my meaning, huh? Sorry
It’s a shame that the data and code are not available. The Forest et al (2006) study cannot now be replicated, and therefore falls into the category of Art, not Science.
I hope anyone hoping to cite this Art-masquerading-as-Science in the future, has second thoughts, as the paper can have no place in a logical argument – it now is speculation, not science and not a reliable result.
It seems to me to be a simple issue for Journal editors. Ensure that method and data are available to everyone in perpetuity from the date of publication. Unless you are editing an artists’ rag.
A tiger behind Gate #3, a beautiful woman behind Gate #1, multiple choices, so what’s behind Gate #2?
Don’t ask, don’t tell.
Perhaps there is a leopard behind Gate #2 and that is where Forest has put his data.
A cautionary tale from H2G2 on making data open... the bureaucrat's way:
‘But Mr Dent, the plans have been available in the local planning office for the last nine months.”
“Oh yes, well as soon as I heard I went straight round to see them, yesterday afternoon. You hadn’t exactly gone out of your way to call attention to them, had you? I mean, like actually telling anybody or anything.”
“But the plans were on display …”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”
Man was I wrong…. I thought it was a penguin
Cap’n. Get with the programme.
Dolphins, not penguins. The leopard ate the penguins :-(
It seems to me there are a number of holes in the CO2 sensitivity argument.
1. The cited sensitivity is Change in SURFACE TEMPERATURE per additional W/m^2 of SURFACE FORCING.
2. Where I live you can get this number quite easily – the average difference in insolation between summer and winter is 150W/m^2. The average difference in temperature is 15DegC. This gives 0.1 DegC/W/m^2, a number similar to the Idso paper on Natural Experiments, and not far different from the estimates of Lindzen, Spencer and their colleagues.
3. It is claimed that Radiative Forcing (the imbalance at the Tropopause) is 3.7W/m^2 for a doubling of CO2. This is artificial – changes above the Tropopause are ignored (swept away by assuming that changes in the stratosphere don't have any effect). Furthermore the tropopause is an unstable place – it varies by kilometres within hours, and it varies enormously with latitude, so the 3.7W/m^2 cannot be evenly distributed in either time or space.
4. Then there is an assumption, hidden in most arguments, that this imbalance high in the atmosphere (which only manifests as changed temperature high in the atmosphere) translates to the Surface as SURFACE FORCING. The IPCC makes a clear distinction between RADIATIVE and SURFACE forcing. Its figure (AR4 WG1, Ch2, Fig2.2) shows a process which I am unable to see in the radiosonde measurements – there are plenty of large rapid temperature changes, but none of these seem to influence lower layers in the manner suggested in the IPCC diagram. Nor is there an appearance of a temperature hump in mid Troposphere as shown in that diagram. The TRANSLATION of Radiative Forcing to the Surface is an unexplained phenomenon. Does it occur? If so, how, and where is the evidence for the alleged mechanism?
5. Finally, as several have noted there is no statistical correlation showing any influence of increasing CO2 on Surface Temperature.
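The back-of-envelope numbers in points 2 and 3 can be combined into a single calculation. This sketch merely reproduces the commenter's reasoning (seasonal insolation swing vs. seasonal temperature swing at one location); it ignores lags, feedbacks operating on longer timescales, and local-vs-global differences, which is why such "natural experiment" estimates are contested:

```python
# Commenter's local "natural experiment" (point 2): seasonal differences.
seasonal_forcing = 150.0  # W/m^2, summer-minus-winter insolation at one site
seasonal_temp = 15.0      # deg C, summer-minus-winter mean temperature
sensitivity = seasonal_temp / seasonal_forcing  # deg C per W/m^2

# Combined with the commonly cited CO2-doubling forcing (point 3):
co2_doubling_forcing = 3.7  # W/m^2
warming_per_doubling = sensitivity * co2_doubling_forcing
print(round(sensitivity, 3), round(warming_per_doubling, 2))  # 0.1 0.37
```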
What is this person describing?
“I’m tired. Truly. I’ve grown weary of trying to defend the indefensible and explain the inexplicable. For years, people have stomped their feet and pounded their fists and snorted “XXXXXXXXX bias!” and I’ve always tut-tutted and shooshed them and said, “No, no. Calm down. They meant well. It was just a misunderstanding. A mistake. These things happen.” I spent over 25 years working in the oft-reviled XXXXXXXXX and I saw up close and personal how the sausage was made. I knew the people who wielded the knives and wore the aprons, and could vouch (most of the time, anyway) for their good intentions.
But now?
Forget it. I’m done. You deserve what they’re saying about you. It’s earned. You have worked long and hard to merit the suspicion, acrimony, mistrust and revulsion that the XXXXXXXXX public increasingly heaps upon you. You have successfully eroded any confidence, dispelled any trust, and driven your audience into the arms of the Internet and the blogosphere, where biases are affirmed and like-minded people can tell each other what they hold to be true, since nobody believes in objective reality any more. You have done a superlative job of diminishing what was once a great profession and undermining one of the vital underpinnings of XXXXXXXXX.
Good job.
I just have one question:
What the hell is wrong with you guys?”
http://www.patheos.com/blogs/deaconsbench/2012/06/memo-to-nbc-what-the-hell-is-wrong-with-you/
It’s true. Editing is evil. People should be forced to watch every minute of all footage of every politician all the time.
Easy, journalism, Doc, but I’d already read it elsewhere. Cap’n Stormfield brought aliens back and they’ve taken over newsrooms, even in Tennessee.
==========
“I just have one question:
What the hell is wrong with you guys?” ”
It never was very good. One could say it should have improved and it didn't, and then technology changed the rules. But technology has always done this. So they may have got slightly worse because they had a monopoly, and technology allowed the monopoly to be weakened, and it will weaken further. We are still in the information age, and if this is as good as it gets, I will be deeply disappointed.
Unfortunately, Dr Forest reports that the raw model data is now lost.
===================
Can’t prove scientific fraud if the data is lost. It simply means that Forest et al. (2006) is scientific nonsense. It cannot be reproduced, even by the author. It has the same value as used toilet paper. It is paper, but it is covered in crap.
Even used TP is evidence that someone took a dump and wiped their ass. DNA fingerprinting could even identify the crapper. Lost data is evidence of nothing except perhaps things like sloth or fraud.
It couldn’t have been a climatologist then. Most of them couldn’t find their arse with both hands in a darkened room
My dad’s version was “.. with both hands tied behind their back.”
Another one I liked was “couldn’t pour piss out of a boot if the instructions were on the heel.” Couple of people here seem to fit that description.
Double Standards – A Morality Fable for Academics
Act 1:
Latimer, a bright kid, is 10 yo and sits his science exam. He writes down the correct answers but does not show his working. He scores low marks and learns that you must show not only the answer, but how you got there. He learns his lesson well.
Act 2:
Chris, a climatologist is 35 yo and writes a paper. He does not show his working.
The paper is submitted to 'Quality Control', aka peer review. They do not ask to see his working.
The paper is published to acclaim. It is cited over 100 times by others. They do not ask to see his working.
Because the paper is widely cited, Chris becomes prominent in the IPCC and his paper is influential in IPCC reports. They do not ask to see his working.
Chris’s paper’s conclusions feed into the author(s) of the IPCC’s Summary for Policy Makers. They do not ask to see his working.
The SPM is read by President Obama, Prime Minister Brown, Bundeskanzlerin Merkel and many other senior policy makers. It has a major influence on global economic policies. They do not ask to see his working.
Nic Lewis suspects all is not well, and asks to see his working.
The workings have been lost.
Act 3
Omnes: Funny Old World Isn’t It?
Curtain. Fade to black.
After extended arguments with Fred Moulton about this paper on previous threads, I hope he will do us the decent favor of helping us understand why Nic is wrong and Forest right. He seemed quite certain about it at the time. Perhaps Fred will even entertain the idea that he might be wrong!! It is quite disturbing that the data for a paper that is so widely cited is apparently lost.
Can you provide links? I remember an extended discussion on Forster and Gregory. Forster actually made an appearance on this blog.
Latimer, the leopard ate his workings … after it ate the penguin … or before… its uncertain which.
Dr. Curry
Nic’s previous guest posts at Climate Etc:
■The IPCC’s alteration of Forster & Gregory’s model independent climate sensitivity results
■Climate sensitivity follow up
Both links appear to point to the same page.
Second link should be: http://judithcurry.com/2011/07/07/climate-sensitivity-follow-up/
What a big joke!
Let's see: this story has now been covered today at Climate Etc, ClimateAudit, Bishop Hill, and WUWT. Somebody is not going to sleep well tonight.
I want to commend Nicholas Lewis for maintaining his composure and scientific demeanor.
Sorry Dr. Curry, but please let my rant go. You all need to hear it!
Steven Mosher | June 25, 2012 at 2:51 pm |
“Sensitivity is the change in temperature per change in watts.
Its not Zero. If it were Zero then a dimmer sun would not cause cooling.
and a brighter sun would not cause warming.”
Apparently someone forgot all about the faint sun paradox. Or never knew it in the first place.
Nic Lewis,
I am trying to understand the significance of your previous post (re: Forster and Gregory, 2006) in combination with this new post (re: Forest et al., 2006).
In July 2011, you suggested that IPCC had incorrectly replotted the Forster/Gregory06 paper http://judithcurry.com/2011/07/05/the-ipccs-alteration-of-forster-gregorys-model-independent-climate-sensitivity-results/ . IPCC’s replot moved the median climate sensitivity from about 1.6C to 2.3C and gave a much fatter tail.
In a comment on WUWT yesterday http://wattsupwiththat.com/2012/06/25/throwing-down-the-gauntlet-on-reproducibility-in-climate-science/#comment-1017900 you said:
Does this comment mean that it is now generally accepted that the IPCC replot of Forster/Gregory06 was wrong? Does this mean that the now accepted interpretation of that paper is that it suggests the median climate sensitivity is 1.6C?
Could you please elaborate (for a non specialist) on the significance of this.
[I recognise this is just one of many papers providing climate sensitivity estimates.]
“Does this comment mean that it is now generally accepted that the IPCC replot of Forster/Gregory06 was wrong?”
I think any mathematically competent scientist who believes in objective inference from experimental results would accept that the IPCC replot of Forster/Gregory06 was wrong, in that it did not reflect the (standard) error distribution assumptions made by the paper's authors. Its median climate sensitivity estimate of 1.6C wasn't materially changed by the replot, but the upper tail was fattened, with the upper 97.5% confidence limit being increased from 4.1C to 8.6C.
Note that, as I understand it, many members of the 'subjective Bayesian' school of statisticians would think it OK to use whatever prior they thought fit, notwithstanding that it did not result in objective probabilistic inference. IMO, and I hope in that of the vast majority of scientists, such an approach has no place in science.
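Nic Lewis's point about priors can be made concrete with a toy computation. Everything numerical here is invented for illustration (the Gaussian constraint on the feedback parameter, its mean and spread, the grid bounds); this is not the Forster/Gregory or IPCC calculation, only a sketch of how the choice of prior, rather than the data, can fatten the upper tail of a sensitivity PDF:

```python
import numpy as np

# Toy setup: suppose the data constrain the climate feedback parameter
# lam = F2x / S (F2x = 3.7 W/m^2 per CO2 doubling) as roughly Gaussian.
# These numbers are illustrative only.
F2x = 3.7
S = np.linspace(0.5, 20.0, 4000)   # sensitivity grid, deg C
lam = F2x / S
likelihood = np.exp(-0.5 * ((lam - 2.3) / 0.7) ** 2)

dS = S[1] - S[0]

def normalize(p):
    # Turn an unnormalized posterior into a density on the S grid.
    return p / (p.sum() * dS)

# Prior uniform in S vs. prior uniform in lam. A uniform-in-lam prior
# transforms to a density proportional to 1/S^2 in S-space, via the
# Jacobian |d lam / d S| = F2x / S^2.
post_uniform_S = normalize(likelihood)
post_uniform_lam = normalize(likelihood * F2x / S**2)

def upper_limit(post, q=0.975):
    # Smallest grid point whose cumulative probability reaches q.
    cdf = np.cumsum(post) * dS
    return S[np.searchsorted(cdf, q)]

print("97.5% limit, uniform-in-S prior:  ", round(upper_limit(post_uniform_S), 1))
print("97.5% limit, uniform-in-lam prior:", round(upper_limit(post_uniform_lam), 1))
# The uniform-in-S prior yields a much higher upper limit even though the
# likelihood is identical; note it also depends on where the grid is cut
# off, itself a prior-like choice.
```

The same likelihood thus supports very different headline "confidence limits" depending on the prior, which is the sense in which such PDFs are conditional results rather than objective probabilities.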
Nic Lewis,
Thank you for your reply and for the excellent work you are doing.
Your response did not really address what I was asking about, though. What I was hoping you could explain is what is happening as a result of your paper.
For example, are there any signs that the authors, the IPCC, the journal, and the most influential climate scientist insiders accept that the IPCC's replot is wrong? Is there any indication of acknowledgement that climate sensitivity may be closer to the Forster/Gregory06 estimate (not the IPCC replot) than the IPCC AR4 concluded?
It took Steve McIntyre 5 or 10 years to get ‘the orthodoxy’ to recognise that the “Hockey Stick” was wrong. So, if what you are saying is correct, I suspect it may well take a long time to get wide acceptance too. I am wondering what progress has been made so far?
Little, if any, progress so far, I fear. But I shall keep trying to get the invalidity of many estimates recognised. I fear that very few climate scientists properly understand how to achieve objective inference using Bayesian methods, resulting in many distorted estimates.
Nic,
Thank you. That is what I suspected. I hope the issue can be given sufficient prominence – and discussed between you and other competent statisticians – sufficiently so that AR5 cannot ignore it.
The paper should be withdrawn if there is no data to support it. It has become, in a word, baseless. The 100 papers citing it are now baseless for whatever portion relied on a Forest citation.
Climate boffins look more like The Keystone Cops with every passing day.
Don’t forget the papers that cite the 100 citing papers, etc.
John Carpenter | June 25, 2012 at 5:51 pm | Reply
That’s pretty far from correct. Once the patent is granted the legal presumption is that the patent examiner was competent and the patent is valid until proven otherwise. I was involved with the evaluation of over 1000 patent abstracts at a $40B/yr corporation. Over 300 of those went on to be granted. I’m the named inventor on four patents of my own. What has more truthiness is that a giant corporation employing world-class IP law firms can get just about anything rubber-stamped by the US PTO if they obfuscate it sufficiently.
No David, that is pretty much correct and not out of line with your comment. I too am named on several patents. A patent only gives someone the right to sue for infringement. It does not prevent anyone from practicing until an infringement suit has been brought against the infringer and the court upholds the patent claims as valid and finds that the infringer violated the claims. Then and only then can a patent holder prevent the infringer from practicing and collect damages. But to do that, the patent owner has to spend the money to sue the infringer. A crafty patent attorney may be able to obfuscate a claim enough to convince the examiner the claim is not prior art, but in reality the claim is crap. The inventor usually knows this and will pause before bringing a costly suit against a worthy competitor who would be able to show the invalidity of the claim in a court of law. The patent holder has to weigh the cost of litigating and the possibility of being found invalid against the chance of winning. If you think your patent is strong you sue; if you don't, you have a harder decision to make.
And the ultimate decision on validity will be made by a judge with a liberal arts degree. Be prepared for heavy investments in the battle of the “expert witnesses”. It ain’t pretty!
You’re conflating infringement and validity. The patent is presumed to be valid and burden of proof that it is not valid falls on any challenger. Proving infringement is the burden of the patent holder.
See:
http://smallbusiness.findlaw.com/intellectual-property/patent-infringement-and-litigation.html
Even better, see here:
http://www.patentlyo.com/patent/2011/06/microsoft-v-i4i-supreme-court-affirms-strong-presumption-of-patent-validity.html
“You’re conflating infringement and validity”
Maybe so, but I think we are parsing words now. My original statement did not talk about validity; my point was that until the presumably valid patent is upheld by a court of law, either through a challenge defense or successful infringement litigation, the patent really is not worth the paper it is written on. (We could argue about how patents create barriers for competitors to hurdle, but that would be off topic.)
With respect to the subject at hand, my patent statement was meant to be analogous to someone, like Nic Lewis, auditing a published paper for accuracy and reproducibility. A published scientific paper is only truly 'validated' (perhaps a word you don't think appropriate) once someone else has also reproduced/verified the results. If no one can reproduce/verify the results, is the published material and its conclusions considered valid? I'm sure there is no argument between the two of us on that.
We may be able to agree that the greatest weakness of patent examiners is they generally don’t know about prior art that was never patented. In the 1000 or so patent reviews I sat on there were innumerable instances where I raised an objection due to prior art where I knew of something that had been invented and used in the past but never patented. This happened quite often in software patent abstracts where some young programmer thought they’d done something clever and new and I’d say “No, so and so at XYZ corporation did that in product ABC”. The IP attorneys on the review panel wouldn’t let us object on obviousness. The clever bastids told us engineers on the panel “You guys are the best of the best in your fields. What’s obvious to you is not obvious to lesser beings.” How could we argue with that? :-)
Amen to that; however, it only has to be obvious to those (experts) who are skilled in the art, as you know. If it is obvious to those skilled in the art, it isn't an invention. The IP attorneys on your review board, like many others, are just looking to keep themselves busy.
I doubt they were trying to keep themselves busy. They, like the rest of us, were trying to build a defensive patent portfolio. We (Dell) were a new company without the extensive portfolios of competitors like IBM. IBM was really raking us over the coals with a $100,000,000 per year fee to license some of their patents. Texas Instruments was another with their hand in our pockets. We were deeply in bed with Intel and Microsoft, who had no love lost with IBM, so Intel & Microsoft would assign us some patents to help out. As I recall it was around 1998 when we finally had a patent portfolio with enough meat in it to make IBM roll over and forego the royalty with a cross-license deal. Mike Dell called that the best day in the company's history, as $100M back in the mid-1990s was a big chunk of change for Dell. IBM needed our build-to-order business-process patents, and we had pioneered BTO and patented the living crap out of it. Our end-to-end supply chain automation, factory automation, and JIT delivery was tweaked so well at the time of my retirement that we were measuring inventory turns in hours, because it was more frequent than daily, and "human touches" to take an order for a custom computer, build it, and ship it were measured in seconds, because minutes was too coarse. That's what made Dell the success story that it is. It's a model studied in virtually all post-graduate business schools today.
Now the point of telling you all that is who exactly do you think was an expert in the area of build-to-order back then who can judge the obviousness? Another thing the IP attorneys would tell us is “If it’s obvious then someone else would have done it already.”
But as you probably know few patents are ever litigated in the big leagues. That takes too much time and money and the uncertainty for both parties makes planning difficult. For public companies or those planning near term IPOs outstanding litigation is not good for the market value of the stock. So deals are made instead and no one really scrutinizes individual patents but rather evaluate the number and kind knowing that if there’s gonna be a pissing contest in a court some of that number will be upheld. So it comes down to a negotiated cross-license deal in most cases.
Hi all,
At this time, I would like to acknowledge the blog post by Nicholas Lewis and comment, where I can, on some of the details. Regarding the missing raw model output data, I realise this is an important concern and am frustrated by the loss and by not being able to check directly into the issues raised by Mr. Lewis. In working with large climate model datasets, data archiving was not feasible given the resources available in 2003, when the simulations were run to produce the data in Forest et al. (2006). Although these raw output data are gone, the model and inputs should be available to re-run the simulations if needed. In terms of results, I cannot comment on the differences between the Curry et al (2005) and the Sanso et al (2008) data sets, having only recently learned about the analysis done by Mr. Lewis. I will be discussing this with my co-authors and will report what I have learned.
Due to health issues and work responsibilities, I was not able to respond as quickly as I would have liked to the request for the data and codes. I agree that reproducibility is a critical component of scientific progress and that the computational and data requirements of climate modeling make this problematic. Open access to model and analysis codes, along with the data sets, can help address some of these issues.
Regarding the conclusions in Forest et al. (2006), we have confirmed the previous results with a different model and published them in Forest et al. (2008). The raw model simulation output data are potentially available to test the problems raised by Mr. Lewis. Additional sensitivity tests have been performed to address whether the results are robust. One recent paper (Libardoni and Forest, 2011) has addressed how alternative observational records of surface temperature changes affect the probability density distributions. The results from Libardoni and Forest (2011) indicate that the major conclusions in Forest et al. (2008) are robust. My colleagues and I are continuing to explore sensitivities to choices made in estimating such PDFs of climate system properties, given their importance in understanding potential risks of future climate change.
Sincerely,
Chris E Forest
Associate Professor, Department of Meteorology
The Pennsylvania State University
University Park, PA, USA
Citations:
Forest, C. E., P. H. Stone, and A. P. Sokolov (2008), Constraining climate model parameters from observed 20th century changes, Tellus, Ser. A, 60(5), 911–920.
Libardoni, A. G., and C. E. Forest (2011), Sensitivity of distributions of climate system properties to the surface temperature dataset, Geophys. Res. Lett., 38, L22705, doi:10.1029/2011GL049431.
Chris E Forest | June 26, 2012 at 10:41 am |
Nine years is a long time to try to find data, the way most research is conducted.
CERN has disposed of much of the data it’s collected, too (after confirming it involves only collisions of no interest to the current study), and it is regrettable that we cannot always know ahead of time what data will be part of published work, and what will not pan out on analysis; even with the most well-organized data management it isn’t certain that situations like this one will not come up from time to time.
It’s good to see engagement between auditor and author, scientist and citizen scientist, and I hope to hear more. Perhaps this, too, may be an opportunity to improve the conclusions going forward.
Bart R, once again scientists feel the need to save ‘space’. Almost funny.
Tom | June 26, 2012 at 11:46 am |
It’s not just scientists. I can’t tell you the number of times the religious have saved ‘space’ in various gospels, books, and scriptures.
And every major multinational corporation with legal representation worth its salt has, so far as the law and pragmatism allows, one day a year set aside to shred every document and purge every bit of computer memory of records more than one year old on that date.
Try to find any North American automobile manufacturer with a memo anywhere in any file cabinet from the 1990’s, or even from 2005. Not as easy as you might think.
You may think you have a point, but the data, and lack of it, show you’re just plain makin’ stuff up.
Oil company information as well.
You can’t make profits from what a future generation might do. It’s all about right now, and the past contains dangerous information.
Companies are private, “corporations are people!”, you have no right to dig in their shorts.
That’s why we pick fights with university professors, right?
Hypocrisy on hyperdrive, all the time on Climate Etc.
29 minutes after the day starts, WebHubTelescope secures his hold on the title, “Dumbest Comment of the Day.”
People focus on global warming as a challenge because they feel that they can do something about it.
Discussion of fossil fuel depletion has no positive outcome, as it is a monotonic decline with no respite.
@bart r
Every multinational with an even semi-competent IT department will have taken regular backups of all its systems. And you never never never destroy backup tapes. You stick them in a vault somewhere. You may not pay them much attention, but you keep them.
Old DP manager’s motto. ‘Nobody ever got fired for having too many backup tapes’.
And on a purely practical note, it is far, far harder to ‘destroy’ operational data than you suggest. For example, even if you find the ‘master copy’ of an e-mail and get rid of that, you then have to find all the recipients, and all the ‘cc’s and ‘bcc’s … and all their ‘cc’s and ‘bcc’s and ‘forward’s. And all the printed copies. Some of them may be outside the organisation, on people’s personal PCs, etc etc. And all the ones that refer to the originals.
‘Purging’ records in operational databases is also very hard. You can delete the ‘master record’, but it will likely have so many dependencies – each of which you also need to decide how to change, and each of which has dependencies of its own – that the effort soon becomes overwhelming. Instead you just design the DB with some sort of ‘non-operational’ indicator, and leave the data intact.
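For readers who haven’t lived this: the ‘non-operational indicator’ approach is what database folk call a soft delete. A minimal sketch (hypothetical schema and table names, purely for illustration):

```python
# A minimal sketch of the 'non-operational indicator' (soft delete) pattern:
# rather than deleting a master record and chasing its dependencies,
# flag it inactive and leave everything intact.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER REFERENCES customers(id))")
cur.execute("INSERT INTO customers (name) VALUES ('Acme Ltd')")
cur.execute("INSERT INTO orders (customer_id) VALUES (1)")

# A hard DELETE would orphan the order row; the soft delete keeps all
# dependencies valid.
cur.execute("UPDATE customers SET active = 0 WHERE id = 1")

# Operational queries simply filter on the flag...
live = cur.execute("SELECT count(*) FROM customers WHERE active = 1").fetchone()[0]
# ...while the historical record, and every dependent row, survives.
total = cur.execute("SELECT count(*) FROM customers").fetchone()[0]
print(live, total)  # 0 1
```

The point is exactly the one made above: the dependent order row stays valid, nothing has to be chased down and changed, and the ‘deleted’ customer is still there for anyone who looks.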
The exercises you may have worked on or heard about are decluttering exercises…not data deletion. Far far too much hard work to do that….and for so little benefit.
Latimer Alder | June 27, 2012 at 1:12 am |
Every multinational with an even semi-competent IT department will have taken regular backups of all its systems. And you never never never destroy backup tapes. You stick them in a vault somewhere. You may not pay them much attention, but you keep them.
This is the difference between opinion and fact. In my opinion, records ought be kept. It’s my feeling that IT departments ought overrule legal departments on the matter of records retention. I’d hope backups were well-managed so that not only do they continue to exist, but also so that they are securely accessible.
However, it just ain’t so. Even in places with strong data retention laws, there is data destruction in business practices. And ‘strong data retention laws’ are historically pretty rare.
Show me a backup tape from any North American auto manufacturer containing even a single email more than a decade old. (Or US-based oil company, or any non-government organization that’s ever hired David Wojick..)
Heck, you think the current administration in the White House has access to 100% of the records generated by the previous administration?
Science ought be better than this. And it largely is better than business and government, but not by nearly enough, for its own good.
@bart r
Maybe it’s different in the US, where you seem to sue each other for no reason other than to boost the lawyers’ pension funds by a process of mutual self-impoverishment, but what I describe is how it is in my 30-year IT experience in the UK – largely with large and medium-sized commercial organisations in the finance, banking, retail, engineering and public service sectors.
From what I understand about the IT practices in the academic world, they are pitifully amateur and unprofessional by comparison with even the most inept business I have worked with. And I have serious doubts whether climatologists should be allowed loose on anything to do with IT without passing some form of test of competence. They certainly don’t know much about IT operations.
Latimer Alder | June 27, 2012 at 2:51 pm |
Yes, it is different in the USA, where rent-seekers have successfully corrupted or co-opted the government through the back door. In socialist Europe, rent-seekers have a cosier, more public relationship with the government legally and cleanly, but hardly to the advantage of either the free choices of individuals or of economic efficiency.
There’s less need to destroy (or avoid making) records when the government limits how much exposure a wrongdoer faces from individual litigants, as in Europe, so less pressure to, say, ‘simplify’ laws by removing requirements to document at all. You can ask David Wojick about that practice of co-opting government against the public interest.
@bart r
We seem to be talking at cross purposes.
On the one hand you seem to be wanting to use data backup as a metaphor for some grand philosophical debate about the nature of the relationship between business and government. Or ‘co-opting government against the public interest’ … which all seem to be theoretical political points to me, and well above my pay grade.
But I’m just telling you – from personal experience – about the day-to-day life of an IT manager charged with maintaining an organisation’s data. And it isn’t at all like your ‘model’.
The major US-based multinational I know best has a ‘records retention’ policy. And the purpose of that is to do exactly the opposite of what you claim. It is to try to ensure that important business records do not get accidentally deleted in general housekeeping exercises. The fear is not that too much is retained – data storage continues to get cheaper and cheaper – but that something important – engineering drawings, a BoM, a patent, some regulatory paperwork, employment records, etc etc; the list is endless – will disappear as everybody thinks the other guy has got a copy, until the last one has gone.
And just because it is so much easier to say ‘backup the lot’ rather than spend huge amounts of effort selectively deleting stuff – with the risk of human error causing later problems – that is what we tend to do.
You probably do the same on your own computing device…with storage costs getting nearer and nearer to negligible, you probably don’t bother to program a selective backup, but just copy a complete disk image.
So – whether your philosophical reflections on the nature of big business have any relevance here, I cannot tell. But down in the engine room of data storage and archiving, the general rule is to backup everything, back it up again, have a coffee, run a backup, take copies of the backup….if it moves, back it up!
Nobody ever got fired for having too many backups.
Latimer Alder | June 28, 2012 at 12:03 am |
Our personal experiences are not so different, except that some places do some things differently, and people not only get fired for too many backups, they get walked out the door without severance, their homes get searched, they may be sued or serve jail time, and if they’re in the military, they may serve it in solitary. And as for philosophy: all philosophy is crap.
If it gets seriously in the way of problem-solving, doubly so.
But sadly IT folks don’t run the world, regardless of what we think the engine room gives us control over.
I make no claim to run the world, nor want to.
Please give concrete examples of the incidents you describe.
1. Fired for too many backups
2. Fired without severance for too many backups
3. Homes searched for too many backups (??? – not the same as unauthorised removal of an organisation’s assets = theft?)
and the cryptic:
4. ‘If it gets seriously in the way of problem-solving, doubly so’
You began our little exchange with this gem:
‘And every major multinational corporation with legal representation worth its salt has, so far as the law and pragmatism allows, one day a year set aside to shred every document and purge every bit of computer memory of records more than one year old on that date’
Can you substantiate this claim with, for example, the dates set aside in 2012 for this exercise for, say, five multinationals of your own choosing?
And documentary evidence of it? I never saw, or heard of, such a practice in my 30 years in IT.
Latimer Alder | June 28, 2012 at 2:36 am |
Please give concrete examples of the incidents you describe.
1. Fired for too many backups
2. Fired without severance for too many backups
3. Homes searched for too many backups (??? – not the same as unauthorised removal of an organisation’s assets = theft?)
and the cryptic:
4. ‘If it gets seriously in the way of problem-solving, doubly so’
You began our little exchange with this gem:
‘And every major multinational corporation with legal representation worth its salt has, so far as the law and pragmatism allows, one day a year set aside to shred every document and purge every bit of computer memory of records more than one year old on that date’
Can you substantiate this claim with, for example, the dates set aside in 2012 for this exercise for, say, five multinationals of your own choosing?
And documentary evidence of it? I never saw, or heard of, such a practice in my 30 years in IT.
To address your points randomly.
#4 – To simplify, a ‘philosophy’ that interferes with problem solving is doubly worth dispensing with, compared with one that merely contributes nothing. So, ‘socialism’ is often merely useless, while dogmatic literal interpretation of religious text is a real impediment and thus worse than socialism.
#1-3 Seriously, you’ve never heard of Julian Assange?
I’m sure his organization could supply you with the education in security issues you seek. That you’ve been in IT for so long and remain utterly unfamiliar with the history of security is shocking, and I doubt I’m the one who can address that gap in your knowledge.
And your unnumbered fifth question: what business is going to conduct such activities as will invariably invite challenge, and may be illegal in some places, in such a way that public tracking of the activity is easy?
So my challenge stands, that you have not answered: show me a tape backup containing an internal email from a North American automobile company from 2002 or earlier. You’re the one making the claim that such data is easy to come by. This obstacle course of five multinationals you’ve constructed, amusing though it is, seems constructed so you can avoid coming to grips with the real world on its own terms.
Holy… Bart R just said:
#1-3 Seriously, you’ve never heard of Julian Assange?
What were #1-3?
That’s right. Bart R just claimed Julian Assange was fired, without severance, and his home was searched, because he made too many backups…
@bart r
Don’t be silly.
You are trying to evade the issue by discussing IT security, not backup policy. Different things.
I set a perfectly reasonable challenge – substantiate your claim that multinationals actually do schedule the days you describe. Assuming that you didn’t make the whole idea up out of nowhere, perhaps you can show a newspaper article, or a copy of a memo or the transcript of a conversation that gives some – or any – substance to this assertion.
Where TF Assange comes into the discussion escapes me completely. Is he the guy who holds the evidence you are lacking? And, last I heard, he hasn’t been sacked for anything. He’s holed up in the Ecuadorean embassy in London attempting to avoid extradition to Sweden on serious sex charges.
http://www.telegraph.co.uk/news/worldnews/wikileaks/9361410/WikiLeaks-founder-Julian-Assange-ordered-to-present-himself-to-police.html
Latimer Alder | June 29, 2012 at 4:42 am |
I see what’s happened here, with a little help from BS:
Bart R just claimed Julian Assange was fired, without severance, and his home was searched, because he made too many backups…
This is a simple issue of reading comprehension! I do not say this to belittle those who just can’t follow the connection between data backup and data security, but to admit I ought have laid out the connection more clearly.
Julian Assange is a name associated with data security and the unauthorized use of data (I could have as easily named Rupert Murdoch).
Tape backups store data. Their authorized use is to securely store data for later recovery should the original data be lost; no more, no less.
The number of copies of data is a question of data security. Any less, or more, than the business-justified number of copies violates the principles of data security.
While having ‘just enough’ copies is good data security, having too few bespeaks incompetency (which Latimer Alder appears to recognize), while having too many opens the field to questions many owners of data are concerned about: data control.
In Science, where all data ought (to my way of thinking) to be open, too many copies of the data is unlikely to be a negative from my point of view (so long as it can be authenticated to source).
But if Latimer Alder is claiming that control of data is so unimportant, that the number of copies of data is not taken seriously, that no one has been ever exposed to discipline or criminal charge for violating the security of data through mishandling of backups, then he is flat-out making false claims.
As the claim started with him, and he has not met the challenge of providing even a single copy of unauthorized backup data (which, let’s face it, he could do just by referencing HarryReadMe – an exception that, by the anonymity and police investigation of its source, proves the rule), I see no reason to entertain his silly ‘conditions’. He must also know those conditions would similarly violate the security of the five multinational firms, as they guard the operational details of their internal data security practices as jealously as their internal data, by and large.
However, just scanning the headlines furnishes examples of data backup disposal laws http://www.pcadvisor.co.uk/news/security/3361659/new-jersey-lawmakers-want-copier-hard-drives-wiped-prevent-id-theft/ pertinent to the baseless claim that no one ever got fired for too many backups.
For the reading comprehension problem, I recommend READ HARDER.
I made no claims at all that nobody has been fired for ‘mishandling backups’. And though I have never actually fired anyone for doing so, I have on several occasions not renewed people’s contracts where not paying sufficient attention to the importance of backups has been a major contributory factor.
Nor am I ‘claiming that control of data is so unimportant, that the number of copies of data is not taken seriously’. Indeed, I take it very seriously when asked to advise my clients. But as a general rule, it is wise to start with more rather than fewer tape backups. And to put them in separate secure vaults, preferably separated by an ocean or two. And in the custody of entirely separate organisations.
Because you mustn’t forget that another use of backups is to help you with disaster recovery planning. One of the critical things you plan for is how long it will take you to get back on the air if a disaster strikes. Optimising that to the needs of the business – i.e. the technical ‘insurance premium’ your client is willing to pay – is another piece of the backup/security/DR jigsaw. There are no absolute ‘right’ answers in this conundrum … just finding the right balance for each individual situation.
And personally, if the choice is between admitting that a third party has got hold of some of our data – which will cause embarrassment and apology and a bit of grovelling – or that we have lost our last copy of our data and so cannot continue to operate and must declare ourselves out of business, I’d choose the grovelling any day. If you are in that position and you choose differently, good luck.
As to your rather trivial example of digital copiers, you really aren’t telling us anything new. Wise DP/IT managers have always taken a wide variety of precautions against old data being read when equipment is disposed of … from degaussing tapes to sledgehammers, and fancy bits of software in between. That the State legislature proposes to fine people for not doing something that is pretty much common sense seems to me to be legislative overkill. But I spot a new services opportunity here … travelling sledgehammers to save you the trouble of trashing the disk and replacing it.
I’m working on the Large Hadron Collider at CERN and can confirm that the experiments do not “discard” data (at least not in the way you imply). Experiments employ a multi-level trigger system to select particle collisions for further study. The rate at which experiments can select such “events” is determined by a number of factors, such as electronic latency and therefore a small number end up being collected. We therefore ensure that these “events” have interesting properties. In other words we collect as many events as the detector read out systems physically permit. Everything we collect (and subsequently use for a data analysis) we store.
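To make the trigger idea concrete for non-physicists, here is a toy sketch (all thresholds, rates and field names are invented – this is not the LHC trigger code): events flow through successive, increasingly selective cuts, and everything that survives is stored; nothing stored is later discarded.

```python
# Illustrative sketch only (numbers invented): a multi-level trigger keeps
# every event that passes successive, increasingly selective cuts.
import random

random.seed(0)

def level1(event):
    # Fast hardware-style cut: a crude energy threshold.
    return event["energy"] > 50.0

def high_level(event):
    # Slower software-style cut on a reconstructed quantity.
    return event["tracks"] >= 2

# Simulate a stream of collision events with random properties.
events = [{"energy": random.uniform(0, 100), "tracks": random.randint(0, 4)}
          for _ in range(10_000)]

# Only events passing every level are written to storage.
stored = [e for e in events if level1(e) and high_level(e)]
print(len(events), len(stored))
```

The read-out rate limits how many events can be selected, so the cuts are tuned to keep only ‘interesting’ ones, but everything selected is kept, which is the distinction the commenter is drawing.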
This is a very different situation from the Forest case. There are no realistic technological reasons why his data should not have been stored.
Roger | June 28, 2012 at 1:06 am |
Thank you for the clarification. I stand corrected. I suspect the Forest case is more along the lines of the poor data management as is practiced in much of academia than malice.
While I’d prefer the data all be well-tracked and stored (indeed, I’d prefer if data used in science were openly stored from cradle to grave), and I’d prefer if the audit were attempted within a plausible span of time after publication (though I know of several researchers still publishing ‘new’ data decades after they originally collected it, one slice at a time), and I’d prefer if experiments were pragmatically subject to reproducibility by citizen scientists as a standard rather than audit.. We can’t always get what we want.
I think you would be among the first to protest if commercial organisations weren’t adhering to the standards you describe.
Why should academia in general, and climatology in particular, be given a free pass to fail to act as professionally as a toothbrush manufacturer or interior door design company or any other organisation?
Especially when they are working on the ‘most important problem humanity has ever faced?’
Seems to me therefore that we should only accept work that meets
‘the highest professional standards that humanity has ever devised’
rather than any old unchecked junk. And that the very last people who should be making the assessment are other climatologists and/or academics. It is clear that they wouldn’t recognise a high professional standard if it hit them in the face.
I suspect the Forest case is more along the lines of the poor data management as is practiced in much of academia than malice.
============
Poor data management is evidence of poor methods. If you can’t manage the data, then what evidence is there that you can manage the results? If your data management is sloppy, then this is strong evidence that your methodology is sloppy and your results are likely to contain errors.
I build commercial data warehouses for a living. The purpose of these warehouses is to PREDICT how to increase revenues and decrease costs. Unlike climate science, if our predictions fail, then we are out of a job.
Quality data is a meticulous process requiring attention to detail every step along the way. The idea that you can build robust results on top of shaky data management is nonsense. If data management is poor quality, so are the results.
A chain is only as strong as its weakest link. You cannot make a silk purse out of a sow’s ear. You can, however, eat the sow and claim it was silk. Especially if no one ever checks. Without the evidence, none the wiser.
Latimer Alder | June 28, 2012 at 2:47 am |
First to protest? *shrug*
As the standard of record-keeping in academic institutions is a slap-dashery cobbled together over decades by ill-organized and rivalrous individuals notable for independent thinking, subject to frequent upheaval and change, pressured at many levels internal and external, and with too many agendas, it’s not unexpected when data management fails.
This isn’t a free pass. This is recognition of causes. Since the causes have been known for so long, and the people so well-versed in the problems (even before 2003, there had been record-keeping scandals), vigilance and improved methods are more, not less, to be demanded.
Would be nice if citizen scientists pooled some resources and organized to be more timely in their volunteer efforts; hard to make demands of unpaid volunteers in this case, and certainly not very valid to criticise people too much for simply reading skeptically when they find problems.
Would be nice if professional researchers remember they’re not volunteers, and their profession has suffered enough black eyes already.
Most of America’s universities suffer problems of infrastructure, and it would be eye-opening if citizen-scientists looked widely into the policies and funding of data storage and management, I think.
Chris Forest
Thanks for taking the time to stop by and provide a courteous response. No doubt others better qualified than I will be responding to you.
tonyb
I post this as a response to Dr. Forest but it may not be the right place. Here is a published abstract that in itself is a comment on some of the data and methods in these analyses.
A lower and more constrained estimate of climate sensitivity using
updated observations and detailed radiative forcing time series
R. B. Skeie (1), T. Berntsen (1,2), M. Aldrin (3), M. Holden (3), and G. Myhre (1)
(1) Center for International Climate and Environmental Research – Oslo (CICERO), Norway (r.b.skeie@cicero.uio.no), (2)
Department of Geosciences, University of Oslo, Norway, (3) Norwegian Computing Center, Oslo, Norway
A key question in climate science is to quantify the sensitivity of the climate system to perturbation in the radiative
forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly
constrained with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is
estimated based on observed near-surface temperature change from the instrumental record, changes in ocean heat
content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic
and natural forcing mechanisms are estimated and the cloud lifetime effect and the semi-direct effect, which are
not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations
of ocean heat content and temperature change through an energy balance model and a stochastic model, using a
Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9°C with 90% credible
interval (C.I.) ranging from 1.2 to 2.9°C, which is tighter than previously published estimates. Observational data
up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of
previously published studies that have used the instrumental record in attempts to constrain the ECS.We show that
the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly
narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in
the analysis, the 90% C.I. is 1.4 to 10.6°C with a pronounced heavy tail in line with previous estimates of ECS
constrained by observations in the 20th century. Also the transient climate response (TCR) is estimated in this
study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1°C, while the 90% C.I.
is significantly broader ranging from 1.1 to 3.4 °C if only data up to and including year 2000 is used.
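For readers unfamiliar with the method the abstract describes, here is a toy version (synthetic data and invented parameter values – not Skeie et al.’s actual model or code) of the general recipe: run an energy balance model over a forcing series, then weight candidate sensitivities by how well the simulated temperatures match the observations.

```python
# Toy illustration of Bayesian ECS estimation via an energy balance model:
# dT/dt = (F - lambda*T)/C, with ECS = F2X/lambda. All numbers invented.
import math

F2X = 3.7          # W/m^2 forcing for doubled CO2 (standard value)
C = 8.0            # effective heat capacity, W yr m^-2 K^-1 (assumed)
years = 150
forcing = [2.5 * t / years for t in range(years)]   # idealised forcing ramp

def ebm_temps(lam):
    """Integrate the energy balance model with yearly time steps."""
    T, out = 0.0, []
    for F in forcing:
        T += (F - lam * T) / C
        out.append(T)
    return out

# Synthetic 'observations' generated from a true ECS of 2 K.
obs = ebm_temps(F2X / 2.0)

# Crude Bayesian update on a uniform prior over ECS: Gaussian likelihood
# from the model-observation misfit, with an assumed noise level sigma.
sigma = 0.1
ecs_grid = [0.5 + 0.05 * i for i in range(120)]     # 0.5 .. ~6.45 K
post = []
for ecs in ecs_grid:
    sim = ebm_temps(F2X / ecs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    post.append(math.exp(-sse / (2 * sigma ** 2)))
Z = sum(post)
post = [p / Z for p in post]

best = ecs_grid[post.index(max(post))]
print(round(best, 2))   # posterior peaks at the true value, 2.0
```

Real studies differ in the model, the priors, the forcing uncertainty and the noise treatment; the sketch only shows the shape of the Bayesian update that links a forcing series and temperature record to a posterior for ECS.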
More notes:
This is the same EGU meeting that Tamsin Edwards blogged about here. Tamsin’s abstract is also available (just google it). Tamsin does not give a numerical estimate in the abstract (perhaps this is wise since these presentations are subject to extensive feedback before becoming published papers). Time will tell if Skeie et al revise their number or how robust it is. Here’s to hoping 1) that they make their data available and 2) that the assumptions are solid and can’t be condemned as another handwaving guess at aerosols.
Chris, thanks very much for stopping by and for your comment.
Chris, thanks for responding. My immediate thought on reading Nic’s post yesterday was that even if the output data had gone, it ought to be possible to reproduce it (or something very similar) just by running the code again, so it’s good to see you suggest that this should be possible.
I am afraid though that many here will not be impressed by the statement that it wasn’t feasible to store the data in 2003.
What size of database are we talking about? Even in 2003, I seem to recall 100 GB hard disks not being that uncommon or expensive.
http://www.jcmit.com/diskprice.htm
Sort of makes the budget excuse look a bit shabby.
Re: PE,
Yeah… around $1 a Gigabyte at that point. Compared to the cost of publication, data storage has always been cheap.
Dr. Forest,
With all due respect, your rationalization is not credible.
Maybe check around the locker room in the athletic dep’t for the missing data. Who knows what more got buried in there where the sun doesn’t shine.
@Chris E Forest – thanks very much for responding, and for your further investigation into this issue.
Professor Forest,
Thank you enormously for your clear and courteous response here. From my perspective (a non-specialist, but interested in policy), climate sensitivity is one of the most important parameters that need to be tied down better. Only the damage function is of more importance for policy, IMO.
I sent the following to friends this morning:
Dr. Forest,
This statement, written in passive voice, which assigns responsibility to no one, is the kind of thing that reduces credibility (and of course, delaying for a year and then only responding when called out in public).
Here is a new and improved way to gain credibility back. Say something to the effect of:
“I was not as diligent as I should have been in making sure the requisite resources were allocated for proper archiving when we produced the paper. I will do my best to make up for this oversight by working to ensure the model and inputs necessary to recreate this data are made available to Nic Lewis as soon as possible, and to work with him and others to make sure that the data necessary to support our paper is recreated, checked for accuracy, and archived as it should have been.
Also, I would like to apologize for my earlier lack of responsiveness, a year delay is unacceptable. I realize that now.”
Rather than genuflecting about how the scientific method should work, just step up to the plate, cast off the veil of infallibility so common to climate science and work with others to improve the state of knowledge.
Go Baby Credibility, Go.
==============
The Piltdown Mann’s papers were the tipping point and we expect a credibility free field of science by the summer of 2013. The death spiral of climate science.
===================
P.E.
I think you are much nearer the mark. I think Dr Mann is a competent scientist (not a genius) who came up with a theory that was over-promoted, and he has had to keep on justifying it.
As he probably believes in what he does I don’t think it is fraud.
tonyb
Comment disappeared. WP getting grouchy again?
A free version of the 2011 paper:
http://www.agu.org/journals/gl/gl1122/2011GL049431/2011GL049431.pdf
Steven–
Thanks for supplying the Libardoni and Forest 2011 paper. It appears that the same time series (ending in 1995) was used here as in Forest 2006. Considering that we have just seen another study (Skeie) showing that using more recent data constrains climate sensitivity, can Dr. Forest justify ignoring the 16 years of additional observational data?
It does seem odd that one would stop their data series that long ago. 1995 always reminds me of the stratosphere for some reason, perhaps the lack of cooling since then.
@lolwot
‘The hawaii measurements are bridged by calculations showing the expected decline in pH as a result of elevated atmospheric CO2 levels’
Surely even you can see the circular reasoning in that remark.
Your theory says you ‘expect’ a decline in pH. Where there are no measurements, you insert your ‘expected’ data. And lo! – in a masterpiece of circularity – the inserted data shows the effect you wished to prove.
Wow…I mean wow!
Do you really think that such a piece of twisted logic is going to get past the average teenager, let alone the Climate Etc audience?
BTW – your ‘very frightening’ – but completely anonymous and uncredited graph supposedly of the pH of the last few hundred thousand years doesn’t even match the Hawaii data. It purports to show a recent (last 10K years) pH plummet from 8.2 to 7.8. But the Hawaii figures are still at 8.2.
Got anything better? The stuff you have presented so far is, quite frankly, bollocks.
Actually Latimer, it made more sense than most of your comments.
It is a rare feat for Latimer. Time will tell how sensitive the ocean is to slight changes in acidity.
http://freeport.nassauguardian.net/social_community/311581527447883.php
@webbie
If the oceans ever get anywhere near ‘acidity’ then we are all in a lot of big big trouble. But there is no cause for concern. The oceans are alkaline now and under any conceivable atmospheric concentration of CO2 will remain alkaline for ever.
The only questions are whether they will become just a tad less alkaline (nearer to neutral) if CO2 concentrations increase, and if they do, whether that will have any significant consequences.
So far, there are no real world observations that suggest either effect is actually happening.
WebHubTelescope | June 26, 2012 at 1:55 pm | Reply
“Time will tell how sensitive the ocean is to slight changes in ~~acidity~~ alkalinity.”
Fixed that for ya. Chemistry isn’t your strong suit either, evidently. What is?
Please ignore this comment – which I have reposted in its correct place. WordPress up to its tricks again :-(. Sorry
The business of finding where to put the reply is getting out of hand. Let me start again.
lolwot you write “The excellent global surface temperature data shows about 0.8C warming in the past 100 years. That is a lot compared to how much the Earth typically changes in temperature over such a time frame.”
Here we go again. Once again lolwot claims something is right because he says it is right. What is the peer reviewed reference that proves that 0.8 C rise in temperature over 100 years is “a lot compared to how much the Earth typically changes in temperature over such a time frame.”? I thought we only had good data for about 150 years.
I am thinking of the various temperature reconstructions of the last 1000/2000 years (not the original mann hockeystick I know that’s wrong). That’s 10 or 20 centuries of data and I don’t recall seeing a 0.8C temperature change in any of them – or at least it isn’t a common occurrence.
Okay, sure, you don’t accept those reconstructions, but let’s flip this on its head.
How do you determine what a Catastrophic level of warming is? I look at past changes and compare recent changes to that as a kind of confidence measure that either “okay it’s fine, it’s within natural variation”, or “hmm looks like it might be unusual..could be dangerous”.
What do you do though? You accept 0.8C warming in the past 100 years. Is it really the case that you have no idea what 0.8C warming is in context of what’s normal?
lolwot, can you please link to the reconstructions you’re thinking of?
lolwot, there is no really good way to accurately estimate the peaks or valleys of the reconstructions. On the whole, though, they do give a good indication of averages and shifts. When the reconstructions try to determine changes in sea surface temperature they seem to be most reliable, because SST has less fluctuation and is more pertinent to the problem.
http://i122.photobucket.com/albums/o252/captdallas2/SouthernExtentreconstructionwithGISS24Sto44S.png
That is the Tasmania and Southern South American reconstructions with GISS temp for that latitude. You have a little hockey stick action, but look at the average, in estimated Wm-2, for that chart.
http://i122.photobucket.com/albums/o252/captdallas2/tasmainaSSAHADSST2.png
That is the same two reconstructions with HADSST2, this time in just anomaly; note the average again.
The instrumental is near the peak now, starting from the most recent deep valley. We are at a point where the reconstructions typically start turning down toward average. Is 0.4C above average abnormal, starting from 0.4C below average? If we were approaching a natural thermal limit, would temperatures level off as they approached the limit?
So yes, if you don’t know the average you cannot predict abnormal :) You can only predict abnormal for your limited instrumental time frame.
For anyone interested in this matter, I highly recommend they visit lucia’s site. A bunch of recent posts on it have discussed ways in which temperature reconstructions wind up with deflated variance. lolwot makes an issue of the fact most temperature reconstructions don’t show 0.8 C temperature changes, but that’s necessarily true of many of them simply because of their methodology. The way many reconstructions are made ensures they could never show such changes in the past.
By the way, we should be happy lolwot admits MBH (the original hockey stick) was wrong. That’s more than a lot of people will do. Now, if he would admit the various issues with MBH that show up in reconstruction after reconstruction, he could truly impress us all. Of course, that would necessarily require him reject basically everything the “consensus” says about millennial temperature reconstructions…
lolwot, you write “What do you do though? You accept 0.8C warming in the past 100 years. Is it really the case that you have no idea what 0.8C warming is in context of what’s normal?”
Completely fair questions. Let me try and answer. First, I have read all I can on the subject, and I can find no-one who can explain, in detail, why global surface temperatures change in the way that they do. We know about El Nino, La Nina, the PDO, the AMO, etc., etc., but no-one seems to know what causes these changes. We know that over the millennia, global temperatures have changed considerably, from ice ages to interglacials, and no-one can explain why. Yes, there are all sorts of theories, but no actual explanations. And there are excellent hypotheses which claim to show that extraterrestrial effects dominate in what affects climate.
So, we know that there are factors which change global temperatures, but no-one knows why. I therefore cannot see how anyone can be positive that the changes in any particular time period are not caused by some sort of natural phenomenon which has nothing to do with what the human race is doing. I see no reason why all the temperature variations that are going on cannot be explained by natural forces, with no need to believe in CAGW.
And this is my point. The world climate is such a complex subject that no-one understands it in detail. I look at the hypothetical estimations which have been made by the proponents of CAGW, and the outputs of their non-validated models, and I am not convinced by any of it. There is virtually no measured data whatsoever that connects a change in CO2 concentration with a change in surface temperature. And I see no reason why the good data we have obtained in recent years proves that CO2 is having any effect at all. And nothing you have produced has changed my mind.
Jim Cripwell
In your post, you have summarized very well what many rational skeptics of the IPCC CAGW premise have concluded.
– Warming has occurred over the 20th century (in two statistically indistinguishable 30-year spurts).
– Human emissions of CO2 and atmospheric CO2 concentrations have increased over the same period, especially since the end of WWII
– There is a GH theory, but NO EMPIRICAL EVIDENCE that links the two (in fact, the 30-year mid-century cooling cycle as well as the slight cooling of the past 15 years despite ever increasing CO2 levels tend to FALSIFY the CAGW premise).
– We do not know all there is to know about our planet’s climate and what makes it behave as it does – in fact, our uncertainty is very likely much greater than our knowledge.
– Even IPCC has conceded that its “level of scientific understanding” of natural forcing (solar) is “low” and that clouds remain the “largest source of uncertainty”
These arguments are hard for lolwot (or anyone else) to counter.
Max
lolwot
Surely the instrumental temperature change of 1.5 degrees C from 1700 to 1730 is more impressive? First graph here, from the Met Office:
http://judithcurry.com/2011/12/01/the-long-slow-thaw/
tonyb
tony b
Wow! That early 18th century 30-year warming cycle topped BOTH the two 30-year early and late 20th century warming cycles together!
Max
Max
Yes, I am sure lolwot will be pleased to start using it in preference to his own much more modest example.
tonyb
Knowing the past helps to understand the present
http://www.vukcevic.talktalk.net/CET1690-1960.htm
Chris Forest
Thank you for commenting. Open access to model and analysis codes along with the data sets used in Forest et al 2006 and your other studies would greatly help reproducibility – indeed, both are essential for it. Unfortunately, little if any progress seems to have been made on this since I first asked you for data and code over a year ago. The speed and ease with which Bruno Sanso responded to my requests for data used in Sanso et al 2008 show that providing processed data, at least, is a trivial exercise.
May I ask you to take this opportunity to clear one or two things up? Will you confirm that the processed surface and upper air diagnostic model, observational and control (HadCM2) data that you sent Bruno Sanso, in a 550Kb file named brunodata_May23.tgz, containing a file archive brunodata_May23.tar dated 23/05/2006 18:30, for use in Sanso et al (2008) was the same as that used in Forest 2006, as was stated in Sanso, Forest and Zantedeschi 2008? That statement, incidentally, is only correct as regards the deep ocean diagnostic model data (missing from the tar file archive) if you used the same such data in Forest et al 2006 as you supplied earlier for use in Curry, Sanso and Forest 2005 (that being what was actually used in SFZ 2008). Did you in fact do so?
I’m not sure that I regard Forest et al. 2008 as confirming the results of Forest 2006. The main-result marginal climate sensitivity PDFs for the two studies, using uniform priors, differ substantially in shape. The Forest et al 2008 PDF is much less strongly peaked, and substantially worse constrained between 4K and 8K, than is the Forest et al 2006 PDF. The two studies’ PDFs for Effective Ocean Diffusivity also differ substantially: one peaks at 0.65 cm^2/s and the other at 2.3 cm^2/s (both using uniform priors).
Incidentally, would you like to confirm what were the critical retained eigenvector truncation parameters used in Forest 2008? They were not stated in that paper.
Libardoni and Forest 2011 used an ‘expert’, rather than a uniform, prior for climate sensitivity, so its results are not comparable to those of the 2006 and 2008 studies using a uniform prior. And the shapes of the Libardoni and Forest 2011 marginal posterior climate sensitivity PDFs are very similar to that of the ‘expert’ prior used, indicating weak inference from the data. It is surely the common ‘expert’ prior itself, not the data, that accounts for the similarity of the climate sensitivity PDFs from the 2008 and 2011 studies that are based on that expert prior.
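The prior-dominance point can be illustrated with a toy Bayesian calculation (purely illustrative; the Gaussian shapes and every parameter value below are invented, not those of any of the Forest studies): when the likelihood is nearly flat across the parameter range, the posterior mostly echoes the prior.

```python
import math

# Toy grid of climate sensitivity S (K); all distributions are made up.
grid = [0.5 + i * (10.0 - 0.5) / 499 for i in range(500)]

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

expert_prior = [gauss(s, 3.0, 1.5) for s in grid]  # hypothetical 'expert' prior peaked at 3 K
weak_like = [gauss(s, 2.0, 8.0) for s in grid]     # weakly informative likelihood (very broad)

# Posterior is proportional to prior times likelihood (Bayes' rule)
posterior = [p * l for p, l in zip(expert_prior, weak_like)]
peak = grid[max(range(500), key=lambda i: posterior[i])]
print(round(peak, 2))  # stays near the prior's peak at S = 3
```

With the likelihood this broad, the posterior peak barely moves from the prior's peak, which is the sense in which such results reflect the prior rather than the data.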
I note that the PDFs for Effective Ocean Diffusivity from the 2008 and 2011 studies also differ substantially, with that from Libardoni and Forest 2011 being even further from the PDF in Forest 2006.
Of course, even if your various studies did support each other it would not mean that their conclusions were valid, if the statistical inference methods used in all of them were seriously flawed, as I have reason to believe.
You say that “The raw model simulation output data are potentially available to test problems raised by Mr. Lewis.” May I ask if you have taken steps to make the raw MIT model data used in Forest et al (2008) actually available on a publicly accessible FTP server, along with adequate documentation of the file formats and meta-data, so that myself and any other interested parties may download and use it?
Nic Lewis
This is where the long, long pause comes in.
Hopefully not too long. I’m working on a few responses.
Dr. Forest: Thanks for checking in. We’re looking forward to your responses.
For others: please assume good faith, and please credit Forest for showing up, responding courteously, and promising to help resolve the problems created by his lost dataset. Don’t bite the scientist!
All very reminiscent, NL, of barons of yore, unsaddled(unsettled) lying in the muck helplessly stuck, armored and hampered by iron clothes of peer review, peering out from under their helmet, still bolted in place.
================
pdt, agreed. Let’s not jump on someone who is prepared to engage here. Nic is the best person to raise any concerns re any perceived tardiness or insufficient response, and Chris has raised health and work pressures as reasons for delay. Let us encourage the kind of discourse which can resolve any methodological or scientific issues.
Good for you :)… Although I’m just a simple process control programmer, I really enjoy reading what you folks on this blog have to say. I’m encouraged that you will help in sharing your knowledge. Looking forward to your responses :)
Dr. Forest
Thanks for tuning in here and great to hear that you are working on a response.
It is of critical importance that this be cleared up, so I’m sure everyone here (especially our host and Nic Lewis) is waiting with anticipation.
Max
It’s getting hot. Ain’t it, Chris. I wonder when Darrell Issa will get around to you people.
More likely through the process of legal challenges to the EPA’s unsound regulations.
=========
I see Judith has deleted my recent comment. Are we supposed to baby this guy, in hopes he will come clean (like Jones and Mann have)? Or are we so desperate for condescending participation from any member of the consensus crowd that we have to turn a blind eye to stonewalling and duplicity? He must have had the data when he submitted the paper in 2006. If he didn’t have it a year ago when asked for it, then it must have been lost recently, otherwise he would have mentioned it before now. That’s convenient. He only scrambled here to make lame excuses, because he was called out in public. Don’t you people sense Nic’s exasperation with this character?
My apologies to Judith. Just saw my previous comment. Feel free to punish me for my insolence, Judith.
Most of the commenters, including yourself, have no idea what Effective Ocean Diffusivity is or its significance. The fact that diffusion coefficients can vary over a large range has huge implications for the equilibrium climate sensitivity. The diffusivity is almost entirely responsible for the uncertainty between the transient and equilibrium cases.
And the difference between .65 and 2.3 amounts to not much.
I’ve been gaining an understanding though. And I don’t know how you can say that three-fold difference is not much. Maybe you mean there are much more divergent estimates out there.
billc, I think it means he can’t figure out how to narrow the range of estimates. With a little basic thermo refresher course he would realize there is no perfect heat engine and that entropy also has limits, or we would not be here to discuss the issue :)
WebHubTelescope | June 26, 2012 at 2:20 pm | Reply
“Most of the commenters including yourself have no idea what Effective Ocean Diffusivity is and its significance.”
You’re projecting.
True, I know VERY little of a lot of things. I have never stated otherwise. My comment had to do with when Mr. Nic Lewis might get a response that would put the questions to rest. I could be wrong about that also. We will see :)
WebHubTelescope | June 26, 2012 at 2:20 pm | Reply
“And the difference between .65 and 2.3 amounts to not much.”
Yeah. Like the difference between CO2 being 0.028 percent of the atmosphere and it being 0.039 percent of the atmosphere amounts to not much either. ROFLMAO@U
Dave
you are not supposed to state it like that.
You must say it increased from 280 to 390 ppm, an increase of 39%.
The AGW advocates are building sand castles.
Girma, Dave and WHT
390 ppmv is 39% higher than 280 ppmv
2.3 is 3.5x 0.65 (i.e. 250% higher).
Hmmm…
Max
Actually, it’s a percentage my calculator can’t compute. It has grown from ZERO to 110. The ~280 is still ~280. CO2 doesn’t grow itself.
You guys are a bunch of numerologists.
The effective diffusivity will have the largest effect on the transient sensitivity. The larger the effective diffusivity is, the more temperature changes will be masked by the heat sink that is the ocean. The smaller the effective diffusivity is the less the transient sink effect will be, and the global temperatures will respond more quickly.
The effect it has on the equilibrium sensitivity is more indirect, as the more the ocean can buffer excess heat, the more chance it will give for CO2 to sequester out of the system. These are the interesting questions to ponder.
I would like Nic Lewis to discuss which direction the diffusivity should go to make the uncertainty worse.
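The transient masking effect described in the comment above can be sketched with a toy two-box energy-balance model (an illustration only, not the MIT 2D model used in the Forest studies; every parameter value here is invented for the sketch):

```python
def two_box_response(forcing=3.7, lam=1.2, kappa=0.5, years=70,
                     c_mix=7.0, c_deep=100.0, dt=0.1):
    """Surface warming (K) after `years` under a constant step forcing.
    kappa (W/m^2/K) is a surface-to-deep heat exchange coefficient, a crude
    stand-in for effective ocean diffusivity; c_mix and c_deep are heat
    capacities in W*yr/m^2/K. All values are invented for illustration."""
    t_s = t_d = 0.0                      # surface and deep-ocean temperatures
    for _ in range(int(years / dt)):
        flux_down = kappa * (t_s - t_d)  # heat drawn into the deep ocean
        t_s += (forcing - lam * t_s - flux_down) / c_mix * dt
        t_d += flux_down / c_deep * dt
    return t_s

# Larger "diffusivity" masks more of the transient surface warming:
print(two_box_response(kappa=0.2))  # weak deep-ocean uptake: more warming by year 70
print(two_box_response(kappa=2.0))  # strong uptake: less surface warming so far
```

Both runs head toward the same equilibrium (forcing divided by the feedback parameter), which is the sense in which diffusivity shifts warming in time rather than removing it.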
“Dave
you are not supposed to state it like that.
You must say it increased from 280 to 390 ppm, an increase of 39%.”
Yeah.
Now 100% is doubling.
If one has a certain amount of warming per doubling (pick your number, 1 or 1.6 C or whatever),
shouldn’t one get more warming in the first 1-50% of the increase, rather than in the 51 to 100% part?
And yes, climate models indicate the opposite of this.
So if warming is 1 C per doubling of CO2.
And going from 200 to 400.
At 50%: 300 ppm, one should have 75% of warming- .75 C
And at a 75% increase: 350 ppm, one should have .875 C of the 1 C increase of doubling. Or:
At 25% of increase of CO2: 250 ppm one should have 50% of warming: .5 C.
So a 39% increase should already have given more than half of the expected warming from a doubling of CO2.
Or when reached 350 ppm that is 25% increase of 280 ppm.
So if doubling 280 ppm of CO2 gives 1.2 C, then we get .6 C by the time
we reached 350.
Which was around 1990. And when reach 50% should total of .9 C.
And we had half of this difference: .3 C and 39%, so a .15 C increase
from the year 1990.
So since the 280 ppm, if by 560 ppm level we get 1.2 C of warming,
we already got at current level of the 39% increase .75 C of this 1.2 C,
meaning will get another .45 C.
Hmm. Less increase than I expected.
This could explain warming pre 1950. Or since say, 1850 warming could be caused by CO2. Too bad we don’t have an accurate measurement of CO2 prior to 1959.
Wasn’t my point, but I seem to have stumbled upon an alternative hypothesis – involving CO2 [no less].
Half the warming occurs for each factor of root 2 (1.41). So 41% gives you half the doubling effect, but this is just the transient part, as some goes into OHC first and contributes at the surface later.
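Jim D’s square-root-of-two point follows from the logarithmic forcing law: the fraction of the per-doubling effect realized at concentration ratio r is log2(r). A quick numerical check (illustrative only; it says nothing about the size of the per-doubling warming itself):

```python
import math

def fraction_of_doubling(ratio):
    """Fraction of the per-doubling forcing realized when the concentration
    has grown by the given ratio, under the standard logarithmic law."""
    return math.log(ratio, 2)

print(fraction_of_doubling(1.41))       # ~0.50: a 41% rise gives half the doubling effect
print(fraction_of_doubling(390 / 280))  # ~0.48 for the 280 -> 390 ppm rise
```

So the 39% increase discussed upthread corresponds to a bit under half of one doubling's forcing, not three-quarters of it.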
“Half the warming occurs for each factor of root 2 (1.41). So 41% gives you half the doubling effect,…”
That seems closer to being right in terms of math.
But what if you drag some the CAGW stuff into this- or you get more amplification in the start of increasing levels of CO2.
So if CO2 at 41% gives half the doubling affect, what about other greenhouse gases, such as water vapor?
Can I assume that stronger greenhouse gases [CO2 being weak] diminish more when concentrations double?
Maybe they didn’t see increasing water vapor because most of the increase in water vapor had already occurred.
Yeah it doesn’t give you the scary runaway effect, but fits the past history of the planet better. It acts as negative feedback of a sorts.
And it explains why Earth, given its much warmer past periods, hasn’t already had a runaway effect.
And why even start with 280 ppm? According to ice cores it isn’t an average level. What if instead of 280 you pick, say, 240 ppm of CO2?
So the doubling of 240 ppm is 480 ppm. And 240 ppm increased by 41% is 338.
I am not a big fan of the whole greenhouse thing, it seems like a hypothesis that could be tested.
gbaikie, with the constant relative humidity assumption (and it is only an assumption), the water vapor effect is proportional to the CO2 effect. My own expectation is that RH won’t stay steady in a transient climate because we already see that the land is warming faster than the ocean, which implies to me that RH must decrease. Note this is not a good thing for land as soils will dry out, and drier soil leads to more heating. At equilibrium when the CO2 has stopped increasing for a while, the ocean may catch up to the land and the water vapor may catch up too turning the arid transient climate to a more muggy tropical one (like the Cretaceous).
280 ppm is the steady value in the interglacials, while 190 ppm is the value in the ice ages. The climate has had a bimodal state for the last few million years, but the CO2 now is already well above what it was when this state started, so I think we are back to the state 10-20 million years ago with limited sized glaciers that won’t grow any more.
“gbaikie, with the constant relative humidity assumption (and it is only an assumption), the water vapor effect is proportional to the CO2 effect. ”
I think there is a fair amount of evidence that cooler periods tend to be drier, but I don’t at the moment recall anything about whether the tropics were drier during cooler periods. Oh, here we go:
“The series of ice ages that occurred between 2.4 million and 10,000 years ago had a dramatic effect on the climate and the life forms in the tropics. During each glacial period the tropics became both cooler and drier, turning some areas of tropical rain forest into dry seasonal forest or savanna. For reasons associated with local topography, geography, and climate, some areas of forest escaped the dry periods, and acted as refuges for forest biota. During subsequent interglacials, when humid conditions returned to the tropics, the forests expanded and were repopulated by plants and animals from the species-rich refuges.”
http://science.jrank.org/pages/3502/Ice-Age-Refuges.html
So the tropics and temperate zones tend to be drier during cooler periods.
“The climate has had a bimodal state for the last few million years, but the CO2 now is already well above what it was when this state started, so I think we are back to the state 10-20 million years ago with limited sized glaciers that won’t grow any more.”
I don’t think we have left the Ice Age, but I do believe we will lose more glacial ice, and the oceans, given time before another ice age begins, will tend to get warmer. But before we (probably) enter another glacial period, humans might be able to change the planet’s global climate in any way they wish.
I can’t imagine humans wanting to enter the conditions of a glacial period – unless skiing becomes very popular.
Girma, are you Girma Orssengo? I have been looking for you.
I liked your graph very much……
http://wattsupwiththat.files.wordpress.com/2010/04/orssengo3
I just think that the warming phase ended in 1994/5
http://www.letterdash.com/henryp/global-cooling-is-here
What do you think of my analysis? Would love to hear from you.
Henry
I think it instructive to look at the oldest database in the world to put the current cooling into context. Half a degree C in a decade
http://www.metoffice.gov.uk/hadobs/hadcet/
We are currently at the same temperature levels as the 1730s. Gardeners here in the UK will be able to track the downturn by what plants it is no longer possible to grow. My outdoor tomatoes have not been successful for around four years, and my succulents have virtually all been killed over the last few years.
It’s too soon to say it’s a reversal of the sporadic warming trend we can trace back to 1660, but it’s certainly interesting.
tonyb
tonyb,
are you Orssengo?
The figures for central England certainly confirm the trend I find but it cannot be regarded as global. Globally, it is not that bad – I hope.
Note the development of maxima in my sample:
http://www.letterdash.com/henryp/global-cooling-is-here
You can use other regressions than the one I suggested there, but you still end up finding that the sun has grown weaker since about 1994/5.
that means the orssengo graph
http://wattsupwiththat.files.wordpress.com/2010/04/orssengo3
may have to be adjusted a little bit.
regards, Henry
JimD mentioned that the water vapor effect based on the constant relative humidity assumption would be a positive feedback to CO2.
Interesting thing about that assumption. At first glance it is valid, but it has another side: it provides an upper limit to sea surface temperature. Since the minimum SST is limited by the freezing point, with RH limited, the volume of moist air has to expand or contract. The lapse rate feedback. But the rate of expansion is limited by the density of air and the rate of diffusion. It is like a pressure relief valve.
Check out the AQUA SST data. With an annual solar TSI variation of about 80 Wm-2, the SST only varies about 0.9 to 1.0 C. That is a pretty fast feedback.
If you have a reasonable estimate of a maximum SST, you can fine tune a sensitivity estimate to forcing. So why stick to 30 year old talking points when new data is available?
Henry
No I am not orssengo. I just like historic data sets as they tell us a lot. Just click on my name to see my site.
As I demonstrated in my previous article, CET is considered by many to be some sort of reasonable indicator for global temperatures, although as Hubert Lamb remarked of temperatures derived from trends and reconstructions in general:
‘They can show only the tendency, not the precision.’
By any measure it has cooled off here in the UK during the past decade, and this current decade is now rather similar to the period 1710-1740. Whether this is a long-term shift is too early to say. If it is, it would be remarkable, as this current warming period can be traced back to 1660 with numerous advances and retreats.
What is slightly worrying is that the current temperatures are probably overcooked, as the Met Office only make a small allowance for UHI. I suspect we could in reality take 0.2 C off the peak reached around the year 2000, which puts past eras into an altogether different context.
tonyb
Captain is mystified by how control systems work. That is not a fast feedback, it is a strong lag term. How ever did you make it through an engineering degree? Or did you just go to a trade school and get HVAC certified?
Web, do you really enjoy making an idiot out of yourself? The Aqua data just shows the response time and the set point. Determining the other lags is easier with a good base reference. If you had the least bit of curiosity you would look at the system and realize that 294.25 is a SST set point and that it defines the diffusion rates. From that set point using standard barometric pressure, which CO2 will not change, you can determine the range of temperatures possible in the system.
You could be somebody, Web – actually solve a problem instead of creating one :)
Give it up Captain, you are out of your element. A fast feedback is the opposite of a lagged response. Controls engineering is not your strong suit.
CD, I am not sure what you are saying, but it may be that you say it is impossible for climate to be a few degrees warmer, as it was in past eras. Are you saying the Mesozoic era was not much warmer despite the large tropical areas and lack of ice, or that it operated under different rules from now?
JimD, I am just saying that the ocean temperature doesn’t change much; climate can vary by a degree or so in the tropics to about two degrees in the temperate zones. The temperate zone, though, expands and contracts with ocean heat content. As glaciers advance, global average temperature would drop, but average SST would remain pretty steady. That means that tree line advance and retreat would be a much better indication of past conditions than trying to estimate an average global surface temperature. So if you are trying to determine the impact of CO2 on OHC, you are pissing up a rope. The OHC has its own controls in the water cycle; the radiant impact is on the fringes of the moisture envelope.
Web, http://redneckphysics.blogspot.com/2012/06/follow-energy.html
I actually did a lot of work on controls when the shift from analog to digital started. One of the problems was that digital controls were too fast. I had to go back to the physical processes to determine the energy capacities required to maintain the desired process and then how to slow down the control system to regain stability. So I started with the slowest processes and worked out. Conductive/diffusive and latent are the slower processes. Balance those boxes then work your way out :)
WHT, are you really trying to claim that Nic does not know what diffusivity is? I can assure you that he does (the Maths/Physics degree at Cambridge is a bit of a hint) and comments like this just make you look a fool.
Web is just being consistent.
This Nic doesn’t hold a candle to Nick Stokes, an amateur climate guy who does an excellent job!
See his work here: http://moyhu.blogspot.com/
The observable effects are proportional to the square root of the diffusivity. The second point is that the real spread is between the rather large eddy diffusivity and the smaller but endemic thermal conductivity. What Lewis is concerned about needs to be discussed in that context. Effective diffusivities can lead to lags on the order of centuries, very similar to the adjustment time of CO2.
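The century-scale lags mentioned here can be gauged with the standard diffusive timescale tau ~ L^2 / kappa (a back-of-envelope sketch only; the 1 km depth is an arbitrary choice, and the two kappa values are the PDF peaks quoted earlier in the thread):

```python
SECONDS_PER_YEAR = 3.156e7

def diffusive_lag_years(depth_m, kappa_cm2_s):
    """Rough time for heat to diffuse to depth_m, via tau ~ L^2 / kappa."""
    depth_cm = depth_m * 100.0
    return depth_cm ** 2 / kappa_cm2_s / SECONDS_PER_YEAR

# The two PDF peaks quoted upthread, for an illustrative 1 km depth:
print(diffusive_lag_years(1000, 0.65))  # ~500 years
print(diffusive_lag_years(1000, 2.3))   # ~140 years
```

Both values land in the multi-century range, consistent with the "lags on the order of centuries" claim, and the roughly 3.5x spread in kappa translates into a 3.5x spread in lag.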
I made the challenge to find out if people want to discuss real physics or stand around like a bunch of poseurs gawking over a guy trying to manipulate a college professor.
Frankly, it reminds me of a bunch of kids egging on a schoolyard fight. I’d rather make it fair.
Then you realize that it is the change in the rate of diffusion that is the question. What change would have a greater impact, the changing volume of annual sea variation, the change in the average ocean surface wind velocity or 2C warming over Russia?
The more I read in this and other fields, the more I’m starting to think that people’s work is almost never double-checked (even when it is possible), and on the odd occasion that it is, it is almost never correct.
Hate to sink deeper into cynicism, but I’m starting to seriously consider this to be a not so loony statement:
“Most Academic Research Is Low-Value Rent-Seeking Data-Massaging Justification For More Low-Value Rent-Seeking Data-Massaging”
http://www.postlibertarian.com/2012/06/why-is-2008-the-worst-year-for-banking-crises/
robin,
You may be optimistic.
And this corruption of science is, sadly, a symptom of a much larger malady.
We’ve had “publish or perish” for decades. We’ve had the education bubble for at least 15 years. All of these solutions in search of problems. I don’t know how you expect anything else. Cynical or not, it’s the inevitable outcome of the system that’s been set up. Not every physics professor at every little liberal arts college can publish a Theory of Relativity. There’s only so much real cutting-edge science to be done. Then there’s the other 99%.
P.E.
You make a good point.
In the past (before they screwed it up) climate scientists enjoyed the same respect (even awe) in the general public as all research scientists did.
There were Nobel Peace Prizes and an Oscar (for a supposedly climate science-based documentary film, “AIT”).
But in climate science, as in all professions, there are a few good individuals, a lot of mediocre ones and a handful of bad ones (either simply incompetent or driven by some other agenda as advocates for a cause).
This latter small group has caused ALL climate scientists to come into general disrepute (polls have shown that 70% of the US public thinks they are manipulating the data!).
IOW, the public trust in climate science has been lost – and lost trust is a very hard thing to regain.
It will take a lot of “mea culpas”, open admissions of errors, exaggerations or misrepresentations and complete transparency.
IMO it does not appear that IPCC is prepared to make the changes needed to regain this trust (AR5 looks like a re-hash of AR4, with possibly even more use of doubtful “gray” literature and the same old “party line” CAGW message backed by the “consensus process”).
Too bad for climate science in general and for the few top-notch climate scientists out there.
But, being an optimist, I still see a chance.
Maybe a few straight-shooting individuals, who are recognized as top climate scientists, such as our host here, can help regain the general reputation of climate science through open communication in venues such as this site, essentially making the IPCC irrelevant in the long run.
Max
Socialism doesn’t work if you want farming, or making cars, or science.
The best example of socialism working is the Apollo program.
Hence a reason politicians were fond of saying, if you can go to the Moon, you can do…….. And it’s not true. And it’s a failure to understand.
There are many reasons why the Apollo program worked. But a single aspect of it is that there wasn’t enough time to include the “wisdom” of politicians.
Or socialism can work, if you are in a hurry.
And to be in a hurry, you need good people..
Professor Forest,
Thanks for making an appearance. It’s hard for many of us to understand how the data for such an influential paper can just disappear. I see you made a few comments re data storage above, and perhaps you can enlarge on that a bit for those of us who don’t know much about the issue.
How influential was it?
And why did it take over 1 year to tell?
At least the guy has answered something; he’s an angel compared to Mann o man LOL
The loss of the data is not the problem. It is the continued failure to admit that the data was never worth keeping.
First there is the fraud – the coming out with unsupportable findings and passing them off as truth with a straight face. The total lack of conscience should amaze anyone.
Then there is the cover-up. That is the Go Directly to Jail card.
And then, as we have seen in so many other instances there is zero accountability. Academia provides cover.
Academia has gone full circle. It is now like the Catholic church: home to the defrocked priests of Warmanism.
You are right, Wagathon.
Postmodern science dogma has replaced religious dogma of the past.
Out of fear of destruction by “nuclear fires” in 1945, frightened world leaders again started promoting official models of reality over observations of reality, as was the case before the advances made by Copernicus and Galileo.
Frightened world leaders thus reversed the scientific method Copernicus used to discover a fountain of energy in the center of the solar system (Sol) in 1543:
http://tinyurl.com/7qx7zxs
The fountain of energy that sustains life on Earth and exerts dominant control over all the planets, especially the tiny, rocky planets close to the Sun – Mercury, Venus, Earth and Mars.
They also betrayed Galileo’s brave commitment to give the public observational reality instead of the official model of a geo-centric universe favored by ego-centric leaders of nations and religions in the 1600s.
Despite noble intentions to “save the world”, the decisions made in 1945 started society’s slide toward a tyrannical government like George Orwell described in a futuristic book written in 1948 and entitled “1984.”
http://www.online-literature.com/orwell/1984/
The results of decisions made in 1945 were finally exposed in Climategate emails and documents that were released in November 2009
http://joannenova.com.au/global-warming/climategate-30-year-timeline/
my bet there will be no reply
Nic Lewis seems to have developed an interest in climate science after an unspecified career which has been “outside academia.”
Two or three years ago, he says, he returned to his original scientific and mathematical pursuits and, becoming interested in the “controversy surrounding AGW”, started to learn about climate science.
This would be all highly commendable if his motives were purely scientific and he actually started his new career with an open mind and a genuine quest for scientific knowledge .
However, and call me cynical if you like, but I’d say Nic’s motives were anything but pure. I’d say his motives are ideologically based and he started off deciding to do what he could to discredit the scientific case.
If I’m wrong in saying that maybe he can tell us more about himself. Just what was his other career. Just what are his political viewpoints?
But if I’m right, his work has no more significance than a religious fundamentalist who’s become interested in the controversy about Evolution and decided to learn about Darwinian theory purely to find fault with it.
You are cynical. You said I could so don’t complain.
That’s very true!
Without making any effort to discuss Nic Lewis’s post, a post the person criticized in it acknowledges holds merit, tempterrain makes a comment here smearing Lewis with baseless insults. Baseless insults he acknowledges he lacks the information to justify. And just twenty minutes prior, he has the temerity to tell another individual:
Then again, I guess nobody should be surprised. After all, he did say Skeptical Science “is always a good place to start.”
As an aside, when tempterrain promoted Skeptical Science like I mentioned, he pointed to this article. It amuses me because an article written the next year directly contradicts it. The two articles say:
Despite having the exact same wording, the two sentences give significantly different values for the same thing. Fortunately, a third article from a few months later clarifies:
Oh wait. If you check the source for the second article, you see that clarification isn’t right:
Apparently Skeptical Science has no problem using sources with significantly different results without pointing out those differences. It’s perfectly willing to contradict itself, for no apparent reason. That, or we’re to believe that despite using the exact same wording in two sentences, somehow the authors failed to notice the most important part of the sentences were different.
My regular complaints about that site are more serious (like the repeated dishonesty in graphs), but I couldn’t resist commenting on this when I saw tempterrain’s link. Dishonesty in an advocacy site claiming to provide simple explanations for the masses is one thing, but incompetence…
Come on, Brandon. When the numbers don’t agree, you are supposed to go with the average. I am sure they meant 49%. Or, if you prefer, 0.49! The latter has a look of accuracy about it. I would go with that.
Actually, I know exactly where the numbers came from, as well as why they are what they are. They aren’t particularly unreasonable. Skeptical Science just made a stupid and inexplicable mistake because someone didn’t bother to learn much about what was being discussed.
That number doesn’t represent a fixed fraction of CO2 that remains in the active carbon cycle. Rather, it is a ratio arrived at by balancing the rate of emissions against the rate of sequestration. It will change over time depending on the rate of change in emissions year-to-year.
I actually have done the sequestration modeling and see a number around 40 to 50% as well. Of course, the people that have not tried the modeling can assert that someone is being inconsistent. That is their right — the right of the ignorant to say whatever pops into their head.
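The year-to-year ratio being argued over here can be sketched as a simple computation. The figures below are illustrative placeholders, not real emissions data, and the function name is my own; the point is only that the airborne fraction is a per-year ratio, not a fixed constant:

```python
# Sketch of the "airborne fraction" idea: the ratio of the annual
# atmospheric CO2 increase to annual emissions. It is not a fixed
# constant; it varies with the year-to-year balance between emissions
# and sequestration. All numbers below are hypothetical.

def airborne_fraction(emissions_gtc, atmos_increase_gtc):
    """Fraction of emitted CO2 that remains in the atmosphere that year."""
    return atmos_increase_gtc / emissions_gtc

# Hypothetical yearly values: (GtC emitted, GtC retained in atmosphere)
years = [(8.0, 3.6), (9.0, 4.1), (10.0, 4.3)]

fractions = [airborne_fraction(e, a) for e, a in years]
# Each illustrative year lands in the roughly 40-50% range discussed above
print([round(f, 2) for f in fractions])
```

Run on real emissions and concentration series, the ratio bounces around from year to year, which is why quoting a single percentage without context invites exactly this kind of dispute.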
WebHubTelescope implicitly insults me, claiming I’m ignorant and saying “whatever pops into [my] head,” because I said these two sentences are inconsistent:
Trying to avoid insulting someone here is kind of like trying not to breathe.
I can’t help it if you don’t understand the physics of diffusion and sequestration.
I like that WebHubTelescope thinks breaking the site’s rules is perfectly natural:
After all, if he doesn’t insult people here, he’ll die! That certainly explains why he insults people for nonsensical reasons all the time.
Nonsense comes from the climate clowns who reside here. They come piling out of the clown car and you can’t even move without tripping over them.
You see Brandon, webby done did the modeling. He says it’s either 40 or 50%. When he becomes more expert with the climate science tactics, he will add the two guesses together and divide by two to get the highly accurate, most robustest number fit for IPCC propaganda purposes.
@don
Webbie’s such a clever guy.
Turns out he’s already done the modelling. But he’s so shy and embarrassed that he’s not going to tell us where he documented it. Or exactly what it said. But he just wants us to know that he’s done it.
Ain’t that cute!
Unlike the stuff here, you can actually find it. Thanks for the plug :
http://TheOilConundrum.com
Table of contents, look up thermal. Hot stuff.
Brandon,
It’s not the first time that I’ve asked about Nic’s previous career, which he seems reluctant to specify. It’s not the first time I’ve asked if his original, and current, motivation was scientific or political.
The only sensible conclusion is that my cynicism about this individual is entirely justified.
I note that your bio in the Denizens thread is equally sparse on details of your career post University.
http://judithcurry.com/2010/11/12/the-denizens-of-climate-etc/#comment-79185
Are we to draw the same implications about your motivations as you do about Nic Lewis’s?
Or should we all just get on with arguing about the science and what it shows (or doesn’t) rather than being distracted by irrelevancies about motivations?
@tempterrain
Your denizens entry also has this quite remarkable sentence
‘Er, well Jim, I can see you studied Physics, just like me, but I’m sure we would have both been taught to look at the evidence first and then decide later!’
Seems to me that in the case of Nic Lewis and his challenge to the Forest paper, you would be wise to heed your own advice.
Except that I’m accusing Nic of deciding that the consensus on Climate change was wrong first, and that he decided to look for evidence of why later.
@tempterrain
‘I’m accusing Nic of deciding that the consensus on Climate change was wrong first, and that he decided to look for evidence of why later’
Even if that were true – and I have no reason to believe that it is – exactly how would it affect the substance of his criticism of Forest’s paper? Would the deficiencies in Forest et al miraculously vanish overnight?
Would Mother Nature change her behaviour because Lewis was not ‘pure in heart’? Would Chris F suddenly find the ‘lost’ data and prove his work was ‘right’ all over again? Or should we all just ignore it all because Forest could only be challenged by ‘approved’ commentators?
Peter…its a long time since you did ‘science’. A refresher course seems like a good idea.
Brandon,
That is all the believers have: smears and deception.
I don’t normally care to respond to comments like this, but I’m curious about something hunter. When you say “believers,” are you thinking of all people who believe global warming is a serious threat, or are you intentionally trying to refer to a specific segment of that group?
If you intend the former, I’d say it’s a pathetic broad-brush smear. If you intend the latter, you’re basically right, but you should probably be more clear.
I’m not sure that my so-called insults were actually baseless. I’ve been called evil during the course of this discussion but no-one has actually accused me of being wrong. I can live with “evil” but if I’m shown to be wrong – well that really does hurt!
As a fellow troll, it’s hard to be wrong wrt other commentary here. It’s predominately opinions slanted toward attacking supposedly communist or fascist tendencies of climate scientists, mixed with clownish theories about fundamental science and an absurd belief in anecdotal evidence over statistical physics. That has to be all agenda-driven, driven by the techniques of projection, FUD, and twisted logic.
I find it fascinating that there aren’t more people with the common sense of someone like Mosher making commentary. My own feeling is that it takes some commitment to make useful contributions, something most people aren’t willing to do and are actually directly opposed to, if driven by a political agenda.
Tempterrain,
Can you see how silly this statement you made is:
Shouldn’t all scientists be trying to discredit the theory, (or any scientific theory for that matter)? Isn’t that what scientists are supposed to do.
The problem is that the scientists have not been doing the job. If they’d been doing it properly we wouldn’t need the contribution of unpaid volunteers like Steve McIntyre, Anthony Watts, and Nic Lewis to do the work the scientists should have done themselves.
It is the CAGW alarmists and the scientists that have become advocates for a cause who are the problem. They have been pushing an ideological belief instead of doing the job of scientists. These people include the likes of James Hansen, Michael Mann, Phil Jones and the many others who became advocates for a cause along the way.
We could also include the advocacy web sites run by PR organisations and “communications specialists” such as ‘RealClimate’ and ‘Skeptical Science’.
“…….Isn’t that what scientists are supposed to do?”
Not quite. Yes, they are supposed to argue and discuss, find mistakes, discuss alternative ideas etc.
But they shouldn’t act like lawyers arguing for the merits of a particular side of the argument. Scientists shouldn’t argue in a partisan fashion.
Admittedly, sometimes it’s difficult to know whether that is happening, and there is always the effect of human nature to be considered. Once someone stakes out their argument there is a natural tendency to defend that position for longer than is perhaps scientifically justified. That happens in all scientific discussion.
However, there are certain scientific topics which are affected by factors of politics and religion. The most obvious example in each case is Climate science and Evolutionary theory.
We have ideologues on the right attacking the consensus position on the former and religious fundamentalists on the latter.
So it is quite relevant to ask any individual whether their criticisms are motivated by either of these factors.
Raising this question shouldn’t be regarded, or dismissed, as smear tactics in the way Brandon Shollenberger did.
tempterrain:
Actually, it should be. Because it doesn’t matter what a person’s beliefs are. If someone raises valid issues, which even a person Nic Lewis criticized in this post acknowledges he did, the issue should be discussed, regardless of who that person is.
It’d be different if you asked for more information in addition to discussing the issues Nic Lewis raised, but you didn’t. You asked for that information in spite of the issues he raised. It’s nothing more than a way of shutting down conversations.
But by all means, feel free to defend insulting a person without any information to back up your insults.
Agreed. But that is exactly what the advocacy sites like Skeptical Science and RealClimate do all the time. They are run by “communications specialists” and PR organisations. They are specialists at spin. How do we get to the truth when this is going on? And that’s just the start of it. The IPCC, the journal editors, the government-funded advocacy scientists are all part of it.
You need to consider the possibility that Skeptical Science, and all those advocating for the mainstream scientific position, aren’t so much partisan as arguing for a scientific position which they believe to be correct. The two aren’t the same thing at all.
@tempterrain
‘You need to consider the possibility that Skeptical Science, and all those advocating for the mainstream scientific position, aren’t so much partisan as arguing for a scientific position which they believe to be correct.’
So how come you believe this to be true for John Cook or Gavin Schmidt or Joe Romm, but not for me or Nic Lewis or Steve McIntyre? Please explain. Do they have some special ‘virtue’ gene that the rest of us lack?
There may be one or two exceptions but, generally speaking, the lack of virtue in the so-called ‘skeptical’ community derives from a strong adherence to very right-wing conservative or “libertarian” political beliefs. Those beliefs aren’t really compatible with the sort of action which would be needed to address the climate change issue. It’s probably an intrinsic part of human nature to find it very difficult to change longstanding personal beliefs.
For you Latimer?
you are seriously delusional. Anywhere else but this blog a repeat offending sockpuppet would be banished. The sockpuppet would be shamed the same as a person committing voter fraud would. Not here though.
I suppose that’s OK as this place is totally disconnected from internet indexing.
This is essentially a Twitter feed with no scientific controls on what constitutes the truth and what constitutes deception. Fortunately it all goes into the void as Curry has intentionally made a playpen for inconsequential babbling. The difference is that you get more than 140 characters to say nothing. What a deal.
@webbie
Thanks for another content-free rant.
It is odd that you spend so much of your time – and waste so much of ours – at a place you dislike so much.
Webby is a control freak. He wants everything indexed with “scientific controls” on what constitutes the truth and what constitutes deception, so he don’t have to think for himself. He must hate this blog (and science in general).
@edm
The real reason Webbie wants everything ‘indexed’ is because his Mum, Wilma, has told him to tidy his bedroom. Or there’ll be no supper.
Serious stuff!
Latimer, then I understand him very well. Between supper and no supper, I would choose supper too.
Edim, Bad assumption. I hate the blog’s comments. The blog itself is perfectly fine.
Science is about the pursuit of truth, yet Edim thinks I am a control freak about “what constitutes the truth”. That’s why they are called *controlled* scientific experiments and experimental *controls*. If you are an experimental scientist, unless you are a control freak about what you can control, you will go down in flames. Edim has flunked logic once again.
So when Mann argues in a partisan fashion that McIntyre is an oil shill, when he speculates in his book that McIntyre had something to do with the hacking, where do you stand? Is he a scientist?
temp, your entire argument is silly. Of course people are trying to poke holes in the “consensus” position, although I’m not sure what the consensus is since estimates of climate sensitivity have a very wide range. Did those scientists predicting doom unless the energy base was completely adjusted expect any different? Are they that removed from reality that they wouldn’t just know that everything they based their projections on would be questioned and double checked? They should be not only prepared for but eager for others to look at their work so that they can show the facts support their argument. Just as a side note the world will be ending tomorrow. There is an asteroid heading straight for us. Don’t ask to see my data. I tossed it.
If I made the same argument that Creationists should not try to pick holes in Darwinian Evolutionary theory, would you think that was “silly’ too?
What difference does it make to me if the creationists wish to try to pick holes in the theory of evolution? None at all as far as I can tell. Perhaps when they start saying I have no choice but to go to church every sunday because of creationism I will be more concerned. The last time I tried that I almost got struck by lightning.
Before the climate issue became prominent I would have agreed with your sentiment. Since then, I’d say the Creationists have actually helped us in our understanding of the human psyche. When faced with a choice between the rational and a personal belief system, whether it be political or religious , people don’t always choose the rational.
Anything about the argument between creationism and evolution has nothing to do with what we are discussing. In one nobody is asking for anything other than an opinion and everyone has the right to their own. In the other there are demands being made on everyone. In one we have eternity to figure it out. In the other there is a time limit, or not, depending on what position you take. Since it is those making the demands that declare there is a time limit, I consider it their burden to support their argument. You can’t support your argument with declarations. Not having your data is no more convincing an argument than my asteroid argument. Do you deny there are asteroids?
Jeez, not bloody creationism all over again.
It may be different in Oz, but in my 50 odd years in UK I have never met a creationist, never heard one speak, nor ever heard anybody express a creationist viewpoint – bar a few happyclappys at University many years ago. Even my devoutly Catholic g/f says that ‘nobody believes that old crap any more’. It is a non-issue here.
If your argument is so thin that you have to rely on analogies with creationism, you need to reconsider your position. Or try ‘Heavens Gate’ cultists instead ..or Koresh or the guys at Jonestown……….
“nobody believes that old crap any more”
They’ll say that about climate ‘skepticism’ (ie. it’s natural variability, it’s the sun,it’s cosmic rays, it’s the PDO, CS is zero, etc etc etc) one day too.
Latimer, they have issues resolving their own internal conflicts and have to lash out. They are not equipped to deal with complexity. They are not bad people, they just say hurtful things :)
” but in my 50 odd years in UK I have never met a creationist, never heard one speak, nor ever heard anybody express a creationist viewpoint ”
I do miss the UK There’s not many other places in the world where anyone can say that!
“…. but you didn’t”
As a matter of fact I did first ask Nic for information nearly a year ago now and so far he’s declined to answer. Except that I would suggest that silence is itself quite informative! See my question under his denizens entry. There’s one still there – another was deleted.
A person shouldn’t regard questions, regarding motivation, as an insult. If I’m wrong in my suspicions let him say so himself.
tt says: “As a matter of fact I did first ask Nic for information nearly a year ago now and so far he’s declined to answer. Except that I would suggest that silence is itself quite informative!”
This is witch trial reasoning: If Nic says “As a matter of fact my motivations are good,” you wouldn’t believe him anyway; if he says “my motives are suspect,” you say “aha!” And if he doesn’t say anything, his silence speaks volumes.
Ever heard of self-validating theories?
Well he could start by telling us about his mysterious “outside academia” career!
What’s so hard about that?
He has disclosed exactly as much about his post-University career as you have.
Latimer,
This isn’t the first time I’ve posted up information about myself. I’m an electronics engineer by background and this is the company I run which started in 2001.
http://www.rfshop.com.au/
In the spirit in which you address Nic Lewis, can you prove that this is your only source of employment? That you are in fact ‘Peter Martin’?
Running an ‘electronics’ shop could be a front for all sorts of undesirable activities…..please prove that it isn’t.
Am I being lectured, on undesirable activities, by a self confessed “Witchfinder General” ?
@tempterrain
Not lectured. Just making a point.
Sauce for the goose etc……
” Running an ‘electronics’ shop could be a front for all sorts of undesirable activities”
Well it all depends on what you mean by undesirable. Things may have got slightly out of hand at the last RFShop Christmas party, but I’d say that was more due to desire, rather than “undesire” :-)
tt, what biography could Nic write for you, that would cause you to give up the belief that his motives, rather than his reasoning, are what cause him to reach the conclusions he does? And, if there is no such biography, then isn’t your theory of arguments more of an irrefutable theology than a sound psychological theory?
NW, Maybe Nic Lewis can show us some posts he made denouncing some of the sillier arguments made by his ‘climate sceptic’ ideas.
Maybe he can show he’s been a life long member of the UK’s Liberal Democratic party! A lifelong member of Greenpeace?
Look, I don’t know. Its hard to say until we see something.
Why am I not holding my breath on that one? I do get the feeling that Nic may well feel his story isn’t going to sound too good.
NW, I won’t really know that until I see something. I’m sure we are all capable of knowing when something has a ring of truth about it. You know I’m right really. You just don’t like me saying it!
So you go with your gut instinct if something rings true to you, eh Temp?
How is your gut calibrated?
tt – NicL was coauthor on ODonnell et al’s response to Steig’s Antarctic paper. Not sure where he commented before but he showed up at JeffId’s with some very on point comments spotting problems in some of the critique. FWIW he has always struck me as an independent mind bringing his skills to an area much in need of some informed input.
****//****
I wonder how you’d react if you saw somebody advancing peer reviewed science that was fundamentally flawed in your own area of expertise? Would you put the hours in to bottom it out or let it go? How about if it also had major repercussions for many people’s lifestyles? Maybe something like a correlation between spread of mobile communication technology and increasing GAT leading to proposed extensive taxation and regulation of the sector might spark your interest?
****//*****
Example of Nic’s previous input here:
http://noconsensus.wordpress.com/2009/07/15/effects-of-surface-trends-on-antarctic-reconstruction/
I always try to avoid the phrase “gut instinct”. If anyone else uses it, I often can’t resist the temptation to tell them what their guts are full of!
NW,
tt cannot deal with the message, so he must go after the messenger.
Ignore him. He is trolling and doing the trollish thing: trying to hijack the thread by means of fibbing and cowardice.
“tt cannot deal with the message, so he must go after the messenger.” The messenger is the worldwide body of climate scientists who have reported that allowing CO2 and other greenhouse gases to rise uncontrollably is going to cause increasing climate problems in the coming decades and centuries.
Yes we do have to deal with that message. But there are those who can’t and do resort to challenging the messengers credibility. Is Nic Lewis one of them?
How dare you be sceptical.
How dare you!
tempterrain:
The page you’re talking about explicitly says:
You apparently had at least one comment deleted for violating this simple and explicitly stated rule, yet you claim it is informative Nic Lewis didn’t respond to you breaking the rule yet again. Personally, I’d say it’s more likely it indicates he never expected to get questions asked of him on a page where such questions were explicitly forbidden.
Nevermind the fact you just grossly misrepresented my remark. You took the last three words out of this sentence:
Anyone can see I explicitly said you didn’t ask for more information in addition to discussing the issues Nic Lewis raised. Despite this, you cut out almost the entirety of my comment, making it impossible for anyone to tell what I actually said. You did this while responding in another fork, meaning it wasn’t even clear to readers where my previous remark was. In fact, you didn’t even say who you were responding to. All of that makes it harder for a reader to see your comment not only failed to respond to what I said, but by pretending to be responsive, grossly misrepresented my remark.
The only explanations that come to mind for this behavior of yours are massive incompetence and rampant dishonesty. Would you care to tell us which of those is the right explanation, or would you provide an alternative explanation for massively editing a sentence, obscuring its meaning, while offering a response which pretends to be responsive but obviously is not?
Brandon Shollenberger,
My first degree is in Physics, and I do from time to time comment directly on scientific issues, but nevertheless I think it is fair to say that the main thrust of my posted arguments against people like yourself and, indeed, Nic Lewis, is more political than scientific.
And why? It’s because discussing science with so called climate sceptics, (IMO deniers), is akin to discussing Evolutionary theory with religious fundamentalists. It’s never going to lead anywhere. You can see that for yourself on not just this blog but many more blogs on the net as well.
Scientific consensus is formed by a process of give and take. Occasionally a controversy will arise like the recent one about ‘faster than light’ neutrinos. People had their opinions, and the issue was resolved fairly quickly. The ones who were wrong had to accept that, or at least accept that the consensus held sway until such time as they had more evidence at their disposal that it might be wrong. There is just no way that those who attack the consensus on climate change will ever stop their opposition when presented with purely scientific arguments.
His self-appointed mission looks like it’s to pick as many holes as he can find in papers which support the consensus argument. If it were to pick as many holes as he could find in all arguments, that would be different. That would be perfectly acceptable and it would mean he was dispassionate. But secrecy about his background and funding for his new career doesn’t inspire confidence that ‘dispassionate’ is an appropriate word for our Mr Nic Lewis.
He looks like a lawyer arguing a case. He looks like a spoiler.
I note that you have still not disclosed any more about your own career than you wish Mr Lewis to do.
Though why the f..k it matters a jot in either case escapes me. Nic Lewis has raised what seem to be valid concerns. The concerns exist independently of his motivations – whatever they may or may not be.
Science does not need a ‘pure of heart’ test to judge the correctness of a case. It needs evidence and experiment. and observation. You somehow seem to feel that such things are secondary to ‘consensus’ and having the correct ‘motivation’.
‘Consensus’ and ‘motivation’ are words from the language of politics, not from the language of science. The evidence of your own writings lead me to suspect that it is you, not Nic Lewis, who is viewing the topic through a predominantly political prism.
Latimer,
My political views aren’t extreme by any stretch of the imagination. My views on climate change aren’t defined by political belief anymore than my views on evolution are defined by any religious belief, or lack of it. But it is indisputable there are those whose views are.
I can live with whatever the scientific consensus says on either issue. I would prefer that the scientific consensus was the opposite of what it is on climate change. I agree that action to prevent climate change may well be disproportionately disadvantageous to the less affluent. The wealthy will always be able to afford the bill, the poor won’t. That’s why it isn’t a socialist invention. In any case, humanity has enough problems to solve without there being another one. But, if the scientific consensus is what it is, then we don’t have any alternative but to take it on board, and act in the most effective way possible to reduce GH gas emissions. IMO.
@tempterrain
So your beef with Lewis is that you think he is questioning the consensus for the wrong reasons? Or that he dares to question the consensus at all?
It would be also interesting to hear your views on what – in your opinion – are legitimate grounds for questioning ‘the consensus’. And how Lewis’s previous career – whatever it may be – would make such criticism from him invalid?
Beats me, Peter (if that is who you really are). Whatever you’re doing, it’s not science as I know it.
Latimer,
You ask “So your beef with Lewis is that you think he is questioning the consensus for the wrong reasons?”
Yes. Exactly. He ( if I’m right about him) , and many others with very right wing political views, are not questioning the scientific consensus because they genuinely believe it to be wrong. That would, of course, be fair enough. They are questioning it because they find the political consequences of scientific acceptance to be unacceptable.
So – in your eyes – the ‘science’ can only be challenged by those whose motivations meet your exacting standards?
Got news for you Peter. Old Mother Nature don’t give a bucket of warm spit for people’s motivations. She’ll carry on doing what she does regardless of whether the guy who figures her out is the sweetest, kindest man that ever lived – or the most evil rogue that has ever walked this Earth. She don’t care.
And if we are to successfully find out what OMN has in mind, we shouldn’t care either.
Tempterrain, you said:
“Yes. Exactly. He (if I’m right about him), and many others with very right-wing political views, are not questioning the scientific consensus because they genuinely believe it to be wrong.”
Why is this so important to you? I don’t understand it. I disagree with other skeptics on almost everything else (politics), but I won’t change my mind on global climate change because of that. It’s strictly about science.
Edim,
We agree it’s strictly about science. Yet you say you don’t understand, and you ask why impartiality is important?
Does science exist independently of scientists? In one sense, yes, it probably does, but we do rely on scientists to interpret the evidence correctly. There can’t be any credibility without impartiality. Would you trust a scientific opinion on the dangers of blood transfusions from a Jehovah’s Witness? That’s exactly the issue I’m raising here.
@tempterrain
So why do you believe that Mr Lewis is any more or less impartial than Mr Forest?
It could be strongly argued that Forest has a vested interest in keeping the alarmist flame burning. He is a full-time climatologist, dependent for his salary on the continuance of a stream of grant income, and working in an institution where ‘scientific worth’ is measured, in part at least by the amount of revenue he can attract.
He’d be a fool to turn off the tap by producing papers that don’t show a big problem that needs lots more research. Especially when his current and previous academic bosses – that he has chosen to work with – are widely known and very vocal proponents of alarmism.
And those who so argued might compare and contrast the strength of the possible motivations, weighing a vague political ideology against immediate needs like regular pay, a serene life at work, and acclaim and status within one’s chosen profession. And when they did so, I doubt they’d find the ideology to be the more powerful motivator.
But luckily, as I explained before, if we do science as it should be done we don’t need to worry about any of those things. OMN doesn’t…she just keeps on doing her thing, entirely oblivious of human characteristics. And the best way we know of unlocking her secrets is to do proper science. Without morality tests or virtue assessments or any of that crap.
“So why do you believe that Mr Lewis is any more or less impartial than Mr Forest?”
It’s not just a question of these two individuals. Climate scientists have attracted much attention, as we all know, for reaching conclusions which many, including myself, find uncomfortable.
On the one hand we have this original group who went into the field because they happened to be interested in the subject, and prior to any consensus being reached on the danger of ever rising GHG emissions.
And on the other hand we have a growing band of so-called amateur scientists, who have firstly taken it upon themselves to challenge this consensus, and secondly learned, or claim to have learned, about climate science in order to acquire some credentials. None of them showed the slightest interest in climate before the subject became politically controversial.
Ergo, it does seem reasonable to conclude that the second group is less impartial, or even to question whether there is any impartiality of any kind in their motivations.
@tempterrain
Hmm.
Your argument seems to be based on some romantic ideal of noble impartial climatologists, uncorrupted – and uncorruptible – by the ways of the world, purely seeking truth in a backwater that suddenly becomes important, vs evil hard-hearted politically motivated corrupted lowlife trying to undermine – by any means possible – their lofty aims of saving humanity and the planet.
A simple morality tale where the good guys wear white hats (hurrah!) and the bad guys (boo!) are clothed in Stygian black. A seductive myth as peddled, for example, by Dr Mann in his recent book. Full of the ideals epitomised, for example, by the late (and great) Jimmy Stewart in those 1950s movies.
Let’s see how the myth compares against reality.
The first IPCC report was released in 1990. Nobody joining climatology after that date can conceivably have thought it was an apolitical backwater. Dr Mann joined in 1998, Forest in 1996, Gergis about 2003, Schmidt sometime in the 90s…all the guys now at the peak of their careers (in their 40s) would have joined after IPCC 1. And Hansen had already declared himself to be an activist, not a scientist, by 1988.
The ‘original group who went into the field because they happened to be interested in the subject’, seems largely to be confined to a few old guys heading towards retirement…Lindzen, Michaels, Phil Jones, Trenberth, Briffa.
Though they may once have been influential, their day has passed. Nobody believes a word that the UEA guys say any more, and the Climategate e-mails show that in reality they were very far from the white-suited incorruptibles of your imagination. But were quite prepared to get down and dirty to make sure that they won the game.
So the first part of your myth doesn’t really stand up to examination.
We’ll look at some other stuff later…I have to prepare for work right now.
I’m a physics prof, i.e. a “real” scientist for those who like such titles.
I once had the opportunity to criticise the work of some people that I disliked. I found them arrogant and patronising and went looking to torpedo their work. Eventually I found a problem which completely invalidated their study. Rather than letting them know in advance I waited until they presented the work in an open meeting and then pointed out publicly how their work was flawed. It was great. I was young.
For motivation and “purity of heart” I scored 1/10.
For scientific achievement I scored 10/10.
One is often motivated by a dislike of another individual. This is utterly irrelevant as long as the criticisms are valid.
Roger,
I think you are rather missing the point here. Yes, a dislike, or otherwise, of a particular person, or group of persons, can often influence our chosen course of action.
I’m not accusing Nic Lewis of having any personal animosity to anyone who might have a different political view to himself. I’m accusing him, and most other so-called climate sceptics, of allowing their political views to guide, or override, their scientific judgement.
tempterrain:
What do you think rigorous scientific methods, like the Popperian method, were devised for?
All scientists are human, so can be biased, political, mistaken, delusional etc etc.
We cannot rely on any scientist or group of scientists to be impartial, or to interpret things correctly – which is exactly why we need systems of checks and balances, in order to circumvent human weaknesses.
Peer-review isn’t nearly enough – it merely scratches the surface.
I forgot to add another big human failing which scientists are prone to: groupthink.
@tempterrain
‘I’m accusing him, and most other so-called climate sceptics, of allowing their political views to guide, or override, their scientific judgement’
And your evidence for this is what, exactly? Just your gut feel?
tempterrain, you just responded to my highlighting a flagrant misrepresentation by you by refusing to address your wrongdoing. Instead of discussing anything I said, you chose to continue your attack on Nic Lewis, only this time you directed your attack at me as well. When confronted with your blatant misrepresentations, you resorted to ad hominem used as red herrings. There is no level of stupidity which could explain this, so the only explanation is dishonesty.
You may accuse many people of intellectual dishonesty while demonstrating extreme levels of dishonesty yourself. That’s your prerogative. It’s dishonest, hypocritical and pathetic, but nobody can force you to be anything other than what you choose to be.
Humorously, your current course of action demonstrates, quite adequately, the very behavior you condemn. You serve the community well as a self-lighting effigy burned in hope of showing people how not to behave.
Wrongdoing? OK I’ll admit that I posted a question on the denizens blog when I knew full well that I wasn’t supposed to do that. Maybe you could let me know what other ways there might be?
I’m asking about Nic Lewis’s political views, and whether they are his real motivation for getting into the bear pit, rather than an intrinsic scientific curiosity about the issue of climate change. What’s so wrong with that?
“That would be perfectly acceptable and it would mean he was dispassionate. But secrecy about his background and funding for his new career doesn’t inspire confidence that ‘dispassionate’ is an appropriate word for our Mr Nic Lewis.”
You certainly don’t try to poke holes in all arguments. You are secretive about who you are and your funding. Your motive seeking and speculation is evil. Nic’s math will hold up or it won’t. As somebody who happens to believe that sensitivity is between 1.5 and 4.5, I am hoping that Nic is wrong. But since I know him I can say that without a doubt he will:
A. share his data
B. share his code
C. preserve his records
That means he will share his power. Forest, on the other hand, did not share his tools and his power and should be distrusted until he vindicates himself. Which I am sure he will.
“Your motive seeking and speculation is evil.” Evil, eh? Seems a bit extreme! You say you know Nic Lewis. Yet I notice you don’t say I’m wrong.
Actually I believe that sensitivity is in the range you specify too, but, unlike you, I’m hoping that Nic is actually right. I’m hoping that climate sensitivity is actually zero or close to it.
But if it does turn out that it isn’t, and vital action which could have prevented a future climate disaster isn’t taken (which is Nic’s real motivation), then future generations will look back and wonder how it all was allowed to happen. They will be asking why politically motivated individuals only became interested in scientific issues when their politics were threatened by them. They will be wondering why public opinion on an important scientific issue was manipulated for political and short-term economic purposes.
They may well be inclined to use the word ‘evil’ themselves.
“Forest, on the other hand, did not share his tools and his power and should be distrusted until he vindicates himself.”
By all means mistrust the findings of his paper if you wish, I don’t see any need to mistrust him personally.
No temp you are evil.
Nic’s motives are not observable, nor are they relevant. His opinion about what his motives are is just that: a theory about himself. What matters is his actions. What calculations does he do? Are they right? Can they be challenged? If he shares his power, his motivations, however unknowable they are, vanish. On the other hand, if a scientist does not share his data and methods, we can never examine whether his motives played a role.
That you focus on Nic’s motives, unknowable as they are, indicates that you prefer not to look at facts. That is evil.
Steve,
You’d like to say I was wrong, but you know you can’t, so you have to settle for ‘evil’. OK, I can live with that.
“The page you’re talking about explicitly says: Don’t reply to anyone else’s post.” And such a “simple and explicitly stated rule” too.
Just on a point of information, I wasn’t the only person violating this rule.
The worst offender was a certain curryja !
Aren’t we naughty! I guess Steve Mosher might call us ( well at least me anyway! ) ‘evil’.
Anonymous one.
I assume your motives are evil. case closed.
As far as I am concerned, Nic could be a member of the Stark-Staring-Lunatic-Fringe Party or the Quietly-Tea-Sipping-Club-in-The-Corner Party or a lizard sunning himself on a rock or an axe-murderer.
Makes no difference to me. He has asked for data and method so published results can be checked.
These have not been forthcoming. The lead author states they have not been archived.
This means the results cannot be replicated, THE REQUIREMENT for all science. The paper cannot be considered as scientific if it cannot be replicated. Whether Nic is a lizard or not doesn’t change that ugly little fact.
Actually, the replication requirement has traditionally been that if you do what I did then you should get similar results. There is no audit shortcut, where you just check my work. The audit is a new movement in science policy. The unresolved issue is how much I have to tell you about what I did.
However, it is not possible to do what the authors of most climate science studies did without a copy of the data that they used – such data is usually not something that can be obtained by oneself.
Nor in many cases is it possible to get similar results without knowing exactly what computational processing, and ancillary data, was involved – methods descriptions are rarely adequate for replication.
For instance, the results obtained in Forest 2006 appear to be sensitive to the weighting applied to the upper air data by pressure level, as well as to the upper air diagnostic eigenvector truncation parameter. Nowhere in the paper or Auxiliary Materials (Supplementary Information) was even the existence of such a weighting scheme mentioned, let alone details of it given. Nor was the upper air truncation parameter revealed.
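[Editor’s note: Nic’s point about undocumented choices can be made concrete with a small numerical sketch. Everything below is hypothetical: the data are synthetic, the tapered weighting is invented, and the statistic is a generic truncated-eigenvector goodness-of-fit value, not Forest et al.’s actual method, model output, or weighting scheme. It simply shows that such a statistic moves with both the per-level weights and the truncation parameter, which is why neither can be left unreported if replication is expected.]

```python
import numpy as np

# Purely hypothetical illustration: synthetic data and made-up weights,
# NOT the Forest 2006 code, data, or weighting scheme.
rng = np.random.default_rng(0)

n_levels, n_samples = 8, 200
control = rng.normal(size=(n_samples, n_levels))   # stand-in control-run segments
observed = rng.normal(size=n_levels) + 0.5         # stand-in "observed" anomaly profile

def r2_statistic(control, observed, level_weights, k_trunc):
    """Goodness-of-fit statistic in a truncated eigenvector (EOF) basis.

    level_weights: per-pressure-level weights (assumed, for illustration only)
    k_trunc: number of retained eigenvectors (the truncation parameter)
    """
    w = np.sqrt(np.asarray(level_weights, dtype=float))
    cov = np.cov(control * w, rowvar=False)        # weighted control covariance
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]                # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    proj = evecs[:, :k_trunc].T @ (observed * w)   # project onto leading EOFs
    return float(np.sum(proj**2 / evals[:k_trunc]))

uniform = np.ones(n_levels)
tapered = np.linspace(1.0, 0.2, n_levels)          # hypothetical alternative weighting

# The statistic shifts with both the weighting scheme and the truncation level,
# so an unreported choice of either makes exact replication impossible.
for k in (3, 5, 8):
    print(k, r2_statistic(control, observed, uniform, k),
             r2_statistic(control, observed, tapered, k))
```

Even on random data, changing either the weights or the number of retained eigenvectors changes the statistic, so two groups following the same published description but guessing differently on these two choices would not reproduce each other’s numbers.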
Nic, why can you not get the data the same way the authors did?
I understand the methods problem, but it is a slippery slope to having the work done for you. Normally you should use the values you think are right; then, if you get a different result, failure to replicate is your finding.
Replication is not an audit. The methods audit is a major new policy proposal, unless there is strong evidence of wrong, in which case the funding agency can always investigate. People who want a new policy need to argue for it.
David, this is interesting. The thing about technical auditing (or QA/QC in engineering calculations) is the need for the reviewer/auditor to “buy” certain assumptions that the originator makes. And there are always assumptions. I do think that in this new business of auditing climate science, auditors often tend to get hung up at that point – they don’t “buy” the assumptions. At which point, replication should perhaps be the recourse.
However, I would argue for the policy of auditing, on a limited basis, by arguing that in many cases it is faster and cheaper than replication. In my last engineering firm, “independent confirmation” of a calculation was sometimes acceptable as an alternative to line-by-line checking.
Interesting indeed, BillC. My point is merely that it is a new policy. One which has to be carefully crafted, because there is a good potential for using it for harassment. We do not want to over-regulate science. Better to reduce the politicization, if possible. It is a great design problem, as there are many possible levels of QA. I have done a lot of work on this issue, going back 40 years: http://www.craigellachie.us/powervision/Mathematics_Philosophy_Science/ENR_cover_story.doc
David Wojick
“why cannot you get the data the same way the authors did?”
Unlike the authors, I don’t have access to the (highly complex) MIT climate model nor have the computing resources of a top university to run it.
The surface and upper air temperature observational datasets are continually revised and then made obsolete, so obtaining the data used in a study carried out using 10 year old data is not very practicable. Try doing so.
If you can point me to a source of complete annual data for surface, upper air and deep ocean temperatures from the HadCM2 and/or GFDL (R30b?) long control runs, please do so. I think the UK Met Office (who tried to help me as much as they could) and GFDL would like to know of it too!
Nic,
Auditing that six-year-old paper may have some interest, but that by itself is not essential.
It’s not essential because the interpretation of the results is very dependent on the trust we have in the suitability of the MIT climate model for the task.
It’s not essential because this is in any case just one of the papers where climate sensitivity has been determined. Looking at the overview from AR4, it’s pretty obvious that this paper differs enough from the other papers to raise questions about the reliability of its results.
It’s not essential because there’s already so much more data and model development has also continued since that MIT model was built and its parameters fixed.
You have said that the results have been used by many people. Yes they have been, but usually not alone, and I dare to guess that few people have taken the results at face value. They shouldn’t, as one should always be skeptical of the results of any single scientific analysis, and very much so when the analysis is as complex as this one is.
Perhaps you will find out that there has been an outright error in the study, or perhaps you will find out that there are issues of some other nature that explain the apparent inconsistencies in related publications. Whatever you find out is of rather little real interest by now. Auditing one six-year-old paper is not the most fruitful way of improving knowledge at the present time.
Nic,
I asked you a year ago why you were proceeding on this issue as you were doing (posting a blog article that many “denizens” predictably interpreted to be an exposure of fraudulence rather than first asking the authors about your questions and then going public if you found their answers to be lacking). You answered that your intent was to avoid creating an uncomfortable situation for the authors.
I told you then that I didn’t understand your logic (i.e., by proceeding as you did, you most likely put them in a more uncomfortable position than they would have been had you proceeded to get answers to your questions in other ways) and said that I found your explanation implausible.
I’m curious whether, now, you would say that you are proceeding in such a way as to minimize the extent to which you put the authors in an uncomfortable position, or whether, because of what has transpired since your previous post, you would say that such a consideration is no longer primary. I assume that the latter is the case?
Here’s something I hope that you’ll consider:
If this exercise is motivated by a desire on your part to play gotcha with climate scientists, then I think that you have chosen the correct path.
If your motivation was to answer questions as to the validity of the science in the papers in question, I think that you have not chosen a particularly effective path.
In the end, it’s entirely possible that the nature of your motivation will not be determinative as to how the science of the matter is determined in the eyes of non-combatants in this junior high school cafeteria food fight.
I suspect, however, that given the uncertainties and ambiguities that necessarily are associated with the climate debate, this endeavor of yours will have little impact other than to further entrench already existing battle lines.
I think it would have been interesting to see what might have happened if you had taken a different course of action, although of course it’s possible that doing so might not have led to significantly different outcomes.
Still – maybe next time you might consider a different course of action. One can hope, can’t one?
Pekka Pirilä makes a disturbing comment:
This is similar to much of what was said when the hockey stick was criticized. People said the old work didn’t matter because we had new and better work. That never made sense to me because the old work was still used (even in the IPCC AR5 ZOD), and that “new and better” work had many of the same basic flaws as the work being criticized.
How does one say it’s not important to look at earlier work when failing to do so encourages errors to propagate throughout the field? That makes no sense to me.
Joshua says:
The two situations aren’t remotely comparable as Joshua implies,* but to me, a more important issue is Nic Lewis spent a year trying to get certain data. Within a week of him making this post, it was admitted the data had been lost. That’s certainly worth highlighting.
*The reason Nic Lewis said he wanted to “avoid creating an uncomfortable situation for the authors” was his criticisms weren’t of the authors work, but rather, of how the IPCC used their work. That isn’t remotely comparable to a situation where he is directly criticizing the work of some authors.
Actually, Brandon, what Nic said was that the authors, at least “tacitly,” accepted the misuse of their data. When I pressed him on the issue, he was, in my view notably, silent in response.
And on top of that somewhat qualified statement, it was obvious as soon as his post went up that many “denizens” would interpret his post as direct evidence of fraud. Go back and read the thread.
As I said then, Nic is not necessarily for people (mis?) interpreting his intent or the meaning of his post, but given the easily predictable nature of the response, if he really had wanted to avoid putting the authors in an uncomfortable spot, he could have acted proactively.
I’m not saying that he necessarily had an “obligation” to do anything differently, but I questioned then, just as I do now, the plausibility of his explanation.
Our actions have consequences. Sometimes, some aspects of those consequences are rather easily predictable. Justifiable or not, if someone said that I tacitly accepted the misuse of my work, and publicly posted a blog article that quite predictably elicited many accusations of fraud, I would not be particularly inclined to respond cooperatively. I like to think that I would overcome my emotional response, but at the same time I think that when alternatives are available, if the goal is resolution of the scientific issues at hand, I’d say that actions that predictably incite jello-mold flinging should be avoided when possible.
When alternative paths are eschewed, it may or may not mean something – but sometimes if you ain’t part of the solution you’re part of the problem.
Sorry… I guess I’m too old to learn to preview:
“As I said then, Nic is not necessarily responsible for people (mis?) interpreting his intent or the meaning of his post,…”
Brandon,
The Hockey Stick obtained a symbolic value far exceeding its scientific significance. I cannot see anything comparable related to the paper discussed here. One of the main arguments in my previous post is exactly that the paper is just a small piece in the present overall picture.
Pekka – the forthcoming AR5 ups the ante. If not for the IPCC ARs it would be as you say.
Pekka Pirilä, could you explain what point you are trying to make with your response? I didn’t make any reference to the “symbolic value” that reconstruction gained. All I said was errors in one paper can propagate to other papers. It’s true a paper which gains a lot of focus will have its errors propagated to a larger extent than a different paper, but so what? That the scale of an example doesn’t match this particular case doesn’t somehow mean errors couldn’t propagate from it.
Incidentally, I think that argument of yours is inherently fallacious. How could we possibly check a conclusion if it isn’t important to look at one “small piece” of the picture? As far as I can see, that just leads to dismissing every criticism as being unimportant, no matter how many different problems are found with the overall picture.
It’s remarkable how people will seem to try to win an argument by latching onto mostly irrelevant points. For example, Joshua tried to draw parallels between two situations. I pointed out the two situations were quite different, and thus his portrayal was false. He responded, not by defending his portrayal, but by saying:
Not only does this not defend the portrayal Joshua originally gave, it doesn’t even contradict what I said (even though Joshua implies it does). All it does is clarify what I said. I said Nic Lewis’s reason was “his criticisms weren’t of the authors work, but rather, of how the IPCC used their work.” To incorporate Joshua’s “disagreement,” all one would do is add a few words to the end of that sentence, “which they accepted (at least tacitly).”
I raised a particular criticism. Joshua showed that a description of one issue in my criticism was mildly lacking. That much is normal. What isn’t normal is that Joshua never addressed my criticism. Instead, he has implied that mild shortcoming somehow invalidated my criticism.
Hi Brandon.
If you’re interested in a good faith discussion with me, let me know, and I’ll respond with a similar objective.
Along those lines, try addressing your comment to me — rather than explaining to the general reader what it is I have or haven’t done in posts that are readily available for them to read and determine what I am or am not doing for themselves. I mean I imagine that some folks might find your explanation interesting, and perhaps some of them will respond, but from this point forward I see no purpose for me to respond to that type of post.
To save effort I’ll just note that my feelings are exactly in line with those expressed by Joshua.
Pekka Pirilä, that’s a strange remark to make. The feelings Joshua expressed couldn’t possibly apply to your situation, so I have no idea why your feelings would be “exactly in line” with his. The thing he took issue with is me not addressing him directly. It is snubbing him, and I wouldn’t expect him to want to respond to it. I just happen to have no faith any discussion with him would be fruitful given my experiences with him, so I won’t respond to him.
But since I haven’t done that with you, I can’t begin to imagine how your feelings would be the same as his. As such, I’m going to assume what you meant is you just see no point in responding to me at this point. I don’t understand your choice, but it’s your call.
Brandon,
There’s exactly one person writing to this blog with whom I have had repeated difficulties in figuring out what is a personal attack and what is not.
Pekka Pirilä, I’m assuming the person you’re referring to is me.
I don’t think your comment answers anything in mine, but since you’ve said it, I have to say I’m not sure why you’d find it difficult to tell what is and is not a personal attack in my comments. One, I don’t really attack people. I almost always restrict my remarks to their words and their behavior. There are some exceptions, but not many, and I don’t believe I have ever made one for you. If you think I have attacked you personally, please let me know what it is I said.
Two, I am very transparent. When I criticize something, I criticize it. I don’t hide behind flowery language or misleading wording. The only exception to this is when I use facetiousness/sarcasm as a rhetorical trick, and I think I do a good job of making it clear that I’m doing so.
If there are any specific comments of mine you are confused over, do let me know. I never have any problem with clarifying what I’ve said.
By the way, I’m sorry my response was so late. I missed your comment due to nesting; I wasn’t trying to ignore/avoid you.
In any event, I think having difficulty understanding my comments is a bad reason to avoid a discussion with me. I’m an extremely open person, and I have no problem clarifying things/answering questions. If I do cause confusion, all you have to do is let me know and I’ll do my best to sort it out. If that isn’t worth the effort for you, I can understand, but I really do hate the idea of people being confused by something I’ve said.
Brandon,
I’m not saying that my problems are not in some way due to myself. Some of my comments may be completely undeserved, but they have a background that’s very real to me. Their origin is in part in one lengthy thread a long time ago where I really couldn’t accept your responses and where I perceived some of the comments as personal attacks, but that’s long past and should be forgotten, even if that’s sometimes difficult. With such feelings in the back of my mind, I have also reacted when others have expressed frustration with your comments, as Joshua did here.
While I continue to have problems in seeing or accepting the point of some of your comments I do also find many other of them to be good and well justified. Thus I do apologize for my over-reaction. Perhaps I’ll get over the fact that I feel exceptionally uneasy in exchanges with you.
Nic said, “For instance, the results obtained in Forest 2006 appear to be sensitive to the weighting applied to the upper air data by pressure level, as well as to the upper air diagnostic eigenvector truncation parameter”
I am nearly positive that any modeled sensitivity over 2.5 has a problem with tropical atmospheric pressures. None properly predict the acceleration in the convection rate in the tropics or the lower impact over cold oceans due to the reduced transfer of energy from the tropics.
Fine, but you cannot audit someone else’s work, on demand, just because you think you disagree with them. You can show they are wrong by getting a different result. That is how science proceeds.
David, you are right: you cannot audit someone’s work on demand just for grins. Nic has a valid reason to attempt to reproduce the results of a paper that appears to use an assumption that has a larger than anticipated impact on the results. If the data were available, or there were no apparent discrepancy, it would be a moot point.
That assumption is a constant relative humidity, which may cause issues if it requires atmospheric pressures and conditions that are unrealistic, which in turn would result in a higher than realistic rate of diffusion into the oceans. Other than that, the Forest paper is probably perfect :)
Your philosophy of science opinions are quaint, David.
Get caught up.
http://www.computer.org/csdl/mags/cs/2010/05/mcs2010050008.html
More at link.
Dave, my quaint opinions are also the present law. Have you looked at the rights in data clauses in the federal research contracts? The stuff you are citing is a recognized reform movement but so far that is all it is. Let’s see some specific legislative or regulatory proposals.
https://www.acquisition.gov/far/current/html/Subpart%2027_4.html
You’re conflating policy and law.
No, it’s not. It’s equally “traditional” to point out a flaw in someone’s methodology which may invalidate a study without doing a full replication. Whether or not you call this an audit is irrelevant.
Pointing out a flaw is very different from an audit. Audits take huge amounts of time. Have you ever been audited? It is a nightmare.
The idea that anyone should be able to literally audit a scientist’s work on demand is insane. Requests for supplemental materials need to be reasonable and well bounded. People pursuing this reform idea should drop the word audit if they hope to make any headway.
Just about all data is already in electronic form. There is no reason data can’t be put on a public server. Anything else is nothing more than an excuse.
@david wojick
‘Have you ever been audited? It is a nightmare’
It is supposed to be! It is not done for the convenience of the auditee :-).
For the purpose of an audit is not only to check that the work has been done correctly in this particular case, but to send a clear message to all the others that if the auditors come after you, you’d better be squeaky clean or your life will become very unpleasant indeed for months.
But for climatologists it is probably not that bad. There is no requirement for the auditor to discover every transgression in every paper. Just finding the first big one – à la Jean S/Gergis, McIntyre/Mann, Lewis/Forest (?) – should be enough to have the paper withdrawn.
And I’m guessing that, absent the work of those three mentioned above, there has been absolutely no checking whatsoever of anybody’s work for decades, and that a trawl through the literature of climatology will provide a ‘target rich’ environment…and a frenzy of shredding, office moves, slavering dogs to eat one’s homework and hammers applied to hard drives the day before the auditors arrive. Just like Enron.
And those who say that the existing scientific method has sufficient checks and balances to find any bad stuff already might care to ponder why all three – afaik the only three in the history of climatology – were found by ‘citizen scientists’, not by fellow climatologists. Like Meatloaf nearly sang, ‘Nought Out Of Three Ain’t Good’.
You cannot be working on ‘the most important problem humanity has ever faced’ and be expected to be left alone to approach your work with the ‘professional standards’ of seven year olds.
Shape up or ship out.
Heh, one for the impenetrable Forest: I gotta know right now.
================
@kim
Reply from Chris Forest
‘Let me sleep on it…I’ll give you an answer in the morning (maybe)’
alternative
‘It was long ago and it was far away and it seemed so much better than it does today’
(With apologies to Jim Steinman).
Old forgotten far off things, and battles long ago.
==========
tempterrain | June 27, 2012 at 6:17 am |
Except that I’m accusing Nic of deciding first that the consensus on climate change was wrong, and only then looking for evidence of why.
Anyone who runs a business of any size has at some point been audited.
All good auditors:
1) Assume that all employees steal.
2) Assume that all managers cook the books.
3) Assume that all businesses lie like a rug to the tax man.
They then go about the business of finding evidence to support their assumptions. When no evidence can be found they then declare that they found ‘no irregularities’.
The idea that a degree in ‘science’ somehow makes someone ‘incorruptible’ is nonsense.
If you’ve got a few hundred thousand to spare, I can find you a PhD who will write a convincing paper that the moon is made of cheese. There is always someone with a PhD who has a ‘bimbo problem’ that only money will solve.
Well, I have not lost my data.
Just ask my Lecturer at University!
It is simpler to consider that the convinced are pessimistic and the unconvinced are optimistic. Right wingers are more skeptical because they see an anti-growth agenda at the root of it all. Left wingers and greens are less skeptical because we pollute the planet often just for the profit of the few versus the many. Both groups are incredibly dogmatic. It amounts to “we are the righteous and if you disagree with us then you are evil”.
Or maybe it is far better to just consider whether the data, properly used, bears out the hypothesis. In any properly conducted scientific review, though, the study should be withdrawn on the basis that it is not replicable, regardless of whether it is right or wrong. Otherwise anyone can say almost anything is “science”, even raw opinion.
This is a thread on climate sensitivity, and I would like to go back to where my original discussion started, before it got sidetracked, in the