by Judith Curry
The comment thread is getting unwieldy on the original post Lindzen’s seminar at the House of Commons, so here is a new thread.
For your reference, here are some other posts on Lindzen’s seminar:
Misrepresentation from Lindzen – Real Climate
Richard Lindzen vs the aerosol forcing – Fred Moolten at IdiotTracker
Lindzen’s misrepresentation at House of Commons – Nick Stokes
Might also help if the maximum indentation level was put back up again.
I’ll increase the nesting a bit, it can get out of hand tho if it is too large
Thank you, Professor Curry, for re-posting Professor Lindzen’s message
You will both be vilified for standing between warring factions and repeating an ancient truth that our weary society desperately needs to understand: “To know that you do not know is the best.
To pretend to know when you do not know is a disease.”
Lao-tzu, The Way of Lao-tzu
Chinese philosopher (604 BC – 531 BC)
The vaporization of Hiroshima on 6 Aug 1945 and increased fear of mutual nuclear annihilation after the 1962 Cuban missile crisis apparently encouraged an unhealthy alliance of frightened scientists and world leaders to “save society” by formulating and adopting policies that were based on controllable computer models, whether or not the computer models agreed with experimental observations.
“Better safe than sorry” was their self-righteous mantra, as they led the world’s society into disaster.
Thanks, Judith! A small step in the right direction. It would be great if WordPress would come up with a system that exhibited the thread structure more clearly. I just had a big argument with Brandon that was aggravated by what started out as a one-hour break between two halves of a post by Max re-reiterating his perennial “CO2 is growing at 0.5% a year so no biggie for 2100” argument. Over a period of days comments piled up between the two halves, leaving Brandon with the impression that they were, as he put it, “different forks.”
Are there blogs that handle the thread issue better? And could WordPress do something to mitigate the indefinite accumulation of comments between two previously consecutive comments? Deeper nesting helps only slightly, but would help much more if there were an easy way to hide the more deeply nested ones, the way many programs (Windows Explorer, Google Earth, etc. etc.) display their folder hierarchies by letting you click on a subfolder to open and close it. (And other thread-structure-visualization ideas along those lines, but let me not clutter up this comment with them.)
unfortunately, wordpress.com is limited in this regard, and there are a bunch of other reasons that i think wordpress.com is the best for me to use
I suggest you get rid of nesting altogether (e.g. WUWT):
-certain posters reply to the first comment just to attract attention (I’ve done it twice).
-some posters rattle on for too long, making scrolling to the next comment tedious.
Posters could instead include a short quote of the comment they wish to reply to, quote the originator’s name, or copy and paste the date stamp of the original comment.
The advantages are that overlong, boring posts can be ignored, lazy posters will have to put in some effort, and the whole thing will run much more smoothly and quickly.
It would be nice if we could ‘bookmark’ our last comments, as generally you are expecting a reply to them, so want to keep an eye out for any response.
That is easy Tony
Just click on ‘March 8, 2012 at 1:59 pm’ wait for second or two and book-mark the new web-page. The web-page address for your comment now is:
I love Fred’s post, the modellers generate their line-shapes through sheer intellectual effort and never, never, are swayed by the historical pattern.
It is a pity that scientists in other fields cannot be so disciplined, we could do away with costly external/internal controls and never have to run placebos in clinical trials again.
Myself, I like the Locard Principle:-‘Every contact leaves a trace’
We cannot unlearn, we are all biased, we have to design our studies to explicitly remove our biases.
Our job is not to construct beautiful models, but to destroy them. The battered, mangled surviving models are much more likely to be true.
The models are very complicated. Parts of them need to be destroyed and replaced with valid parts. The latest study by Dr Curry and her team is a major step in that direction. Now we can get the Ice Albedo that increases when the Oceans are Warm and the Arctic is open and that decreases when the Oceans are Cold and the Arctic is closed, properly in the Theory and Models. The Missing Link is now known again. Ewing and Donn had it over 50 years ago, but it was misplaced until recently.
Dr Curry has a huge head start on the other Climate Scientists, but they will not want to be caught with no clothes on when she nails this down.
Tom Wysmuller has had it right for several years. Judah Cohen has had it right. I got it right, 4 years ago from Tom Wysmuller. I don’t know how long Dr. Curry has had it right, but I suspect she spent some time before she published the paper.
On the very iconoclastic UK site “Number Watch” there is a segment on “The Laws”. One of relevance is:
The law of computer models
The results from computer models tend towards the desires and expectations of the modellers.
The larger the model, the closer the convergence.
So all those gigaflops of added complexity are causing convergence with the modellers’ desires. I wonder if it’s asymptotic, or linear …
If a model doesn’t deliver what the modeller expects to see, it is assumed that the model is in error, and it is changed until it does. Thus over time a model will always evolve to deliver what the modeller expects to see.
the way to avoid this is to have as few independent variables as possible. Since almost everything is a feedback of something else this should be a priority.
Here here, Doc.
Maybe all us other branches of science can finally learn to be as progressive as some of these climatologists (not all, mind you, there’s still some stubborn holdouts) and their models; maybe we can all finally throw off the pretense of the scientific method, verifiability, controls, and reproducibility–never mind inconvenient things like empiricism and observation. Who needs evidence when we have simulation? Who needs experiments when we have computers?
I feel like I’m watching a world gone blind. A true and clever scientist would figure out ways to make real observations and physical experiments, and never rely on a computer to tell him/her an answer. Oh, it can surely be done. All our questions can surely be elucidated bit by bit by brilliantly crafted scientific methodology. But no one really tries, because they’ve become so reliant on computers to give them an answer. It’s like we’re in the era of Facebook science.
Models can help guide us to seek out new lines of evidence, help point to things we should experiment with to gain knowledge; but never, ever, should models be used as evidence or knowledge in and of themselves.
I actually work with people who model the movements of amino acids and water molecules within proteins.
We can take a large protein, crystallize it, then get the 3D crystal structure.
You can predict what site directed mutagenesis will do, in silico, and compare it to the actual outcome.
Want to guess how well the models of protein structure do?
Want to know why people do bag of an envelope drawings instead?
Doc, that is one of the main reasons why I am suspicious of models: I am a computational chemist myself, and some of our research is on modelling protein structure and function. As you say, most ‘predictions’ we make turn out to be wrong due to deficiencies in the models. Even though many of the predictions are post-dictions. I’m always grateful that we have a quick turnaround from computation to validation (or not) by experiment: it keeps us honest. It is much harder to have validation for climate models. It seems to me that the best way to improve this is to have lots of well audited attempts to predict climate features on the 6 months to 5-year timescale.
Feynman’s warning about the seductiveness of pretty models has been ignored for so long … like so many of his other prescient concerns. );(
BTW, GED, it’s “Hear! Hear!” A British Parliament call of approval (Short for “Hear the man!”)
And DocM., pleeeeze! It’s “back of the envelope”. “bag of the envelope” is painfully incoherent.
Some of us are unable to create mental auditory sounds from words or words from sounds, mental or auditory.
back, bak, bac, bag are all pretty much the same, at least in my inner monologue. When writing I can think about writing or about what I am writing, can’t do both. The spell checker is great, unless I use the wrong word, so I don’t get the squiggly red underscore, then I’m screwed.
I have a working register of about 300 words, at any one time, however, if I don’t use that word for a while, it evaporates. Before computers I had to write my essays around the words I had some confidence in being able to spell.
I have three degrees; B.Sc., M.Sc., and Ph.D., and yet do not have any qualification in English. In the US I would not have finished high school; as it is I got a C.S.E. Grade 3 in English.
The country’s prisons contain about three times the proportion of inmates with dyslexia as outside. Dyslexics are also over-represented in the sciences, especially biological science.
I have tried to read at least most of the thread regarding the issue of whether the models are tuned to the historical trend with aerosols. The discussion was enlightening, although some of it was over the head of this amateur. Nevertheless, this is a topic that I have found to be important and interesting.
From the previous comments, I think that it is fair to say that the modelers deny explicit tuning and I have no evidence that would cause me not to take them at their word.
I have two questions that perhaps someone can take the time to answer. First, does the process of creating a model look something like what I describe below?
The modeler makes his best estimate of historical forcings, writes his code to reflect his understanding of the physical processes involved and then runs the model to check whether the model accurately backcasts. If it does, his model has passed its first test.
If it doesn’t, he goes back and evaluates its written code, as well as his estimates of historical forcings. He might correct or refine the computer code to better reflect the physical processes he is modeling, without necessarily changing any of his estimates of forcings. He might also reevaluate his estimates of physical forcings, but he does not necessarily have to change his estimates of historical aerosol forcings, although he might. He likely does some combination of all of the above.
Perhaps after several iterations of the above, he comes up with a model that accurately backcasts and that he can stick with.
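The loop I’m describing could be sketched roughly like this (a toy illustration only — every name, number, and the acceptance test here is hypothetical; real model development is of course vastly messier):

```python
# Rough sketch of the develop-run-revise loop described above. All names
# and numbers are hypothetical stand-ins, not any real model's workflow.

def develop_model(physics, forcings, run_backcast, revise,
                  tolerance=0.1, max_rounds=20):
    """Iterate code/forcing revisions until the backcast error is acceptable."""
    for _ in range(max_rounds):
        error = run_backcast(physics, forcings)   # compare backcast to history
        if error < tolerance:                     # model "passes its first test"
            return physics, forcings, error
        # Otherwise revise: refine the physics code, and/or re-estimate forcings.
        physics, forcings = revise(physics, forcings, error)
    raise RuntimeError("no acceptable backcast within max_rounds revisions")

# Dummy stand-ins just to make the sketch executable:
backcast = lambda p, f: abs(p * f - 0.8)          # pretend 0.8 is "history"
revise = lambda p, f, e: (p + 0.05, f)            # tweak the physics term only
p, f, e = develop_model(0.5, 1.0, backcast, revise)
print(f"accepted after revisions: physics={p:.2f}, error={e:.3f}")
```

The point the sketch makes is that nothing in the loop says *which* part gets revised each round — code, forcings, or both — which is exactly the ambiguity behind the explicit-vs-implicit tuning question.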
I think it would be fair to say that the above modeler has not necessarily explicitly used aerosols to tune his model. I think it is also quite possible that he has not implicitly used aerosols to tune his model. Maybe he adjusted something else.
To answer the question whether he has explicitly or implicitly tuned the model to aerosols, one would need to either take the modeler at his word or review the paper trail of the above process. Right? Do I have a correct, albeit oversimplified, understanding of how models are developed?
Now we look at a group of these models, and notice that they have a wide range of climate sensitivities. Looking further, we determine that their climate sensitivities are inversely proportional to the assumptions made regarding the cooling effect of their aerosol forcings.
This brings me to my second question: is it reasonable to conclude from this that the uncertainty regarding climate sensitivities is due primarily to the uncertainty in aerosol forcings?
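A toy energy-balance sketch (my own illustration, with made-up numbers — not drawn from any actual GCM) shows why reproducing a fixed observed warming forces exactly this inverse relation:

```python
# Toy illustration: if observed warming dT is fixed and each "model" must
# satisfy dT = lam * (F_ghg + F_aer), then solving for the aerosol forcing
# F_aer shows that higher sensitivity lam demands stronger aerosol cooling.
# All numbers below are illustrative, not actual model values.

dT = 0.8        # observed 20th-century warming to reproduce, K (illustrative)
F_ghg = 2.6     # assumed greenhouse forcing, W/m^2 (illustrative)

for lam in (0.4, 0.6, 0.8, 1.0):          # low to high sensitivity, K/(W/m^2)
    F_aer = dT / lam - F_ghg              # aerosol forcing needed to match dT
    print(f"sensitivity {lam:.1f} -> aerosol forcing {F_aer:+.2f} W/m^2")
```

Each doubling of the sensitivity parameter requires a markedly more negative aerosol forcing to hit the same observed trend — which is the pattern described in the literature, whether it arises by tuning or by coincidence.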
I think you might like to look over the Omitted Variable Fraud article by Alec Rawls. Aerosols are just the stand-in, the tip of the iceberg, IMO.
you might want to look over all the VAST evidence for the “omitted variable fraud”
Paul – Your perceptive questions reinforce my sense that what is lacking for an optimal discussion here is input from one or more individuals who actually design complex climate models for a living. I’ve quoted some of them, but that’s not quite the same thing. Nevertheless, here is some of my take on your questions.
Creating a large AOGCM is an enormous undertaking, demanding much time (often years), personnel, money, expertise, and computational resources. Once it’s finished, including aerosol input and all other parameterizations that are tuned to existing climate, that’s it. It doesn’t matter how the simulations of twentieth century climate turn out, and it doesn’t matter how the model’s climate sensitivity turns out, the model remains untouched. Note that both those hindcast results and climate sensitivity values are not known at the time the model is finished. The modelers don’t find them out until they run the hindcast, or until they run a simulation with 2X CO2, and by that time, it’s too late to change the model – they are stuck with it. There is no going back to tune the model to the observed temperature trend.
Of course, modeling groups do use their experience when they design new versions of their model. As I suggested above, a new version is a huge and expensive effort, and can’t be attempted simply because the modelers wanted to arrive at a new value for climate sensitivity or a new hindcast simulation, which again they wouldn’t know was an improvement until the model was finished and untouchable. Obviously, in creating new versions, modelers try to estimate what needs fixing and what doesn’t, but typically, that involves the ability of the model to reproduce existing climate features, where they can test the model against observations. Attempts to make a new version come out a certain way regarding climate sensitivity or hindcasting would be very likely to fail, because model complexity makes it difficult to know how any specific tweaking will affect these outcomes. An interesting example is described below.
As you suggest, aerosol forcing is an important source of uncertainty, for all the reasons given in the foregoing discussion. As I’ve mentioned, many models, apparently for practical reasons, have not included an indirect aerosol cooling effect, which if added, would have increased the cooling, albeit only slightly. This omission is the opposite of what one would want to do in order to exaggerate aerosol cooling and climate sensitivity.
Relevant to all the above is the comparison between the CCSM3 model and its newer CCSM4 version. The article on this is illuminating, I think, in its focus on what parameters the modelers concentrated on to create an improved version. Both models lacked an indirect aerosol effect, and probably created a slight warm bias. However, that is somewhat beside the point, because what I found particularly intriguing was the difference in the accuracy of a twentieth century hindcast. The CCSM3 hindcast matched observations well. The “improved” version, CCSM4, made a poorer match. If the main concern of the group had been an accurate match to observations, there would have been no reason to design a new version, particularly since the modelers didn’t know how the hindcast would perform. Nevertheless, it does seem that CCSM4 was improved in many ways over CCSM3, suggesting that what was important to them was something other than simply how well one temperature curve matched another.
I have the sense that some of the blogosphere discussion overemphasizes the extent to which modelers have been focused on hindcast accuracy. I don’t see how we can completely exclude the possibility that it has on some occasions influenced choices made between alternatives when there is no compelling reason otherwise to pick one over another, but from what the modelers say, and from the CCSM4 example, this is not a routine part of model design. Notice, in any case, even when choices might be made, that is not the same thing as “adjusting” a value from what it otherwise would have been.
I would be grateful to read further input from the modelers themselves.
First, thanks to Dr Curry for the extra indentation:
Fred Moolten: The article on this is illuminating, I think, in its focus on what parameters the modelers concentrated on to create an improved version. Both models lacked an indirect aerosol effect, and probably created a slight warm bias. However, that is somewhat beside the point, because what I found particularly intriguing was the difference in the accuracy of a twentieth century hindcast. The CCSM3 hindcast matched observations well. The “improved” version, CCSM4, made a poorer match. If the main concern of the group had been an accurate match to observations, there would have been no reason to design a new version, particularly since the modelers didn’t know how the hindcast would perform.
What you described there was an intention to “tune” the CCSM3 model to make better forecasts; the resultant, “tuned” model, named CCSM4 did not do as well as the predecessor, so the “tuning” (which was intended) was unsuccessful. With these experiences, the next “tuned” version, possibly named “CCSM5” will be tuned based on the ways in which CCSM3 and CCSM4 were successful and unsuccessful. That the “tuned” CCSM4 was not as accurate as its predecessor CCSM3 does not mean that they did not attempt a “tune up”, only that it was unsuccessful.
Perhaps all that you mean to say is that the parameters for the full model are not estimated by a least-squares (or other) procedure that selects parameter values to minimize model inaccuracy, with the squared error of all output against all data as the measure of inaccuracy.
Matt – I think all models are “tuned… to make better forecasts”. Not to do so would be negligent. My point was that the tuning was not done by consulting the climate trend that the model would eventually try to simulate in order to make the simulated and observed trends match better. The model was not “tuned to the trend”. That certainly seems to be true for CCSM4 (and that has been explicitly stated elsewhere – in the USGCRP Report – for CCSM, GISS, and GFDL models).
Fred Moolten: My point was that the tuning was not done by consulting the climate trend that the model would eventually try to simulate in order to make the simulated and observed trends match better. The model was not “tuned to the trend”.
That is an assertion about the mental states of the modelers during the phases of tuning. That the modelers did not “consult” the trend (or trends) since CCSM3 while tuning to get CCSM4 is incredible.
Matt, it is not in the slightest bit incredible. Typically climate models are developed to run efficiently on the next supercomputer, not the current one – that is, when they are being developed they tend to run rather slowly on the current machine. Running a 20th Century experiment would therefore take some months of computing time (excluding the time to set up the model). While the model is running the science development cannot move forward. The outcome of the model (the fit to the data) will never be perfect, and the arguments about why it is not perfect would be legion (is it the data, is it the forcings etc.)
The cycle for comparing against climatological data is much quicker, and tests can be done in parallel (eg. simulating a number of summer and winter seasons, each simulation running from prepared initial data, and comparing against climatology). Such simulations are too short to diagnose the impact of forcings.
Fred: Thanks for your response. It has certainly helped me better understand the process of developing climate models.
My obvious follow up question is what accounts for the inverse correlation that has been described in the literature? Coincidence? If you or someone else has already answered, please point me in that direction.
Now for a few observations. I am surprised by your observation that including indirect aerosol effects in the models that lack them would result in only slight additional cooling. Lindzen suggests that the effect is sufficient to cancel large differences in climate sensitivities.
Second, I think that the focus on backcasting you encounter in internet discussions arises, in part, because the IPCC cites it as an important validation of the models. If the models are developed more or less blind to history, that would be helpful information to include in the IPCC reports.
Finally, I share your frustration that more climate modelers and scientists don’t show up at forums such as this. Perhaps if you have connections you could let them know that discussions here are reasonably civil as such things go.
Paul – “Direct aerosol effects” are included in all models, but many don’t yet include “indirect aerosol effects”, referring to changes in clouds induced by aerosols. Including these would add slightly to total aerosol cooling.
I referred to the inverse aerosol/climate sensitivity relationship in the preceding thread, and the long comment I wrote there is cited again in Dr. Curry’s post above – it’s the “Richard Lindzen vs the aerosol cooling” link, which is to the comment as reposted in the IdiotTracker blog.
I think some of the IPCC statements on model development are confusing, and it would be worthwhile for them to be more explicit as to what is or isn’t done. Some of this was also discussed by Gabi Hegerl in a previous thread here on the Uncertainty Monster. She distinguishes “inverse modeling”, where different aerosol values are tested to see which permits the best match with temperature trend observations, from “forward modeling”, where aerosol forcing is entered into the model from aerosol data without regard to the temperature trends. The forward models are the ones cited in the hindcasts. The extent, if any, to which they may have sometimes been influenced by awareness of past temperature trends has been argued, and I discuss some of this in the comment I referred to above. My opinion, based on what the modelers tell us, is that they have not been a major factor, and the CCSM4 story I mentioned earlier is consistent with that view.
I would take Fred’s post here with a grain of salt. What the modelers say for outside consumption and what actually happens are often different things. This all has to do with the prominent place the models play in the IPCC reports and their use to try to motivate action. I’m not saying that modelers are actively dishonest, merely that they have a vested interest in how the public perceives what they do.
Pekka and I have both extensive experience in this area. I would say the following:
1. The narrow issue of whether aerosol forcings are tuned is rather irrelevant. There are plenty of other parameters to tune and since the uncertainty is very large in some cases, failure to tune would result in clearly unrealistic results. We don’t know how many “alpha” versions were thrown away.
2. There are several phases in model construction and validation. The construction phase usually starts with a huge code base of many hundreds of thousands of lines. Then you want to update one by one all the interactions, the subgrid models, and the equations solved themselves to reflect better knowledge and data. It is foolish to try to update all of them at the same time. The scientific way is to update them one at a time and then test each update, not just to tune it but also to find bugs. At some point, as more and more updates are made, it becomes a complex tuning problem and “expert” judgment comes into play. This is probably where implicit tuning takes place.
3. In any rigorous effort there is a validation suite of realistic cases or scenarios where you have data of some kind and past model results, as well as the results of other models. I would be very surprised if the suite did not include hindcasting scenarios. It is a natural choice, since there is comparatively good data to compare to.
4. The fact of the matter is that in any complex exercise like this it is impossible to rigorously approach it the way Fred says. You do the best you can and examine as many sets of parameters as you can.
The aerosols are interesting in that I believe that there must be a subgrid model for this to convert aerosol concentrations into forcings. Of course this model is mostly guesswork. The CERN experiment will be used I’m sure to change it. This is where the opportunities for tuning come in. The question is how do you set parameters whose values are largely unknown? The only scientific way is to use data. Fred claims that the modelers say they don’t use temperature trends. OK, but that may be irrelevant. It would be foolish and stupid to not look at model outcomes for your validation suite for various choices.
I am not against the modeling enterprise. It’s the best tool we have for some of these things. I do believe that in general models of fluid dynamical systems involve thousands of choices and are well known to be very wrong for some things. The trick is to determine what they are good at.
The point about the CCSM3 vs. CCSM4 by itself tells us nothing. There is bound to be a suite of validation cases. You can’t make all of them better all the time.
In any case, there is no such thing as a double blind study in model construction. That may come in the future, but not now.
David – This is why I would welcome the participation of modelers in this discussion. You imply they are not telling the truth, perhaps by being dishonest with themselves, and since they have explicitly and repeatedly stated the opposite, this is something I can’t resolve here. I expect there may be some occasional self deception going on, but if you read the extensive descriptions about how aerosols are addressed in models, it’s clear that there are ample justifications given for the choices. I’ll assume that influences based on knowledge of past trends may occasionally affect choices, but it seems extremely unlikely that it has been a major factor. If for no other reason, it’s because the models are so complex that no modeler knows how a tweak here and there will actually change the output. The CCSM3/4 illustration I cited is a good example.
It’s also important to understand that the question is not whether models are tuned, but what they are tuned to. It appears they are not tuned to match observed temperature trends – either not at all, or to a very small extent. Given the detailed description of what they are actually tuned to, the assertions from different modeling groups they are not tuned to the trends, the empirical data that changes from one version to the next may make the trend matching worse rather than better even as it improves other aspects of model performance, and the absence of convincing evidence that multiple members of the modeling community are misstating the facts, I conclude that claims such as Lindzen’s that the aerosols are “arbitrarily adjusted” to match the trends are almost certainly false.
Others can of course make their own judgments, but there seems to be something strange going on here. I have stated my conclusions on several occasions, because I think it’s important to challenge what Lindzen says. On the other hand, David, you have now consistently followed my comments on almost every occasion with claims that it’s unimportant or irrelevant. If it’s so unimportant, I don’t understand why it seems important to you to keep stating how unimportant it is, along with advice not to heed what I say. It seems to be something I can now almost count on without fail.
Perhaps, if you are so convinced of the unimportance of this issue, you might consider simply ignoring it and instead proceed with the other points you want to make, many of which sound reasonable but unrelated to whether or not models are tuned to match trends.
Fred – I’ll let David talk about the detailed dynamics, but (an introduction to) Reynolds averaging is covered in atmospheric fluid dynamics. It is not possible (or desirable) to describe every small scale fluctuation in the atmosphere. Suppose, for example, you have a time series of wind measurements at an accurate station. You would see a lot of small variations in this time series, even on timescales of minutes, and perhaps more coherent signals associated with time of day, the entrance of a new weather system, etc. You can decompose the velocity of the wind field (say u, the east-west component, and v, the north-south component) into u = u_bar + u’, and v = v_bar + v’ (where u_bar is the averaged component, produced by an averaging operator, and u’ is a turbulent fluctuation).
There are a lot of mathematical rules that students learn to make sense of this. There are some closure methods (e.g., K-theory, which suggests these turbulent fluxes mix the properties of the environment). This is one reason we have parametrizations, and scientists try to validate them with respect to the real world.
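A minimal sketch of the decomposition described above — splitting a wind time series into a mean plus fluctuations, u = u_bar + u’ (the sample numbers are made up purely for illustration):

```python
# Reynolds-style decomposition of a (made-up) wind time series:
# u = u_bar + u', where u_bar is the mean (a simple averaging operator
# standing in for the Reynolds average) and u' are the fluctuations.

u = [3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 3.5]   # wind speed samples, m/s

u_bar = sum(u) / len(u)                  # averaging operator (simple mean)
u_prime = [ui - u_bar for ui in u]       # turbulent fluctuations about the mean

# Two defining properties: the fluctuations average to zero, and
# mean + fluctuation reconstructs each sample exactly.
print(f"u_bar = {u_bar:.2f} m/s")
print(f"mean of u' = {sum(u_prime) / len(u_prime):.1e}")
```

The unresolved u’ part is what the closure schemes (K-theory and the rest) have to parametrize, which is where the subgrid-model choices discussed in this thread enter.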
Once we get into the implications, for say, climate sensitivity or long-term chaos then it is here that I disagree fundamentally with David. The model performance is not fundamentally constrained by Reynolds averaging, and many atmospheric features are part of the flow that is resolved (rather than the unresolved). That said, a lot of people have looked for evidence of multiple equilibria, strong long-term sensitive dependence to initial conditions, etc. There is no magic rule that the climate can’t be sensitive to initial conditions, but it’s a very small issue for the present climate at least.
Fred says a mouthful: ‘If you read the extensive descriptions about how aerosols are addressed in models, it’s clear that there are ample justifications for the choices’. Yes, ample justifications for a plethora of choices.
Fred’s inner ridiculer wants out.
A more specific and open discussion of the role of implicit as well as explicit tuning in model development is something that I have been hoping to see. It’s also one of the wishes that I have stated since soon after I started to contribute to climate discussions.
Let’s look at what the modelers are really telling us, when we expand beyond those few comments that have made David and myself wonder, at least assuming that Fred has succeeded in representing them correctly.
One of the papers that I read at an early stage of trying to learn about climate science was written by a Finnish rather well known modeler (and IPCC author) Jouni Räisänen, the 2006 review article How reliable are climate models?. The concluding remarks include a list of five arguments in support of usability of models and a list of six issues that weaken the arguments. Number 2. on this list is:
His overall conclusion is rather positive for the models, but the content of his paper gives a lot of reason for caution.
Another interesting article is that of Peter Müller in WIREs Climate Change, July/August 2010: Constructing climate knowledge with computer models. The conclusion includes the following sentences (my emphasis):
These are just two examples, which are by no means exceptions. My impression is that whenever climate scientists discuss these issues directly they in most cases tell the same message: The uncertainties are difficult to quantify, the role of implicit tuning cannot be separated, etc. In most cases their overall assessment is positive with caveats.
For an outsider with a generic understanding of the issues but without the inside view, it’s really difficult to judge to what extent the overall assessments are on target, as there are certainly arguments to expect bias, but those arguments do not all point in the same direction.
The most obvious reason for expecting positive bias is that many people wish to see their own work as significant and that also getting funding is easier, if others have trust in the work.
In the case of science the opposite bias may also be very important, as scientists really do search for explicit and objective evidence and are very often uncomfortable with less formal judgment. Judith is definitely not alone in her worry about the “uncertainty monster”. When such worry is repeated and passed from one specialist to scientists in neighboring fields, the result may well be an overemphasis of uncertainties.
Personally I feel the forces for both types of bias are so large that it would require a lot of work to figure out how strongly they have affected the conclusions. That would require an extensive analysis of the process of model development, trying to list all the important choices made during the process and the real reasons for those choices. Figuring out the real reasons means that the scientists should try to identify all the related knowledge they have, and to what extent the choices were made to create a model that “is correct” in view of their pre-existing expectations.
Chris – Thanks for those points. One of my questions is about the extent to which uncertainties regarding these subgrid processes affect overall climate dynamics and their modeling. I’m sure that they do, so the question is “how much?”
Pekka – I do think we need more input from modelers rather than simply quoting them. My interpretation has already been given, and I haven’t seen much additional evidence to change it. I certainly agree that modelers may sometimes have more private concerns than they express in public, and there is, I believe, universal agreement that you can’t use observations that models are calibrated against to then claim model accuracy. Nevertheless, no convincing evidence exists at this point to refute the statements of modelers that aerosols are not tuned to temperature trends in their models, so I doubt that it’s an important element of the picture, even if it might occasionally play a role. Without more direct input from modelers, I’m not prepared to conclude more than this.
As I understand it, here is what goes on re aerosols. The inverse modeling approach to determining aerosol characteristics does model simulations in the context of the 20th century historical record (of available emission info, other forcings, temperature, etc.) to determine a space/time data set of aerosol characteristics that is used to force 20th century model simulations. There are a number of different aerosol data sets that modelers can choose from. And there are some implicit assumptions made in the inverse modeling that are not completely ignorant of the 20th century temperature time series. So that is generally what occurs, but in the absence of rigorous, complete and transparent documentation of model calibration and forcing data set collection, who knows exactly what goes on, particularly in some of the modeling groups outside the “big 5”. From the emails, it seems like Gabi Hegerl and others don’t know either.
Judy – I think we really do need some modeler input here. Certainly, historical trends go into inverse modeling estimates of aerosol forcing, but the modelers tell us that they don’t go into the forward modeling used for hindcasts. At least for some of the models (e.g., GISS, CCSM), aerosol forcing is not entered as an input value (derived from inverse modeling or anything else), but rather is generated within the model from aerosol data and the known physical properties of aerosols. Since exactly what is entered will entail some choices, we need more evidence to determine to what extent, if any, those choices are influenced by the historical record. If at some point you can persuade one of the current experts engaged in recent model design to discuss this here, we would all benefit. Gabi Hegerl is a model user, but I don’t think she is a model designer, is she?
In the AR4 simulations, most of the aerosol “data” was generated by inverse modeling. For AR5, greater emphasis on forward modeling. Apart from the detail we are talking about, all of this should be transparent, rather than impossible to find out about (presumably each modeling group does it differently)
The basic details of aerosol implementation can be found in AR4 table 10.1.
Under the SO4 header there are numbers for some models giving the reference for the aerosol data. For example, quite a few use Boucher & Pham 2002. Boucher & Pham provide both gridded SO2 emissions and forcing datasets; you’ll probably have to look at the references for the individual models to see whether they used prescribed forcings or whether the forcings were derived from emissions “on-line” in the model. The aerosol forcings for CCSM3 are described in Meehl et al. 2006b, which states that Smith et al. 2001, 2004 are used to provide gridded emissions information for sulfate aerosols, and then forcings are calculated in the model.
Some just have a Y for Yes. Again, you’ll need to look up the model references to find out exactly what was done. For example, Schmidt et al. 2006 is the reference for GISS Model E and states ‘The sulfate and carbonaceous aerosol fields were generated by the model (SI2000 version; Koch 2001; Koch et al. 1999) with industrial SO2 emissions based on the inventory of Lefohn et al. (1999).’
So far I haven’t found any models for which the sulfate aerosol forcing (or total aerosol forcing) is derived from inverse modelling.
I’ve focused on sulfate aerosols because that is the only species used in all GCMs in the AR4 ensemble. The main differences between models in terms of total net aerosol forcing seem to depend on the choices of species incorporated, and whether indirect effects are included. For example CCSM3 includes Black carbon but no indirect effects. This meant that CCSM3 had only a very small negative aerosol forcing (perhaps even non-negative?) GISS E-R has the same sensitivity as CCSM3 and likewise includes Black carbon, but also includes an indirect effect. Accordingly it has a stronger negative forcing.
Where tuning might come into it is in the development plans. In the CCSM4 papers there is a mention of a plan to add a module to produce indirect aerosol effects, which are likely to cause a larger negative forcing. Would this be such a priority if CCSM4 didn’t overshoot the historical record? Another of the NCAR models, PCM, does show a good match with historical trend observations, with a fairly similar net aerosol forcing to CCSM3, but there don’t appear to be plans to incorporate indirect effects in this model. Having said that, PCM is a fairly old and simple model, which might explain the lack of development plans.
I like Fred’s statement that the forcings are derived from the known concentrations and the “known properties of the aerosols.” According to the IPCC, the properties of the aerosols are essentially unknown within a pretty large range.
One thing that really puzzles me is why there is no tuning to match trends, for example in hindcasts. Trends are likely to be better predicted than absolute levels, and exactly replicating the current climate is almost certainly impossible anyway. What’s the rationale for this? Trends are generally much more important and more robust; absolute levels, not so much. Since you can’t get everything right, you might want to pick the most important things.
A quote from Wilcox: “In essence, Reynolds averaging is a brutal simplification that loses much of the information contained in the Navier-Stokes equations. The function of turbulence modeling is to devise approximations for the unknown correlations in terms of flow properties that are known so that a sufficient number of equations exists. In making such approximations, we close the system.”
I just want to add that in a lot of circumstances, the effect of the subgrid scales is large. One can see that just by observing the differences between different turbulence models.
Fred, I think you didn’t read carefully. I apologize for “shadowing” you on this blog. The reason is that I actually find your comments interesting and worth reading. I find you a good stimulant to formulate my own thoughts.
What the modelers say may be technically true about the aerosol forcings. But I suspect the subgrid model is tuned, and it converts concentrations into forcings, unless the models use an incredibly crude method. The two are mathematically equivalent; verbal distinctions are not necessarily meaningful.
Yeah, it’s critical what the models are tuned to. One would hope it’s the test cases where the best data is available. Your assertion is that the models are only tuned to the current climate. If so, that’s a shame, because hindcasting is perhaps more relevant to predicting the future, since it involves changes in forcings and trends.
I don’t think the aerosol issue is important. But your other assertions about the models and their construction seem to be second-hand repetitions of what the literature says. Pekka and I have first-hand experience, and we find them not to be credible. That’s an important issue because of the prominence the models get from the IPCC and climate scientists generally.
My point about the stages of model construction seems to me to be pretty important. It would seem weird not to look at the influence of all the choices made in the construction of the model. The only way to do that is to compare to data and to look at them one at a time, at least initially.
Pardon me, I meant that the aerosol tuning issue is unimportant.
David – Leaving aside the trend issue, why don’t you expand on the subgrid modeling that you’ve referred to frequently. It’s something I know little about, and that probably is true for most others here as well. I’m sure that turbulent flow, convection, and the like at subgrid levels have implications for the more global perspective, for long-term as well as short-term climate dynamics, and for temperature change as well as atmospheric and oceanic eddies and circulation patterns, but it’s not clear how much. The question then is: how important are the errors in estimating these subgrid processes for the larger-scale and longer-interval estimates that the models engage in? Is this a matter of opinion, or are there quantitative metrics to judge it? Since this is a subject you know well, are you overestimating its importance in the grand scheme of climate modeling? How much will these non-radiative processes affect the estimates of how forcings affect the climate energy balance as a function of radiative restoration of balance in response to a perturbation? At how minute a scale must we understand convection as part of planetary radiative/convective equilibrium? I don’t think anyone has the answers to all those questions, but it would be worth hearing views on the subject.
Sure, First let me reference Wilcox’s book on the subject. It’s very technical but is very accurate.
First, as Wilcox states, virtually all flows of interest involve very small turbulent scales that are impossible to resolve explicitly computationally. This includes things like climate, clouds, etc. The turbulent eddies are not molecular in scale, so they are described by the Navier-Stokes equations on a fine enough grid.
There is interaction between all the scales. So how can you account for the subgrid scales on the actual grid? The idea is a concept called Reynolds averaging. Basically you decompose the velocity field into resolved scales and unresolved scales. Then you derive a new set of equations for the resolved scales, which has terms related to the unresolved scales. Then you try to “model” these terms. Of course this is impossible to do exactly, but you can make approximations based on both physical principles and heavy reliance on test data. For example, you use the “law of the wall” for boundary layers. The effect of the unresolved scales is assumed to be a dissipation term. That in itself is an assumption, called the Boussinesq approximation, but it’s OK in mild flow regimes.
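To make the closure problem concrete with a toy example: when you average the product of two velocity components, you get the product of the means plus a leftover correlation of the fluctuations (the Reynolds stress), which is exactly the term you cannot compute from the resolved field alone. A minimal numerical sketch, using synthetic signals rather than a real flow:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic "turbulent" velocities: mean flow plus correlated fluctuations,
# loosely mimicking a shear flow where u' and v' are correlated.
u_fluct = rng.normal(0.0, 1.0, n)
v_fluct = 0.5 * u_fluct + rng.normal(0.0, 1.0, n)
u = 3.0 + u_fluct   # u = U + u'
v = 1.0 + v_fluct   # v = V + v'

mean_uv = np.mean(u * v)                                     # <uv>
product_of_means = u.mean() * v.mean()                       # U * V
reynolds_stress = np.mean((u - u.mean()) * (v - v.mean()))   # <u'v'>

# Averaging does not commute with the nonlinearity:
# <uv> = U*V + <u'v'>, and <u'v'> is the unclosed term a model must supply.
print(mean_uv - product_of_means, reynolds_stress)
```

The gap between <uv> and U*V is the piece a turbulence model has to approximate; nothing in the averaged equations themselves determines it.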
The problem here is that the range of phenomena you want to capture is pretty broad, and the errors in turbulence models are case dependent. I can’t go into complete detail, both because it’s technical and for other reasons I can’t discuss here. Basically, NASA Langley has a good web site on turbulence modeling that nicely documents the accuracy (or lack thereof) of the most widely used models for a range of flows. For simple situations, they are pretty good. For more interesting flows, they are pretty bad.
There are various adjustable terms that are added to the basic models. These are still controversial but help in some cases. Most people believe that the way forward is Reynolds stress models. The problem here is that the number of additional degrees of freedom is much larger and the computational burden greater. Like many things, more degrees of freedom can be good, but it can also be bad in that it’s harder to control all of them in an intelligent way. Basically, all these models are tuned using various test cases. The code developers are well aware of what the influence of various choices is going to be, and they consider that in developing better models.
There is a recent paper from Airbus in the Journal of Mathematics in Industry (Dec. 2011) that is pretty good on this. It’s available online for free. They are pretty honest about the problems and where we need to get better. You will find the literature a lot less honest, especially the papers by the code developers themselves. My own experience is that the literature is worse than most people realize. Once again, I can’t discuss this in detail for a host of reasons. If you email me I can send you some references with some rather startling data that casts doubt on some well-received ideas.
Another issue coming up here is related to the nonlinear nature of the Navier-Stokes equations. People are now finding a lot of multiple solutions, bifurcations, etc. that they used to think weren’t there. This is just beginning to appear in the literature. By the way, there are lots of situations where the long-term solution is strongly dependent on the initial conditions. The climate dogma that this can’t be true for their problem is a strong assumption that needs strong evidence. I think the evidence is showing the opposite, at least in a lot of situations.
Fred, I don’t have time tonight to get into this discussion, but there is little doubt that different numerical integration schemes and different parametrizations of subgrid-scale processes can project onto climate sensitivity, as well as other variables (e.g. Knutson et al., 2004 on hurricane changes). There is also a “super-parametrization” technique that is becoming popular and gives better representations of the MJO, the Asian monsoon, and other phenomena on multiple timescales, but it comes with a cost drawback (I’m not familiar with the details) and doesn’t necessarily represent clouds any better than GCMs.
I should note that it’s becoming popular in many studies to probe a broad range of parameter space, either with perturbed-physics ensembles (of a single model) or with multiple models (which can also explore structural rather than parametric uncertainties), involving many different runs. Again, there are cost drawbacks to this, and I think climateprediction.net has been a great stepping stone forward.
David – Thanks. You’ve given me a lot to chew on.
Chris – Thanks.
David – I’m beginning at this point to start struggling with the Reynolds averaging concept and the decomposition of fluid flow into its mean and fluctuating parts. It’s clear to me that I’m not about to become a master in the field of fluid dynamics, but I want to get some perspective about how much the uncertainties affect the longer term, larger scale processes.
Fred, sorry, I replied to this part of the thread above…
Hissink’s a master. Read some old Climate Audit.
They deny it in public but admit it in private. Never mind the delusions of Fred; they tune their results to match the temperature record and that’s the end of it. If they didn’t do it, with so much uncertainty in each input variable there is absolutely no way they could replicate the past. From my eavesdroppings, most variables are left the same bar around four. Aerosols are obviously the only potent cooling agent, so that is a vitally important knob. In reality, if you set that knob to the max of its uncertainty band you would predict massive cooling. If a true sensitivity analysis (i.e. test all variables at the max and min of their uncertainty ranges, one after another) were done, then the outputs would be all over the map, not agreeing with each other at all and certainly not capable of predicting anything. I once asked Gavin if he had ever done a true sensitivity study (which is common in other fields) and he replied that such a test would “not be useful”. Read that how you like. He’s right though: the results of the ensemble merely reflect the bias of the users. It’s like putting your opinion in a mathematical wrapper and pretending the result proves your opinion was correct. Now that is useful – to an activist – but not for simulating nature.
There is something odd about global temperature data in 1960-1970 period which needs closer attention.
There is a good correlation between the global temperature and the AMO up to 1960 (r2=0.54), and an even better one (an incredible r2=0.955) from 1970-2011.
I do not think that climate scientists should either ignore it (if real), or get away with it if not.
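For anyone who wants to check numbers like these, the r2 between two annual series is only a couple of lines. The arrays below are made-up stand-ins, not the real data; substitute the actual AMO index and a global temperature series to reproduce (or refute) the claim:

```python
import numpy as np

# Hypothetical placeholder series; NOT the real AMO or HadCRUT data.
amo  = np.array([-0.2, -0.1, 0.0, 0.1, 0.15, 0.2, 0.25])
temp = np.array([-0.1, -0.05, 0.02, 0.12, 0.18, 0.22, 0.3])

r = np.corrcoef(amo, temp)[0, 1]   # Pearson correlation coefficient
r2 = r ** 2                        # coefficient of determination
print(round(r2, 3))
```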
The AMO seems to be greatly influenced by high northern hemisphere volcanic activity.
That compares a Siberian tree ring reconstruction (Jacoby et al. 2006, Taymyr) with a southern South American reconstruction (Neukom et al. 2010) using a 66-year moving average. I would think that dirty snow due to BC and ash would amplify the melt in low volcanic periods. Agricultural expansion of course has an impact.
Not in this case; if so it would recover and return to where it left off, as it did in the 1920s. The volcanic effect disappears after a few years. But look what happened around 1970: a sudden and ‘forever on’ 0.325 C difference?!
That is in my second comment. Since the SH and the NH are not responding the same, that appears to be evidence in favor of greater impact of land use and BC than well mixed CO2. So the AMO cool phase is likely to be less strong than in the past.
That one compares Taymyr with the CET and the AMO reconstruction by Gray et al. 2004. It is kind of busy, but the light gray is my attempt at a volcanic impact weighted for the NH, plus the instrumental record for Parafilova (?) in the area of the Russian agricultural expansion into Siberia.
Russian records are not available to compare the area-by-area expansion, but there seems to be a correlation with the fits and starts of Russia’s efforts to cultivate more land in the interior.
What would it look like if you used the pre 2000 dataset?
@vukcevic: There is a good correlation between the global temperature and the AMO up to 1960 (r2=0.54), and an even better one (an incredible r2=0.955) from 1970-2011.
That’s brilliant, vukcevic. The r2 = 0.955 you’ve observed would show that the big rise from 1970 to 2011 was caused by the AMO rather than CO2. All you need to complete your proof is to show that what you’re calling the AMO is not caused by CO2.
Not to be a wet blanket, or to rain on your parade, or whatever the expression is in Montenegro, but I would guess the following.
1. Most of what you’re calling the AMO is caused by CO2.
2. The “real” AMO (what the AMO would have done if the CO2 level had stayed at its 1958 level of 314 ppmv) would have behaved very differently from your definition of “AMO,” namely more along the lines of the green curve in this plot.
The orange curve, which is the sum of the AMO (called SAW there) and AGW (called AHL), is shaped like the AMO on the left and like AGW on the right. This results from AGW being fairly flat on the left (and likely to be flat for many centuries further left) and from the AMO being fairly flat on the right (which, based on the last millennium of tree ring data and other considerations, is likely to repeat the previous 151-year pattern for the next 151 years, with the big upswing of 1912-1938 repeating in 2063-2089).
This analysis, when restricted to “long-term climate” defined as the portion of HADCRUT3VGL varying slower than the 11 and 22 year solar cycles, gives an r2 of 0.9996 when modeled with 9 parameters, and I estimate 0.9999 with 12 parameters (TBD). The 6-parameter version models AGW as a log-of-raised exponential (3 parameters) and the AMO as the 2nd and 3rd harmonics of a 3-parameter sawtooth (i.e. two sine waves, since harmonics are always sinusoidal). 3 more parameters describe the distortion of the 4th and 5th harmonics, bringing the parameter count to 9 and the r2 to 0.9996.
The model is surprisingly simple for such a high r2. Moreover the unexplained variance has a shape that should be possible to model with only three more parameters to get the r2 above 0.9999.
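If I follow the description, the functional form would be something like the sketch below. The parameter values are my own placeholders purely to show the structure (a sawtooth of period P has sinusoidal harmonics at periods P/n with amplitudes falling off as 1/n); they are not the fitted values:

```python
import numpy as np

def agw(t, a, b, c):
    """Log of a raised exponential, 3 parameters: a * log(1 + b*exp(c*t))."""
    return a * np.log1p(b * np.exp(c * t))

def amo_saw(t, amp, period, phase):
    """2nd and 3rd harmonics of a sawtooth, 3 parameters. Harmonics of any
    periodic wave are sinusoidal; sawtooth harmonic n has amplitude ~ 1/n."""
    out = np.zeros_like(t, dtype=float)
    for n in (2, 3):
        out += (amp / n) * np.sin(2 * np.pi * n * (t - phase) / period)
    return out

years = np.arange(1850, 2011, dtype=float)
# Placeholder parameters only; a real fit would minimize residuals
# against the smoothed HADCRUT3VGL series.
model = agw(years - 1850, 0.5, 0.05, 0.02) + amo_saw(years, 0.3, 151.0, 1924.0)
print(model.shape)
```

The point of the sketch is just how few moving parts there are: one smooth rising curve plus two sine waves.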
These numbers tighten those in my December AGU presentation, with no change at all in my estimate back then of 2.83 for climate sensitivity as I define it even though the r2 has improved significantly.
However, since no one else uses my definition of climate sensitivity, which incorporates an expected delay for the full impact of CO2 (presumably due to the ocean acting like a capacitor that takes a while to charge up; I estimate 15 years), comparing my 2.83 figure with anyone else’s is apples and oranges, a meaningless comparison. If you assume the delay is zero (which can’t approach r2 = 0.9996), then you get a climate sensitivity of around 2, typical of what those who study observed climate sensitivity report, as distinct from modeled climate sensitivity, which gives completely meaningless answers. I.e., this sawtooth model would seem to permit accurate measurement of impact delay.
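The “capacitor” analogy corresponds to a first-order lag: the realized response relaxes toward the forcing with a time constant of roughly 15 years, and for a steadily ramping forcing it ends up tracking the forcing shifted back by about that delay. A toy illustration, with numbers chosen only to show the behavior:

```python
import numpy as np

tau = 15.0                       # assumed charging time constant, in years
years = np.arange(150)
forcing = 0.01 * years           # steadily ramping forcing, like linear log(CO2)

# Discrete first-order lag: each year the response closes 1/tau of the
# gap between itself and the current forcing (the "capacitor charging").
response = np.zeros_like(forcing)
for k in range(1, len(years)):
    response[k] = response[k - 1] + (forcing[k] - response[k - 1]) / tau

# Once the transient dies out, the response trails the ramp by a fixed
# offset of roughly slope * tau, i.e. response(t) ~ forcing(t - tau).
offset = forcing[-1] - response[-1]
print(offset)
```

With these numbers the eventual offset settles near 0.14 (slope 0.01 times an effective lag of tau − 1 years in this yearly update), so the response is simply a delayed copy of the ramp.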
Since I don’t expect anyone to believe r2’s as high as the above, I’m working on an Excel spreadsheet designed to allow people (a) to verify these numbers and (b) to easily adjust the parameters, and almost as easily the whole model, to see if they can improve on r2 = 0.9996. Vanilla Excel, no VBA; it should even work on Excel 2000. Unfortunately not on OpenOffice CALC; even the “libre” version seems too badly broken to run nontrivial spreadsheets. Maybe its current maintainers in Germany would like to fix it. Oracle seems to have lost interest in CALC.
“Oracle seems to have lost interest in CALC.” So true; OpenOffice spreadsheets are a unique adventure with large time series. I think that your 2.8 is a touch high, since the 15-year charging cycle appears closer to 30 years and seems to be in a discharge cycle, if satellite data is to be believed :)
I think that your 2.8 is a touch high since the 15 year charging cycle appears closer to 30 years
You’re confounding the AMO and AGW. The AMO indeed has what looks like a 31 year swing up followed by a swing down of that length. AGW on the other hand is driven by a non-cyclic increase in CO2 and there is no shape from which one can infer any sort of “cycle.” Once you’ve separated the AMO and the AGW, the impact delay of the latter has to be determined by something other than shape-matching.
Whether increasing the impact delay from 15 years to 30 years raises or lowers the climate sensitivity in my sense depends on whether log(CO2) curves down or up, respectively. According to Max, CO2 follows an exponential curve with a fixed CAGR of 0.5%. In that case log(CO2) increases linearly, so changing the impact delay does not change the estimate of climate sensitivity. This is because sliding a straight line left or right does not change its slope, which is what climate sensitivity is.
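That last claim is easy to verify numerically: generate CO2 at a fixed 0.5% CAGR, so that log(CO2) is exactly linear in time, and regress a toy temperature series against lagged copies of log(CO2); the fitted slope is identical for any delay. A quick sketch, with all values synthetic:

```python
import numpy as np

years = np.arange(1900, 2101)
co2 = 300.0 * 1.005 ** (years - 1900)   # fixed 0.5% CAGR
logc = np.log(co2)                      # exactly linear in time

temp = 2.0 * logc                       # toy temperature with "sensitivity" 2

def slope(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

# Regress temp from 1930 onward against log(CO2) delayed by 0, 15, 30 years:
s0  = slope(logc[30:],    temp[30:])
s15 = slope(logc[15:-15], temp[30:])
s30 = slope(logc[:-30],   temp[30:])
print(s0, s15, s30)   # identical: shifting a straight line keeps its slope
```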
and seems to be in a discharge cycle if satellite data is to be believed
Talking about the climate in this way is like describing the trajectories of Mars and Venus in terms of epicycles. (Today we would say “orbit” instead of “trajectory” but epicycles didn’t have a notion of orbit as we understand it now.)
Hi Dr. Pratt
I think you missed my point. You may get such high correlation for ‘formula’ generated curves, but I’m very sceptical when natural events are concerned, particularly after the ‘bottom fell out of the AMO’ in the late 1960’s, whatever is the cause.
Finally, the AMO is unlikely to be the cause of global temperature change, since it appears that it is itself a consequence of changes in the sub-polar atmospheric pressure, already known to have a profound effect on the Northern Hemisphere climate.
Either CO2 causes changes in the Icelandic Low some years ahead of its re-radiative effect on temperatures, which is not what AGW says, or it has nothing to do with it.
For the time being I think the latter is more likely.
The AMO is last in the line of apparently closely related events.
WTF. I don’t care about your r^2, I believe you on that. I don’t understand your green curve. The ESRL people or whoever they are state that they do the best job they can to remove the GW signal when creating the index. They do allow for the possibility that it is a coupled phenomenon even then, but I trust them that you can’t just subtract GW from it.
The real issue I see with Vuk’s correlation is that the spatial extents of the two must be very different. A “degree C” in the AMO index is not a global degree C as in HadCRUT or UAH. It represents a more limited area (?).
So I think I’d show two green curves: a sinusoid with the period and amplitude of the AMO, and a “globalized” AMO somehow relating its amplitude to a global effect. Not sure how to do this. But not the time-spliced one you have created.
It occurs to me that the critiques of Lindzen did not address the major, primary, predominant refutation of a greenhouse effect with a positive feedback loop: the missing equatorial hotspot around 200 hPa. All the other elements are just posteriorities.
You sum up the believer’s challenge well: they must keep distracting with attacks on Lindzen, rationalizations of Gleick, ignoring climategate, etc. etc. etc. or the wheels fall off the AGW wagon.
Comanche was Cavalry.
How can ignoring something cause a distraction?
As for criticism of Lindzen being a “distraction”: he is a high-profile figure who still has a certain amount of scientific credibility due to his past achievements, and his presentation received a fair amount of publicity, as did the recent “skeptical” op-eds in the WSJ, so it’s perfectly reasonable that people on my side of the argument should want to respond – especially when their own integrity is being called into question.
If you see these arguments as distractions, then your criticism should be directed at Lindzen himself, the authors of the op-eds, and those who gave them further publicity. You might also consider that if their arguments had not been so badly flawed, then us “warmists” would have had much less scope to criticise them.
‘So badly flawed’, so you pick a nit. Methinks you protesteth overly much.
“Badly flawed” is putting it politely.
‘Badly flawed’ is bafflegab. Lindzen’s Razor is the simplest explanation for why the models’ ‘projections’ are off the screen, but not necessarily the correct one. Got a better explanation? Drunken projectionist? Asleep at the Wheel?
The serious flaws in a number of Lindzen’s arguments have been pointed out here and elsewhere. I don’t see why I’m required to provide the “correct” answer because Lindzen’s talking nonsense – I don’t pretend to be an expert on climate models.
It isn’t missing. The trend is just too small to be detected accurately given that it depends on radiosondes flown in the 60s and 70s. These radiosondes were never designed to be sensitive enough to collect data that would enable you to detect a small trend.
‘Small trend’. I see you say it twice. The question is how small? So far it is small enough not to be detected. Maybe it will show up with the next hundred parts per million aliquot. I hope so, because by then we’ll need it.
Apropos ‘the major primary predominant … greenhouse effect with a positive feedback loop’, I recently asked the folks over at Realclimate how many years of non-increasing temperatures would cause them to reconsider.
One of their long-time faithful accepted that 17 years would do it, but said he would nevertheless NOT question
(a) the greenhouse (ie Tyndall) effect – which seems uncontroversial; and
(b) a large positive feedback – which indeed seems controversial.
What else is there left TO question in that case?, I asked – but sadly at this point I was deemed to be disturbing the karma, and was blocked.
Does anyone here have any idea ?
presumably, the relative strength of the GHE and enhanced GHE (incl feedbacks) w/r/t other types of forcings and variability.
karma has a funny way of working out in the long haul.
Right. When dogma gets in the way, karma just runs right over it. (And don’t get me going about the rampant nepalism in Katmandu.)
Katmandu (sic) is a Bob Seger song. Kathmandu is the capital of Nepal. It’s not clear which one you meant.
CO2 control knob dogma is getting flattened(luv it) by karmic climatic observations. And don’t get me going about the rampant nepotism in CO2-Man do.
don’t tell me you’re a keeling denier
Naw, billc, the pun was too enticing, despite being shabbily dressed.
The evolution of the GISS adjustments is found here:
So we have two years, 1934 and 1998, for the US.
In July 1999, the value for 1934 was 1.459 and for 1998 it was 0.918.
Not good enough.
By 2010 the values were 1.26 for 1934 and 1.29 for 1998.
We do not know if we will have a warmer future, but we can be sure we will have a cooler past.
go back and read this post.
The snows have started and the current warming has peaked out. We will cycle inside the same limits that have been in place for the last ten thousand years. Ewing and Donn modified for the modern cycle.
Here is an animation – http://i44.tinypic.com/29dwsj7.gif
Soviet adage: “The future is certain; only the past is subject to change!”
“Evolution” is an interesting way to put it. I’d call it the “water boarding” of the GISS data, meaning it may or may not technically be torture depending on who you ask, but it has the same result: getting the data to say what you want it to say, with a high probability of it not being true.
Lindzen’s talk was a great example of the old adage: a lie can get half-way round the world before the truth gets its pants on.
A snore can get half-way round the world before one even finishes reading some advocates of alarmism.
Yes the lies about Lindzen have received more coverage than his talk.
The team is hard at work.
Lindzen has been giving the same presentation for years.
Unfortunately/fortunately the number of atmospheric physicists with his level of qualification is quite limited and a large number of ‘climate scientists’ are not actually atmospheric physicists.
Nor are ‘climate scientists’ actually scientists, since refusal to follow the Scientific Method is POLICY for The Team.
My label for them: “Jackasses of All Scientists, Masters of None.”
Challenge: find one of the 100+ specialties that Carter estimates are crucial to understanding climate where the real professionals give the Hokey Team more than a D- grade …
“a large number of ‘climate scientists’ are not actually atmospheric physicists”
Climatology is an actuarial science. You don’t ask the guy who sets life insurance rates to tell you what causes cancer. You have to ask a physician about that, and even they have limited understanding. Unfortunately climate scientists don’t appear to be statisticians either, so all kinds of erroneous results emerge from data analysis done by them. I’m not sure WTF they are. Few of them are physicists, and those that are, like Lindzen, are corrected by pikers who don’t know quantum mechanics from Qantas Airways. They don’t practice experimental science. All they appear to do is collect data and then bloviate about what it means to anyone who will listen. The problem is that few of them have the proper expertise to be bloviating about it. They cobble together computer programs which make professional software developers cringe. The math in those programs they don’t understand, and it is contained in decades-old modules written by others.
In the Marine Corps we would call this a “cluster f*ck”. That’s what CAGW is – one gigantic cluster f*ck.
But the not-IPCC ditto-heads are ever happy when baaing in unison.
The RC and Nick posts criticize a single aspect of a single slide. Their arguments are trivial. They may as well argue about the font used.
Very good point. Takes people’s minds away from the important stuff.
HA Pope: “Takes people’s minds away from the important stuff.”
Isn’t the technical name for that “deflection”? :-)
Well said – the RS/Stokes observation is trivial and a feeble attempt at deflection.
In any case, it totally misunderstands Lindzen’s point (I know, I was there). Nick Stokes asserts that Lindzen was claiming that someone is “manipulating the record” – and that, he says, is “a shocking misrepresentation”. But he said nothing about manipulation. His point was that not only is it difficult to predict the future but, as it’s necessary to adjust the past record, in “climate science” we also can’t predict the past. In other words, it was a mildly humorous remark. That’s all.
Robin Guenier | March 8, 2012 at 1:23 pm
“But he said nothing about manipulation.”
The heading on the graphic, in big bold letters, is
“NASA/GISS Data Manipulation”.
True, Nick. But that was not the point he made – see my post.
So unless RC and Nick also point out that in addition to picking up a dodgy graph and misinterpreting it, Lindzen is:
– claiming incorrectly that models are tuned to match 20th century temperatures;
– getting the calculations of greenhouse forcings wrong by 30%;
– implying that aerosols are being arbitrarily set large when they ought to be ignored;
– failing to see that arctic temperature trends from DMI are merely ERA40 reanalysis outputs whose long term trend should seriously be questioned;
their criticisms can be ignored?
Remind me, what *did* the Romans do for us?
The graph may not even be that dodgy. Surely land temps have importance, and their calculations (both sides) have to be shown before anyone can say they misrepresent. It’s the number of angels dancing on the head of a pin. The global temp is moving slower than the IPCC/RC/Nick said was even possible. They have no cred either.
Lindzen is pretending that the argument is an angel number issue – ie. that you only have to tweak a few inconsequential and arbitrary things and you get very different answers.
Unfortunately, he has produced a huge number of slides where he has picked inappropriate or erroneous evidence. Lots of slightly “dodgy” graphs add up to very dodgy conclusions.
He did the same thing with his Lindzen & Choi 2011 paper, when it was found that tweaking his periods of choice by less than a month gave the opposite result to what he hoped for, showing that the result *he* wanted was too dependent on *his* arbitrary choices.
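The endpoint-sensitivity issue raised here is easy to illustrate in the abstract: with noisy data and short windows, shifting a regression window by a single point can flip the sign of the fitted trend. The series below is contrived purely for illustration and has nothing to do with the actual Lindzen & Choi data:

```python
import numpy as np

# Contrived 6-point series: shifting a 5-point regression window
# by one step flips the sign of the fitted slope.
y = np.array([0.0, 0.0, 0.0, 1.0, 0.0, -1.0])
x = np.arange(5)

slope_a = np.polyfit(x, y[0:5], 1)[0]   # window = points 0..4
slope_b = np.polyfit(x, y[1:6], 1)[0]   # window = points 1..5

print(slope_a > 0, slope_b < 0)   # -> True True
```

Whether a real-world result is robust to such window choices is exactly what the reanalyses of that paper were probing.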
That sounds like normal everyday climate science to me Steve, so what’s your beef?
Not much of climate science is not “dodgy” (your word not mine). You missed the point. Steve, did you even watch the presentation?
There is a huge difference between uncertain science and dodgy science.
It would be interesting to compare the size of the angel finger nail clippings identified in Steve McIntyre’s investigations of the Hockey Stick numerical analyses and the GISS “Y2K” error (in which the results have since been updated) with the hordes of angels released by Lindzen (in which case the results seem never to be corrected).
Or examine the pollen dust flapping off the tips of Steve McIntyre’s butterfly wings.
Let’s not lose sight of the admitted fact that CO2 is the boogeyman because GCMs could not come close to back-casting late 20th century warming without it. Not only couldn’t they reproduce the instrument record without adding *something*, they couldn’t do it with just the physics of CO2, which only produces 1.1C per doubling over dry land if everything else remains the same, and is limited in how far it can go by the maximum possible surface temperature that can be achieved by a perfect black body with the more or less constant energy it gets from the sun. So they not only had to credit CO2 with the maximum possible rise of equilibrium temperature over dry land, they had to treble it by inventing, out of whole cloth, a water vapor amplification effect. This water vapor amplification has no empirical support whatsoever. The highest mean annual temperature anywhere on the planet happens to be one of the driest places on the planet, with average annual rainfall of 1-3 inches. Adding insult to injury, the mean annual temperature record was set 50 years ago, when atmospheric CO2 in that desert (Dallol, Ethiopia) was much lower. There is nothing but a few loose correlations linking CO2 to surface warming, a bit of physics which predicts a maximum possible increase of 1.1C per doubling over dry land, and beyond that it is entirely an argument from ignorance, i.e. “if not CO2 then what?”
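The roughly 1C-per-doubling no-feedback figure cited in this comment can be sketched with textbook back-of-envelope physics. This is only an illustration: the logarithmic forcing fit 5.35 ln(C/C0) and the 255 K effective emission temperature are standard approximations, not anything specific to the comment above:

```python
import math

# Radiative forcing for a doubling of CO2, using the common
# logarithmic fit dF = 5.35 * ln(C/C0) W/m^2
dF = 5.35 * math.log(2.0)             # ~3.7 W/m^2

# No-feedback (Planck) response: linearize sigma*T^4 about the
# effective emission temperature T ~ 255 K
sigma = 5.67e-8                        # Stefan-Boltzmann constant, W/m^2/K^4
T = 255.0                              # effective emission temperature, K
lambda0 = 1.0 / (4.0 * sigma * T**3)   # ~0.27 K per W/m^2

dT = lambda0 * dF                      # ~1 K per doubling, before feedbacks
print(round(dT, 2))
```

The debate in the surrounding comments is, of course, about the feedbacks multiplying this base number, not the base number itself.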
This water vapor amplification has no empirical support whatsoever. The highest mean annual temperature anywhere on the planet happens to be one of the dryest places on the planet with average annual rainfall of 1-3 inches
The first point isn’t true. The second point is unrelated to the first point as local conditions drive local temperature whereas the effect of CO2 (and increased water vapour) is spread thinly over the whole planet.
Where’s the empirical support for water vapor amplification? I provided contrary evidence and you provided nothing. You’re darn right local conditions drive temperature extremes. And in this case a lack of water vapor appears to have driven the extreme in highest mean annual temperature. Please explain why a humid location doesn’t hold the record. Again, your response was simply a baseless claim.
You see, the thing of it is, is that CO2 won’t raise the equilibrium temperature of the ocean surface. It increases evaporation at the surface instead (unlike dry land). It takes a tremendous amount of energy to turn liquid water into vapor. There is no rise in temperature in the process either, due to water’s extremely high latent heat of vaporization. So without all that extra water vapor from the ocean due to CO2 doing something to raise the earth’s temperature, anthropogenic CO2 warming becomes very desirable given its beneficial effect on primary producers in the food chain.
Come to think of it, Lindzen’s cloud iris hypothesis is supported by the highest mean annual temperature in the world happening in a bone dry, sea level, equatorial desert. If evaporation and condensation (i.e. the water cycle) were a net positive for equilibrium surface temperature then it should be quite impossible for a desert to hold the record for highest mean annual temperature. And if CO2 were all that much the record probably shouldn’t have been set in the early 1960’s. One might reasonably presume that record should be more recent.
I call crank.
I call anonymous coward!
Your comments are not rendered true or false by my identity.
Their nonsense is independent of who I am.
More impact from across the pond;
It’s the seventh inning for AGW agenda setting, down 5 runs and nothing left in the tank. Lindzen is the true hero of the entire affair and will be remembered as such a century from now.
Just a slight warning. cwon’s link takes you to Page 2 of the article. There’s a link back to Page 1 at the bottom.
I don’t think “NASA-GISS Data Manipulation” claim by Lindzen is correct.
This is because the current GISTEMP LOTI trend is nearly identical to that of HADCRUT3 as shown => http://bit.ly/w337Nb
They both have a long-term global warming rate of 0.6 deg C per century.
If it were adjusted according to Lindzen’s claim of +0.14 deg C per century, GISTEMP would have had a warming rate of 0.74 deg C per century.
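For readers who want to check trend figures like these for themselves, fitting a least-squares line to an annual anomaly series takes a few lines. The data below are synthetic (a built-in 0.6 deg C/century trend plus a small wiggle), not actual GISTEMP or HADCRUT values:

```python
import numpy as np

# Synthetic annual anomaly series: 0.006 C/yr trend (0.6 C/century)
# plus a small deterministic wiggle; NOT real GISTEMP data.
years = np.arange(1900, 2012)
anoms = 0.006 * (years - 1900) + 0.05 * np.sin(2 * np.pi * years / 11.0)

# Least-squares slope in deg C per year, reported per century
slope_per_year = np.polyfit(years, anoms, 1)[0]
print(round(slope_per_year * 100, 2))  # ~0.6 deg C per century
```

Running the same fit on the downloadable GISTEMP and HADCRUT3 series is the quickest way to verify the 0.6 deg C/century figure quoted above.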
The claim was not correct and has been acknowledged by Howard Hayden to be an error.
Between noble cause corruption and mass acceptance of assumptions by those compiling the databases, I think the failure in data will be more subtle than simple manipulation. No one has yet made a convincing justification for the ratcheting down of past temp records and the maintenance of the idea that they are meaningfully accurate.
On the other thread you said;
‘I don’t know about glaciations in the Alps (I have not looked into it), but I have trekked to Everest Base Camp in the Himalayas; and I could not help noticing that there is no evidence that glaciations there has ever much less – or even slightly more – extensive than it was 100 years ago;….”
In my article ‘The Long slow thaw’ that I linked to, were numerous references to Alpine glaciation. Doesn’t look like you have found time to read them yet.
Too much has been written about The Himalayas glaciation to warrant repeating it here.
By the way, I’m still hoping to see some references from you as to why we should take any notice of the tree ring material that constitutes a large part of Dr Mann’s most renowned output.
Incidentally, I am British, and have no insightful knowledge of the American debate you keep referring to, other than that Obama and Romney both look pretty poor candidates to me.
On one quite simplistic level this can be seen as a death-match, nay vital struggle, between science and policy. Which will win, objective truth seeking science, or the ability of human culture to create belief systems inconsonant with facts, and the derivative, usually flawed, policies?
Ironically, unpredictable nature may well officiate the match.
Two things should not be doubted, warming is better than cooling, and cooling is more likely.
If it gets meaningfully warmer in the next decade or two, it will be one of the greatest surprises of my life. Since this would presumably happen despite the well known cooling effect of certain natural climate drivers now taking hold, I’d have to revisit my skepticism. I think a lot of us would.
And yet I have to ask, how many of you true believers would be willing to do the same if it goes the other way?
Sorry this is OT: Peter Gleick is about to address the California Water Policy 21 conference. Just was wondering how long he would lay low; seems it is this long.
..back to more important matters.
Are you sure that the agenda hasn’t been forged?
Look for lots of commas in the wrong places, a writing and layout style very different from the norm in that organisation and a nobody seemingly promoted way beyond his actual influence.
Enquiring minds want to know what the text was of his talk this morning. Here’s how the “Notable Speakers” page billed it:
Peter should have had his invitation withdrawn. That the Conference permits him to make a keynote address implies strongly they are OK with noble cause corruption and deception.
I don’t think the Climate Scientists have it right yet. And I certainly cannot agree to having my tax money spent foolishly.
I am only an Engineer, but I can’t help but think that a little level-headed thought might get us out of the dust bowl….why can’t these scientists devise a robust empirical experiment that will get them the evidence in real life that they get in their models? They have been at it for almost 40 years now, and the actual climate has been resisting their nudges for the past 15 years….what’s the problem?
This is extremely important to me… if we cut anthropogenic CO2 generation to zero today (and forevermore), what will be the consequences? Besides sending me to the poor house?
Having attended Professor Lindzen’s presentation in the House of Commons and found it both informative and persuasive, I was surprised to read Real Climate’s blog stating that his claim that NASA-GISS trend data had been manipulated was based on a false comparison.
In essence, RC argued that he had, either accidentally or deliberately, misled his audience by suggesting that NASA-GISS had manipulated data to increase the apparent warming trend. If I understand correctly, Gavin asserts that the differences shown by Professor Lindzen were in fact based on a comparison between two different GISTEMP records, one of weather station based data and the other based on LOTI (land, ocean) records and was not, as claimed, based on two different presentations of the same data presented at two different times, 2008 and 2012.
I do not know if this question has been debated, elucidated or resolved on this thread (I’ve read only a percentage of the comments posted), but I would greatly appreciate reasoned views as to whether Professor Lindzen has indeed been guilty of the carelessness/ dubiousness or deliberate dishonesty that some of us have been prepared to accept as being pretty well the exclusive province of those who support the ‘consensus’ view of AGW.
Has anybody actually raised this specific issue with him?
In passing, I do not accept the point made by ‘MrE’; the argument is not trivial, it is central, and pace Robin Guenier my impression was that Professor Lindzen was indeed suggesting that the differences between the two data sets did imply manipulation.
Trust in the honesty of the scientist is at the core of all aspects of scientific research and Professor Lindzen’s honesty is central on this issue. If he has made an honest mistake when comparing the GISSTEMP data then he should admit it or else rebut RC’s suggestion.
The AGW debate appears bedevilled on all sides with the appearance of cynical, dishonest and self serving argument and data massage and in this environment how can any concerned observer take anything on trust from either side in this benighted controversy?
Hi Tg O’Donnell,
It’s actually rather easy to reproduce the graph that Lindzen made (it took me about 5 minutes in Matlab to download all the data and reproduce everything Lindzen/Gavin did). It’s also trivial to download the current and old land+ocean datasets from GISTEMP.
There is absolutely no debate that Lindzen screwed something up, or at least that someone else screwed something up and Lindzen incorporated that into his presentation. One commenter (John Kosowski) claims to have privately contacted Lindzen, who admitted it was in error, although Lindzen himself has not done so publicly. He should do so, but regardless, he should have had the investigative integrity to get it right the first time. It’s not a very technical point that involves subtle details that one could understandably screw up… he looked at the completely wrong data, or at least didn’t bother to check what someone else did.
Moreover, he used this as evidence of ‘manipulation’ on the part of NASA, and there’s no indication that he bothered to even check with them first. There are interesting implications that would follow from this if he were right. One would be that the errors in data homogenisation, adjustments, etc. are actually larger as you get closer to the modern day (than, say, in 1880). This doesn’t make a whole lot of sense. Maybe if Lindzen had even bothered to think about that it would have been better, but instead he stuck to the rhetorical tool of making his audience laugh with a good joke.
Regarding the claims that this is a minor part of his presentation, it might be, but he screwed up a lot of other stuff too (or at least gave no indication of other, more popular points of view), as other people have mentioned.
Lindzen made a mistake. He needs to correct the record. GISS have released their code. Their data is also open. They manipulate NOTHING.
They do make some methodological choices that people can take issue with however, these choices do not change the answer in any significant manner.
Look, for 5 years now, “skeptical” folks like Nick Stokes, Zeke Hausfather, Ron Broberg, me, JeffId, RomanM, Tamino, and the crew at Clear Climate Code have been looking at GISS, running their code, dissecting their code, producing our own estimations of the land surface record. There is NO manipulation, no fraud. There are some fun-to-argue-about differences, but NOTHING that would overturn radiative physics. The only issues of serious interest are:
1. calculation of uncertainties
2. finding a UHI signal, IF one exists, and removing it if possible.
Scientifically the land record is pretty boring. But there are some fun gnarly problems (homogenization, UHI, uncertainty, etc.) that should be looked at.
These investigations won’t disturb the core science.
I disagree more or less completely. First, the uncertainties of using a non-representative convenience sample are literally incalculable and easily greater than the estimated changes. The fact that UAH (an actual instrument, not a statistical model) shows no warming 1978-1997, and also 2001-today, when GISS shows the most, is sufficient to falsify GISS, not to mention falsifying your so-called “core science”.
On a semantic note, some of us regard wholesale interpolation and area averaging as manipulation. And the disturbing warming trend of the adjustments is suspicious, to say the least. The obvious fact that GISS is populated by CAGW zealots, from Hansen on down, also enters in here. GISS is not merely an outlier; it is an outlaw. It should be abolished.
The UAH trend from 2001 to present is greater than the GISS trend. Care to rethink a few things?
Not to mention adjusting the past down and the current temp up, and having had these adjustments happen month after month and year after year. Ongoing adjustments are the worst form of manipulation. Look at New Zealand and the NIWA fiasco to get an idea of what is wrong with this.
When you are wrong you really go off the deep end!
The graph (slide 12 in the Lindzen seminar) was made by Howard Hayden – it was his error that purported to suggest that GISS had manipulated the temperature data. Hayden is now trying to make the error known.
Tg and Chris, it was a mistake at some level. But the main point he was making is correct, namely, that in climate science not only do we not know the future, but we don’t know the past. Basically, that’s unquestionably true. Lindzen used a flawed slide to illustrate the point. He should admit to this, but it’s not a significant point in the overall scheme of things. Far more serious are the errors in Mann et al’s (including Schmidt’s) defense of the hockey stick. The problem here is more serious because the literature has never been corrected and does not reflect the controversy, except in Annals of Statistics where it is fully vetted. The reason it took 12 years and appeared in a non-climate-science journal is telling about the level of corruption in the field. Lindzen’s error is not nearly as serious because it does not affect anything in the literature. And of course, Schmidt et al jump on this error while NEVER correcting the record with regard to their own errors. It’s a real problem, and the controversy here is a side show by comparison.
“Lindzen used a flawed slide to illustrate the point. ”
And the Bishop slept with the actress only to prove the point that humans are by their nature sinful.
Not to mention that it wasn’t just one actress, it was the whole cast.
Well, he wanted statistical significance. Wonder what’s in his ‘censored’ file?
Well, he wanted statistical significance.
And Steve Milesworthy used a false syllogism to prove his point.
Tg, this is a puzzling error claim because charts showing that GISS so-called corrections or adjustments have made early periods cooler and late periods warmer have been around for several years. Thus if this is an error it is endemic to the debate (which I doubt) not due to Lindzen. He is merely repeating a common argument.
It is an error, but the error is not originally due to Lindzen. He simply propagated it.
Here is a Fred reply I prepared earlier – http://judithcurry.com/2012/03/06/ams-members-surveyed-on-global-warming/#comment-182931
A PERSONAL JOURNEY
Who Do You Believe?
For a non-scientist, and even a scientist without the benefit of a careful, in-depth examination of the climate models, or an in-depth knowledge regarding the acquisition, processing and presentation of the actual climate data, whom do you believe?
I started out believing the alarmism, except back in the early nineteen-nineties it was not alarmist. There were no forecasts of coming global catastrophe. I had more important things to deal with – my job. It was before the age of the internet. In fact, after moving to Singapore in 1991 on what would become a 10-year new job assignment, global warming was totally not on my radar screen.
Fast-forward to 2008, retirement and the age of the internet: I serendipitously came across a paper by Dr. David Evans, “My Life with the AGO (Australian Greenhouse Office) and Other Reflections”, presented to a conference in Australia on 29-30 June 2007. All of a sudden the theory of global warming was back on my radar screen as it never was before. Much of what Dr. Evans wrote was new, interesting and seemed to make sense. Since that time I have spent considerable time reading about the issues in an attempt to cultivate an informed opinion. I now count myself an anthropogenic catastrophic global warming skeptic. It will take a great deal to change my opinion. Following are the major factors which influenced the opinion I now hold:
1. Feedbacks: Not well understood but vitally important to the climate models’ predictions of catastrophe, and, apparently, quite rarely found in nature. If feedbacks were positive, why didn’t the earth experience runaway warming before, during periods when levels of CO2 were apparently higher than today?
2. Medieval Warm Period: If the earth was warmer than today before the advent of fossil fuels, natural variability must be a part of the equation. Warm periods have been favorable to human civilization; cold periods have not. The MWP was a perfect example. What is the perfect climate and why is the perfect climate the climate that existed at some period in our recent past before the burning of fossil fuels?
3. When I read that the EPA has categorized CO2 a pollutant, yet I know that CO2 is an essential element for life on this earth, that plants thrive at much higher levels and sailors in nuclear submarines live with no ill effects at substantially higher levels, I know I am being conned by ideological extremists; just the way I feel I am being conned when I see an article about the dangers of catastrophic global warming with a picture of a plant spewing a white cloud of water vapor knowing that the purpose of the picture was to equate that cloud with carbon dioxide for those who don’t know CO2 is colorless.
4. ClimateGate: The behavior of Phil Jones, Michael Mann and others mentioned in the e-mails shocked me, but when I read the condemnations of the late Hal Lewis’ and other highly placed scientists regarding the behavior exhibited in these e-mails, it has created a high level of cynicism that will be difficult to dispel.
5. The CERN Cloud Experiment: Why was the funding for this experiment blocked for years by the alarmists? While initial results are inconclusive, the work continues and may eventually provide clues as to the sun’s influence on our climate. Rather than support and welcome the knowledge that might be gained by this study, the alarmists seem antagonistic and looking to find fault with all aspects. That is not the impartial behavior one would expect from scientists.
6. Follow the Money: My life’s experience has taught me that money corrupts and the amounts involved on the side of the alarmists are just huge, therefore the burden of proof they must provide has to be overwhelming. The skeptics, on the other hand, are not, as far as I can tell, corrupted by money. Most operate on paltry budgets. Eisenhower’s warning in his Farewell Address seems to be prescient and right on the mark. The zeal with which the alarmists have approached the conclusion of a global warming catastrophe has made the skeptics more vociferous in their skepticism.
7. The Atlantic Monthly article “Lies, Damn Lies, and Medical Science” seems easily applicable to climate research, as humans, motivated by money and fame, are involved in both areas of science.
8. On a more personal level, for years I strictly practiced the AMA diet guidelines for a low fat/low cholesterol diet, yet I still developed heart disease. I then read Gary Taubes’ “Good Calories Bad Calories” which chronicled the history of the research up to the present. It was eye opening how the guidelines were adopted based upon what some would say was flawed research, and, even when the research, because of better technology, was finding a more complicated narrative, some of which contradicted the original research upon which the guidelines were established, the reluctance to change the guidelines was palpable. Too many companies and researchers had vested interests in the status quo. In addition, the science of what causes heart disease is far more complicated than the early theories assumed. Climate research today is probably where cholesterol research was 25 to 30 years ago, when the various classes of cholesterol and the mechanisms by which they cause arteriosclerosis were unknown.
9. I have an innate distrust of the wisdom and motivations of government. This is based upon my reading of history and knowledge of huge government screw-ups. Government was the catalyst behind the great dust bowl of the nineteen-thirties, probably the greatest environmental disaster ever faced by this country. The story is chronicled in Timothy Egan’s “The Worst Hard Time”. Encouraged by government subsidies, easy credit and policy, Americans headed to the southwest prairie states to plant wheat. That they did by the millions, plowing up millions of acres of prairie sod, a form of grass that had over the millennia developed a resistance to the periodic droughts and winds that frequented this part of America on a regular basis. Well, when the drought and wind came, the wheat quickly died, and the rest is history – but not a history that has been told. Then there was the savings and loan crisis, again with government a major player, and the housing bubble of a few years ago, courtesy of the government’s policy of a house for everyone who wanted one, whether they could afford it or not. Gretchen Morgenson’s and Josh Rosner’s book “Reckless Endangerment” provides all the gruesome details. That was followed by the financial crisis of 2008, the cause of which has been researched and detailed in a new book by Jeffrey Friedman and Wladimir Kraus, “Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation”. The alarmists all seem to want far greater government involvement in another major sector of our economy, energy, and that scares the bejesus out of me.
10. Finally, the theory of anthropogenic climate catastrophe is not some nice to know but what to make of it arcane branch of science with very little immediate practicability. It is hugely important in the here and now because of America’s huge inheritance of fossil fuels, the development of which on federal lands and offshore is being severely hampered by the likely unfounded fear that the burning of these fossil fuels will lead to catastrophe, and because of America’s precarious financial condition, the amelioration of which a substantial part could come from the harvesting of our fossil fuel inheritance.
Ron Abate: I then read Gary Taubes’ “Good Calories Bad Calories” which chronicled the history of the research up to the present.
That is an interesting post of yours, and all of the points are defensible, I would even say “sound”, though briefly put. The Taubes and Ioannidis work can induce skepticism about the claims of expertise based on research, most appropriately. I think that it should be as irrelevant to the climate science debates as Naomi Oreskes’ “Merchants of Doubt”. But each relates, as you note, to the issue of “Whom do you trust”. Psychologically it is hard to “trust nobody”.
I personally am unwilling just yet to decide that AGW has been positively disconfirmed by scientific research, but I feel like the tide of research has been running mostly against it in recent years. Prior to about 6 years ago I went along with it. Then an article in Science prompted me to read more deeply. A narration of my journey would read like yours.
Thanks for the compliment. For me it’s not about AGW, it’s about CAGW (catastrophic), and I think that distinction has to be made over and over. You don’t spend billions on renewables and do away with the existing energy infrastructure in short order, with all the economic downsides that it will most likely cause, because of AGW, but you might if it were CAGW. In fact AGW may lead to a better climate for humans, as large parts of the northern hemisphere might open up to settlement and agriculture.
Came across this comment again. Excellent stuff.
Thought you might like to look at http://wattsupwiththat.com/2012/02/22/omitted-variable-fraud-vast-evidence-for-solar-climate-driver-rates-one-oblique-sentence-in-ar5/
Omitted Variable Fraud is a core issue.
There’s also some interesting new “strong” medical evidence that 2 shots of alcohol/day for ex-heart attack patients cuts recurrence by 43%!! (More or less than that per day increases risk.)
Some research you just WANT to believe in!
Thanks. I saw the post to which you directed me but was unsure about its authenticity. I tried to obtain more information on the author but was unsuccessful. I am trying to be very careful lest I fall into the modus operandi of the alarmists: citing stuff that supports their case but is in fact wrong or misleading. Do you know more about the author and why he (an economist) was selected to do a review?
Over the years I have been putting together packages of information that build the case that the science is young and uncertain, and sending these packages to various persons who are leaders and who talk with people of influence. My last package was a letter to Charlie Rose with copies to Madame Lagarde (IMF), Zoellick (World Bank), Bernanke, Fisher (Dallas Fed), Fareed Zakaria (Time), Gates, Peterson (Peterson Institute), Friedman and Brooks (NYT), Huntsman and Yergin (Cambridge Energy Research Associates). I am trying to get Charlie to interview on his program a skeptic like Judith Curry or Lindzen. If he did so, I think it would open up a whole new discussion, as Charlie’s audience is worldwide and well educated. It would bring Judith’s “uncertainty monster” to a segment of the public that is influential and has bought into the alarmism because of their trust in expert authority.
I am also starting to keep links to posts at Anthony Watts’ and Judith Curry’s blogs and include them in the comments sections of articles in the MSM, in the hopes that I can turn the MSM into a vehicle for greater awareness of opposing opinions, new scientific discoveries, and the level of scientific uncertainty. If we all did this over and over, much more of the general public might become better informed and we might be able to neutralize the bias of the MSM.
excellent summary, Ron
You make a lot of points and I don’t have the time to address all of them (and I have zero interest in raking over issues such as climategate yet again), but I’ll try to give you (non-expert) answers to your first three questions.
1. I agree that feedbacks are very important, although I’m not sure it’s true to say that they are rarely found in nature. We certainly have strong evidence in them playing a big part in past climate change, for example the large changes between glacials and inter-glacials which arose from very small changes in the earth’s orbit. You seem to be assuming that positive feedbacks must necessarily result in runaway warming but this is not the case. Let’s say that the feedback response is 50% of the initial increase in temperature, then an increase of 1C will result in a feedback response of 0.5C, which results in another 0.25C and so on… you will end up with an overall increase about double the initial increase, but not runaway warming (this is a crude explanation and the real process is much more complicated, but I think it’s good enough for this purpose).
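The converging-feedback arithmetic in point 1 can be checked directly: with a feedback fraction f below 1, the series dT0·(1 + f + f² + …) sums to dT0/(1−f) rather than running away. A toy calculation using the 50% figure from the comment:

```python
# Toy feedback sum: an initial warming dT0, where each round of
# feedback adds a fraction f of the previous increment. For f < 1
# the series converges to dT0 / (1 - f); it does not run away.
dT0 = 1.0   # initial (no-feedback) warming, deg C
f = 0.5     # feedback fraction, as in the comment above

total = 0.0
increment = dT0
for _ in range(60):      # 60 terms is far more than enough to converge
    total += increment
    increment *= f

print(round(total, 6))    # -> 2.0, i.e. dT0 / (1 - 0.5)
```

Only when f reaches or exceeds 1 does the sum diverge, which is why positive feedbacks do not automatically imply runaway warming.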
2. Yes, there have been times in the past when the earth has been warmer than today for reasons other than human activity. It’s sometimes suggested that climate scientists are either not aware of this or wilfully ignore the fact or pretend it is not so, but this is simply not true, and much effort has been devoted to studying the way climate has changed in the past due to “natural” factors. But over the course of human history, certainly the last 8k years or so, temperatures have been relatively stable, and the fact that human civilisation has thrived the way it has proves that current conditions have suited us rather well, although it’s worth pointing out that the places humans have fared best have been temperate zones rather than where there are extremes of hot and cold. Personally, I guess that like most people I would prefer to live in a warm climate rather than a cold one, but while I have no particular desire to move to Scandinavia or Alaska I would not fancy living near the equator either, and to simply say, as some do, that warmer is always better than colder is beyond simplistic – it all depends on how much warmer or colder and how warm or cold your climate is already. The fact is that global temperatures are already most likely as warm as they have ever been since modern civilisation began and they are going to increase even more. We are entering totally uncharted (for humans) territory. Of course if the climate were cooling at the same rate instead of warming that would be bad as well; no one is saying that warming is worse than cooling per se, but that simply isn’t happening, so it’s a moot point.
3. Of course CO2 is an essential element for life on earth, but there are a lot of substances which are essential in small amounts but can cause harm if we get too much, water being the most obvious example – people can die from drinking too much, crops can fail due to floods. Plants can thrive with higher CO2 levels but it also depends on the availability of other nutrients – see Liebig’s Law of the Minimum – and you also have to consider the likely impact on crop production of increased droughts or flooding due to increased temperatures. Then there is the threat posed by ocean acidification due to increased absorption of CO2 in the oceans to consider as well. The point about sailors in nuclear submarines is a straw man because no-one is claiming that increased CO2 levels will be a direct threat to human health.
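The non-runaway arithmetic in point 1 is just a convergent geometric series, and can be sketched in a few lines (the 50% feedback fraction is the same illustrative figure used above, not a measured value):

```python
# Illustrative non-runaway positive feedback: an initial warming dT0
# triggers a response of f*dT0, which triggers f*(f*dT0), and so on.
# For |f| < 1 the series converges to dT0 / (1 - f) instead of running away.

def total_warming(dT0, f, terms=50):
    """Sum the feedback series dT0 * (1 + f + f**2 + ...)."""
    return sum(dT0 * f**n for n in range(terms))

dT0 = 1.0  # initial warming, degrees C (illustrative)
f = 0.5    # feedback fraction (illustrative, as in the comment)

print(round(total_warming(dT0, f), 6))  # converges to 2.0: doubled, not runaway
print(dT0 / (1 - f))                    # closed form gives the same limit
```

For f ≥ 1 the series diverges, which is the runaway case the comment is careful to distinguish from an ordinary positive feedback.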
Thanks for taking the time to comment. The only positive feedback that I am aware of is a nuclear reaction. Could you name 4 others? It seems to me that if the climate was susceptible to positive feedbacks, global temperature variation would be far more precipitous. That, to my knowledge, has not been the case. It also appears that warming that creates more water vapor and clouds may have a cooling effect, dampening the warming of higher CO2 levels. Clouds reflect some of the sun’s rays. The CERN CLOUD experiment seeks to determine whether sunspot activity impacts cloud formation, which some believe has a cooling effect.
I’ve lived for many years just north of the equator. With air conditioning it is quite livable, far more so than living in Canada during the winter. It is my understanding that much of the warming would take place in places like Canada and Russia, allowing possibly greater settlement and more agriculture. I don’t think, however, that we know enough about how and where the warming would take place, so to say it would be all bad, or catastrophic, is alarmism. I have a hard time thinking that we need to hastily convert our energy industry for just a few degrees, especially when that few degrees might be in places that could use a little warming up.
Yes, too much of anything, even the essentials of life, can be unhealthy, but we don’t classify water as a pollutant even though too much of it can cause disaster. So then, why do we classify CO2 as a pollutant, other than to scare people who don’t understand its importance to life?
I am not a scientist, but when I read and listen to serious, highly qualified scientists — William Happer, Hal Lewis, Freeman Dyson, David Evans to name just a few that come to mind — express skepticism, and then I see the shenanigans at the IPCC, ClimateGate (roundly criticized by Lewis et al), the recent FakeGate (on the day Gleick obtained the documents under false pretenses he turned down a debate invitation), it all stinks to high heaven.
I also don’t like the fact that most of the alarmism is based upon models, because models can be tweaked to give about any answer you want them to give, and, even if you run models up and up, models are notoriously wrong most of the time.
For me the burden of proof is going to be on the alarmists, especially when we have a huge inheritance of fossil fuels that this country sorely needs to develop to create jobs, revenues and wealth, and especially when the financial incentives are weighed so heavily in favor of alarmism. When I see some of the skeptics change to alarmists then I will also change my thinking, but the opposite seems to be happening as more research and more data come in.
It is mind boggling that anyone would dispute the manipulation of raw temperature data. This is factual. Here is how it is manipulated:
As one can clearly see from the graph, ALL 20th century warming in the instrument record is generated by two adjustments to the raw data – Time of Observation Bias (TOB) and Station History Adjustment Program (SHAP). Somehow we are supposed to believe that circa 1950 a drastic change took place all over the world in how min/max temperature readings were obtained. I don’t believe it. The only thing I believe is the satellite record beginning in 1979, which is the only one close to having global coverage and the sensitivity to detect tiny trends on the order of a tenth of a degree per decade. And even that satellite record is dodgy and pencil-whipped with no small amount of internal controversy among the few gatekeepers.
I think the USHCN chart is for the USA only. Not the whole world, just the USA. The distinctive adjustments for TOB and SHAP are unique to the USA. Globally they don’t make a large difference because the USA constitutes only a small fraction of the global surface area.
Scientific American January 2012
“The Science of the Glory”
“One of the most beautiful phenomena in meteorology has a surprisingly subtle explanation. Its study also helps to predict the role that clouds will play in climate change.”
Money quote [my emphasis] “The understanding of glories is helping climatologists to improve models of how cloud cover may contribute to or alleviate climate change”
How that amount of uncertainty in clouds (the sign of the feedback is unknown!) managed to come out in writing in the new and diminished Scientific American which I’ve been reading cover to cover for 40 years is a complete mystery. At least they fired that hideously biased editor-in-chief John Rennie a couple years ago.
In your “Idiot Tracker” post you asked five questions regarding aerosol forcing, but forgot to ask the most pertinent question of all: “do we have empirical data based on actual physical observations or reproducible experimentation to support the model-based estimates for aerosol forcing?”
Until this question can be answered with a well-founded “YES”, the other questions you raise are actually meaningless and Lindzen’s conclusion holds.
Apology From Prof. Lindzen for Howard Hayden’s NASA-GISS Data Interpretation Error
Professor Richard Lindzen and repealtheact.org.uk would like to make it known that there was an error in the interpretation of NASA-GISS data by Howard Hayden (see Lindzen’s use of Howard Hayden’s graph seminar slide 12).
The Howard Hayden graph was used in MIT Prof Richard S. Lindzen’s seminar on Reconsidering the Climate Change Act Global Warming: How to approach the science (Climate models and the evidence ?) at the UK House of Commons in Committee Room 14 held on the 22nd of February 2012.
RE: LINDZEN’S SEMINAR SLIDE 12 ERROR TITLED:
“NASA-GISS DATA MANIPULATION: CHANGE IN HISTORICAL DATA” – A GRAPH FROM HOWARD HAYDEN WHO NOW ACKNOWLEDGES:
“I concluded incorrectly that NASA-GISS had manipulated the data. I am making every effort to correct my error.” (Howard Hayden)
Prof. Richard Lindzen’s email to repealtheact.org.uk:
“Please accept my apologies for using the graph from Howard Hayden that purported to suggest that GISS had manipulated the temperature data. I asked Howard to check how he arrived at this conclusion. Here is his response:
“Please accept my sincere apologies for misrepresenting NASA-GISS data. I downloaded temperature data from http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt to make a graph in 2009. About a month ago, I went to the same file to get the more recent points and was surprised to find a considerably different data set. The formatting of the data set was the same, and I did not notice that the heading said that the data referred to meteorological stations only. As a consequence, I concluded incorrectly that NASA-GISS had manipulated the data. I am making every effort to correct my error.”
Lindzen: “It seems to me to have been an innocent error, given that the URL’s were the same…”
[See seminar graph slide 12 error – which Prof. Lindzen has asked to be acknowledged and removed from the PDF].
[See seminar slide 12 error: “NASA-GISS Data Manipulation Change in Historical Data” here]
Prof. Lindzen also stated:
“This doesn’t alter the primary point of the discussion that a few tenths of a degree one way or another is not of primary importance to the science. The public interest in this quantity, however, does make it a matter subject to confirmation bias.
Will this apology and explanation be given to the individuals (especially the MPs and MEPs) who attended the event?
As you can imagine I am really angry about having such a stupid error in the seminar – Hayden should know better, and I feel Lindzen should perhaps have double-checked all data and slides before coming to the ‘Mother of all Parliaments’.
Yes, of course we are sending out emails to MPs and MEPs and have put a post on our website.
In addition I am also trying to track down those who have posted the seminar PDF file to correct the slide 12 error. I have sent a statement to WUWT. I will also have the youtube video re-edited so that mention of the Hayden graph is removed altogether!
Lindzen makes his very serious claims of deliberate fraud by NASA/GISS which turn out to be nothing more than his own gullibility/ incompetence/ confirmation bias.
But when shown up to be false, Lindzen then claims “innocent error”.
Did he ever consider that even if the claim against GISS was true, that it was an “innocent error”? Of course not.
This was a political talk, not a scientific one.
Respectable scientists do not accuse colleagues of fraud in public seminars; when they find apparent ‘errors’ they put the effort in to get to the bottom of the ‘error’ or ‘misunderstanding’.
Good to see Howard Hayden now acknowledged. I’m reminded of the old saying that victory has a thousand fathers; defeat is an orphan.
In re all this discussion of climate models: The more I read about these models the more suspicious I am of them, and the more convinced I am that any attempt to “model” an environment where the random variability of any number of independent variables (let alone their absolute values) exceeds by many orders of magnitude the dependent variable necessarily produces a corrupt result.
Whatever happened to simple observation and simple arithmetic? I repeat what I’ve said here before: simple observation and even simpler arithmetic prove that man’s activity is an infinitesimal factor in CO2 activity, and CO2 is an infinitesimal factor in climate change. Deny that, and you’re denying physical facts well attested to in the geological and historical records.
Who needs models when you have this sort of tangible proof that AGW is perverse politics and not science?
Perhaps you’d like to share this simple arithmetic?
Energy in – Energy out = d(S)/dt
Where d(S)/dt is the change in global energy content. Simple first-law energy conservation. This simple first-order differential energy equation describes the system perfectly.
Hence if d(S)/dt is positive (+ve) then Ein > Eout.
Energy in changes a bit – http://lasp.colorado.edu/sorce/data/tsi_data.htm#plots – although the limits are not that well constrained – see for instance http://www.realclimate.org/index.php/archives/2011/08/how-large-were-the-past-changes-in-the-sun/
Energy out is the great unknowable in any but the satellite era – or perhaps by automated telescope. It consists of both emitted IR and reflected SW, and the suggestion is that the net flux (IR and SW) covaried with ocean heat content (Wong et al 2006). – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=Wong2006figure7.gif –
‘In summary, although there is independent evidence for decadal changes in TOA radiative fluxes over the last two decades, the evidence is equivocal. Changes in the planetary and tropical TOA radiative fluxes are consistent with independent global ocean heat-storage data, and are expected to be dominated by changes in cloud radiative forcing. To the extent that they are real, they may simply reflect natural low-frequency variability of the climate system.’
So the math is simple and the data is showing the warming caused by cloud cover changes. Hmmm.
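As a back-of-the-envelope illustration of the first-law bookkeeping above (Ein − Eout = dS/dt): a sustained top-of-atmosphere imbalance integrates into stored energy. The 0.5 W/m² figure and the round-number constants below are illustrative assumptions, not observations:

```python
# Integrate a constant TOA flux imbalance (Ein - Eout = dS/dt) into
# accumulated energy. All numbers are illustrative round figures.

SECONDS_PER_YEAR = 3.156e7   # approximate
EARTH_AREA_M2 = 5.1e14       # approximate surface area of Earth

def accumulated_energy(imbalance_w_m2, years):
    """Joules stored for a constant global-mean imbalance over `years`."""
    return imbalance_w_m2 * EARTH_AREA_M2 * SECONDS_PER_YEAR * years

# e.g. a 0.5 W/m^2 imbalance sustained for a decade:
print(f"{accumulated_energy(0.5, 10):.1e} J")  # -> 8.0e+22 J
```

The point of the sketch is only that the accounting identity is simple; attributing the measured imbalance to particular causes (clouds, GHGs, solar) is where the whole dispute lies.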
Professor Dr. Richard S. Lindzen, Alfred P. Sloan Professor of Meteorology at the Massachusetts Institute of Technology, AGU Fellow since 1969, NAS Member since 1977, winner of numerous awards and medals for meritorious accomplishments in meteorology, has found it necessary to make apologies for his erroneous accusation of “NASA/GISS Data Manipulation” on behalf of one unfortunate Howard Hayden for (inadvertently) screwing up a data comparison.
Is this modicum of forthrightness an indication that Dick Lindzen might actually be inclined to correct the record of his (inadvertently) erroneous past statements about climate change and global warming? Should we be awaiting a forthcoming retraction of his erroneous 1991 claim in QJRMS that “about 98% of the natural greenhouse effect is due to water vapour and stratiform clouds with CO2 contributing less than 2%”? Will he come clean and acknowledge that his unfounded claim that “If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C” is actually erroneous? Will he also acknowledge that he is also in error to claim that “The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.”?
But I would not hold my breath to await any rational or reasonable comments on climate to come from Dick Lindzen. Since the height of his considerable scientific accomplishments, his mind seems to have become impervious to scientific thinking. Lindzen is a prime example of where undisputed expertise in meteorology does not automatically confer an understanding of climate. In order to comment sensibly in another discipline you first have to study and understand the basic issues and physics relevant to that discipline. Sadly, through his own self-delusional actions, his otherwise substantial scientific reputation is getting flushed down the drain.
A perfect example of the positive feedback effect. Let’s see now: the science is evolving. The science is uncertain. New data and new research keep finding new answers and raising new questions. So, tell me, what relevance are statements made in 1991? Lindzen admits there has been human-induced global warming. He admits that CO2 will increase global temperatures in the future. What he also says is that the warming will not be catastrophic, because for it to be, the feedbacks would have to be significantly positive, and that they will be is highly uncertain, and most probably wrong. So where is your problem? The error on the chart has absolutely nothing to do with the broad parameters of the message. Lindzen trusted in the work of a colleague and that colleague made a mistake that Lindzen did not pick up. It seems to me that there is a huge amount of trust on the part of the alarmists in the work of other alarmists that is not checked. That’s just human nature. It was not another alarmist who picked up Mann’s Hockey Stick Illusion, was it?
It is more than a bit snarky for you to go after Lindzen while rationalizing Gleick.
I would suggest that whatever imperviousness Lindzen has developed in making this presentation mistake, it pales in comparison to the imperviousness you and many other AGW believers have developed to ethics, facts and science.
Mr. Chris Colose:
Here’s some of the simple arithmetic:
Fossil fuel burning net retention 10 gt/yr (after immediate absorption)
Animal respiration up to 700 gt/yr (humans alone 3.5 gt)
Mt. St. Helens eruption 1980 30 gt
Tambora eruption 1815 1,800 gt
Annual seasonal CO2 release/return in oceans 180-540 gt/season
Water vapor in atmosphere 30-140 x CO2 content
Solar luminosity variability 3%/k-yr, ~ 500 x effect of changes in atmospheric & oceanic circulation
Before blue-green algae converted it to O2, about 1.5 billion years ago, CO2 was 20% of the atmosphere and the Earth didn’t burn up (and the atmosphere was 50% thicker than at present, reduced by the Permian asteroid impact and the K-T event).
Heat retention effect of CO2 does not increase linearly but declines proportionately with increased concentration – the next increment produces less effect than the previous increment, and it’s possible there is a maximum possible effect at some point.
Comparisons with Venus – Venus has 250,000 times the CO2 inventory that Earth has, and even when Earth had 20% CO2 in the atmosphere, Venus still had 300 times as much CO2 in its atmosphere as Earth. It’s the high atmospheric pressure on Venus (94 x Earth’s) that makes it so hot. Remember what happens when you compress any gas? (E.g., in a diesel engine)
Range of random variation in these other factors exceeds fossil fuel incremental CO2 by at least 3 orders of magnitude – make that 4 to 5 orders of magnitude for the absolute values of these other factors.
Let’s not forget previous warming periods well attested in the historical record: Hittite-Mycenean, 1800-1400 BC; height of the Roman Empire, 100 BC – 300 AD; Medieval Warming Period, 900-1300 AD – all denied by the AGW scaremongers. Zero correlation to CO2 there.
Even if my figures were off by a factor of 10 the conclusion will still be the same: human activity is an infinitesimal factor in CO2 activity, and CO2 is an infinitesimal factor in climate change.
Game, set and match to Wozniak.
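The diminishing-increment claim about CO2’s heat retention above is usually expressed with a logarithmic forcing approximation. A sketch using the commonly cited simplified expression ΔF ≈ 5.35 ln(C/C0) — treat the coefficient and baseline as assumptions of this illustration, not settled figures:

```python
import math

# Simplified logarithmic CO2 forcing: each doubling adds roughly the same
# forcing, so each equal *increment* of concentration adds less than the
# one before it. The 5.35 coefficient is the commonly cited simplified
# fit; treat it here as an illustrative assumption.

def forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing in W/m^2 relative to a 280 ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

first_100 = forcing(380.0) - forcing(280.0)   # 280 -> 380 ppm
second_100 = forcing(480.0) - forcing(380.0)  # 380 -> 480 ppm

print(second_100 < first_100)  # True: later increments do less
```

Note that a logarithm flattens but never actually reaches a maximum, so the “maximum possible effect” part of the claim goes beyond what this approximation supports.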
Chad, you are clueless.
My view on the following (in the same order).
A Personal Journey
A lifelong environmentalist; member of the Sierra Club. Well versed in environmental matters while in the Boy Scouts. As a graduate Engineer, many assignments in Industry as an Environmental Engineer and also as an Energy Conservation Engineer. Involvement with OSHA, the EPA Clean Air Act and Clean Water Act. Pursued an assignment with The Nature Conservancy. While in retirement: pursuing the meaning of the IPCC AR4. I have spent considerable time trying to understand weather (and climate); my interim conclusion is that the atmospheric/land/ocean weather system is highly complex and it will take a long time to ascertain exact causes for seemingly simple events. I do not see evidence that we are close to understanding this complex system, let alone in a position to formulate plans to change it to our liking. Instead, at this point we should concentrate on adaptation techniques that are cost effective.
All science inquiry must be completed utilizing the Scientific Method. In addition, the scientific studies must be empirical. Computer based studies certainly are useful, but cannot be used as evidence.
1. My understanding of natural feedbacks is that they exist throughout nature and that they are essentially negative, because we do not see things running out of control. Even the ice ages/interglacial cycles may be in “control”, but we have scant information on this. I was extremely impressed with the natural recovery of rivers in the USA when the clean water act was implemented: prior to that we had polluted the rivers so much, some actually burned, while others decimated fish populations. They cleaned themselves up quickly when we started to reduce the pollution.
Atmospheric CO2 is quite something else, since it is not a pollutant. (if it is, at what concentration does it become harmful?)
2. The dispute is not the Medieval Warming Period per se; the question is whether it affected the entire world or not. Actually, there has been a great deal of temperature measurement in the Northern Hemisphere, and not so much in the Southern. In fact, 71% of atmospheric temperatures should have been measured above the oceans, but we know that didn’t happen. So I “trust” actual temperatures as measured by the satellites. Actually, I measured sea water temps for a few years and I don’t “trust” them either, since the thermometer can only be read to within one degree – how can we interpret a 0.1 degree increase over a period of time?
3. Life sustaining substances are helpful within a certain range (like 23% Oxygen in the atmosphere). I suspect the reason for this is due to our adaptability to the substance, which is itself a reflection of our evolution.
4. Climategate was just an exposé of the interior workings of human endeavor. Fakegate, however, was a ploy to “paint a picture that the denialists are just as bad”. Unfortunately, that ruse did not work. All of this is beneath true science and should have been jettisoned before it became public. Political motives belong in state houses, not in laboratories.
5. The CERN saga points me to another conclusion: the IPCC should be concerned with all science that impacts its mission, rather than only be concerned with science that supports its conclusion that CO2 has (and continues to) contribute to runaway global warming. Why block any research? The world is funding the CO2 research with $Billions/year while denying funding to CERN even while that research could shed light on the actual role of another source for the higher temperatures. Funding plays a huge role in the establishment of the knowledge base for the science in question.
6. Money has to be spent on research in order to pursue the truth. If, like in this case, a vast amount is spent in one camp (with extremely small amounts being spent to counter their conclusions), I would expect that the funded side would eventually swamp the other side and “win” the debate. Then go on to something else.
With “Global Warming” the funded side has yet to win, as evidenced in the collapse of the Copenhagen Talks, even with the $Billions already spent. Why is this?
7. So, what do you call it? Lies? Damn Lies? Errors? Half-truths? The Scientific Method will sort this out, if used. Unfortunately, I don’t think it has been applied to “Global Warming” research adequately. I would like to see the IPCC AR5 actually show not only the evidence for the proposal (and why it is right) along with the evidence that goes against the proposal (and why it is wrong). Instead, all we get is one side. Where do we look for the other?
8. The Climate System is so complex I do not think we are even at the point where we actually know why some of the variables exist. Take cloud formation as an example….is it always a positive reinforcement to temperature disturbances? Or is it negative? Or, maybe some of both….or maybe it changes from time to time? And, if so, what makes it change?
9. I distrust big government as much as the Founding Fathers did. It is unaccountable to the people.
10. America as founded was a positive force for human Freedom. Stating that CO2 is the cause of great worldwide catastrophe is a ruse, used to force the people to pay homage to a government system that will enslave us all.
In an extremely complex system that interacts with all living species of the world, there cannot be a single element that affects it to such a degree that nothing can overcome its influence. Sure, I believe that CO2 has influence on other elements of the system; I just don’t think it can cause catastrophic disruption of such a vast system. Sure, I am not an Atmospheric Scientist, but I have been schooled in the sciences and have experience in the same that prepare me for the future.
I am a lifelong environmentalist and abhor pollution in all of its forms. I just don’t think CO2 is (in this concentration) a pollutant worthy of our consideration at this time. Keep studying the system and alert us when we should be concerned.
1. There are two (perhaps more) ways of defining the feedback for the warming. One way includes all feedbacks; the other leaves out the primary feedback, known as the Planck response, considering it something other than a feedback. When it’s included, the total feedback is certainly negative and everybody agrees on that. The discussion on the sign concerns the other feedbacks, and it’s quite normal that there are positive feedbacks that may dominate when the primary negative feedback is excluded.
5. CERN is a laboratory of high-energy physics. It has been funded to do research in that area, and it’s certainly exceptional for it to accept an expensive experiment that’s not in its research field. Kirkby started to push for his experiment and was given the opportunity to present his argument in an open colloquium in the big auditorium of CERN. His presentation was such that it was sure to raise a controversy. In spite of all these factors his experiment was accepted. Thus claiming that it was particularly denied funding is just very one-sided propaganda. You should not believe everything that is propagated on the net.
No experiment can be initiated rapidly in a research environment such as that of CERN. Getting time slots at accelerators is one issue and all the preparations required to make the experiment succeed are another.
The experiment has produced some results, which seem superficially to be what was to be expected. There was no doubt that radiation plays some role in nucleation; that was known very well by all physicists with experience of cloud chambers. Thus the task of the experiment is not to prove or disprove the phenomenon (which had been proven innumerable times) but to get more quantitative results. Whether these results support or contradict the theories of Svensmark is not indicated in the early publication; probably it’s too early to judge.
On several other points you also imply that you know various facts. I dare to claim that you don’t know; you have just ended up believing what the skeptical side tells you, and disbelieving what the other side tells you. In spite of your environmental background you have made up your mind, and once you have done that it has been easy to find supporting “evidence”.
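The point-1 distinction above between the two feedback sign conventions can be illustrated with round numbers. The lambda values below are loose ballpark figures chosen purely for illustration, not authoritative estimates:

```python
# Convention A counts the Planck (blackbody) restoring response as a
# feedback: the total is then negative and the system is stable.
# Convention B treats the Planck response as the baseline and sums only
# the remaining feedbacks, which can be positive without implying runaway.
# All values are illustrative round numbers (W/m^2 per K).

PLANCK = -3.2
WATER_VAPOUR = 1.6
LAPSE_RATE = -0.6
SURFACE_ALBEDO = 0.3
CLOUDS = 0.5   # the contested term: even its sign is debated

other_feedbacks = WATER_VAPOUR + LAPSE_RATE + SURFACE_ALBEDO + CLOUDS
total_feedback = PLANCK + other_feedbacks

print(round(other_feedbacks, 2))  # 1.8: positive under convention B
print(round(total_feedback, 2))   # -1.4: negative under convention A
```

So both sides can be right about the “sign of the feedback” while talking past each other, depending on whether the Planck term is counted.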
For the record, CLOUD was turned down in Sept. 2000. A later proposal was accepted in 2006, that famously was not ‘interpreted’ with regards to climate – a special no gravity zone I guess. From the minutes:
” The Committee received written reports from two external referees. One referee questions the starting point of the experiment; the observation by one group of a correlation between the sunspot cycle and the low cloud cover. The other referee doubts that the proposed measurements can be related to the conditions in the atmosphere.” (May 2000 minutes)
“However, the Committee considers that the necessity to use the PS as the appropriate ionization facility is not demonstrated by the proposal.” (Sept 2000 minutes)
*it was not a scheduling problem, or that it took time to get going – it was turned down.
From the original proposal:
“The weak link is the connection between cosmic rays and clouds. This has not been unambiguously established and, moreover, the microphysical mechanism is not understood.”
* so it was not known, and in fact one referee doubted that the proposed measurements could be related to the conditions in the atmosphere.
His mistake was taking the scientific interpretation of, “the world is trying to confirm that global warming is due to CO2”.
Paging Pekka. He is on call today, right?
It was turned down, but it also took time to get it running when it was later accepted. That’s what I meant.
The decision was made by CERN decision makers. It’s clear that it’s not easy to get them to accept a project whose goals and a major part of whose methodology are not familiar to them. To get such a project accepted they must be provided with an exceptionally good proposal. You listed some specific comments that are to be expected in the handling of such a project proposal, and which confirm my view.
“Thus claiming that it was particularly denied funding is just very one-sided propaganda.”
Confirmed. I’ll let CERN know.
There are the Jeff Grants. And there are the Peter Gleicks.
Thank you for your comments. Pekka, you think I am a skeptic. I have an open mind, but the proponents are not making it easy for me to gain the knowledge I need to understand. I have not yet had the “aha” moment. Dr. Mann once told me to “keep reading and looking” rather than point me to scientific papers that would help me find the answers. With the $Billions being spent on this, I would think that any interested party would have easy access to any pertinent information they want. Not so!
The thing is: I was REALLY high on Wind Power until I thoroughly looked into it. If only a nice cheap source can be found for storing the energy, I think the system would be really slick. Of course, there are still the environmental problems, but technically I think it would be a great addition to the grid. However, without storage it is using the existing grid as the device for levelling out the energy fluctuations. That may be OK for low amounts of energy flow, but when it gets higher (over 5% of the grid?) then it creates huge problems.
Then, I found out the only reason for promoting Wind Power is the CO2 reduction. When you consider that the system must have a fossil fuel sourced backup (to smooth out the fluctuations), then it turns out to be a very expensive solution for reducing CO2. Why not just replace Coal with Natural Gas?
Oh well….I’m trying to find the solutions. Maybe the only solution will be Nuclear Fusion.
I’m a bit late to this thread. Where can I find the updated data that Hayden thought he’d used? Or was it discontinued in favour of the dataset that replaced it?
My thanks to Chris Colose, Steven Mosher, both Davids and others who responded provocatively and comprehensively regarding Professor Lindzen’s claim of data manipulation by NASA GISS.
The information provided by Fay Tuncay and Prof. Lindzen’s apology makes clear how this particular misrepresentation occurred, and many of us can sympathise. However, in a field as fraught with conflict, contradiction and heated emotion as AGW/CAGW, where large numbers of advocates, evangelists and the politically motivated are continuously alert for ammunition rather than argument, it is evident that all scientists active in the field, especially the most eminent, have a particular duty to check and recheck their facts when accusing others of deliberate misinterpretation or manipulation of data.
We who see ourselves as concerned and intelligent observers of the debate are entitled to expect this quality of care from all sides more especially since a careless attribution of ‘manipulation’ by one person can lead to the dishonourable and unprofessional behavior evidenced by ‘climategate’ or the self-justified but disgraceful and possibly criminal behaviour of a Peter Gleick.
I totally agree with TG O’Donnell, these major errors should not occur in any profession. BTW repealtheact.org.uk has had Part One of the Lindzen seminar re-edited, slide 12 removed and all mention of NASA-GISS has been deleted.
If only the believers were so interested in the mistakes made by the AGW promoters.
In re environmental issues touched on here:
A true environmentalist will not want to spend trillions chasing the CO2 bogeyman. A true environmentalist would want to spend that money on cleaning up pollution, developing cleaner fuels and renewable energy resources, and capital investment in the world’s poorer countries.
AGW scaremongering can only hurt the environment.
Mr. Jeff Grant –
I applaud your thoughtful commentary on AGW and its failings.
AGW is not science – it’s dirty politics.