by Judith Curry
Lindzen’s seminar presented last week at the House of Commons may be the most effective seminar he has given on global warming.
The pdf of Lindzen’s presentation is found [here]. Some laudatory comments on Lindzen’s talk have come from unexpected quarters, such as Simon Carr of the Independent.
Let’s take a closer look at his presentation.
Slide 2:
Stated briefly, I will simply try to clarify what the debate over climate change is really about. It most certainly is not about whether climate is changing: it always is. It is not about whether CO2 is increasing: it clearly is. It is not about whether the increase in CO2, by itself, will lead to some warming: it should. The debate is simply over the matter of how much warming the increase in CO2 can lead to, and the connection of such warming to the innumerable claimed catastrophes. The evidence is that the increase in CO2 will lead to very little warming, and that the connection of this minimal warming (or even significant warming) to the purported catastrophes is also minimal. The arguments on which the catastrophic claims are made are extremely weak – and commonly acknowledged as such. They are sometimes overtly dishonest.
JC comment: well I’m sure that got their attention.
From slide 3:
Here are two statements that are completely agreed on by the IPCC. It is crucial to be aware of their implications.
1. A doubling of CO2, by itself, contributes only about 1C to greenhouse warming. All models project more warming, because, within models, there are positive feedbacks from water vapor and clouds, and these feedbacks are considered by the IPCC to be uncertain.
2. If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C. The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.
Given the above, the notion that alarming warming is ‘settled science’ should be offensive to any sentient individual, though to be sure, the above is hardly emphasized by the IPCC.
JC comment: #1 is the conventional thinking, although see previous posts on no-feedback sensitivity [here and here]. #2 is an oversimplification of how climate sensitivity is determined in the conventional way; for nonconventional thoughts expressed previously at Climate Etc., see [here and here].
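For readers who want to see where the roughly 1C no-feedback figure comes from, here is the standard back-of-the-envelope estimate as a minimal Python sketch; the 255 K emission temperature and 3.7 W/m^2 doubling forcing are textbook values, not numbers taken from Lindzen’s slides.

```python
# No-feedback climate sensitivity: linearize the Stefan-Boltzmann law
# (F = sigma * T^4) about Earth's effective emission temperature, then
# divide the canonical 2xCO2 forcing by the resulting "Planck response".
sigma = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_e   = 255.0     # effective emission temperature of Earth, K
F_2x  = 3.7       # canonical radiative forcing for doubled CO2, W m^-2

planck_response = 4 * sigma * T_e**3       # dF/dT, about 3.8 W m^-2 K^-1
dT_no_feedback = F_2x / planck_response    # about 1.0 K

print(f"Planck response: {planck_response:.2f} W/m^2/K")
print(f"No-feedback warming per doubling: {dT_no_feedback:.2f} K")
```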
Slide 4:
- Carbon Dioxide has been increasing
- There is a greenhouse effect
- There has been a doubling of equivalent CO2 over the past 150 years
- There has very probably been about 0.8 C warming in the past 150 years
- Increasing CO2 alone should cause some warming (about 1C for each doubling)
JC comment: “There has been a doubling of equivalent CO2 over the past 150 years.” I am not exactly sure what that means; perhaps “equivalent” also includes CH4, etc.? This does not seem correct. Also, about 1C for each doubling? Apart from what the no-feedback sensitivity actually means, this sensitivity is not linear for multiple doublings of CO2.
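For what it’s worth, “equivalent CO2” is usually defined as the CO2 concentration that would, on its own, produce the same radiative forcing as all the well-mixed greenhouse gases combined (CH4, N2O, halocarbons, etc.). A hedged sketch of the arithmetic, using the widely cited 5.35 ln(C/C0) approximation of Myhre et al. and an illustrative total forcing:

```python
import math

# CO2-equivalent concentration: invert the Myhre et al. approximation
# dF = 5.35 * ln(C / C0) to find the CO2 level matching a given total forcing.
C0 = 280.0        # illustrative pre-industrial CO2, ppm
F_total = 3.0     # illustrative combined well-mixed GHG forcing, W m^-2

C_eq = C0 * math.exp(F_total / 5.35)
print(f"CO2-equivalent concentration: {C_eq:.0f} ppm")          # ~490 ppm
print(f"A full doubling needs {5.35 * math.log(2):.2f} W/m^2")  # ~3.71
```

On these illustrative numbers the combined GHG forcing falls short of a full doubling; whether Lindzen’s claim holds depends on which forcings (and which aerosol offsets) are counted, a point taken up in the comments below.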
Unfortunately, denial of the facts on the left has made the public presentation of the science by those promoting alarm much easier. They merely have to defend the trivially true points on the left; declare that it is only a matter of well-known physics; and relegate the real basis for alarm to a peripheral footnote – even as they slyly acknowledge that this basis is subject to great uncertainty.
JC comment: this is a profound statement
Slide 6:
Quite apart from the science itself, there are numerous reasons why an intelligent observer should be suspicious of the presentation of alarm.
- The claim of ‘incontrovertibility.’ Science is never incontrovertible.
- Arguing from ‘authority’ in lieu of scientific reasoning and data or even elementary logic.
- Use of term ‘global warming’ without either definition or quantification.
- Identification of complex phenomena with multiple causes with global warming and even as ‘proof’ of global warming.
- Conflation of existence of climate change with anthropogenic climate change.
JC comment: very good points, although #4 is not clearly stated.
Slide 7:
Some Salient Points:
1. Virtually by definition, nothing in science is ‘incontrovertible’ – especially in a primitive and complex field as climate. ‘Incontrovertibility’ belongs to religion where it is referred to as dogma.
2. As noted, the value of ‘authority’ in a primitive and politicized field like climate is of dubious value – it is essential to deal with the science itself. This may present less challenge to the layman than is commonly supposed.
JC comment: generally good points, but I object to the last sentence in #2. Scientists don’t even know how to deal with the complex climate science adequately.
Slide 10:
3. ‘Global Warming’ refers to an obscure statistical quantity, globally averaged temperature anomaly, the small residue of far larger and mostly uncorrelated local anomalies. This quantity is highly uncertain, but may be on the order of 0.7C over the past 150 years. This quantity is always varying at this level and there have been periods of both warming and cooling on virtually all time scales. On the time scale of from 1 year to 100 years, there is no need for any externally specified forcing. The climate system is never in equilibrium because, among other things, the ocean transports heat between the surface and the depths. To be sure, however, there are other sources of internal variability as well.
Because the quantity we are speaking of is so small, and the error bars are so large, the quantity is easy to abuse in a variety of ways.
JC comments: good points.
Slide 16:
Compares global temperature time series for the periods 1895-1946 and 1957-2008. The trend and variability for the two periods are very similar (which is a strong argument against claims of an unprecedented rate of change), and there is no clear indication from the graphs that the second period is overall warmer than the first.
Slide 17:
Some take away points of the global mean temperature anomaly record:
- Changes are small (order of several tenths of a degree)
- Changes are not causal but rather the residue of regional changes.
- Changes of the order of several tenths of a degree are always present at virtually all time scales.
- Obsessing on the details of this record is more akin to a spectator sport (or tea leaf reading) than a serious contributor to scientific efforts – at least so far.
JC comment: I don’t understand the second bullet. I disagree with the last bullet; the details of the record, in terms of interannual and decadal variability, are of importance to people. The details obviously aren’t useful in supporting or refuting AGW, but then why do proponents base their arguments on 50 years of data?
Slide 18:
4. The claims that the earth has been warming, that there is a greenhouse effect, and that man’s activities have contributed to warming, are trivially true and essentially meaningless in terms of alarm.
Nonetheless, they are frequently trotted out as evidence for alarm.
JC comment: this is the key point, and it isn’t made often enough
Slide 19:
Two separate but frequently conflated issues are essential for alarm:
1) The magnitude of warming, and
2) The relation of warming of any magnitude to the projected catastrophe.
Slide 20:
When it comes to unusual climate (which always occurs some place), most claims of evidence for global warming are guilty of the ‘prosecutor’s fallacy.’ For example this confuses the near certainty of the fact that if A shoots B, there will be evidence of gunpowder on A’s hand with the assertion that if C has evidence of gunpowder on his hands then C shot B.
However, with global warming the line of argument is even sillier. It generally amounts to something like if A kicked up some dirt, leaving an indentation in the ground into which a rock fell and B tripped on this rock and bumped into C who was carrying a carton of eggs which fell and broke, then if some broken eggs were found it showed that A had kicked up some dirt. These days we go even further, and decide that the best way to prevent broken eggs is to ban dirt kicking.
JC comment: I think this is a very effective argument.
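To spell the fallacy out: it confuses P(evidence | guilt), which can be near 1, with P(guilt | evidence), which can be tiny. A toy Bayes calculation with invented numbers makes the asymmetry explicit:

```python
# Prosecutor's fallacy: a high P(evidence | guilty) does not make
# P(guilty | evidence) high. All numbers below are invented for illustration.
p_guilty = 0.001            # prior: fraction of suspects who actually shot B
p_ev_given_guilty = 0.99    # the shooter almost certainly has residue
p_ev_given_innocent = 0.01  # some innocents (hunters, etc.) also have residue

p_evidence = (p_ev_given_guilty * p_guilty
              + p_ev_given_innocent * (1 - p_guilty))
p_guilty_given_ev = p_ev_given_guilty * p_guilty / p_evidence
print(f"P(guilty | residue) = {p_guilty_given_ev:.3f}")  # ~0.09, not 0.99
```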
Slide 28:
Where do we go from here?
Given that this has become a quasi-religious issue, it is hard to tell. However, my personal hope is that we will return to normative science, and try to understand how the climate actually behaves. Our present approach of dealing with climate as completely specified by a single number, globally averaged surface temperature anomaly, that is forced by another single number, atmospheric CO2 levels, for example, clearly limits real understanding; so does the replacement of theory by model simulation.
JC comment: I agree with the above statement
In point of fact, there has been progress along these lines and none of it demonstrates a prominent role for CO2. It has been possible to account for the cycle of ice ages simply with orbital variations (as was thought to be the case before global warming mania); tests of sensitivity independent of the assumption that warming is due to CO2 (a circular assumption) show sensitivities lower than models show; the resolution of the early faint sun paradox which could not be resolved by greenhouse gases, is readily resolved by clouds acting as negative feedbacks.
JC comment: above statement reflects more certainty than we actually have, IMO
Slides 29-56:
Lindzen’s view of the science of climate, mostly from the perspective of a simple energy balance and feedback model.
Slides 57-58:
You now have some idea of why I think that there won’t be much warming due to CO2, and without significant global warming, it is impossible to tie catastrophes to such warming. Even with significant warming it would have been extremely difficult to make this connection.
Perhaps we should stop accepting the term, ‘skeptic.’ Skepticism implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the case over 20 years makes the case even less plausible as does the evidence from climategate and other instances of overt cheating.
In the meantime, while I avoid making forecasts for tenths of a degree change in globally averaged temperature anomaly, I am quite willing to state that unprecedented climate catastrophes are not on the horizon though in several thousand years we may return to an ice age.
JC summary: Lindzen’s talk is in two parts. The first part is very effective in pointing out the vacuousness of the defenses of AGW such as the 2010 Science letter signed by 250 members of the NAS and the 2010 letter from Cicerone and Rees.
The second half of the talk is Lindzen’s perspective on the science, which IMO has some good points but is overly simplistic. To Lindzen’s credit, he doesn’t oversell his own perspective (although he seems extremely confident in it), but states this is “some idea of why I think”. The significance of this is as a “second opinion” and a reasonably well argued perspective, as pointed out in the latest WSJ op-ed (as opposed to an appeal to consensus). Lindzen’s perspective is not implausible, just as the IPCC perspective is not implausible (in the sense that neither is falsifiable at this point). IMO both the IPCC and Lindzen are overconfident in the assessment of their perspectives; classic “competing certainties”, which means the uncertainty monster is lurking.
The reasons that I think Lindzen’s presentation is so persuasive to a public audience are:
1. Lindzen’s persona and appearance, which reek of scientific gravitas.
2. His argument in the first half of the talk is very effective, taking down the public statements by the NAS folk.
3. His scientific argument in the second half of the talk is appealing in that it relies on data and theory (rather than models).
4. He keeps policy and politics out of his scientific argument.
Your thoughts?
JC note: I am currently in Boston, visiting MIT, returning to Atlanta Wed nite. Hence my attention to the blog will be somewhat limited during this period. I will try to moderate the comments on this thread for relevance.
Josh was there
http://bishophill.squarespace.com/storage/lindzen_london_scr.jpg
second link:
http://bishophill.squarespace.com/display/ShowImage?imageUrl=/storage/lindzen_in_london_scr.jpg
J.C: The second half of the talk is Lindzen’s perspective on the science, which IMO has some good points but is overly simplistic.
but Lindzen’s Seminar was at the House of Commons
Yeah, it’s amazing there were any good points at all :)
The most obvious questionable trick that Lindzen made in this presentation is concentrating in several places on the period of 150 years. Since nobody thinks that the first half of that period is strongly affected by anthropogenic influence, he effectively doubles the denominator and halves the average human contribution. I think this is done on purpose and is dishonest.
Fair point Pekka.
Pekka
According to current wisdom CO2 has had an effect since 1750. So looking back 150 years is reasonable, especially as that is within the era of global temperature records, as noted by GISS and Hadley.
Tonyb
Tony –
isn’t Pekka’s point that Lindzen is giving the impression that the CO2 effect is evenly spread out over 150 years? It is something that many of us like to insist is false, with graphs like this –
http://wattsupwiththat.files.wordpress.com/2011/11/tbrown_figure3.png
I think Lindzen’s emphasis is a bit deceptive but very much part of the territory…
Anteros
And what would you suggest as the starting point?
http://www.vukcevic.talktalk.net/CO2-dBz.htm
vukcevic –
A good question to which a sensible answer is that there are caveats to be made for any starting point. Post WW2 has some basis in reason, as it marked quite a significant change in emissions. 2nd half of the 20th century for similar reasons – arbitrary but not cherry-picking (from any point of view)
Nothing is perfect, but I agree with your point that spreading a 0.7C temperature rise over 150 years is at least disingenuous.
Not really. The effect is lagged. The effect is log. And you really have to look at the sum of all forcing.
The simple thing is that you can deduce very little by looking at the temperature series. The science tells you why you cannot deduce the effect by looking at relatively short time series.
We did not figure out that GHGs warm the planet by looking at the temperature series. It’s rather elementary physics.
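To make the ‘lagged’ point concrete, here is a minimal one-box energy-balance toy (illustrative parameters only, not any published model): temperature relaxes toward the equilibrium response S*F with an ocean time constant tau, so the realized warming always trails the forcing.

```python
# Toy one-box lag model: dT/dt = (S * F(t) - T) / tau.
# Illustrative parameters only; not fitted to any observations.
S = 0.8     # equilibrium sensitivity, K per (W m^-2)
tau = 30.0  # ocean adjustment time scale, years
dt = 1.0    # time step, years

T = 0.0
for year in range(150):
    F = 0.02 * year              # illustrative forcing ramp, W m^-2 per year
    T += dt * (S * F - T) / tau  # Euler step toward the lagged equilibrium

print(f"realized warming: {T:.2f} K vs equilibrium: {S * F:.2f} K")
```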
Anteros
My point was and is: there is little correlation between CO2 and the historical temperature data. That is not to say that the CO2 effect doesn’t exist, but its magnitude is seriously overestimated and it can for all practical purposes be ignored.
As far as temperature correlations are concerned, the best proxy available is the geomagnetic change, based on 400 years of records from the great maritime nations around the North Atlantic.
All those who look through the narrow keyhole of science at the evolution of the historical temperature data are unlikely to see and understand the complexity of the three main players: the sun, the earth and the ocean.
http://www.vukcevic.talktalk.net/CET-NAP-SSN.htm
Study and understand the complexity of the North Atlantic, where you will find the true answer. See also:
http://www.vukcevic.talktalk.net/CET-100-150-100.htm
http://www.vukcevic.talktalk.net/CET-NVa.htm
1) If you Google ‘CO2 concentration 150 years graph’, you will find it is a fairly common starting point. It permits an apples-to-apples comparison. (It happens to be a bit past the Dalton Minimum, which means it is a good starting point for a positive trend.)
2) Given this, I suggest you owe Dr. Lindzen an apology for your last word.
3) Further, Dr. Lindzen is to be congratulated for actually giving a starting point (date and CO2 concentration) for Arrhenius’s equation (logarithmic), which depends upon starting and ending concentration. E.g., less bang from the second dose. I have the impression that this is often left out of AGW claims.
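To put numbers on the ‘less bang from the second dose’ point, using the commonly cited 5.35 ln(C/C0) approximation as a stand-in for Arrhenius’s logarithmic law: the forcing per doubling is constant, but the forcing per added ppm shrinks as concentration rises.

```python
import math

def dF(c0, c1):
    """Myhre et al. approximation to CO2 forcing, W m^-2."""
    return 5.35 * math.log(c1 / c0)

print(f"280 -> 560 ppm:  {dF(280, 560):.2f} W/m^2")   # first doubling, ~3.71
print(f"560 -> 1120 ppm: {dF(560, 1120):.2f} W/m^2")  # second doubling, same ~3.71
print(f"280 -> 290 ppm:  {dF(280, 290):.3f} W/m^2")   # 10 ppm early, ~0.19
print(f"550 -> 560 ppm:  {dF(550, 560):.3f} W/m^2")   # 10 ppm later, ~0.10
```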
The IPCC state that 1750 is year zero with respect to ACO2 and is the last year when the climate system was at ‘equilibrium’.
Even more convenient.
I wonder what caused all the Great Storms of 1570 and later, if the climate system was supposed to be at ‘equilibrium’ then?
One has to wonder where all that CO2 came from in the early 1700s.
http://www.vukcevic.talktalk.net/CET-100-150-100.gif
Every (even short) period of cold CETs was followed by a rapid temperature rise. Or maybe Europeans excessively burning firewood in the cold winters caused the subsequent warming. That is a win-win proposition: not only did they keep themselves from freezing, but they ensured that all that CO2 kept them warm for the next few decades.
I say 3 cheers for CO2.
I would hazard a guess that the introduction of the European earthworm to the forests/prairies caused all sorts of changes in North America. The density of the grassland shot up and scrubland became grassland. The albedo changes would have been quite impressive, and you would get a nice pulse of CO2/CH4 and N2O.
http://www.mendeley.com/research/earthworminduced-n-mineralization-fertilized-grassland-increases-both-n2o-emission-cropn-uptake/
DocMartyn wrote:
quote
[] introduction of the European Earthworm to the Forests/Prairies caused all sorts of changes in North America. []The Albedo changes would have been quite impressive, and you would get a nice pulse of CO2/CH4 and N2O.[]
unquote
And a large dissolved silica pulse into the surrounding seas. More dissolved silica, more diatoms, fewer calcareous phytoplankton species, less CO2 pulldown, less light-isotope pulldown (diatoms are less isotope discriminatory), a light-isotope signal left in the air and CO2 levels rising. Fewer phytos, less DMS, less low-level cloud cover, more insolation, warming. Which all sounds familiar.
Or something else no-one’s thought of.
JF
I took Lindzen’s comparison as straightforward and clear and definitely not dishonest. You have to actually listen to what he said about the graphs to understand what he was getting at.
I disagree.
Nobody thinks the first two thirds of that 150 year period is strongly affected by anthro CO2, yet substantial warming occurred over that period. The period 1890-1945 demonstrated the same rate of warming as the subsequent 1945-2000 period during which anthro effects are supposed to be dominant. If anything, reference to the longer period discounts the magnitude of natural warming and exaggerates the anthro influence.
Shame on Lindzen for giving away the farm :)
What do you suggest might have caused the 1890-1945 warming which didn’t then cause the 1945-2000 warming? Perhaps evidence for the 1945-2000 anthro effects being dominant is simply wrong. (Sensitivity overestimated and all that.)
The cause of the temp rise earlier in the 20th century is estimated to be the sum of solar and CO2 (fairly accurate indices), as well as a reduction in volcanic activity (much less solid data). The difference with the current regime is that solar and volcanic haven’t trended in the direction you’d expect to account for the warming.
Climate sensitivity doesn’t have a place in this comparison, as climate responds to any external forcing, not just CO2. There are slight differences in ‘efficacy’, but that doesn’t impact for the purposes of comparing these two periods.
I also disagree that this would be a ‘trick’ or ‘dishonest’. He doesn’t claim the CO2 effect would be evenly distributed during that period. 150 years is a reasonable starting point, since that is the only period where we have even remotely adequate measurements of GMTA. CO2 concentrations have also risen (exponentially) during that period, yet there is little or virtually no acceleration in the trend of GMTA during that period.
I would like to see Pekka’s response on what the others have responded.
The entire focus on the past “150 years” as important is part of warmist dogma. The point is nonsense PP.
Pekka, your data is off. Atmospheric CO2 started rising quickly after WWII ended. Global temps have not risen that much since 1945, when CO2 really kicked into high gear. Lindzen could have chosen other time scales:
* He could have shown that global temps rose quickly in the 1930s when CO2 was not rising quickly.
* He could have shown global temps declining from 1945 to 1975 when atmospheric CO2 was rising quickly.
Pekka, I think you are being unnecessarily critical and do a disservice to civil discourse by calling him dishonest. The data simply is not on the side of the warmers.
Pekka,
I am not so sure. You say “nobody” thinks this, but the graphical appeal of the hockey stick certainly includes the early 20th century warming. There seem to be a couple of common arguments. One is the IPCC SPM statement, whose limitations Fred reminds us of: “most” means >50%, and the time period is the 1950s-2000s. The other is an argument made either implicitly or explicitly by An Inconvenient Truth among others, which is all too happy to plot the CO2 rise along with the temperature rise for at least the entire 20th century.
The carbon emissions from fossil fuels up to 1935 were about 11.6% of the emissions up to 2010 according to the CDIAC data. The resulting increase of CO2 concentration from what it would have been without the emissions is only a slightly larger share of the increase up to now. Thus some 85% of the human influence through carbon has materialized during the second half of the period 1860-2010. Even over this period the influence has been uneven.
I reacted to this because a couple of slides appeared really misleading. The warming over this 150 year period was indeed discussed on those slides as if it had been uniform, and as if dividing the temperature change over this period by the length of the period would provide a meaningful number. The worst case is on slide 10, where warming of 0.7C is coupled with 150 years. That’s the main reason for my reaction. It can also be noticed that reducing the period from 150 years to 100 years would reduce the denominator by a third but actually increase the temperature change.
How convenient it would have been to use a period of 1000 years for the calculation, if good temperature data were available. The choice of 150 years is no better justified for discussing the strength of the human influence.
150 years appears also on page 4, where the claim of a doubling of equivalent CO2 seems to exaggerate the denominator even excluding all aerosol effects. With all GHGs but excluding even direct aerosol effects the error is not large, but including estimated aerosol effects the error is again close to a factor of two (the estimates for aerosols have large error ranges).
It’s always debatable how far one can use cherry-picking and other tricks to enhance one’s own arguments before calling them dishonest becomes fair. My view is that Lindzen clearly exceeded that limit.
Pekka, your point is taken. A couple of caveats: one I noted above, about the “happy coincidence” of early 20th century warming and CO2 increase in graphics. The other – effects of land use change? Still, I agree that 0.7/150 is not helpful and, if used in the context of “the warming is small, why should we care”, is misleading.
Pekka, never mind what I said about land use. It does seem, though, that radiative forcing from CO2 in, say, 1935 was a slightly higher percentage of the 2010 value than concentration or emissions were, due to the logarithmic effect on temperature.
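That point can be checked in a couple of lines. With illustrative round concentrations (roughly 280 ppm pre-industrial, 310 ppm in 1935 and 390 ppm in 2010; not precise historical values), the concave logarithm gives the early rise a disproportionate share of the forcing:

```python
import math

# Illustrative round CO2 concentrations (ppm); not precise historical values.
c1750, c1935, c2010 = 280.0, 310.0, 390.0

conc_share = (c1935 - c1750) / (c2010 - c1750)                     # ~27%
forcing_share = math.log(c1935 / c1750) / math.log(c2010 / c1750)  # ~31%
print(f"concentration share realized by 1935: {conc_share:.0%}")
print(f"forcing share realized by 1935:       {forcing_share:.0%}")
```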
Interestingly, looking at the GISS diagrams for forcings, what really stands out about the early 20th century warming is the apparent contribution of the lack of stratospheric aerosols.
I would still disagree on page 10. He is simply treating ‘Global Warming’ as a quantity and correctly reports the observed change in this quantity over 150 years. The slide has nothing to do with the causes or anthropogenic effects. Of course he could also say how much GW we’ve seen over the last 100, 50, or 30 years, but I don’t see the trend being a serious point in this slide.
However, I might agree with your critique of page 4, where he claims the GHG effect has increased by the equivalent of 2xCO2 over 150 years. How many watts per m^2 is the real number? Does he use this number just to keep it simple enough for the audience, or does this really affect his conclusions? You say 2xCO2-equivalent is exaggerated, so you must know the real number?
Juho,
On page 4 Lindzen notes that the numbers are not contested. That must mean that he considers the forcings listed in AR4 WG1 Figure SPM.2 on page 4 (or Figure TS.5 or 2.20; Figure 2.20(B) gives additional information on the sum) to be best estimates. Adding all positive GHG contributions gets close to the forcing of doubled CO2, but not quite; subtracting both direct and indirect aerosols then ends up at half of that.
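For readers without AR4 at hand, the arithmetic looks like this, using the WG1 best-estimate forcings (reproduced approximately here; check Figure SPM.2 for the official values):

```python
# AR4 WG1 best-estimate forcings, 1750-2005, in W m^-2 (reproduced
# approximately from memory; consult Figure SPM.2 for the official values).
ghg = {"CO2": 1.66, "CH4": 0.48, "N2O": 0.16, "halocarbons": 0.34,
       "tropospheric O3": 0.35}
aerosol = {"direct": -0.5, "cloud albedo (indirect)": -0.7}

f_ghg = sum(ghg.values())              # ~3.0, close to but below 3.7 (2xCO2)
f_net = f_ghg + sum(aerosol.values())  # ~1.8, roughly half of 3.7
print(f"positive GHG forcings alone: {f_ghg:.2f} W/m^2")
print(f"with aerosol best estimates: {f_net:.2f} W/m^2")
```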
An example of dishonesty in my eyes is the claim by many alarmists that the temperature rise has not stopped for the last 12-15 years or so. They use the mathematical trick of taking the average temperature for each decade – the ’80s, ’90s and 2000s – and displaying them as a bar chart, thus showing that the 2000s were the warmest. They then proclaim this as proof that the warming hasn’t stopped. I can’t help but believe that someone who tries such an approach takes anyone who will fall for such nonsense for a fool. I believe it shows just how little they think of the general public. A very sneering attitude.
Lindzen’s graphs attempt to do what many in the debate would like to be able to do but cannot. That is, compare what temps would be if we weren’t emitting CO2 to what they are now that we are. The only way one can even attempt to do that is by using a time period in the past where little CO2 increase was present and comparing it to more recent times. Everyone knows this is not perfect, but it is the best that can be done while using actual thermometers to do the measuring instead of proxies. He uses the same X and Y axis scales in both graphs (avoiding a trick many use), and uses data readily available to anyone (avoiding another common trick). I don’t see this as dishonest just because someone didn’t choose the same time periods I would have. All analysis is biased to some extent or other. You just have to take the time to understand what someone is trying to say, and then decide if you think they are trying to “pull a fast one”, as we say here. I don’t see any dishonesty. Lindzen is very intelligent and knows he is a prime target. I don’t believe he would attempt anything he thought to be dishonest, as he knows that many also-intelligent people are looking very closely at everything he does, especially something he states in front of Parliament. They would be all over him.
I have not heard Lindzen say anything dishonest. He doesn’t have to. He would be stupid to do so.
So your point is that the dishonesty is where he dismisses the (hugely uncertain?) aerosol effects, concluding that the net radiative forcing has increased by as much as ~3.7 W/m^2?
What I understood from his presentation is that these aerosol effects act as a ‘fudge factor’ in the GCMs, and that he is just trying to make a point about how these affect the feedback analyses?
Juho,
There’s a 20% difference without aerosols, and there is certainly some aerosol effect in the same direction. Thus giving the impression that the data is not disputed is worse than misleading.
Pekka Pirila: The carbon emissions from fossil fuels up to 1935 were about 11.6% of the emissions up to 2010 according to the CDIAC data. The resulting increase of CO2 concentration from what it would have been without the emissions is only a slightly larger share of the increase up to now. Thus some 85% of the human influence through carbon has materialized during the second half of the period 1860-2010. Even over this period the influence has been uneven.
I think that it is a nearly hopeless exercise, on present evidence, to choose the “correct” starting date for evaluating the rate of change of the global mean temperature. Each time a model forecast (scenario or whatever) is published, the temperature changes that matter are those that occur subsequently. Starting a temperature graph at the end of the little ice age shows that recent change (post 1975) is not unusual compared to the whole change since LIA. Proponents of AGW and opponents of AGW choose different starting points in order to make points that they believe. Lindzen’s choice is as defensible as any one else’s choice, and certainly as defensible as anyone’s choice to focus on the post 1975 record.
OK, I think I got your point about the word ‘uncontested’. He probably should have included more uncertainty in his presentation overall.
(Despite the lack of uncertainty, I still find his reasoning for accepting the derogatory word ‘denier’ quite funny.)
150 years seems a natural period to use as the most widely used global temperature data sets only go back to 150 years or so ago. The widely used HadCRUT global temperature series goes back to 1850, GISS and NOAA start a few decades later.
Also, as the first decade or so of the twentieth century appears to have been unusually cold, and bias uncertainties in that period were particularly large (Brohan et al, 2006), using a 150 year period seems more appropriate than, as is often done, only starting from 1900.
What is natural, what is perhaps not quite natural but OK, and what is dishonest is often not well defined.
I maintain my view on this case. Fred had a strong view on a picture in the WSJ op-ed, but I didn’t see it as he did. Most of the people writing on this site condemn the Hockey Stick pictures, but there are certainly also people who disagree on that.
We can tell our impressions and opinions and argue on them, but for all these three cases it’s possible to present arguments for both sides. That’s possible as long as the data shown is not explicitly erroneous and the issue is about emphasizing the right points and giving the right impression.
I thought Lindzen used 150 years because that’s what the alarmists use. After all, their thesis of Man being a primary influence falls apart if CO2 or temperature rise starts before the Industrial Revolution. He’s not being disingenuous at all, he’s poking holes directly at the argument most often used by the alarmists.
Lindzen represents the sound of reason.
Hi Judy
Were you just looking at the slides?
I.e., you don’t quite get all of what he said.
I.e., the CO2 equivalence DID include methane, etc.
Why not watch the video of it and perhaps reconsider some of this blog post..
2 parts are available at Climate Realists website
Barry – I couldn’t find the video. Do you have a link?
The House of Commons is an important place, where policy is being made….they need understandable arguments….
Let’s throw the “Likes of the Gleicks” out and get all the Lindzens in….
rejoice everybody….finally scientific progress!….
JS
You say that, but according to Delingpole there were only 2 MPs present at Lindzen’s talk. I was disgusted to hear this, but I suppose not really all that surprised.
Robinson: If so, it would be regrettable…..I guess MPs do not like a marathon hammer show with 58 slides…..this scares people away….
Better: short and concise with 15 slides….and more Q&A…..my opinion….I don’t know why Dick decided otherwise….but slowly but surely…..
JS
If it is like Congress then key staffers cover events like this. They have the technical knowledge.
David – for good or ill MPs have very little money for staffing, especially since a recent expenses scandal. So at a guess the 2 MPs were the two brave souls who have dared to ‘come out’ as openly sceptical of AGW. Sigh.
If Lindzen could produce a short (30 mins or less) version of his talk, with 10 really good and snappy slides that would each be understandable standalone (rather than just a lot of words that he reads out), the impact would be much greater. A picture paints a thousand words.
He has a good ‘narrative’ to tell, but his ponderous way of doing so weakens rather than enhances it.
Dr Curry –
Is there much in the whole climate debate that isn’t relevant to this talk, and therefore this thread?
In slide 19 Lindzen talks about the two necessary ingredients for climate panic – 1) The magnitude of warming and 2) The relation of warming of any magnitude to the projected catastrophe.
I think this misses a third ingredient – the rapidity of warming. Whenever I argue for the extraordinary adaptiveness of both life in general and the human species in particular, I’m invariably told by some doom-endian that “it is the speed of the change that’s the problem”, which as far as I know is based on negative imagination and nothing else.
Otherwise, I’m grateful for your comments, especially noting the similarity of confidence between Lindzen’s world view and that of the IPCC (to say nothing of the ultra-alarmists).
On page 11 of the pdf of the talk is a graph of HadCRUT3 global temperature with the year 2002 depicted with a vertical line.
The first thing to note is the temperature spike that occurred in 1998 because of El Niño conditions. Note that the global temperature dropped in 1999 to a level equivalent to 1997.
The Mann “hockey stick” posted in the 2001 IPCC TAR ends in 1998, giving the blade of the hockey stick about 50% additional length and giving the viewers of the Summary for Policy Makers of this report the false perception that the global temperature was increasing far more rapidly than it actually was.
Far more interesting is if you start at 2002 and project back with the best fit straight line which shows the actual warming trend from 1979 to 2002.
If you do the same thing starting at 2002 and drawing the best fit straight line to the end of the data (July 2011 in this case) you will get the cooling trend that started in 2002 and is still continuing today.
If catastrophic warming as predicted by the climate models is actually going to happen, this cooling trend will first have to come to an end, and none of those promoting AGW are willing or able to make this prediction. Until they do so, and base it on hard physical evidence instead of fabricated parameters input into climate models, we need to be more concerned about the current global cooling than any global warming!
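For anyone who wants to reproduce this kind of sub-period trend comparison, a minimal sketch (the arrays below are synthetic placeholders; substitute e.g. annual HadCRUT3 anomalies):

```python
import numpy as np

def window_trend(years, temps, start, end):
    """Least-squares linear trend, in degrees per decade, over [start, end]."""
    mask = (years >= start) & (years <= end)
    slope, _ = np.polyfit(years[mask], temps[mask], 1)
    return 10 * slope

# Synthetic placeholder series; substitute a real annual anomaly record.
years = np.arange(1979, 2012)
temps = 0.015 * (years - 1979) + np.random.default_rng(0).normal(0, 0.1, years.size)

print(f"1979-2002 trend: {window_trend(years, temps, 1979, 2002):+.2f} C/decade")
print(f"2002-2011 trend: {window_trend(years, temps, 2002, 2011):+.2f} C/decade")
```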
This is the GISS temperature anomaly and the rate from 1880 to present; rates taken over 16 years. The blue hatches are 16 years of rate averages.
http://i179.photobucket.com/albums/w318/DocMartyn/GISSalltempsandrates.jpg
This is GISS and [CO2] 1881 to 2009 (1).
GISS vs natural log ([CO2]), from which one can get the ‘climate sensitivity’ from the slope (2).
Finally, if one removes the ‘climate sensitivity’, one only has to explain the big pyramid from 1907 to 1979 (3).
http://i179.photobucket.com/albums/w318/DocMartyn/LNCO2vstemp.jpg
You can get rid of the very recent post-1975 warming using the CO2 increase, but you still can’t do anything about the 70-odd years from 1907.
You could probably get a better fit plotting GISS versus the chicken population or the production of green paint.
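For what it’s worth, DocMartyn’s slope calculation is straightforward to reproduce in outline: regress the anomaly on ln([CO2]) and multiply the slope by ln(2) to express it per doubling. The sketch below uses placeholder arrays, and the chicken-population quip is a fair warning: regressing one trending series on another yields a slope whether or not the relationship is causal.

```python
import numpy as np

# Placeholder arrays; substitute annual GISS anomalies and matching CO2 (ppm).
co2 = np.array([290.0, 300.0, 311.0, 317.0, 326.0, 339.0, 354.0, 369.0, 387.0])
temps = np.array([-0.25, -0.20, -0.05, -0.02, 0.03, 0.20, 0.35, 0.40, 0.55])

slope, intercept = np.polyfit(np.log(co2), temps, 1)
per_doubling = slope * np.log(2)  # implied warming per CO2 doubling, K
print(f"slope vs ln(CO2): {slope:.2f} K; implied per doubling: {per_doubling:.2f} K")
```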
Any chance to talk with Dr. Lindzen?????
He has an email address listed at mit.edu.
I have emailed him a few times, sometimes with questions, and sometimes to agree with him and what he has said at particular times.
Steve Garcia
I meant Dr. Curry. Not me, not qualified. :-)
I interpret “Changes are not causal but rather the residue of regional changes” as a way of saying that there is no such physical thing as a “global temperature” that gets changed by CO2 or other mechanisms: rather, the “global temperature anomaly” is the result of computations involving the temperature changes at a regional level.
So if the world were made of two regions of the same area, one with a +5C temp change and the other with a -2C temp change, the “global temperature anomaly” would be +1.5C even if a change of +1.5C has in effect happened nowhere, hence it could not have been “caused” by anything.
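The arithmetic is just an area-weighted average; on a real latitude-longitude grid the weights go as cos(latitude). A minimal sketch (the regional anomalies below are invented for illustration):

```python
import numpy as np

# Two equal-area regions: +5 C and -2 C average to +1.5 C "globally".
print((5.0 + (-2.0)) / 2)  # 1.5

# On a lat-lon grid, cell area shrinks toward the poles, so the global
# anomaly is a cos(latitude)-weighted mean of the regional anomalies.
lats = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])
anoms = np.array([-0.2, 0.1, 0.3, 0.5, 1.2])  # illustrative regional anomalies
weights = np.cos(np.radians(lats))
print(np.average(anoms, weights=weights))     # area-weighted "global" anomaly
```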
Probably correct. Global temp anomaly is a statistic with a huge variance, not a measurement. BEST says something like 30% of stations show cooling. So it is not as though temp is simply being caused to go up.
I also like his point about not focusing on the details, because I see the details as contradictory. Some data sources show warming post 2000 but some show none. Most show warming 1978-1997 but UAH shows none. According to these details we do not know when it has warmed and when not. Of course the question then becomes what exactly is science supposed to explain, if the data is contradictory?
“BEST says something like 30% of stations show cooling”
Lucky that all the temperature proxies used in temperature reconstructions depend on the average temperature in a 100,000-square-mile grid cell, rather than the actual temperature in a particular locale.
David W.
It has been a long time.
Has a variance for the global temperature anomaly been calculated?
If yes, can you point me to a description of how it was done?
“BEST says something like 30% of stations show cooling.”
Many people have misunderstood that chart. You are not the first.
Good points, but I think regional temperature trends are much more complex. The Antarctic, itself land covered by snow and ice and surrounded mostly by ocean, has warmed very little. The Arctic, on the other hand, is ice and snow surrounded mainly by land. The Antarctic is far from centers of industry and soot emissions, and soot emissions fall out of the atmosphere fairly quickly, probably not crossing over the equator to any great extent. The Arctic is close to 90% of the world’s industry and receives a lot of the carbon soot fallout. I think the difference in albedo from soot fallout, plus the positive feedback of albedo change when ice and snow become water, may explain in large part the differences in Arctic and Antarctic temperature trends, and by extension the differences in northern hemisphere and southern hemisphere temperature trends. If I am wrong about this, Dr. Curry and others, give me some scientific studies and data that refute this or bring it into question. I have seldom seen this hypothesis considered, and it has a very strong bearing on the competing roles of CO2 and carbon soot emissions.
A+ question. I hope someone in the know can point to some literature on it.
Simple question, but do the satellites orbiting both poles report very different albedos between the two? If so that could be strong evidence for Doug’s theory. If not, less so.
I am feeling increasingly sorry for the warmists. I always cheer for the underdog.
“A doubling of CO2, by itself, contributes only about 1C to greenhouse warming.”
What’s the evidence for this? I don’t buy it. What does “by itself” mean? By radiation only? If yes, it doesn’t say much about the overall heat transfer and only overall heat transfer can contribute to any temperature change.
Are you saying you doubt that the earth is warming? You question the validity of 6,000 temperature measuring stations?
There have been numerous discussions about UHI, as well as the veracity of adjustments, Ross; yes, many of those measurements are suspect.
Cut the Orwellian speak. Do you doubt that Earth is cooling on a multi-millennial time scale (~10 ka)?
This is one of the items that skeptics and warmists agree on. It is in the physics of it. I don’t focus on this, but believe that it has to do with the Stefan-Boltzmann law. If I am wrong on that, I am sure someone here will correct me.
Steve Garcia
Well, I disagree. Stefan-Boltzmann law is about radiation (flux of energy radiating from a body) and its dependence on T. The heat transfer between Earth’s surface and atmosphere/space is multimodal – it involves radiation, convection and evaporation.
Okay. I said I might not be correct and asked for someone to set me straight if I had that attribution wrong. Thanks.
Edim,
“A doubling of CO2, by itself, contributes only about 1C to greenhouse warming.”
“By itself” means prior to any theoretical feedbacks. And 1C of warming is not considered problematic. The warmers hypothesize significant and disastrous positive feedbacks from water vapor, etc. – perhaps 3x or more leading to warming of 3-5C. Others, like Dr. Spencer, claim net negative feedbacks which dampen the warming – perhaps 0.5x leading to warming around 0.5C.
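Those multipliers come from the standard feedback algebra: with net feedback fraction f, the no-feedback response dT0 becomes dT0/(1-f). A sketch with the values mentioned above:

```python
# Standard feedback amplification: dT = dT0 / (1 - f), with dT0 ~ 1 K.
dT0 = 1.0  # no-feedback warming per CO2 doubling, K

for f in (0.67, 0.0, -1.0):  # strong positive, zero, and net negative feedback
    print(f"feedback fraction {f:+.2f} -> warming {dT0 / (1 - f):.2f} K")
# f = +0.67 gives ~3 K (the "3x" case); f = -1.0 gives 0.5 K (net damping)
```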
Ron,
I am not convinced of this ~1 °C per doubling of CO2. It seems to me that the multimodal heat transfer at the Earth’s surface is not modelled properly. Earth’s surface is free to cool by convection/evaporation, and together they cool the surface more than surface radiation does.
My thoughts? Like a breath of fresh air and makes the likes of Peter Gleick seem like very sad little nutters wandering the streets with a sandwich board advertising the end of the world. And sadly that is just what they are; tiny, sad little human tragedies.
I don’t want to draw attention away from Dr. Lindzen’s talk here, but to add to it by encouraging people to listen to another terrific presentation. Matt Ridley’s talk at http://tiny.cc/wml8j is short and sweet, and packed with info. He is one helluva speaker.
Richard Lindzen is one of my heroes, so I don’t want to detract in any way from his excellent presentation. He is a Rock of Gibraltar as the voice of climate sanity, which he has always been. I’ve written to him on occasion, and he has always been a gentleman and a quiet voice in a discipline gone mad.
Steve Garcia
I was slightly less impressed by this than by Dr. David Evans’ piece ‘The Skeptic’s Case’ (http://wattsupwiththat.com/2012/02/26/the-skeptics-case/).
Both covered much the same ground, but Lindzen’s was too long for politicians and other morons.
If Lindzen and Evans could get together they could probably devise an exposition which every politico and opinion-former in the world should view.
Judy, Slide 6 # 4 may mean the huge mass of peripheral studies which have used the hook of ‘global warming’ for the Gods of Grants, Ice Bear being the Polester Bear.
=============
Oral arguments regarding the US EPA’s Endangerment Finding begin today in Federal Court. The EPA’s science basis is the IPCC. The date for the science basis is 2007. There have been no science updates. Frozen in time, so to speak.
One of several points for litigation is that the EPA did not do its own research to determine its science basis; it relies wholly on the IPCC. One of Lindzen’s points was that science is evolving, and that what is currently known, and in particular the uncertainties, is much more than five years ago; i.e., the more we know, the less certain we should be.
The issue as I see it is that the science is not known sufficiently to enact public policy. Extravagant claims of catastrophe are not matched by anything like the science converging on a single likely scenario; rather, with more data, there is greater divergence now than ever before. Witness the current temperature hiatus, no matter what its explanation, and the ocean heat content from 0 to 700 meters declining. These are but two instances of increasing uncertainty, not more certainty.
In my own experience, when I have had data divergence over time rather than convergence, I have had to step back and realize something profound and impactful is missing. Then submit to the Journal of Irreproducible Results.
Next hypothesis.
EPA should have looked before it leaped.
:)
Two or three years ago, when I went to EPA’s website for climate change I immediately noted that the only reference they list is the IPCC. I sent in a question asking why this was the case, since in every discipline I’ve gotten a degree in (3), I was taught that one should avoid relying on a single reference.
Still waiting for a response.
I completely disagree that Lindzen’s speech will have any impact outside brief blogospheric discussion. Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues; moreover, he’s been making the same low-sensitivity arguments (in various forms) for over a decade, and progress in the scientific literature and at academic conferences has been moving at a rapid pace with virtually no influence from Lindzen. His unmoving faith in low climate sensitivity is at odds with virtually every assessment of the issue that uses more robust inferences from observations, as well as paleoclimatic constraints (see Knutti and Hegerl for a start).
But it’s easy to see why his speech will have little influence. On many occasions, he steps well outside his expertise, and makes claims which experts in those areas already know full well, or which are completely wrong. For instance, he delves into planetary climate by talking about the faint young sun. There have been decades of work on this problem, and many subsequent criticisms of his lone paper on why high clouds can explain the faint young sun problem (e.g., by Goldblatt and Zahnle). When one includes internally consistent physics, no one has successfully explained the faint sun without invoking substantial help from greenhouse gases, and the high cloud feedback rests on rather crazy assumptions about the amount of high cloud cover (essentially 100%) and requires much thicker and colder clouds than are considered plausible. It’s also based on unwarranted extrapolation from his “iris hypothesis” inferred from modern-day observations, which itself has been challenged by a number of papers for being overinflated.
Lindzen also jumps into the Arctic community by letting us know that CO2 can’t imply weak summer temperature amplification. Of course, if he bothered to read the literature (e.g., Mark Serreze has some papers on the seasonality of the ice-albedo feedback), he’d know that this is in fact what models and observations predict, because the Arctic is generally pegged to the freezing point in areas of high melt.
His line that “…is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments” is just too stupid to even acknowledge. Apparently Lindzen doesn’t think we should include such non-CO2 factors? If you include them, then they are artificial adjustments; if you don’t include them, then you’re a warmist that ignored everything non-CO2. How convenient.
Chris Colose –
“His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations”
Except the observations of global temperature. Give it up, Chris. Your models are dead, and nailing them to the f***ing perch won’t make things right anymore.
I’ve sent Chris a dossier on ‘Tokyo Rose’, so he can increase his skills.
Marcus –
Dragging Tokyo Rose in is priceless…
“Hey, G.I.! Warmists are right! The Japanese Emperor has new clothes! Learn how to fly kamikaze, because the Earth’s oceans gonna boil over if you don’t. It up to you to save the planet! Destroying U.S.S. Carbon Dioxide is only true way to salvation! Only Gleick and Jones and Gore tell truth! You big handsome G.I.!” /snarc
Steve Garcia
Chris, you’re young and smart and you obviously have a passion for the field in which you’ve chosen to make your career. So I’d be surprised if your mentors haven’t explained a few things to you about the value of civility in getting your career established. If they did perhaps you should re-examine what they said. You don’t have to brown up to somebody just because of their senior professional status, but if you want to challenge them perhaps the instant gratification of stabbing at them in the blogosphere isn’t the best way to do it. Making enemies in academia is easy enough without sliding into incivility.
You do understand that Tokyo Rose spoke English without an accent, don’t you? She was a native English speaker.
Abusive ad hominem (also called personal abuse or personal attacks) usually involves insulting or belittling one’s opponent in order to attack his claim or invalidate his argument, but can also involve pointing out true character flaws or actions that are irrelevant to the opponent’s argument. This tactic is logically fallacious because insults and negative facts about the opponent’s personal character have nothing to do with the logical merits of the opponent’s arguments or assertions.
//”…but can also involve pointing out true character flaws or actions that are irrelevant to the opponent’s argument. This tactic is logically fallacious…”//
No it’s not, it’s just a statement which may or may not be true. Even though (logically) it may be irrelevant, in practice, it may serve as a template for assessing credibility. It’s appropriate to acknowledge that you don’t want Joe down the street who was a high school drop out to do heart surgery on you, even though that is not a logical argument for why he has/has not the theoretical capacity to do so. Similarly, I’m not saying “Lindzen is wrong because he’s boring and no one likes him” but rather pointing out that he has lost credibility in the community.
Why this is the case is a separate matter, one whose surface I touched on in my post, but it also has been well documented elsewhere in the literature and is freely available for people to look at.
However, my suspicion is that very few people are interested in an honest investigation of his feedback hypotheses and the subsequent interrogations into their robustness, but rather want to throw potshots at AGW (or me personally).
However, my suspicion is that very few people are interested in an honest investigation of his feedback hypotheses and the subsequent interrogations into their robustness, but rather want to throw potshots at AGW
Take your mouth over to the following discussion, I’ll talk to you there.
http://tallbloke.wordpress.com/2012/02/25/stephen-wilde-the-myth-of-backradiation/#comments
You big mouths are actually scared of the knowledge sceptics have about feedback hypotheses. I’ve seen plenty of semantics from you, Chrissy, not much substance.
“he has lost credibility in the community.”
It is the ‘community’ which is losing all credibility. Month by month, year by year, as the data comes in.
Basically, Chris Colose has hijacked this post and made it about HIM. Typical troll behavior, so that the points made by Dr Curry don’t get discussed.
Too many come, see the schoolyard name-calling, and decide not to participate in a discussion which isn’t rational, just he-said-she-said.
And then mission accomplished: Don’t let there BE a discussion on the facts.
Steve Garcia
Marcus [7:45 pm] “Marquess of Queensberry Rules have been thrown out.
You can thank Gleick and his supporters for the rest of us taking the gloves off.”
Oy VEY. Your guy defrauds, and your side says the rules of engagement have been broken, so you get to break out the Brown Shirts?
Just how does THAT figure? Are you committable?
Geez Louise.
Steve Garcia
But since – due to its rampant dishonesty and political bias – “the community” has rightly lost virtually all credibility – this is if anything a recommendation.
To Chris Colose:
Lindzen has his own approach of some existing, somewhat halved or lowered
climate sensitivity…..let him have his views, we do not have to agree….
….Important is that the Likes of the Gleicks are kept out of repeating
their global CAGW Warmist nonsense….this were much worse than your
worry about his type of approach….important is that Skeptics of all
colors get into the House of Commons….and by and by, the Warmist
Gleicks will disappear from climate science…
JS
Chrissy,
I very confidently predict that you will never attain 8% of the knowledge and relevance of Dr. Lindzen. You don’t speak for the scientific community, you little twit. You are a consensus scientist wannabe. Now get out of here before somebody roughs you up and takes your little plastic sheriff’s badge.
The problem, Don, is that the other 92% of his knowledge, imparted to unwitting students at St Albany’s, is rhetoric.
It’s interesting to see the fallback to ad hominems when I point out several of the flaws in Lindzen’s arguments.
Really, what is the point in opening up discussion to people incapable of reason?
Your pointing out was ad hominem. You got back what you deserved. You are not here to discuss, but to scold. Take it elsewhere, junior. We already have our fair share of trolls. By the way, where’s josh? He looks an awful lot like gleicko, from the wire rimmed goggles down to the Birkenstock sandals. I wonder if there is a connection.
Read your post, disappointed by the reaction too. Looking forward to a more specific rebuttal on the points you make.
To be fair, I’m guessing many didn’t read past, “Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues”. That may be your opinion, but it sets the tone too. Re-reading, the first half of each paragraph was fairly derogatory, the second half worth pursuing. I’m not suggesting that justifies the reaction, just pointing it out in case you weren’t aware how your own tone sounds to someone outside the debate.
robin,
They are a very angry lot. They are losing, and desperate. And we don’t have to be nice to them. Go over to RealClimate and the other Colose-friendly blogs and see how they treat deniers.
Nor have you been very courteous to Dr. Curry of late. Presumably she’s lost all ‘credibility in the community’ (i.e., not one of us, the Team) as well.
Eventually the number of scientists excluded from the ‘community’ will be larger than the ‘community’, and then what’s left of the ‘community’ can go and commune with only itself. Just like they already do at RC.
Marquess of Queensberry Rules have been thrown out.
You can thank Gleick and his supporters for the rest of us taking the gloves off.
robin,
Why does Lindzen get a free pass, time after time, as the years pass by and he continues to talk nonsense? I pointed out, even if superficially, a couple of Lindzen’s scientific issues (and even some references people could pursue further). This is true even if you don’t like my tone (which I think is well deserved). I’m not particularly interested in making everyone happy. If people don’t want to have just a bit of investigative integrity, I don’t see why I need to supply all the scientific answers here, but if people have legitimate questions on what I said I’d be glad to pursue them.
Regardless of whether you like my approach or not, the ultimate end result is that this will be of virtually no significance in the scientific community, and of only temporary interest in blogs and amongst people who don’t know better. Much like most blog discussions.
Like a Warmer is going to do anything but push Warmerism. Blah, blah, blah.
Andrew
cui bono,
Generally, I am very nice to people, and I don’t get hostile based on disagreement; the hostility comes when I think an individual has lost the personal integrity to do objective science, familiarize themselves with what they talk about, and acknowledge criticisms of their work should they be valid. Same if they are just going to talk about the science. My problem with Lindzen is not that he proposed a negative feedback ‘iris’ hypothesis; in fact this was a legitimate submission to the literature that prompted a lot of discussion in the academic community. It encouraged many subsequent theoretical and observational analyses with better datasets than Lindzen had available, along with people who specialized in those observational products.
The problem began with Lindzen’s responses to those criticisms, which indicated that he had an unmoving stance on his position, even when others had shown that his proposed effect was greatly exaggerated, or even of the wrong sign. Even worse, were many of his indefensible statements in op-eds, talks, etc.
With regard to Judith Curry, I originally liked what she wanted to do on this blog, such as discussing and expanding upon the uncertainties in climate science. Now, it has become a forum for glorifying any half-baked idea that is apparently “interesting.” Moreover, I think Judith Curry has significantly expanded the scope of what is ‘uncertain’ without actually familiarizing herself with the current science on those topics (such as solar-climate effects), even to the point of making things up. She’s free to run her blog how she wants, and people are free to like/not like it. I think that it is counterproductive to her original goal; you cannot improve understanding if you keep having to go back to basic textbook stuff and explaining why every nonsensical argument someone put on their blog is nonsensical.
Thank you, Chris, for a courteous reply.
One of the reasons I like Dr. Curry’s blog is that it tries to question matters which you regard as “basic textbook stuff”. For example, the feedback multiplication. The textbooks put it at about 3, but there seems to be no justification for this other than that it was the number Hansen and co. first thought up back in the 1980s.
Lindzen and others have a radically different figure, and those of us at the sidelines can’t help but notice that the models, which echo the threefold feedback, are not doing very well recently. Yet question this magic number, for whatever reason, and merry hell breaks loose.
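To make the arithmetic concrete (a sketch of the standard textbook feedback relation, using the roughly 1C no-feedback figure both sides seem to accept; the f values are simply back-solved for illustration):

$$ \Delta T = \frac{\Delta T_0}{1-f}, \qquad \Delta T_0 \approx 1\,^{\circ}\mathrm{C} \text{ per doubling}. $$

A feedback fraction f = 2/3 reproduces the threefold multiplier (3C per doubling), while Lindzen’s sub-1C figures correspond to f < 0, i.e. net negative feedback. The whole fight is over the sign and size of f.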
As for “current thinking” – whether Dr. Curry is on top of every twist and turn that tries to explain the increasingly glaring discrepancies betwixt models and nature I couldn’t say, but ‘current thinking’ is just that – ‘current’. It will change, and if you follow it slavishly, it will lead you a merry dance.
Read some more about the history of science, especially the blind alleys and cul-de-sacs, the luminiferous ether and coal-fired suns, and you’ll get the idea. Science: always work in progress, and sometimes back to the drawing board. Or in Gleick’s case, go to jail, go directly to jail, do not pass go…
cui bono,
Your statements are really the reason why it’s tough to take these conversations seriously. There have been countless papers and entire reports dedicated to the sensitivity issue, yet you claim that it’s all something Hansen made up with a simple model 20-30 years ago. Either you’re trying to trick me, or you’re just unaware of the multitude of papers on the subject. In the first case, it’s pointless, and in the second case, you need to show that you want to learn more. I am rather familiar with the science of climate sensitivity and the current methodologies used to assess it, and Lindzen’s estimate has time and time again failed the test of robustness.
The point about models is equally bad, since very few people who question the models on these blogs have even read about models or know what they are comparing. They haven’t consulted the people who build the models and have written extensively on them, or have improved on them over time. Usually, they are just very broad statements that give no indication as to what variable or timeframe or statistic (and in what model) they are even talking about. It’s tough to respond to such vague statements when entire reports have been written on modeling, where they are useful, which results are robust, what needs improvement, etc.
Don,
Not much refraction in them thar Gleick goggles.
My, my, pretentious and projectile.
Chris Colose: “Really, what is the point in opening up discussion to people incapable of reason?”
Funny, Chris, that is exactly what I think of you. I have never seen you give an inch in your dogmatic theology or consider that you may be wrong on any of the issues.
What do you think of the Evans paper referenced in other comments? I think it is a very concise and coherent piece. Of course, I am sure you do not.
Chris,
OK, then, back to the snark.
I know enough to know there are many scientists who do not agree with a *3 multiplication. Some present good reasons for believing it is < 1. If you don't know this, you are seriously living in a thought-tight compartment.
I don't want to consult with the numerous people who constructed the models (clever though they undoubtedly are). I am an 'end-user' of the models, and all I have to do is sit back and see whether they are getting things right. Looking at their projections vs. reality, they just aren't.
You're asking me to disassemble a plasma TV and marvel at the thought and precision that went into constructing it. I, as a customer who incidentally paid for it, want to know why it isn't bloody working!
Sadly it is now 2:15am here, so I'll retire to dream of models. Of a different kind….
Chris, I don’t have the expertise to debate you on the science, nor would I indulge in personal attacks, especially when I have no basis to do so. So I would hope that this thread reverts to a more moderate and considered approach than is apparent in what I’ve read so far.
Chris, I’m sure that you realize that Tamsin has a blog just starting, with which she hopes to take us neophytes to a better understanding of how models are constructed and what we may be able to learn from them. So if you need a quick reference to help people get up to speed on the modelling thang, just send them to –
http://allmodelsarewrong.com/
Thanks, Chris, for the context.
Indeed Lindzen has been making the same argument about low sensitivity for a number of years. I remember being able to spot the flaws in his analysis of recent temperatures versus forcing changes some years ago (one can fit the analysis on the back of a fag packet).
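For anyone who wants to try the fag-packet version themselves (my reconstruction of the argument, not Lindzen’s published numbers): divide the observed warming by the fraction of a doubling’s forcing already applied,

$$ S \approx \Delta T_{\mathrm{obs}} \times \frac{F_{2\times}}{\Delta F} \approx 0.8\,\mathrm{K} \times \frac{3.7}{\sim 3.7} \approx 0.8\,\mathrm{K}, $$

where taking ΔF as already a full doubling-equivalent is the “doubling of equivalent CO2” premise, and the neglect of ocean heat uptake and aerosol offsets is where the flaws were spotted.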
No matter how much you care to badmouth Lindzen, he is indisputably correct in the predominant fact.
‘All models are wrong’
Chris I agree completely with your comments here – both on the scientific flaws in Lindzen’s talk (which I have yet to see much discussion on here – not a surprise), and on your disappointment in the route Judy’s blog has taken. I check in every so often out of curiosity as an EAS/Ga Tech grad, but am unimpressed. It’s good to see you commenting – there are people out there that appreciate your thoughts. Keep it up.
Chris,
It is interesting that in your sophomoric pretense you are outraged when your rude, unprofessional and childish behavior is returned to you.
Chris Colose: entire reports have been written on modeling, where they are useful, which results are robust, what needs improvement, etc.
Which are the three most recent best such reports?
‘His line … is just too stupid to even acknowledge’.
And yet you do.
Chris, which paleoclimate constraints?
http://www.realclimate.org/index.php/archives/2012/02/global-temperatures-volcanic-eruptions-and-trees-that-didnt-bark/
Based on the paleoclimate constraints of the northern hemisphere, after considering the volcanic impact not only on the Little Ice Age but through the 20th century, a considerable amount of warming would appear to be expected unless ice age is the norm.
The majority of the post-1950 warming is in the northern high latitudes, which have considerable volcanic impact from northern high-latitude volcanoes. In fact, if one wanted to, one could make an excellent case that the “unknown” aerosol factor is VEI 4 and 5 eruptions primarily in the northern high latitudes, with the equatorial impact mainly due to large eruptions. Might have something to do with albedo sensitivity differences and land use changes.
So the IPCC CO2 attribution of likely, as in 50% or greater, is looking shakier all the time.
Of course if you want to switch to the southern hemisphere just for grins, you could compare this paleo recon, http://www.ncdc.noaa.gov/paleo/pubs/neukom2010/neukom2010.html to the Southern high latitude temperatures and find that paleo data is nearly as noisy as Antarctic temperature data.
http://i122.photobucket.com/albums/o252/captdallas2/GISSAntarcticversusSouthAmericanTemperatureReconstruction.png
Kinda funny how in the southern hemisphere the dip circa 1860 is bigger than the 1816 dip. 1902 has a pretty good dip too. Of course, it is only tree rings.
”When one includes internally consistent physics, no one has successfully explained the faint sun without invoking substantial help from greenhouse gases, and the high cloud feedback rests on rather crazy assumptions about the amount of high cloud cover (essentially 100%) and requires much thicker and colder clouds that are not considered plausible.”
I’ll give you a hand Chris.
An active sun alters the vertical temperature profile of the atmosphere especially at the poles so that the polar air masses shrink horizontally whilst the polar vortex intensifies vertically and the jets become more zonal. That results in less global cloudiness and more solar energy into the oceans. El Nino becomes stronger relative to La Nina and the troposphere warms.
A less active sun does the opposite. Fits the observations perfectly.
Reading K&H08 tells me that paleoclimate evidence is not independent from modern evidence because it uses the same models to separate CO2 feedback from others (mainly dust and albedo). The other problem is that the base climate state is different and our weather pattern changes will be different. It means we could have higher or lower sensitivity than that calculated from the paleo data, but probably not the same.
Chris Colose writes more unsupportable conclusions.
He writes: “Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues.”
Chris- What is the basis for your claim? Seems like an unsupportable hope on your part.
Chris writes: “His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations, as well as paleoclimate constraints (see Knutti and Hegerl for a start).”
Chris – your statement is untruthful. Observations do not support your opinion of high sensitivity, and you know, when you are being honest, that the paleoclimate record is only marginally reliable. Referencing someone’s paper is meaningless when it makes claims that overstate the reliability of that record.
Chris,
Thank you for demonstrating the definition of Sophomoric.
http://www.merriam-webster.com/dictionary/sophomoric
Hmmmm…….wannabe grad student vs. professor? Rude arrogant young blowhard vs. experience and wisdom? A tough call. Not.
Dr. Colossal, your last paragraph @ 6:05 PM is simply a mischaracterization of what Richard said, and then you descend from error into abuse.
==========
Chris, with respect to the ‘faint sun’ problem, you do know that the switch from a reducing to an oxidizing atmosphere began with the evolution of water-splitting rhodobacter about two billion years ago?
Do you know what the albedo of the planet was when the oceans were full of transition metal salts and the land was covered in metal sulphides?
You think CO2 was the major cause of the Earth having liquid water, despite the Earth having a completely different biota, surface absorbance characteristics, ocean optical properties and very different types of clouds.
You then wonder why the mainstream CAGW promoters are held in such contempt.
His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations …
“Still it moves” and the temperatures refuse to follow the exaggerated sensitivity that others know “better”…
I completely disagree that Lindzen’s speech will have any impact outside brief blogospheric discussion. Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues;
Open with the ad hom. Typical of you lot. The rest is likely ad pop gibberish, but I wouldn’t know.
Chris, take a stroll over to the ‘Gleick’s Testimony’ thread to see how Andy Lacis handles a discussion with those who have an opposing view. Let him be a mentor to you. Here’s the start of his post….
http://judithcurry.com/2012/02/26/gleicks-testimony-on-threats-to-the-integrity-of-science/#comment-177569
Read all the replies made and the way he handles them. Take some notes too.
John Carpenter –
To my dismay, I’ve been following this for most of the day.
Chris is stuck in his mindset and has no capacity to hear anything that didn’t come from Hansen, Gore, Mann or CRU or any of their followers. By his definition, anyone that disagrees is wrong – end of discussion. Any fact that does not fit his understanding is misguided and erroneously derived. Anybody here who engages with him is talking to a brick wall. He is incapable of give and take and attempting to come to a mutual understanding. Those who disagree with him are, to him, only dumb clucks who never learned how to think properly and who must be educated by he who knows all.
Steve Garcia
Steve, Chris could take a lesson from Dylan’s ‘My Back Pages’:
‘I was so much older then, I’m younger than that now’
He’ll understand that line in another 10 to 15 years if he is able to examine himself in a critical way, otherwise we just have another arrogant SOB looking to climb the ranks.
John Carpenter,
Chris has that immunity to facts that only youthful arrogance can permit.
By the way, here is a nice tweet exchange between Gleick and friends that puts context on his forgery:
“Copner (Comment #92133)
February 28th, 2012 at 11:06 am
In case anybody missed it, a couple of threads back, I posted this retrospectively hilarious tweet sequence.
Gleick was even warned (although not specifically as regards document forgery) that it wasn’t wise to use the phrase “anti-climate”.
Got to laugh.
——————————————————————
Nate Lloyd @macbuckets
@PeterGleick @stephenfry When you use terms like “anti-climate” you give the game away. #ScienceIsPolitics
3:41 PM – 30 Jan 12 via TweetCaster for Android · Details
——————————————————————
Peter Gleick Peter Gleick @PeterGleick
@macbuckets @stephenfry Yes, “anti-science” might be better. Or worse. But #WSJ isn’t anti ALL science. Just climate science, apparently.
9:21 PM – 30 Jan 12 via web · Details
this is from Lucia’s blackboard, by the way.
Chris, you are getting a little bit cranky. You should not take Lindzen out of context. I’ve heard him several times and his ideas are much more qualified than you state.
On the faint sun paradox, Lindzen points out that CO2 is an impossible hypothesis, saying that it requires 3 bars of CO2. His paper merely asked a question, viz., could you explain it with high thin cloud just in the tropics. To my knowledge there is never a claim that this was the sole or even the main mechanism.
On the aerosols, he has some references to the literature quoting modelers. You must admit that a forcing that has an error bar equal to 200% of the median value and a possible value close to 0 is pretty arbitrary. Those are IPCC numbers, incidentally. So, how do you think the modelers set these numbers?
On the Arctic, you know of course that most parts are not ice covered during the summer, so that the alleged rapid ice melting cannot be a big factor at least in July and August. In any case, Lindzen says “CO2 is not obviously a factor during the summer.”
On the sensitivity, you must look at the IPCC AR4 summary forcing chart to see that total anthropogenic forcings neglecting aerosols are above 3 W/m2, I believe. He is trying to estimate the sensitivity to a doubling of CO2, i.e., 3.7 W/m2.
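For reference, the 3.7 W/m2 figure follows from the standard simplified expression for CO2 forcing (Myhre et al. 1998):

$$ \Delta F = 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}, \qquad \Delta F_{2\times} = 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}. $$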
David – You are confusing Chris with someone who listens.
Steve Garcia
When one includes internally consistent physics, no one has successfully explained the faint sun without invoking substantial help from greenhouse gases,
The faint sun paradox is that as solar irradiance has increased, the Earth has cooled; GHGs are a constraint, not an explanation.
Chris,
Seriously?? You are going to lecture Dr. Curry on what “most of the scientific community” thinks? I truly wish you would get a scientific education someday, because you don’t have one yet. Every scientist knows expert opinion is worthless. Data is what matters, and Lindzen has the data on his side.
Look at the data sometime, Chris. You will get an education.
“His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations, as well as paleoclimatic constraints (see Knutti and Hegerl for a start). ”
Then what IS the climate sensitivity, and how was it found to be that? Mind you, a small summary will be OK. I am under the impression that climate sensitivity is very hard to determine from paleohistoric sources. I’m willing to hear what is wrong with Lindzen’s theories, but I would like to hear more than “read so and so” as an argument. If it is so very clear you should be able to summarize it for me.
Mind you, while we are at it: why is climate sensitivity a constant? That baffles me, as a layman. If I start to think about it, climate sensitivity in an ice age could be completely different from that between ice ages, because changes in ice cover currently result in far smaller albedo changes than during an ice age.
peeke,
A not-so-brief summary (unfortunately this does not do the subject justice, which is why I asked people to read a few papers):
1) Equilibrium climate sensitivity is very likely between 2-4.5 deg Celsius per 2xCO2, which unfortunately is not a narrow estimate, but values much smaller or much larger than that broad constraint have consistently failed a number of tests
2) A number of ways have been developed to look at the sensitivity issue. People have looked at the 20th century observed record, the response to volcanic eruptions, the solar cycle, the response of the net radiation budget to SST changes, etc. People have also looked at paleoclimate records from a number of different time periods, including the last millennium, the Last Glacial Maximum, the Eocene, etc. Some of these things are useful at cutting off the low-end estimate of sensitivity but not the high end. The response to volcanic eruptions, for example, rules out very low sensitivity values but, on its own, cannot rule out very large values. Others give rather broad constraints, and you need to combine different lines of evidence to come up with a plausible range that can simultaneously satisfy a number of events within the degree of uncertainty in observation/proxy data, etc. Unfortunately, no single method can give a unique value of sensitivity, for a number of different reasons (see below).
3) Observational evidence alone cannot constrain climate sensitivity. This is because we do not know the total radiative forcing over the industrial era, and the rate of ocean heat uptake is questionable. This gives a distribution of sensitivity values that are all consistent with the observed climate. There have been a number of methods, for example, multi-model ensembles that sample the parameter and structural uncertainty across models and use observations or paleoclimate as a constraint to accept or reject sensitivity values which are possible. This is where further research needs to be developed, as it combines a lot of information at the model-obs-paleo interface and samples a large range of uncertainty and possible asymmetry between LGM sensitivity and 2xCO2 sensitivity for example.
4) Lindzen has proposed a variety of negative feedback ideas. In the early 90s he thought the water vapor feedback could be negative, a position he no longer defends. In 2001, he published his IRIS hypothesis. It was plausible, was taken seriously by the academic community, and was investigated by cloud physicists and others. I consider Lindzen more a theoretician than an observational specialist, so the people more familiar with the observational products looked into it further, and new observational datasets (e.g., CERES) have since been produced for examination. A number of problems with the IRIS hypothesis, including incorrect radiative properties, have been pointed out, and others have examined Lindzen’s observations of varying high cloud amount with SST in more detail and concluded that the signal responds more to changes in subtropical clouds than to changes in tropical convection, reflecting a meteorological forcing rather than an SST forcing (so that even if SST were fixed, Lindzen would still observe the anti-correlation upon which his theory is built).
5) More recently, Lindzen and Spencer (among others) have looked at variations in the TOA energy balance and their relationship to SST changes, which in theory reflects the efficiency of the Planck restoring feedback. See my theoretical treatment of the water vapor feedback and runaway greenhouse for a gist of the principle:
http://skepticalscience.com/radiation.html
However, deviations between trends in global mean SST and TOA radiation on decadal timescales are very large, and a number of people have shown that this reflects ENSO variability, as opposed to a forced trend over the timescale of a decade or shorter. It also requires using a short and discontinuous satellite record, and the analysis needs to be of global scale. Simple models with no realistic ocean, no El Niño, and no hydrological cycle (as in Spencer and Braswell) make this approach even more unsuitable. (See the sketches after point 6 below.)
6) I agree paleoclimate data are subject to limitations but several intervals in the past (like the LGM) have large signal to noise ratio because of the magnitude of change and forcing, and even within the error bars, a very low sensitivity cannot be considered an artifact of proxy interpretation. There are three fundamental ways climate sensitivity is derived from paleo-data: pure observations, observations with multi-model ensembles, or the physics perturbed-ensemble method using a single climate model. Only the first one must inherently assume the same sensitivity from one climate state to the other. I also agree that it is unlikely climate sensitivity is a constant, although for the LGM, the surface albedo feedback doesn’t necessarily need to be different because the ice sheets are treated as a forcing. Kohler et al. 2010 is a good reference on this.
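To make point (5) concrete, here is a minimal synthetic-data sketch of the regression approach (this is not Lindzen’s or Spencer’s actual code; the feedback value and noise amplitudes are invented purely for illustration). The point is that over a decade of monthly data, unforced radiative noise alone makes the regression slope a shaky estimate of the feedback parameter:

```python
# Sketch: regress TOA net radiation anomalies on SST anomalies to
# estimate a feedback parameter. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
lam_true = 2.0           # assumed net feedback parameter, W/m^2 per K
months = np.arange(120)  # one decade of monthly data, as in the satellite era

# Crude ENSO-like SST variability plus weather noise (purely illustrative)
sst = 0.3 * np.sin(2 * np.pi * months / 48) + 0.05 * rng.standard_normal(120)

# TOA anomaly = feedback response + unforced radiative noise (e.g., clouds)
toa = lam_true * sst + 0.5 * rng.standard_normal(120)

lam_est = np.polyfit(sst, toa, 1)[0]
print(f"true lambda = {lam_true:.1f}, decadal regression gives {lam_est:.2f}")
# If the radiative noise also drives SST (as cloud variability can),
# the estimate becomes biased as well as noisy: the core of the criticism.
```

And for points (2) and (6), a toy illustration of why combining lines of evidence narrows the range even when each line alone is broad (the likelihood shapes and widths below are invented, not taken from Knutti and Hegerl or any other study):

```python
# Toy combination of independent likelihoods for climate sensitivity.
import numpy as np

S = np.linspace(0.5, 10.0, 2000)     # sensitivity grid, K per 2xCO2

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

volcanic = gaussian(S, 3.0, 2.0)     # broad: cuts off very low S only
lgm      = gaussian(S, 2.5, 1.0)     # paleo interval with a large signal
century  = gaussian(S, 3.0, 1.5)     # 20th-century record

combined = volcanic * lgm * century
combined /= combined.sum() * (S[1] - S[0])   # normalize to a density

cdf = np.cumsum(combined) * (S[1] - S[0])
lo, hi = S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)]
print(f"toy 5-95% range: {lo:.1f} to {hi:.1f} K per doubling")
```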
and the rate of ocean heat uptake is questionable.
Is it questionable because the Argo floats show only a very small amount of ‘heat uptake’, or is it questionable because the Argo floats seem to disagree markedly with ‘previous assumptions’ about ‘ocean heat uptake’?
I’m always interested why a scientist would ‘question the data’ when it doesn’t fit a theory.
Even the renowned Dr Hansen has concluded the ‘missing ocean heat’ doesn’t exist and decided that the impact of aerosols is much greater than previously thought.
http://www.columbia.edu/~jeh1/mailings/2011/20110415_EnergyImbalancePaper.pdf
There are very good reasons to question the data (either ocean heat content measurements from older buoys, initial ARGO measurements, or satellite-derived products) and a number of people are working on that issue in great detail. It’s also appropriate to examine the models, and many of the AR4-generation ones tended to mix heat into the deep ocean too efficiently (I don’t know if this has changed for the CMIP5 generation models for the AR5). This has no effect on equilibrium climate sensitivity, but instead determines the expected observed warming at any point in time during the perturbed (and changing) state.
But the ‘missing heat’ is, in fact, a difference between two observational datasets (i.e., the apparent inconsistency between satellite and in situ ocean measurements) and has nothing to do with theoretical considerations. The difference is not considered statistically significant, however (see Loeb et al., 2012, Nat. Geo). Other forcings will impact this too.
Chris Colose: 1) Equilibrium climate sensitivity is very likely between 2-4.5 deg Celsius per 2xCO2, which unfortunately is not a narrow estimate, but values much smaller or much larger than that broad constraint have consistently failed a number of tests
Do you mean “steady state climate sensitivity” instead of “equilibrium sensitivity”? This terminological mistake occurs a lot in these discussions, and though I think it’s usually benign, it isn’t always clear whether the writer really means “equilibrium” or “steady state”. As long as the sun is providing energy and there is a net flow of radiation in (short wave) and out (long wave), then the appropriate concept is “steady state” (though even that is only approximate.)
Full thermodynamic equilibrium is not the only type of equilibrium. Various partial equilibria are also perfectly legitimate uses of the word, when the meaning is stated or clear from context, as it is here.
It is certainly true that people often err by picking facts related to thermodynamic equilibrium and applying them to the stationary Earth system, but that’s not a problem for the definition of equilibrium climate sensitivity.
@Chris Colose
“I agree paleoclimate data are subject to limitations but several intervals in the past (like the LGM) have large signal to noise ratio because of the magnitude of change and forcing, and even within the error bars, a very low sensitivity cannot be considered an artifact of proxy interpretation.”
Why not? I mean, really it can’t, or do you consider it unlikely?
I remember Lindzen trying to prove a low sensitivity and making an error. The result some people mentioned when correcting that error was 1K/doubling. That is suspiciously close to no feedback at all.
harrywr2 said: “I’m always interested why a scientist would ‘question the data’ when it doesn’t fit a theory.”
At the end of a seminar, I once heard a theorist snark: “Once again, the data is rejected by the theory.”
What he meant is that theorists have no end of questions to put to experimenters and empiricists, and the theorists can usually think of some objection to the experimental protocol or the methods of measurement and/or analysis of naturally occurring data. That can turn into regressive, defensive science if it happens too much. On the other hand, if you are a data worker, you have to live with it to a great extent. But a progressive scientific program isn’t one that has to be throwing up objections to hypothesis failures more often than it is celebrating victories for novel predictions.
This is just an instance of the Duhem-Quine problem, but a really important one.
Chris Colose, you wrote:
“The point about models is equally bad, since very few people who question the models on these blogs have even read about models or know what they are comparing.” Would you accept the following criticism, made in the last few years, of the lack of good evidence as to global circulation models having reasonable predictive capabilities and embodying realistic climate sensitivities?
“Much of the work has focused on evaluating the models’ ability to simulate the annual mean state, the seasonal cycle, and the inter-annual variability of the climate system, since good data is available for evaluating these aspects of the climate system. However good simulations of these aspects do not guarantee a good prediction. For example, Stainforth et al. (2005) have shown that many different combinations of uncertain model sub-grid scale parameters can lead to good simulations of global mean surface temperature, but do not lead to a robust result for the model’s climate sensitivity.
A different test of a climate model’s capabilities that comes closer to actually testing its predictive capability on the century time scale is to compare its simulation of changes in the 20th century with observed changes. A particularly common test has been to compare observed changes in global mean surface temperature with model simulations using estimates of the changes in the 20th century forcings. The comparison often looks good, and this has led to statements such as: ”…the global temperature trend over the past century …. can be modelled with high skill when both human and natural factors that influence climate are included” (Randall et al., 2007). However the great uncertainties that affect the simulated trend (e.g., climate sensitivity, rate of heat uptake by the deep-ocean, and aerosol forcing strength) make this a highly dubious statement. For example, a model with a relatively high climate sensitivity can simulate the 20th century climate changes reasonably well if it also has a strong aerosol cooling and/or too much ocean heat uptake. Depending on the forcing scenario in the future, such models would generally give very different projections from one that had all those factors correct.”
As you are no doubt aware, the “Randall et al., 2007” source whose statement was called “highly dubious” above is the complete Chapter 8, “Climate Models and Their Evaluation”, of IPCC AR4 WG1.
Chris,
Regarding your point about Richard Lindzen losing credibility among climate scientists – how would you respond to the issue of climate scientists losing credibility with the public?
The claims of effects from climate change are driving that loss of credibility, along with cries of persecution (funded by the evil fossil fuel industry) by some climate scientists. Whatever else you think of Dr Lindzen, he is on target with regard to this part of the debate. I could be Joe down the street and still see the failed science in studies like the recent one on Andean birds, where the conclusion of the researchers was that many of these populations may be at risk due to climate. The basis for this conclusion? Declining populations? Nope. Try this: the range of their habitat had not shifted to the degree predicted by models. The researchers were surprised by this, even though they could document changes in temperature and other factors. So, because the birds were obviously too stupid to notice the threat of global warming, they were doomed because they weren’t moving fast enough.
Guess I’m lucky I stopped at a Masters and didn’t stick with becoming a “climate scientist”. Because in this instance I would have questioned a) the model and b) my hypothesis and assumptions, before I hit upon the conclusion that the birds are not adapting fast enough and are therefore at risk. This is exactly the sort of “science” that global warming / climate change has spawned. It has people like Dr Andy Lacis, who is far smarter than I am, making statements about how we “know” that as more CO2 and water vapor get taken up by the atmosphere, the system has increasing energy and therefore leads to more extreme climate events. Feel like directing me to the research which has identified the mechanisms by which this occurs? Or how about studies on the frequency and intensity of storms? I haven’t found any for the former, and most of what I’ve found on the latter pretty much says the opposite.
Chris,
This is more emotive than scientific, and you say a number of things that are simply not true.
1)
You claim falsely that there have been “many subsequent criticisms of [Lindzen’s] lone paper [on the Faint Young Sun Paradox (FYSP)]”, and you cite Goldblatt and Zahnle 2011 (GZ11) as if it were one example out of many. In fact, GZ11 is the only paper that has disputed Rondanelli and Lindzen 2010 (RL10), and their criticisms have been answered. And surprisingly, you make no mention of the fact that Rosing et al. 2010 (No climate paradox under the faint early Sun, Nature, 464, 744–747) have also argued along similar lines to Rondanelli and Lindzen.
Here are all the articles that cite RL10 on the FYSP.
– Abe, Y., A. Abe-Ouchi, N.H. Sleep, and K.J. Zahnle, 2011: Habitable Zone Limits for Dry Planets, Astrobiology, 11(5), 443-460, doi:10.1089/ast.2010.0545.
– Goldblatt, C. and K.J. Zahnle, 2011: Clouds and the Faint Young Sun Paradox, Clim. Past, 7, 203–220, doi:10.5194/cp-7-203-2011.
– Rondanelli, R. and R.S. Lindzen, 2011: Comment on “Clouds and the Faint Young Sun Paradox” by Goldblatt and Zahnle (2011), Clim. Past Discuss., 7, 3577–3582.
– Hasenkopf, C.A., M.A. Freedman, M.R. Beaver, O.B. Toon and M.A. Tolbert, 2011: Potential Climatic Impact of Organic Haze on Early Earth, Astrobiology, 11(2), 135-49.
– Hessler, A.M., 2011: Earth’s Earliest Climate, Nature Education Knowledge, 2(12):6.
– Fairén, A., J. Haqq-Misra and C.P. McKay, 2012: Reduced albedo on early Mars does not solve the climate paradox under a faint young Sun, Astronomy & Astrophysics, doi:10.1051/0004-6361/201118527.
Then there is the interactive discussion at Clim. Past. Discuss
http://www.clim-past-discuss.net/7/3577/2011/cpd-7-3577-2011-discussion.html
RC C1795: ‘Review of: Comment by Rondanelli & Lindzen on “Clouds and the Faint Young Sun Paradox” by Goldblatt & Zahnle (2011).’, Itay Halevy, 10 Nov 2011
RC C1837: ‘Review of Rondanelli and Lindzen comment’, Jim Kasting, 11 Nov 2011
SC C2120: ‘Reply to Comment on “Clouds and the Faint Young Sun Paradox” by Goldblatt and Zahnle (2011)’, Colin Goldblatt, 22 Dec 2011
EC C2123: ‘Editor’s comment’, André Paul, 23 Dec 2011
AC C2435: ‘Interactive comment on “Comment on “Clouds and the faint young sun paradox” by Goldblatt and Zahnle” by R. Rondanelli and R. S. Lindzen’, Roberto Rondanelli, 18 Jan 2012.
I have read all these papers and the only authors who criticise RL10 are Goldblatt and Zahnle.
2) You claim, echoing GZ, “the high cloud feedback rests on rather crazy assumptions about the amount of high cloud cover (essentially 100%)”. However, Rondanelli and Lindzen, in their response, point out that GZ have simply misunderstood the claim.
3) You claim that “Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues”. I wonder if you would share how you know this? Are you claiming to have personally spoken to ‘most’ of the scientific community? Or are you repeating rumour? Or are you just making it up, as with point (1) above? It is easy to look at Lindzen’s most recent publications (published since 2010, say) and confirm that most of his results have been accepted by the community, including a number of papers on understanding aspects of atmospheric aerosols and problems simulating the atmospheric tides in GCMs.
4) Your comments on the Arctic make it sound as though this is all settled science when in fact there is a controversy in the literature right now, and Lindzen is not the only participant.
5) You claim, “His line that ‘…is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments’ is just too stupid to even acknowledge.”
This is where you really ought to be careful. There are Kiehl 2007, Knutti 2008, Schwartz et al. 2010 and Huybers 2010, and all of this has been cited and acknowledged in the AR5 ZOD, at least. Lindzen’s point essentially stands. There is also a paper by some of Lindzen’s MIT colleagues on the same matter. It may be less that Lindzen is “stupid” and more that you need to do a bit more reading.
Further to Alex Harvey’s point 5) [contra Colose] on model tuning, it would be well worth the reader’s time to return to an older thread on this blog, “CO2 no-feedback sensitivity: Part II” and review Richard S. Courtney’s absolute evisceration of Fred Moolten on this same issue. The relevant portion begins about 1/2 or a bit more down the thread with Courtney’s comment @ 12/15/2010 – 5:44 p.m.
Thanks Alex for sound science instead of Chris’ rhetoric. Evidence wins.
Chris
Lindzen threw down the gauntlet in Slide 16
Do you dare take up his challenge?
If you do have the courage to take it up, perhaps you can enlighten us as to the difference between those temperatures.
Then we welcome your erudite pontification on how the massive increase in CO2 during the second half of the century contributed to the difference between the two records, but not to the major increase seen in both records.
Shall we await with bated breath?
Or return to real science?
While you are contemplating the massive warming that will be caused by the poor using coal to warm themselves and cook their food, perhaps you could consider whether there will even be an increase in light crude oil production in the foreseeable future. See:
“The World Oil Supply: Looming Crisis or New Abundance?” The video of the University Of Wisconsin February 17, 2012 is now online. Ex-Shell CEO & Peak Oil Researcher Face Off over America’s Energy Future. Posted at “Citizens for Affordable Energy”
“Gasoline will hit $5 per gallon this year predicts John Hofmeister, former president of Shell Oil Company,”
Perhaps you could explain the underlying economics as to why an abundance of oil will cause the price of gasoline to hit record levels.
Furthermore, Jeff Brown (aka westexas) and Sam Foucher document how global Available Net (oil) Exports after China and India are already down 13% since 2005. Extrapolating current trends suggests NO Available Net oil Exports in 19 years.
How do you sustain catastrophic anthropogenic global warming (CAGW) while US oil consumption is cut roughly in half within 20 years at current trends?
Chris,
Lindzen said: “…is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments”
as opposed to: “shown to be consistent with observed warming by including known and quantified additional negative forcings from aerosols and solar variability”
The fact that you put such a radically different spin on it says a lot more about your bias than anything else.
Tip: If someone, particularly someone highly educated and experienced, appears to say something incredibly stupid, first check that you’ve heard them correctly before spouting off. Failure to do so could result in acute embarrassment on your part.
As to Dr. Lindzen’s examples (slide 20), I think these are examples of the fallacy of “Affirming the Consequent”, a rhetorical device. Dr. Lindzen may be trying to point out that rhetorical fallacies have no place in scientific debate. (IMO, they are common in politics.)
If A then B; B; therefore A.
(Post in haste; edit at leisure.)
The fallacy takes the form “If A then B; B; therefore A.”
It is my understanding that “If AND ONLY IF A then B; B; therefore A” is acceptable.
Pooh, Dixie, you say: “It is my understanding that ‘If AND ONLY IF A then B; B; therefore A’ is acceptable.”
That’s correct. The “AND ONLY” part is effectively the same as adding, “If B, then A.”
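A brute-force check for anyone who likes truth tables (a quick illustrative script, nothing more):

```python
# Enumerate all truth assignments: "If A then B; B; therefore A" has a
# counterexample, while the biconditional form "A if and only if B; B;
# therefore A" does not.
from itertools import product

for a, b in product([False, True], repeat=2):
    a_implies_b = (not a) or b
    a_iff_b = (a == b)
    if a_implies_b and b and not a:
        print(f"fallacy counterexample: A={a}, B={b}")
    if a_iff_b and b and not a:
        print("biconditional counterexample")  # never reached

# prints: fallacy counterexample: A=False, B=True
```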
AGU President’s message 27 February 2012
We must remain committed to scientific integrity
“…In doing so he compromised AGU’s credibility as a scientific society, weakened the public’s trust in scientists, and produced fresh fuel for the unproductive and seemingly endless ideological firestorm surrounding the reality of the Earth’s changing climate.”
Birds of a feather ………
Hands up, how many of you believe that the PLANET is warmer by 0,8C now than 150y ago?!
1] If the troposphere warms up by 0,8C, it would expand INSTANTLY by 100m. It cannot expand down into the soil, so it expands upwards by 100m, into the stratosphere. That extra volume of oxygen + nitrogen can intercept the appropriate EXTRA amount of coldness in 3,5 seconds to equalize; it takes a few minutes for that EXTRA coldness to fall down to the ground and equalize; then the O+N instantly shrink to their previous volume. If it stayed expanded for a whole day (24h), it would have intercepted / redirected down enough extra coldness to freeze ALL tropical rivers and lakes. I live in the tropics; trust me, I’m the most honest person on the planet: the rivers and lakes are not frozen, therefore extra heat in the troposphere is not cumulative. B] For the last 162y, not enough extra heat has accumulated to boil one chicken egg!!! Q: does it take 150y for oxygen + nitrogen to expand after being warmed extra, or is expansion INSTANT?!?!?!
2] The amount of data available from 1850 is less than 0,00000000000001% of what is ESSENTIAL to know the correct temperature. Comparing one unknown with another unknown is the “mother of all lies”.
TRUTH: “big city island heat” now exists, between 0,5C – 3C, depending on how big the individual city has grown. That has made the air in those cities expand and has increased the troposphere upwards by 4-5m. That extra volume wastes the extra heat / intercepts extra coldness and redirects it down to the surface. Because the surface outside those “big cities” is much larger (including the surface of the oceans), the redirected extra coldness makes it COLDER by 0,00000001C outside the big cities. Overall temperature in the troposphere is exactly the same today as it was in 1850. Unless the laws of physics are abolished by the governments and the UN, Global warming is a 101% lie!!! My formula is correct: EX>AE>ECI (Extra Heat > Atmosphere Expands > Extra Coldness Intercepts)
Lowering the GLOBAL warming from a few degrees to 0,8C is the same as massaging the truth with the middle finger instead of with the whole left hand. It’s the “kicking and screaming on the way to the confession box”. Not only Lindzen, but every Climate Activist will be asked the question: “why were you avoiding / ignoring Stefan’s formulas?” The most prominent will be asked in a court of law / under oath. It’s prudent to prepare answers for that question now, because the other 101 questions will follow.
Duh. Additional volume is not additional mass. UR babbling and confuzed.
Brian H | February 27, 2012 at 8:10 pm | Reply
Duh. Additional volume is not additional mass. UR babbling and confuzed.
Brian, no need for confusion. When you get warmer – instantly spread out and stick your arm in a bucket of ice – you are the same volume / SAME MASS, but the extra heat released by your arm and swapped for extra coldness will equalize the temperature in your body. You would be imitating what the troposphere does. Stick to the laws of physics – you can’t go wrong. Where the troposphere expands upwards when it gets warmer for any reason, it is much colder than the ice in your bucket. Cheers
“coldness”? physics? Man, yuze confuze. There is no such thing as “coldness”. Only heat, in varying degrees from 0 therms on up. It spreads by various means. Eventually it will be spread evenly everywhere, at which point nothing more will happen. Ever.
;p
Brian H….. You are proving my point again: that you Swindlers have NOTHING solid – only looking for salvation in confusion. Engineers that produce refrigerators and stoves don’t need to say “your freezer is 230K warm” or “your oven should be turned to 310K (Kelvin) to make a roast”. Only Desperadoes in shonky climatology cannot read / understand the gauge in their fridges and ovens.
I already had to defend myself from a similar bull-artist like you, by pointing out to him that there is no darkness, only lack of photons; but normal people call it “dark” at night… Brian, when you come up with drivel, it’s real proof that you are scared of the truth – you are suffering from “truth phobia”. 2] When you pick on my misspelling, it’s your own admission that all my proofs are correct. Thanks for your approval, Brian
@stefanthedenier trust me, I’m the most honest person on the planet;
Then you should have called yourself Diogenes instead of “the denier.” Logic requires both the honest and the dishonest to deny that they are dishonest. Can you imagine either an honest or dishonest person saying “I am a liar?” That’s the celebrated Liar Paradox!
Vaughan Pratt | March 1, 2012 at 2:35 am | “no honest or dishonest person says that he is a liar”
Wrong, Vaughan, WRONG!!! When a person states that he knows exactly “the GLOBAL temperature”, that is an admission that he is a liar. Because nobody has ever monitored the GLOBAL temp; on one small hill there are 600 different variations in temp, and they change every 10 minutes. The planet’s temp is not the same everywhere, as it is in the human body. When one states that the planet is warmer year by “0,02C”, that is shouting loud and clear that the psycho is a shameless con artist / dishonest / liar. Their / your “admissions IN WRITING that they / you are a liar” are numerous. I’m writing a book about swindlers like you.
P.S. Everything I prove can be replicated / proven right now. Small example: the hottest point is always closest to the ground – when it gets warmer, for any reason – VERTICAL WINDS INCREASE!!! You talk about thermodynamics / convection but don’t include Stefan’s / my formulas, because they prove beyond any reasonable doubt that you people are lying INTENTIONALLY. I can prove most of the Swindlers’ lies are lies; the only thing proven wrong about my proofs is that I misspell and have a limited English vocabulary. Picking on my misspelling is the Swindlers’ admission that all the rest I have is correct
because they prove beyond any reasonable doubt; that you people are lying INTENTIONALLY
Stefan, don’t get angry, get rich. If your method of establishing a person’s intent really does work as you claim, you’d better patent it. The justice system would be your first serious customer. Establishing intent in a court of law has long been an outstanding open problem, usually left to a jury to argue over.
You’ll go far in this world with an invention like that. Got any others up your sleeve?
@ Vaughan Pratt | March 5, 2012 at 12:07 pm | Stefan, don’t get angry, get rich.
Vaughan, see, you can tell the truth; your tongue didn’t break off. I seldom argue against truth, no matter how bad that truth is. But you are my friend, so I will make an exception. Beware of people that have been fleeced; when they find out the truth, they will make you look funny without testicles. Just be careful and… don’t book accommodation on Mt. Ararat, but in the central American jungle, where they can’t find you. Cheers
Wouldn’t existing satellite measurements provide either additional support for or refute your idea?
Rob Starkey | March 1, 2012 at 7:24 pm |
Rob, satellites take “occasional”, TWO-DIMENSIONAL infrared photos. A satellite will not tell the difference between a 1,2m layer at 20C above a 500m layer at 16C, OR a 500m layer at 20C and 10mm at 16C. Using satellites for monitoring global warmth is the mother of all lies. For a start, regarding climate, NASA is only above the IPCC in the phony GLOBAL warming sewer.
I presume there is no need to explain to you that a 500m layer of air can contain more heat than a 15cm one. But for NASA that is taboo… Because if everything ahead is good, NASA’s budget goes down by half; if big catastrophes are ahead, NASA’s budget quadruples instead… they are not stupid…
Anteros | February 27, 2012 at 6:38 pm |
vukcevic –
” Post WW2 has some basis in reason, as it marked quite a significant change in emissions. 2nd half of the 20th century for similar reasons – arbitrary but not cherry-picking (from any point of view)”
Hmmm … It seems to me that the emissions from the international military-industrial complex must have been significant during WWII.
I have no insight as regards the amount of GHGs produced by high explosive material and related collateral damage, but it must have been significant compared to the post-WWII era.
I was thinking the very same thing while I was typing “Post WW2” but, not having any info either, I let it pass :).
However, I suppose in the back of my mind was the graph of tonyb’s that gives the impression that WW2 had less of an emissions impact than we guessed – http://wattsupwiththat.files.wordpress.com/2011/11/tbrown_figure3.png
I think one reason sceptics lean to 1940ish and warmists choose 1950 is because of the peak of temperature around 1940. Not very edifying in either case but that’s life..
Bombed out factories stop producing CO2 when the fires go out. Countries at war don’t trade. Military production robs domestic industry of man power and scarce raw materials. Total economic output drops even as war production increases.
JJ – “Countries at war don’t trade.”
What a silly, silly comment. You cannot possibly be serious. During war factories are more active than ever. You have NO concept or knowledge of history, to make an ill-informed statement like that. That is like saying, “Black is actually white.” Wars are won by logistics. Supplies – bullets, cannons, planes, tanks, rifles, uniforms, helmets, bombs – whoever doesn’t manufacture or buy those LOSES. Ever heard of Lend-Lease? Unlimited submarine warfare? The Lusitania? England – and Russia, too – would have lost WWII without American supply ships. As in manufacturing. As in factories. Ever heard of Strategic bombing? It was the effort to STOP the German factories. KZ Dachau had about 200 satellite camps – all factories, and all situated to avoid bombing. Factories, factories, factories. Whichever side keeps theirs going, they usually win.
Steve Garcia
Steve Garcia,
I have heard of lend lease. I have also heard of rationing. I understand that GM and BMW were making lots of trucks and tanks during the war years. I also understand that they weren’t making many cars during that same period. I also understand that rationing ended and car production dramatically increased after the war.
I understand that WWII was in large part an industrial competition. I also understand that this means that materials production, transport, and manufacturing were therefore high value targets. I have heard of strategic bombing. I also wonder why you think it was not effective at the assigned task. I also wonder how much CO2 Dresden produced, once the embers cooled.
Two posters wonder aloud about the consensus regarding CO2 levels ca. WWII. I offered a potential explanation for that more or less accepted fact. Without so much as a single fact or figure in support, you launch into a nasty diatribe replete with name calling.
What a vile, petty little man.
Not a vile and petty man. You made a completely incorrect statement,
which showed complete ignorance of the facts you now say you knew.
Your ignorant statement deserved no respect. If you knew those facts you should never have stated what you did. If you don’t want people pointing out your errors, please refrain from making them.
Vile and petty? Here is your reply to Pierre:
You have a pathetic double standard. You can dish it out, but you can’t take it.
Steve Garcia
JJ, I think you ought to read Steve Garcia’s comment again. You claim it was filled (replete) with name calling, and yet, it doesn’t even have a single instance of such. The closest he comes to calling you names is when he says:
You could argue he mocked you, but there is no name calling. Heck, most of that is discussing your comment, not you. The only thing he said about you was you “have NO concept or knowledge of history,” and that isn’t name calling. Heck, it isn’t even really insulting you given how wrong your comment was. Speaking of, you seem to now acknowledge that trading does happen in countries at war, meaning you acknowledge your comment was wrong.
Anyway, I highly recommend you reread his comment rather than seemingly agree with his point while calling him a “vile, petty little man” when he hasn’t called you anything.
Brandon –
I did not see your reply when I responded to JJ. Thanks. I specifically did NOT call him names. In my response, I used the word “ignorant” in the dictionary meaning of the term, as in not having knowledge about a thing.
I did not see a retraction of his “not knowledgeable” statement, though he now claims he does know those things. I am actually glad he acknowledged the real history and that he is not as lacking as I’d thought. But like you said, he didn’t admit his error.
I do love these kinds of blogs, where I can find reasonable people with whom things can be discussed. I recall one thread on here where hundreds of comments by “disagreers” were made and all were very respectful.
In fact, JJ, I had no intention of insulting you, even if you were some dumb cluck. But I could not let your statement pass without comment. It was just flat out wrong. So, my sincere apologies.
But when you make a really incorrect statement, don’t be surprised if someone calls you out on it. And you be cool, too.
Peace to both of you.
Steve Garcia
What a silly, silly comment. You cannot possibly be serious.
What a silly silly silly comment. You are obviously not serious.
Here is a known fact from the geologic record. The last time CO2 was as high as it is now was 20 million years ago, and sea level was 200 feet higher than today. We have no idea how fast the ice is going to melt, but we do know that sea level has had instances of very rapid rise in the last 15,000 years. We do know that the rate of sea level rise is now accelerating. Is pointing out these facts alarmist?
Something doesn’t add up, rossi. Is the extra 200 feet of sea level hiding in the deep ocean bottoms, along with the missing heat?
Don, It’s worse than we thought.
We can’t account for the missing 200 feet of sea water, and it’s a travesty that we can’t.
Maybe it’s in the deep ocean, hiding with the missing heat?
If you’re prepared to look closely, I think you’ll find both the 200 feet of sea level and all the missing heat in Kevin Trenberth’s underpants.
No good, Anteros. ‘Felicity’ had a really good look and couldn’t find any heat down there either.
I think it has actually gone to the nether.
Gary M: Best line I’ve read all day!
Well, stating an untruth is alarmist. Have a look at the record – the slope of sea level rise has been constant for a very, very long time, but has actually *decreased* in recent years, not increased. See
http://sealevel.colorado.edu/
You may be looking at a less authentic source, or maybe none at all.
Probably GISS-adjusted sea-level, now welling over the Himalayas.
Yes, doubling CO2 is taking us back to Cretaceous levels (pre 65 million years ago). There were no ice caps in the Cretaceous; the water was all in increased sea level. Furthermore, a second doubling (barely possible by using all fossil fuels) gets us to Jurassic levels, and a third doubling (not possible, thankfully, without the help of volcanoes) gets us to Triassic levels. These were increasingly warm periods in paleoclimate. I find this very interesting as a lesson on where we stand in the long view.
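To put numbers on those doublings (assuming the usual pre-industrial baseline of about 280 ppm, which Jim D does not state explicitly):

$$ 280 \to 560 \to 1120 \to 2240\ \mathrm{ppm}, $$

each step adding roughly the same 3.7 W/m2 of forcing, since the response is logarithmic in concentration.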
Your POV doesn’t take into account, nor even raise awareness of, other potential climate factors that could have led to warmer conditions 65M years ago. It is not at all a given that CO2 was the culprit!
You may want to start with plate tectonics: Two events in the past 35M years have very likely contributed to substantial planetary cooling, namely the establishment of the Drake Passage (30-35M yrs ago) and more recently the closing off of Isthmus of Panama (5M yrs ago). Both events have impacted ocean circulation patterns and each is suspected of contributing to the formation of the North and South polar ice caps, respectively.
Until you account for these two events (and possibly others), it is pretty difficult to assign blame to Cretaceous CO2 levels for the lack of polar ice caps back then.
You may also be aware of Jan Veizer’s benthic foraminifera studies, which indicate tropical sea surface temperatures over the Phanerozoic (the past 500M yrs or so) have been remarkably stable and do not appear to be correlated to the geological record of prevailing CO2 concentrations at the time (e.g. GEOCARB III).
Paul, the global temperature cannot be affected by ocean circulations alone, and the Cretaceous was warm enough to prevent ice caps, which had not existed since the Permian 250 million years ago, which coincidentally was the last time CO2 had low values like now, just prior to the probably volcanically-induced climate change leading to the Permian-Triassic extinction event and the high Triassic CO2 levels. The sun was even a little cooler in these previous periods, so higher CO2 alone won’t explain all the differences, and may underestimate them if the sun isn’t accounted for.
If a change to the ocean circulation patterns leads to the establishment of a permanent large ice sheet (as happened with the one that formed over Antarctica), this impacts the overall planetary bond albedo, and that will have an impact on the global temperature. Indeed, one can see significant global cooling visible in the geological record just around the time of Antarctica’s initial glaciation.
It isn’t just about ocean circulation: Ice caps tend to form in the presence of large land masses at the poles. When the continents are not near the poles, we have seen little in the way of glaciation take place.
However, we did see a deep glaciation occur some 300M yrs ago (the “Gondwanan Ice Age”), when CO2 levels were some 15 times current levels. :-)
Actually, it was the Ordovician Glaciation I was thinking of (420M yrs ago) where CO2 levels were approximately 15 times today’s… mea culpa!
Coincided with a large super continent at the South pole…
@ Jim D | February 27, 2012 at 11:25 pm |: Paul, the global temperature cannot be affected by just ocean circulations,
Jim D, GLOBAL temperature doesn’t get affected by ocean circulation; global temp is always the same. Ocean circulation affects / CONTROLS the climate. When circulation increases / decreases, it improves / deteriorates the climate in many places. Nothing to do with the PHONY global warming. I’m glad that I can be of some help to you; unfortunately, for a “closed-parachute brain”, medical help is more appropriate.
The sun only needs to be a few percent fainter, as it was in previous glaciations, to permit ice at the poles even with ten times as much CO2. The Ordovician was also possibly after a period of declining CO2 implying cooling, but temperature and CO2 estimates back then are a bit fuzzy.
@ Jim D | February 28, 2012 at 1:03 am
Jim D, the sun doesn’t get fainter – look at the size of it. They tell you that the sun gets fainter; but that is misleading. Yesterday was created some big sun-flares; sunlight comes here in 8 minutes, no delays – you will see that is not going to be warmer. Only damages to some electronics; but because the temp is controlled by O+N expanding / shrinking INSTANTLY in change of temp – expanding when warmed / shrinking when cooled extra = overall is same temp always. Stop worrying!!! Tell this to people that brainwashed you – make them to worry.
They are lying, because they have many megaphones like you; they know that a megaphone needs only a battery, no brains necessary. Be happy, let the big Swindlers worry.
Does this mean that I may be able to take the kids to see real dinosaurs in a park?
Paul, you are 90% correct; Jim D is 101% WRONG!!!
For a start, for Jim D to state that ”the sun was a little bit cooler 500M years ago”… the man doesn’t know what the word ”shame” means; it’s a symbol of power madness… Yes Paul, H2O controls the climatic changes 100%, in many different ways. Big changes happened in closing the gap between South and North America, opening the Bering Strait, opening the Strait of Gibraltar. It is not just the opening / closing by itself; that changes the directions of currents, which effects changes in places far, far away. The shonky science in the past never used facts and common sense – they were pinning the climatic changes on solar activity, which is 101% wrong… then they became CO2 + methane molesters / jihadists.
All you need is to compare Brazil’s climate with the Sahara’s. For the jihadists those two places have the SAME climate… because of the same amount of CO2, same solar / galactic influences. Most of the ”climatologists” are big-city swindlers… Paul, if you drive from the east to the west coast of the USA, or Australia, in one week you will encounter 50 different climates. Is it that 50 GLOBAL warmings happened in that week, or was it more or less H2O present in a particular area?! People that cannot understand that climate becomes more extreme without water, and a milder-temperature CLIMATE comes with lots of water present, are ”premeditated mass murderers”. They are blaming ”water vapor” for the phony GLOBAL warming. If the truth be known, by building extra dams to prevent floods and droughts, not only would loss of lives be prevented, but extra water on the land = less dry heat created, more moisture in the air for regular rain + day / night temps closer together + more raw material for renewal of ice on polar caps and glaciers, and lots of other benefits.
Their massive drivel silences the truth, but the truth always wins in the end. It’s all on my website and in my book. Sophisticated swindlers cannot change the laws of physics. There is NO such thing as GLOBAL warming, or a GLOBAL ICE AGE. When part of the planet gets warmer than normal, another part MUST get INSTANTLY colder than normal. Oxygen + nitrogen regulate the overall temperature to be the same every day of every month, year and millennium. Proven already ”beyond any reasonable doubt”.
Sounds like strong evidence against a CO2-temperature link. CO2 up but no sea, see.
How many years of declining sea level are required to falsify the claim that sea level rise is accelerating? Most of the true believers have stopped making the sea level claim because it just calls attention to the recent measurements that sea level has been falling, not rising at an accelerating rate.
Cagw_skeptic99 –
You should know by now that nothing in all this is *ever* falsifiable. Nor designed to be. If sea level doesn’t rise, this will be explained by tweaking the Models, blessings and peace be upon them.
Well, I have been informed, all the water that’s missing is apparently flooding Australia. Apparently, as Australia is at the bottom of the Earth the rainwater doesn’t flow back to the sea, but sits around doing nothing, like evaporating or anything.
Is pointing out these facts alarmist? Yes, Ross, as you’ve done it, it is. Read Lindzen’s remarks. Pointing out sea level rise is trivially true, but making the leap from that to 200 feet is alarmist.
Ross, you are wrong that the rate of sea level rise is accelerating. In the Houston paper it is decelerating, and others concur with that.
Ross,
Why do you think the two are connected?
Something doesn’t add up, rossi. Is the extra 200 feet of sea level hiding in the deep ocean bottoms, along with the missing heat?
Obviously not, as your tone indicates.
But has it occurred to you that it could hide on top of Ellesmere Island, Greenland, and Antarctica? Do you have a physics-based reason why this could not happen?
This material is largely recycled from previous talks, so we don’t have anything new to address in it. Lindzen steers clear of the last 30 years for good reason. Had he calculated how much warming his 1 C sensitivity would have given, it would have been less than half of what was observed. He then would have had to say where he thought the rest came from, which he has not explained, at least publicly. For 1900-2000, his expected warming would have been near 0.35 C, only half of what actually occurred, even with the negative effect of aerosols that he doesn’t believe in (somewhat in a minority there). When he says that evidence suggests lower sensitivity, he is referring to his own study of tropical west Pacific clouds during ENSO cycles, with cloud-forcing changes that he infers somehow apply globally to the CO2 effect. He found this inference hard to publish: tropical Pacific clouds – fair to present but disputed by other later studies; global application to CO2 – a step too far.
His last sentence is ironic, in that it expresses certainty in his own low-warming prediction, despite his earlier caution about listening to people who say things are incontrovertible, presumably meaning anyone besides himself. Apart from that, global warming is incontrovertible in the temperature record, and that is the only sense in which I have seen the term applied in an official statement by any scientists.
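Jim D’s 0.35 C figure is easy to reproduce. A minimal back-of-envelope sketch, using the standard simplified CO2 forcing expression F = 5.35 ln(C/C0) from Myhre et al. (1998); the concentrations are approximate round numbers, and non-CO2 forcings are ignored:

```python
import numpy as np

# Approximate CO2 concentrations (ppm); illustrative, not authoritative.
c_1900, c_2000 = 295.0, 370.0

f_century = 5.35 * np.log(c_2000 / c_1900)  # CO2 forcing over the century, W/m^2
f_2x = 5.35 * np.log(2.0)                   # forcing per doubling, ~3.7 W/m^2
sensitivity = 1.0                           # K per doubling, Lindzen's figure

print(f"CO2 forcing 1900-2000: {f_century:.2f} W/m^2")
print(f"expected warming at 1 C/doubling: {sensitivity * f_century / f_2x:.2f} K")
# prints roughly 0.33 K, against the ~0.7-0.8 K observed
```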
Jim, see my response to Colose upthread. Basically, Lindzen is looking at all forcings excluding the very uncertain aerosols. In AR4, the summary forcing diagram shows those forcings to be north of 3 W/m2, pretty close to the alleged value for a doubling of CO2. I would urge you to look in the same diagram at the error bar on the aerosol forcings. Lindzen also has some references to modeling papers where the aerosol “adjustment” is explained along Lindzen’s lines.
The aerosols are uncertain but centered on -1.5 W/m2. How does he justify ignoring a first-order term like this? Just acknowledging the central value throws his sensitivity out the window. He might even have heard of global dimming, which occurred during the period of greatest aerosol growth, 1950-1980. It just seems irrational to claim no effect without reasoning it out.
The global dimming / brightening produces a number of problems for the models in the AR4 (e.g. Romanou 2007); see, e.g., Ohmura 2009:
Global solar irradiance showed a significant fluctuation during the last 90 years. It increased from 1920 to the 1940s–1950s; thereafter it decreased toward the late 1980s. In the early 1990s 75% of the globe indicated an increasing trend of solar irradiance, while the remaining area continued to lose solar radiation. The regions with continued dimming are located in areas with high aerosol content. The magnitudes of the variation are estimated at +12, −8 and +8 W m−2 for the first brightening, the dimming, and the recent brightening periods, respectively.
Observations from surface actinometric stations in the South Pacific have a number of confounding attributes; e.g. Wild:
evidence for a decrease of SD from the 1950s to 1990 and a recovery thereafter was also found in the Southern Hemisphere at the majority of 207 sites in New Zealand and on South Pacific islands [Liley, 2009]. Liley [2009] pointed out that the dimming and brightening observed in New Zealand is unlikely related to the direct aerosol effect, since aerosol optical depth measurements showed too little aerosol to explain the changes. On the basis of sunshine duration measurements he argued that increasing and decreasing cloudiness could have caused dimming and brightening at the New Zealand sites.
Hatzianastassiou 2011 shows that observations in the 21st century also constrain the supposed understanding, e.g.:
An overall global dimming (based on coastal, land and ocean pixels) is found to have taken place on the Earth under all-sky conditions, from 2001 to 2006, arising from a stronger solar dimming in the SH (delta SSR = -3.84 W m-2 or -0.64 W m-2/yr) and a slight dimming in NH (delta SSR = -0.65 W m-2 or -0.11 W m-2/yr), thus exhibiting a strong inter-hemispherical difference. Dimming is observed over land and ocean in the SH, and over oceans in the NH, whereas a slight brightening occurred over NH land, with the SSR tendencies being larger in the SH than in the NH land and ocean areas.
The Southern Hemisphere has undergone significant dimming due to a larger increase in cloud cover than in the NH, which has dominated the slight dimming from increased aerosols. The indicated SSR dimming of the Southern Hemisphere at the beginning of this century demonstrates that much remains to be learned about the responsible physical processes and climatic role of cloud and aerosol feedbacks.
The largest aerosol effect is that on cloud albedo, especially over oceans, so perhaps it makes sense that it is seen in the SH. Anyway, why doesn’t Lindzen talk about any of this?
The aerosols over the ocean in the SH are mostly natural, i.e. biological; over land, where we have good optical resolution (e.g. lidar), the counts are mostly negligible, i.e. insignificant for an anthropogenic contribution (e.g. Liley 2009).
Jim D; You are not just WRONG, but back to front on everything also.
1] ”Aerosols” are used exclusively for confusing Smarties like you. Aerosols have no influence on temperature. Aerosols, helium, ozone are in the stratosphere – they don’t circulate up and down to exchange more / less heat; that is the job for oxygen + nitrogen. Aerosols cannot warm the stratosphere; stratospheric temperature is always the same, because the diameter of the earth’s orbit is 30 light minutes big. The velocity the earth travels through that coldness is incomprehensible only for the shonky science. Whatever you have learned from the book for brainwashing is preventing you from seeing anything regarding climate in proper perspective.
If you clear the mud from your head, you will be able to see clearly that the ”drivel with confidence” people like you, Brandon Shollenberger and others are saying wouldn’t make sense to an earthworm. Nothing personal, just friendly advice; try to think for yourself, instead of using the crap dished out by the propaganda establishment, or the book for brainwashing (created by amateur climatologists / geologists in the past 100y). Jim, CO2 absorbs more heat than O+N, but CO2 absorbs much more coldness than O+N at night. THOSE 2 FACTORS CANCEL EACH OTHER!!!! Only a Flat Earther believes that there is 24h sunlight on the whole planet; think about it… Some day you will have to justify the lies that you are spreading, even though others invented them.
stephanthedenier,
+1
Aerosols, helium, ozone, are into the stratosphere – they don’t circulate up and down to exchange more / less heat; that is the job for oxygen + nitrogen.
Hard to imagine anyone more clueless about atmospheric pollution.
Los Angeles had its brown cloud in the 1960s and into the 70s. Now India and China have their brown clouds. These are nowhere near the stratosphere. Check your facts first.
@ Vaughan Pratt | March 1, 2012 at 2:45 am |
Aerosols, helium, ozone, are into the stratosphere – they don’t circulate up and down to exchange more / less heat; that is the job for oxygen + nitrogen.
Hard to imagine anyone more clueless about atmospheric pollution.
Los Angeles had its brown cloud in the 1960s and into the 70s. Now India and China have their brown clouds. These are nowhere near the stratosphere. Check your facts first.
Vaughan, the brown cloud is from SOOT, CO (carbon monoxide) and SO2; NOT from aerosol, helium, ozone. Inefficient burning of fossil fuel, because of depleted oxygen in those areas you mentioned. It proves that badmouthing the creation of new methane is one of the biggest crimes. Only creation of NEW methane reverses the damage / IMPROVES THE OXYGEN LEVEL IN THE ATMOSPHERE. All proven already. You, Vaughan, are a big part of that crime, jihadist against CO2, CH4. I hope you will get the appropriate penalties, what you deserve, not more or less.
Campaign to Repeal the Climate Change Act – Prof Richard S. Lindzen Seminar (Global Warming: How to approach the science) held at the House of Commons Committee Rooms Westminster, London on the 22nd February 2012
http://www.youtube.com/watch?feature=player_embedded&v=Wy50yaBIDPE
THE CLIMATE CHANGE ACT RECONSIDERED – PART 2 of 2 (credits re-edited). A public meeting held in the UK House of Commons
http://www.youtube.com/watch?v=HpvJbBgYF4E
Part 2 of the House of Commons session in which Lindzen participated has the relevant energy policy discussions.
Paul, just to note that the 2nd video, “Climate Change Act Reconsidered-2.mov”, was from the previous meeting at Westminster on 30th November 2011, and not the Prof Lindzen meeting.
For the CO2 advocates out there – the evidence? I was at both, and so have first-hand observation.
Hi Simon (http://twitter.com/#!/SConwaySmith/), I think that it is also worth mentioning that it was Professor Ian Plimer who gave the science presentation at that Nov. meeting. I’m disappointed that I missed that one and made sure of going to the Feb. one (were you there too?).
On that twitter page you suggested to Climate Resistance (I hate it when people hide behind those false names) “ .. May be worth sending the report to the Sky Dragon authors (John O’Sullivan, Tim Ball) & others etc. for a full review .. ”. If that report were a pseudo-scientific one (like the IPCC’s SPMs) then in my opinion that would be an appropriate place for it. Having had close dealings with John and his “Slayers” regarding their publishing company Principia Scientific International (PSI) since Dec. 2010, that is not a place where sound scientific arguments originate. Extensive comments on that subject can be found on Professor Curry’s “Letter to the sky dragons” thread (http://judithcurry.com/2011/10/15/letter-to-the-dragon-slayers/), which has now notched up 1,177 comments, but none from you – perhaps you missed that thread, but it’s worth a visit.
In your comment of 15th May 2011 @ 4:41 AM EDT on the Climate Realists blog (http://climaterealists.com/index.php/contact.php?id=7457&page=2) you said “ .. As they say, “follow the money” .. ”. Climate Realists has close associations with the “Slayers” and PSI through co-founder Hans Schreuder (PSI’s Chief Finance Officer (http://climaterealists.com/bios.php) and “follow the money” might give a hint as to what the “Slayers” are about (http://www.gofundme.com/1v39s and http://wattsupwiththat.com/2011/04/08/help-asked-for-dr-tim-ball-in-legal-battle-with-dr-mann/).
There’s much more on that subject on “Letter to the sky dragons” (e.g. see http://judithcurry.com/2011/10/15/letter-to-the-dragon-slayers/#comment-134549 – it’s long but worth spending time on)
BTW, what is the origin of the false name ilma630 ?
Best regards, Pete Ridley
Ross Cann,
Your comment “that the rate of sea level rise is now accelerating” is very interesting. This is so especially since sea level was in fact dropping over the last few years, and is only rising slightly in the most recent period. The average rate over the last 5 or so years is near zero. How do you get an acceleration out of that?
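For what it’s worth, the conventional test for acceleration is a quadratic fit to the sea-level series, with the acceleration read off as twice the quadratic coefficient. A minimal sketch, using synthetic data (not real altimetry values):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 19.0, 0.1)  # years since the start of a ~19-year record

# Synthetic series: a purely linear 3 mm/yr trend plus noise, so the
# true acceleration is zero. Real altimetry would be loaded here instead.
sea_level = 3.0 * t + 4.0 * rng.standard_normal(t.size)  # mm

c2, c1, c0 = np.polyfit(t, sea_level, 2)  # h(t) = c2*t^2 + c1*t + c0
print(f"fitted rate ~ {c1:.2f} mm/yr, acceleration ~ {2.0 * c2:.3f} mm/yr^2")
# Over a short, noisy record the acceleration estimate is small and
# statistically fragile, which is why a few flat years fuel the dispute.
```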
Leonard; because of the movement of the tectonic plates – they buckle = in some places it appears that the water is rising, while other places, such as some atolls / islands, are sinking. We should be grateful for it. Why?
If that wasn’t happening in the past… with high erosion from the hills by water and winds, not one speck of dust by now would have been dry!!! My conservative calculation says there is enough water on the planet to cover ALL the soil with 1.9 km of water. Not because of CO2 or any phony GLOBAL warming, but because of the amount of water. In my book I have 3.5 pages on that subject. The only reason we have dry land is because other places are sinking – that gives people the idea of ”sea rising / falling”.
Another phenomenon: 80% of all the water in every sea and ocean combined is below 4C. Water below 4C shrinks when it warms up and expands when it gets colder. Experiment: put a bottle of seawater at 4C in your freezer – by the time it cools by a few degrees, the bottle will explode. Then fill a bottle with seawater at zero degrees and warm it by 3-4 degrees – the water will shrink by 5-6%. If that was a 1 km deep ocean, it should shrink a lot WHEN THE WATER IS GETTING WARMER!!! They are not just wrong, but back to front on most of the subjects. When part of a tectonic plate is sinking, it gives an illusion that the seawater is rising. Illusion is not science – but it is used by lots of shonky scientists as factual.
The problem as I see it is that, to coin a cliche, the devil is in the details. If you give a short presentation in general terms to a non-scientific audience, you can prove just about anything you want, with no-one to say you’re wrong. The reason that Lindzen’s perspective is not widely accepted within climate science resides in details that are not in the talk, and which an audience unfamiliar with climate data would be unable to judge in any case.
Although I wouldn’t be vehement about it, I tend to agree with Chris Colose and Jim D on many of the specifics. I find it particularly unfortunate that Lindzen seems to cling to the argument that aerosol cooling is just a fudge factor invented to make the mainstream arguments fit the observations, an argument that sometimes seems to have achieved mythical status in some blogosphere commentary. We don’t know everything about aerosols during the twentieth century, but we do know a lot, including evidence for their potent “global dimming” effects from about 1950 through the late 1970s. The legitimate grounds for discussion and disagreement would remain within the boundaries of the magnitude of aerosol effects, but not with the claim those effects were negligible or non-existent.
This has actually been discussed fairly extensively in many past and recent threads, and interested readers might want to go back to look at those discussions and visit the relevant references to studies by Martin Wild, Gregory and Forster, and analyses by Isaac Held, among others. The relationship of aerosol forcing to model projections has also been discussed, and in addition to the above items, AR4 WG1 Chapter 9 is worth revisiting, even though the language is sometimes dense and ambiguous. There are uncertainties to be sure, but not at the level implied by Lindzen.
Fred –
Are you saying that Lindzen shouldn’t give talks at the House of Commons?
Is it much different from Al Gore touring the world giving talks to tens if not hundreds of thousands of people – with ‘information’ that Rich Muller, at least, describes as all either misleading, exaggerated or wrong?
Lindzen is giving his opinion – it’s a talk, not a scientific paper, and if, as Chris Colose so obnoxiously puts it, his ideas are “too stupid to acknowledge”, then how is this talk going to affect anything?
Anteros – I’m not saying Lindzen shouldn’t talk to the House of Commons.
On the other hand, by posting his talk here, Dr. Curry appears to be making its scientific content the focus for discussion, and her comments seem to support that inference. This troubles me because the scientific content of the talk was too general (as I mentioned above) to be dissected as a scientific talk as opposed to a politically oriented one, and so what we’re left with are simply the topics in the talk, for us to discuss on some other basis than what Lindzen said to the House of Commons.
That’s fine, except for a few problems. First, the number of topics was far too great to address adequately in a blog thread. Second, and more important in my opinion, there exist numerous informative scientific publications addressing each of these topics, any of which would be a good starting point for fruitful discussion. It therefore concerns me that this blog recently appears often to be using news articles and public talks as a basis for climate science discussion rather than scientific communications replete with specific data. The latter do come up at times, but disappointingly seldom recently, in my perception.
The use of news articles and talks generally provokes a great deal of arguing, but I believe more actual understanding would emerge if we started with published articles or other legitimate sources of data, such as material presented at meetings and, occasionally, Internet content from individuals not involved in partisan controversy. Dozens of potential starting points are published every week, so there’s no dearth of material for serious discussion, if serious discussion is a goal here in preference to argumentation.
Fred –
I take your point and don’t really disagree with you.
However, what you’d like to see on this blog is, I think, very different from that desired by the majority. There are plenty of places where the recent literature of climate science can be discussed – I don’t think that is what Dr Curry’s blog is about, for the most part.
It’s more to do with multiple different approaches and perspectives – perhaps many of which you find a bit trivial or superficial. It has its merits, though, for people looking to explore things other than consensus climate science.
I think you’re right about Lindzen’s talk – it was perhaps too broad and discursive to be a useful blog subject – though I’m glad to have the links. Perhaps it would have been better for Dr Curry to excerpt 3 or 4 of the most interesting/convincing/contentious points and examine them in some detail. Or even just one!
I think maybe I’m nearer the other end of the spectrum from yourself – I tend to get most interested here when the subject moves towards the history and philosophy of science, and perhaps the psychology of our beliefs about the future. Less specifically of the ‘Climate’, and more of the ‘Etc’, even though I do have an interest in quite a few of the pure science topics.
Anteros – Thanks for your thoughtful response. I agree that it’s reasonable to have a mix of technical and non-technical subjects here, even if I personally would like to see the proportion shifted a bit more to the technical, which are currently in the minority. My only real complaint involves the use of non-scientific sources to launch a discussion of scientific topics at the technical level. If we’re going to discuss how aerosol forcing is handled, for example (a technical topic), I would prefer to start with recent data on this issue rather than a casual (and I believe inaccurate) remark by Lindzen suggesting that aerosols have been used simply to make predictions match observations. The same applies to newspaper articles as the basis for claims that models haven’t predicted recent temperature trends – a topic more complicated than implied by the news article.
If the topic is philosophical or social, then sources relevant to that topic, including the popular media, would be perfectly reasonable.
Anteros, you say that Lindzen’s talk “was perhaps too broad and discursive to be a useful blog subject”.
I disagree. Lindzen’s talk was definitely broad, and it would be unreasonable to expect anyone to respond to everything he said in a single comment. However, it would be easy to respond to an individual point, or even several points, Curry highlighted. That it would be practically impossible to discuss everything Lindzen covered at the same time in no way makes it impossible to have valuable discussions of things he said.
Anteros –
Borrowing a strategy from the warmistas (playing the authority card): Lindzen doesn’t just have AN opinion. As the department head at MIT, his opinion carries magnitudes more weight than even an informed opinion, much less merely “his opinion.”
If the warmistas pretend that the climate head at MIT doesn’t have an opinion worth listening to, it says a lot more about the warmistas and their closed minds than it does about Lindzen. Yet they’ve tried to marginalize him since day one, pretending among themselves that they don’t see or hear him. I imagine they all have their eyes closed and fingers in their ears and are going “Lalalalalalala” to keep from hearing him.
Steve Garcia
Fred, Lindzen has a reference for the aerosol adjustment factor remark. It was made by a modeler in a refereed paper. Lindzen is good at this kind of thing, paying careful attention to detail. You should consider it as a technique for finding out the truth behind the science.
Fred,
I got the distinct impression that Dr Curry pretty much skipped over the detailed science part of his presentation. The point under discussion, as I take it, has to do with how the debate is being framed, in particular the “doom” aspects that get promoted.
Fred,
I share your wish for a higher proportion of technical discussion on this blog. But I’m afraid that I have a more cynical view than you of the (ab)use of aerosol forcing by climate scientists who wish to claim the climate sensitivity is relatively high.
For instance, Hansen concluded last year that AOGCMs were mixing heat into the ocean too fast, and that this must mean their aerosol forcings were too low, since otherwise they would have produced too high a twentieth-century warming. He ignored the possibility that the models’ climate sensitivity might be too high, because he is certain that about 3C is the correct value. What happens next? Surprise, surprise, the GISS model aerosol forcings are changed to make them much more negative – now reaching -2.4 W/m^2 in 2010!
And a peer-reviewed study a year or two ago found a significant negative correlation between GCMs’ climate sensitivity and their aerosol forcings. This clearly implied that modellers were altering their aerosol forcings so as to bring their model projections into line with other models (and hence the IPCC 2-4.5C sensitivity range).
Aerosol forcing has in fact been tightly constrained by studies that estimated it simultaneously with climate sensitivity, using temperature measurements for several latitude bands. For instance, based on multiple temperature measurements, Forest et al. (2006) estimated total (direct + indirect) anthropogenic aerosol forcing in the 1980s (when it was probably at its highest) as -0.5 W/m^2, with a 5-95% range of -0.75 to -0.13 W/m^2 (relative to pre-1860 levels). That is far weaker than the total forcing, of the order of -1.5 W/m^2 or stronger, usually claimed by those who believe that climate sensitivity is high (3C or above). Even the IPCC’s own best estimate of total aerosol forcing is only -1.0 W/m^2 (change from 1750 to 2005, Fig. 2.20, AR4 WG1 report).
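To see why the assumed aerosol forcing matters so much to the derived sensitivity, here is a minimal energy-balance sketch. Every input is an illustrative round number, not a value taken from any of the papers cited above:

```python
# Inferred sensitivity S = F_2x * dT / (F_ghg + F_aer - Q), where Q is
# the heat still flowing into the ocean. All numbers are illustrative.
F_2X = 3.7    # W/m^2 per CO2 doubling
DT = 0.8      # K, roughly the 20th-century warming
F_GHG = 3.0   # W/m^2, non-aerosol anthropogenic forcing (approx.)
Q = 0.5       # W/m^2, ocean heat uptake (approx.)

for f_aer in (-0.5, -1.0, -1.5):
    s = F_2X * DT / (F_GHG + f_aer - Q)
    print(f"aerosol forcing {f_aer:+.1f} W/m^2 -> inferred sensitivity ~ {s:.1f} C/doubling")
```

With the weak, Forest-style aerosol forcing the inferred sensitivity comes out near 1.5 C; with the strong forcing it comes out near 3 C. That is the whole argument in miniature.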
“And a peer-reviewed study a year or two ago found a significant negative correlation between GCMs’ climate sensitivity and their aerosol forcings. This clearly implied that modellers were altering their aerosol forcings so as to bring their model projections into line with other models (and hence the IPCC 2-4.5C sensitivity range).”
Nic – I don’t think you understand how models are constructed. The study I assume you refer to was Kiehl 2007 (not one or two years ago), and there is no reason to believe that it has anything to do with modelers altering their forcings to match observations. For more on this, please see my earlier comment on the claims about aerosols as “fudge factors”. However, for a more expert source, you should consult people who actually construct models for a living. One of them, Andy Lacis, has been commenting on this blog, and you can also contact Gavin Schmidt or read the RC description of model construction.
On the other hand, if you have direct knowledge of aerosol forcing being prescribed in a model (as opposed to derived), and being used as a means of making the projections “come out right”, you should post the evidence in detail. I think you will find that to be a myth. There are still problems with getting aerosols right, but repeating myths about their use to match projections with observations won’t help solve them.
I don’t think this has anything to do with how “cynical” one is, but simply with how knowledgeable one is about how models are made.
Fred, You are right about the Kiehl study: I should have said a few years ago, not a year or two ago. A minor point.
I realise that in many cases aerosol forcings are derived internally in GCMs; I didn’t imply otherwise. But the derived forcings can be changed by altering the relevant adjustable parameters, to make the model results more in line with what the modeller thinks they should be. I have not seen anything to convince me that it is a myth that this is done. To quote from Bender (2008) “A note on the effect of GCM tuning on climate sensitivity”:
“At present, climate models are tuned to achieve agreement with observations. This means that parameter values that are weakly restricted by observations are adjusted to generate good agreement with observations for those parameters that are better restricted…”
That statement fits the aerosol forcing case perfectly. Naturally, modellers would rationalise and defend the adjustments that they make.
Nic – Two comments. First, as you point out, parameters are often tuned to observations within the limits of the underlying physics, but this is to ensure that the model correctly simulates climate in its control state, without any imposed forcing from CO2 or another variable. The model must correctly simulate seasonality, latitudinal differences, air and ocean circulation, and other attributes. Having done that, the modeler then “forces” the climate with the factor of interest, e.g. CO2, and asks how well it simulates the trend in comparison with observations. If it does well, that’s good. If it doesn’t, that’s too bad, but the model is not then tuned to match the trend, either by changing aerosol forcing or other inputs. The notion that models are tuned to make their simulations “come out right” is one of those enduring myths that keeps surfacing like the Loch Ness Monster, no matter how many times it’s shot down.
The above refers to simulations used as projections. Models can also be used to better define the values of parameters of interest, by “inverse modeling”, in which various values of the parameters are tested to determine which best permits the model to match observations (a toy sketch of the procedure follows below). Note, though, that a model simulation performed for this purpose is not then cited as an example of how well models make projections. The model simulations referred to, for example, in the projections cited in AR4 WG1 Chapter 9 are examples of forward rather than inverse modeling.
Second, and probably more important, the people to ask if you want further confirmation of this don’t include me, with my outsider’s knowledge, but rather the ones who construct climate models for a living. In particular, you should contact Gavin Schmidt, because Gavin is now accustomed to hearing this claim, and to explaining how aerosols are actually incorporated into the models, along with links to actual model details. I’m sure there are others who could do the same, but I’m most familiar with Gavin’s explanation on this topic.
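As promised above, a toy sketch of inverse modeling. The “model” and every number in it are hypothetical, purely to show the procedure of scanning a parameter for the best match to an observation:

```python
import numpy as np

def toy_model(aerosol_forcing, ghg_forcing=2.6, response=0.5):
    """Hypothetical one-line 'climate model': warming (K) as a linear
    response (K per W/m^2) to the net forcing. Purely illustrative."""
    return response * (ghg_forcing + aerosol_forcing)

observed_warming = 0.8  # K, an illustrative target, not a real dataset

# Inverse modeling by brute force: scan candidate aerosol forcings and
# keep the one whose simulated warming best matches the observation.
candidates = np.linspace(-2.5, 0.0, 251)
simulated = np.array([toy_model(a) for a in candidates])
best = candidates[np.argmin(np.abs(simulated - observed_warming))]
print(f"best-fit aerosol forcing: {best:.2f} W/m^2")  # about -1.0 here
```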
Fred and Nic, as someone who is very familiar with turbulence models, I can say the process is not that objective. Terms are added all the time to better correlate with specific cases. Tuning to match the current climate almost guarantees worsening the correlation in some other situations. The problem is that with the aerosol forcing at 1.5 +/- 1.1 W/m2, there is no rational way to set it except by matching observations. This is what Lindzen is referring to.
I would argue that any other method is even worse.
David – It’s important to distinguish the parameter tuning needed to establish a good simulation of control climates – something done routinely – from tuning designed to make a simulated trend better match the observed trend. The latter isn’t done.* In particular, aerosol forcing isn’t adjusted to make the simulations better match observations. Again, I think Gavin Schmidt would respond to direct inquiries on this matter – they needn’t be on RC or part of some ongoing debate. I also know there’s some archived RC material on this, but I don’t remember exactly where to find it.
*More precisely, in previous discussions, including one with Dr. Curry on a different blog, he stated that he is unaware of any models where that has been done, and that includes the GISS models he has worked on.
It is good when discussions evolve to a technical level like this.
I agree with Fred that it is important to distinguish between forward and inverse approaches to aerosol understanding, and the implications of each to, say, attribution. For example, the claim by the Curry and Webster uncertainty paper that inverse aerosol estimates represented a circular argument to the attribution problem was just wrong.
However, there is in fact a large degree of inverse correlation between model estimates of climate sensitivity and aerosol forcing, at least up to the CMIP3 generation of models. There are multiple interpretations in the literature, but because of possible conditioning of model ensembles on historical climate change, it is not appropriate to view the agreement in simulated and observed time-evolution of global surface temperature as a formal attribution. This, however, was not the basis for attribution in AR4, and the point will be emphasized even more in AR5. Formal attribution doesn’t concern the amplitude of simulated change, but the patterns among various forcings in time and space. Amplitudes are determined by regressions, and model tuning has no significant impact on the detectability of the various forcings.
The distinction between cases used for parameter setting and actual “real” simulations is artificial, and perhaps exists only in the minds of the modelers themselves. From an operational point of view, over time more data accumulates and the number of “tuning” cases increases. The problem then becomes more and more challenging because, in the case of turbulence modeling, the models are in fact much poorer than the “users” of those models realize, or more accurately poorer than they are willing to admit. I see no evidence that it is any different in climate science. There are only 2 possibilities (a toy illustration follows the list):
1. You add more terms and thus more tunable parameters to be able to fit more data.
2. You accept a high level of error for cases other than those you used for tuning.
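A toy illustration of that trade-off, using polynomial fitting as a stand-in for a subgrid model: adding tunable parameters improves the fit to the tuning cases while degrading predictions outside them. The data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: np.sin(2.0 * np.pi * x)     # the unknown "physics"

x_tune = np.linspace(0.0, 1.0, 12)            # the tuning cases
y_tune = truth(x_tune) + 0.1 * rng.standard_normal(x_tune.size)
x_new = np.linspace(1.0, 1.5, 50)             # cases outside the tuning range

for n_params in (3, 6, 10):                   # polynomial degree + 1
    coeffs = np.polyfit(x_tune, y_tune, n_params - 1)
    tune_err = np.mean((np.polyval(coeffs, x_tune) - y_tune) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - truth(x_new)) ** 2)
    print(f"{n_params:2d} parameters: tuning MSE = {tune_err:.4f}, "
          f"outside-range MSE = {new_err:.3e}")
```

The many-parameter fit hugs the tuning data and fails badly outside it; the few-parameter fit is worse in-sample but degrades more gracefully.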
I can’t show you the data on turbulence models here. There are literally hundreds of them, not counting the many variants of each one. The better modelers set their constants based on cases where analytical solutions are available, for example an infinite flat plate in incompressible flow with no pressure gradients, a very special case. Relatively small differences in parameters can make differences of 20% in total forces, a very large difference, even for simple cases that differ from the “tuning” cases. The problem for climate science is that there are no simple cases for which analytical solutions are available. There is no alternative but to admit that your subgrid model parameters MUST depend on numerical artifacts and parameters, which is not a good situation. This is yet another form of circular reasoning.
Climate models must use turbulence models, perhaps called subgrid models by the modelers, but the range of scales is much larger than, for example, in an aerodynamic problem. The prospect that they are even remotely accurate is nil. And yet Gavin Schmidt said to me that he had “never heard of Reynolds averaging as a significant source of error.” That’s totally understandable; it’s not his field, so one would expect him to rely on people outside of climate science. Who are they?
My question to you Fred and you Chris, is what other technique would be appropriate for setting the aerosol forcings as a function of time? Should it be based on prejudice or a desire to make the sensitivity turn out a particular way?
In any case, if the modelers were actually aware of the facts and data, they would realize that the subgrid models (and aerosol models can be considered one of these) can have a huge impact and are in fact pretty badly wrong if you stray far from the cases used to set them. The kind of tuning Lindzen talks about is far more “scientific” than the alternatives, in my view. Basically, more and more data enables you, hopefully, to expand the range of applicability of the subgrid models. However, there is no guarantee of this. In other regimes, the assumption that the terms should be combined linearly has little justification and is based more on hope than science. How do you know the functional form of the terms is correct? The usual answer is that in a particular case the data seem to be reasonably accurately matched using this functional form. There are sometimes simple analytic theories that can be used, usually of very limited applicability.
Anyway, I get tired of people who have no knowledge of subgrid models talking about how they are used and tuned. Climate scientists, so far as I have been able to determine, are merely users of these models and don’t understand their underlying “theory”, such as it is.
By the way, how is tuning an aerosol model any different than tuning the forcing scenario? They seem equivalent to me.
‘Extensive experience over several decades shows that computational atmospheric and oceanic simulation (AOS) models can be devised to plausibly mimic the space–time patterns and system functioning in nature. Such simulations provide fuller depictions than those provided by deductive mathematical analysis and measurement (because of limitations in technique and instrumental-sampling capability, respectively), albeit with less certainty about their truth.
AOS models are widely used for weather, general circulation, and climate, as well as for many more isolated or idealized phenomena: flow instabilities, vortices, internal gravity waves, clouds, turbulence, and biogeochemical and other material processes. However, their solutions are rarely demonstrated to be quantitatively accurate compared with nature. Because AOS models are intended to yield multifaceted depictions of natural regimes, their partial inaccuracies occur even after deliberate tuning of discretionary parameters to force model accuracy in a few particular measures (e.g., radiative balance for the top of the atmosphere; horizontal mass flux in the Antarctic Circumpolar Current).’
‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full
These are hollow men we are ‘debating’ – TS Eliot
….
Between the idea
And the reality
Between the motion
And the act
Falls the Shadow
For Thine is the Kingdom
Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow
Life is very long
Between the desire
And the spasm
Between the potency
And the existence
Between the essence
And the descent
Falls the Shadow
It seems an impossible task to bring these people into a reasoned discourse – it is all shadow. It is a descent into madness – and of course they can’t see it. So why debate? We need to talk past these people and address the market place of ideas directly.
We should be confident because we are right and they are just hollow men with an empty narrative.
Robert I Ellison
Chief Hydrologist
One other thing, Chris and Fred. Subgrid models of turbulence use the doctrine that turbulent fluid has an effective viscosity higher than the laminar fluid, in some cases substantially higher. Thus, the models add dissipation, the ever-present devil destroying the accuracy of simulations. In fact, of course, the subgrid models are too dissipative, resulting in excessive damping of the dynamics. Rather like the leapfrog filter used in climate models that adds deadly dissipation to correct a well-known issue with the leapfrog method, an issue well known since I was in graduate school (and that was a long time ago). There is ample evidence that the models use the very best methods of the 1960s. Pekka agrees about this, incidentally. Controlling dissipation is critical to accuracy in any numerical simulation. Chris, I suggest you look up Runge-Kutta and Backward Differentiation schemes so you can straighten out the modelers. The problem here is that excessive dissipation produces exactly the outcome that Schmidt claims as validation of the “doctrine of the attractor”, viz., that the models are totally wrong when integrated for a week, but if integrated for 100 years give a climate that “looks reasonable” and always seems to produce the same statistics. This is circular reasoning if I have ever seen it.
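To see the filter dissipation concretely, here is a minimal sketch of leapfrog time-stepping with a Robert-Asselin filter on the oscillation equation dy/dt = i*omega*y, whose exact solution has constant amplitude; the filter strength gamma is an illustrative choice, not a value from any particular climate model:

```python
import numpy as np

def leapfrog_ra(omega=1.0, dt=0.1, nsteps=2000, gamma=0.05):
    """Leapfrog integration of dy/dt = i*omega*y with a Robert-Asselin
    time filter of strength gamma applied to the middle time level."""
    f = lambda y: 1j * omega * y
    y_prev = 1.0 + 0j                      # filtered value at step n-1
    y_curr = np.exp(1j * omega * dt)       # exact value at step n (startup)
    for _ in range(nsteps):
        y_next = y_prev + 2.0 * dt * f(y_curr)                      # leapfrog
        y_filt = y_curr + gamma * (y_prev - 2.0 * y_curr + y_next)  # RA filter
        y_prev, y_curr = y_filt, y_next
    return abs(y_curr)  # exact amplitude is 1 for all time

for gamma in (0.0, 0.01, 0.05):
    print(f"gamma = {gamma}: amplitude after 2000 steps = {leapfrog_ra(gamma=gamma):.3f}")
```

With gamma = 0 the amplitude stays essentially at 1; switching the filter on damps the physical mode steadily, which is exactly the artificial dissipation being complained about.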
Some of David Young’s recent comments are interesting, and I’m always glad to learn from his expertise in fluid dynamics. On the other hand, the topic has strayed a bit from the original. To get back to that, it’s simply worth noting that models aren’t tuned, by aerosol forcing adjustments or anything else, to make their projected trends match observed trends. The notion that Lindzen seems to have promoted, that aerosol adjustments are used as “fudge factors”, is incorrect.
It will be worth getting further input from Andy Lacis or Gavin Schmidt, if they stop by, because they can not only describe the details of how aerosol forcing is in fact handled by models, but can also link to descriptions of model architecture to reinforce the point. In the meantime, if I can find an earlier discussion of this topic by Gavin, I’ll link to it.
Fred, the question is: how is “tuning the aerosol forcings” any different from “tuning the aerosol subgrid model”? I claim they are probably mathematically equivalent. Regardless of the modelers’ rationalizations, the mathematics is correct. If you allow me to tune the aerosol model, I can generate any forcing you want. If that weren’t the case, the subgrid model would be wrong. Bear in mind that the error bar is 200% of the median value. If you allow me to tune the constants in a turbulence model, I can get virtually any answer you want. You know, Fred, you are using words that describe the process the modelers go through, not the mathematical effect of what they are doing.
David – I guess I don’t really understand your point. The point I was making is that once the models are run and generate a trend, the modeler doesn’t go back and tweak parameters so that if it’s run again, it will match the trend better. In other words, it isn’t tuned to make it “come out right”. I thought I had made that clear, but maybe I didn’t.
When it comes to constructing the model, a number of tunable parameters are adjusted so that the model can simulate the control climate – seasons, latitudinal differences, as well as some of the fluid dynamics you’re familiar with (although Gavin points out the tunable number is small). These adjustments must remain within the boundaries of what is physically and observationally plausible. However, this doesn’t guarantee that the model will simulate a CO2 forcing well, nor does it tell the modeler what the climate sensitivity of the model will be, and in fact, there is no way for the modeler to make the sensitivity come out to be some desired value.
The result is that the modeler can’t dictate how skillful the model will be in predicting trends, and if it isn’t skillful, the modeler can’t do further tuning to fix that. Lindzen’s suggestion that models adjust aerosol forcing to make the modeled trends match the observed trends is false.
I do, however, suggest that further discussion would benefit from input from people who construct models for a living.
Fred,
There’s a lot of discussion by Gavin in the comments in this thread, particularly in his comments to Judith Curry
http://www.collide-a-scape.com/2010/08/03/the-curry-agonistes/
Also, maybe I’m belaboring the point, but remember that Kiehl showed that for good trend simulation, the ratio of model climate sensitivity to aerosol forcing should remain within certain limits. However, as I mentioned, the modeler has no idea what climate sensitivity will emerge from his/her model. Even if the modeler wanted to fit aerosol forcing to sensitivity, he or she wouldn’t know how to do it.
But again, I’m hoping for some input from Gavin, Andy, or others.
Fred, you are missing the point. The distinction between tuning runs and “real” runs is totally artificial. If modelers are doing their job, for which I as a taxpayer am paying them a lot of money, they are constantly including more cases in their tuning runs. If they aren’t, they are using unscientific prejudice to set parameters. Trust me on this: subgrid models are a pseudo-scientific area where rigor is left behind and dogma prevails. The results are only to be believed within the range of the tuning runs.
The problem with models and parameters is more than a tuning issue. Even when tuned to several observed variables, as James McWilliams notes above, the Navier-Stokes solutions continue to diverge into the future, for which observations of course do not exist.
http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=sensitivedependence.gif
This is because – with the best will in the world – the input parameters are not constrained sufficiently to constrain the exponential divergence of plausible solutions. We don’t know the inputs to the precision required to constrain the equations. This is the deterministic chaotic nature of the equations – an understanding of which is needed to understand climate models.
The extent of divergence – or irreducible imprecision, in the terms of McWilliams – can only be estimated from a systematically designed family of models. That is, the models are run repeatedly with various combinations of feasible initial and boundary conditions. The number of possible combinations of feasible formulations is very large, so systematic evaluation of irreducible imprecision is lacking in practice.
Consequently the plausibility of the solution is determined on the basis of – wait for it – ‘a posteriori solution behavior’. That’s right folks – they pull it out of their arses.
Robert I Ellison
Chief Hydrologist
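To make the exponential divergence Chief describes concrete, here is a minimal sketch using the Lorenz-63 system, the classic toy chaotic model (not a climate model): two runs started 1e-10 apart in one variable end up as far apart as any two random states on the attractor.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One classical 4th-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-10, 0.0, 0.0])   # perturb one component by 1e-10

for step in range(1, 3001):
    a, b = rk4_step(a), rk4_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
```

The separation grows by many orders of magnitude and then saturates at the size of the attractor, which is the irreducible-imprecision point in miniature.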
It’s getting late and my wine glass needs refilling with my Januik Cabernet, the 23rd best wine in the world in 2011 according to Wine Spectator. Let me just say that a good reference here is Wilcox’s book on turbulence modeling; all the problems are laid out there. The trouble is that practitioners of “colorful fluid dynamics”, or “continuous fraud and deceit”, or “climate modeling”, are usually totally ignorant of these considerations.
Chris – Thanks for the link. It was exactly what I was searching for. You can start at about comment 334 to read the exchange between Gavin and Judith Curry. Anyone reading that and not convinced that there is absolutely no tuning to make model trends match observations must, I believe, have a mind set in cement. The tuning is a myth, and I would hope that exchange settles it in the eyes of open-minded readers, because the myth is one that is often repeated in the blogosphere and gets in the way of legitimate discussions of model design and the role of aerosols.
Chief, Your quotes from the literature are very illuminating. I think you and I agree about most of the important points. Now if we could just get the “pissants” to see the light.
Best,
However, there is in fact a large degree of inverse correlation between model estimates of climate sensitivity and aerosol forcing, at least up to the CMIP3 generation of models. There are multiple interpretations in the literature, but because of possible conditioning of model ensembles on historical climate change, it is not appropriate to view the agreement in simulated and observed time-evolution of global surface temperature as a formal attribution.
The CMIP3 models were incorrect here (Ohmura 2009), i.e. on the early surface brightening. The models have a wide spread under both clear and all-sky conditions, and the non-interactive versions fail to capture the observed decadal variations, due to reduced degrees of freedom, as seen in independent surface observations.
Wild and Schmuki 2011 are critical of the models and their application, e.g.:
The inability of climate models to simulate the full extent of decadal-scale variability is not just seen in SSR as documented in the present study, but also in other simulated climate elements such as the tropical top of atmosphere radiation budget (Wielicki et al. 2002), tropical precipitation (Allan and Soden 2007), the hydrological cycle in general (Wild and Liepert 2010), soil moisture (Li et al. 2007) and surface temperature/diurnal temperature range (Wild 2009b). Of course these elements may not be entirely independent, and misrepresentation of decadal variations in one of these, such as the SSR discussed here, may strongly impact the simulation of others. Further work is necessary to disentangle to what extent these underestimated decadal variations are due to an underestimation of forced or unforced climate variability.
The inability of current GCMs to reproduce observed decadal scale variations does not imply that climate change scenarios (which typically target more extended timescales) are biased. On these longer, multi-decadal to centennial timescales comparison with observations shows good agreement where feasible, despite suppressed decadal variations (e.g. IPCC 2007; Wild 2009b). However, the shortcomings discussed here may have implications for shorter-term climate projections up to a few decades ahead where these strong decadal variations may dominate.
Indeed, Gavin’s and Chris’s arguments are incorrect. The difficulties with the surface radiation budget have prompted a number of researchers to suggest changes; the recommendations (from GEB) include:
• The prominent picture of the Global Energy Balance in the IPCC report needs substantial revision. In particular, the surface flux estimates need to be revisited, and uncertainty ranges should be added to all components.
• A continued and expanded operation and maintenance of a well calibrated network of long term surface radiation stations is required to provide direct observations and anchor sites for satellite-derived products and climate model validation, as well as for the detection of important changes in the radiation fields either not detectable by satellites or anticipated by models. The basic measurements include the four primary components (up and down, longwave and shortwave irradiance) with high temporal resolution (minute values) and known accuracy (BSRN accuracy standards).
• These high accuracy observation sites should be expanded to under-represented regions of the globe (such as many low latitude areas) and particularly oceans where alternate or modified observational strategies might be necessary
Fred, I looked at that thread between Gavin and Judith. I must say that this definition of tuning is quite narrow. It is clear that tuning was not done to match the surface temperature record, nor to get a particular sensitivity, but in comment 338 Gavin does say that they try to match the -1 W/m2 aerosol indirect (cloud) effect, based on Hansen’s median estimate of this effect. This matching might be regarded as a tuning of some sort.
maksimovich
The AR4 models typically underestimated the degree of decadal surface solar radiation variations, probably largely due to uncertainties in global emission inventories and indirect effects on clouds
‘A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere temperatures and winds, cloud heights, precipitation, and sea level pressure. Data–model comparisons continue, however, to highlight persistent problems in the marine stratocumulus regions.’ Schmidt et al 2006
The models need to successfully mimic nature – this is especially the case where the fundamental physics is uncertain or measurement limitations exist – clouds and sulphates, for instance. The fundamental principle of modelling is to make successful comparisons with empirical data, and that occurs by way of adjustment of parametrised inputs. It is typical warminista nonsense to suggest otherwise.
While models are a perfectly reasonable means of exploring the physics of the system, that by no means implies they can mimic a system as complex as Earth’s climate from first principles, or that they have any worth at all in prediction, for the reasons given above.
Robert I Ellison
Chief Hydrologist
Chris Colose,
I note also in Schmidt et al 2006 a reference to persistent problems in the marine stratocumulus regions. Most amusing.
Robert I Ellison
Chief Hydrologist
Chris Colose.
Indeed, Hatzianastassiou 2011 found that in the 21st century the SH SSR change was -3.84 W m-2 (-0.64 W m-2/yr) and the NH -0.11 W m-2/yr.
As clouds are the predominant problem in the SH, confidence in Hansen’s assumptions is low.
@ Chief Hydrologist | February 28, 2012 at 9:56 pm |
Chief, does it say in those models which horse is going to win the Melbourne Cup in 2100? It would be much easier to predict the Cup winner than the exact climate 82 years from now, because many more factors influence the CONSTANTLY changing climate.
Fred, I note that the Lord Gavin has not come to your rescue on this thread despite your desperate pleas. Gavin’s a smart guy, but he has sold his soul to the idea of “communication of science”, a jealous god who generally rips its votaries to shreds. I would suggest that there are a lot of other scientists who understand models at least as well as he does. Not that I claim to be superior to him, but you know science is about testing your mettle against other scientists. By the way, we need you to weigh in on Judith’s latest post on models. Fred, where are you?
Actually Fred, Petr Chylek has done some good work on aerosols and shown them to have far less cooling impact than the IPCC would like to admit. The data is on Lindzen’s side.
Chylek has done great work in the Arctic as well – http://www.lanl.gov/source/orgs/ees/ees14/pdfs/09Chlylek.pdf
This one suggests that mixed black carbon and sulphate increases the warming.
Warming influenced by the ratio of black carbon to sulphate and the black-carbon source
M. V. Ramana, V. Ramanathan, Y. Feng, S-C. Yoon, S-W. Kim, G. R. Carmichael and J. J. Schauer
Fred,
Please see my response to Chris Colose:
http://judithcurry.com/2012/02/27/lindzens-seminar-at-the-house-of-commons/#comment-178867
Also, I would like to ask your opinion about the aerosol question. Lindzen is quoted above by Judith as saying:
“If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C. The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.”
Judith says that this “is an oversimplification of how climate sensitivity is determined in the conventional way”. But is it? How can climate sensitivity be estimated without estimates of aerosol and solar forcing entering at some point?
The AR5 ZOD Chapter 10 says,
“The analysis of individual forcings is important, because only if forcings are estimated individually, can fortuitous cancellation of errors be avoided. Such a cancellation of errors between climate sensitivity and the magnitude of the sulphate forcing in models may have led to an underestimated spread of climate model simulations of the 20th century (Kiehl, 2007; Knutti, 2008)”.
Later,
“Knutti (2008) and others argue that the agreement between observed 20th century global mean temperature and temperature changes simulated in response to anthropogenic and natural forcings, should not in itself be taken as an attribution of global mean temperature change to human influence. Kiehl et al. (2007), Knutti (2008) and Huybers (2010) identify correlations between forcings and feedbacks across ensembles of earlier generation climate models which they argue are suggestive that parameter values in the models have been chosen in order to reproduce 20th century climate change. For example Kiehl et al. (2007) finds that models with a larger sulphate aerosol forcing tend to have a higher climate sensitivity, such that the spread of their simulated 20th century temperature changes is reduced. Stainforth et al. (2005) find that the spread of climate sensitivity in the CMIP3 models is smaller than the spread derived by perturbing parameters across plausible ranges in a single model, even after applying simple constraints based on the models’ mean climate. Schwartz et al. (2007) demonstrate that the range of simulated warming in the CMIP3 models is smaller than would be implied by the uncertainty in radiative forcing.”
“Since in standard detection and attribution analyses the amplitude of the responses to various forcings is estimated by regression, the possible tuning of models to reproduce 20th century global mean temperature changes will have almost no effect on the detectability of the various forcings. Similarly this will have almost no effect on estimates of future warming constrained using a regression of observed climate change onto simulated historical changes. The spatial and temporal patterns of temperature changes simulated in response to the various forcings would be hard to tune in a model development setting, and it is these which form the basis of most detection and attribution analyses. Nonetheless, these results do suggest some caution in interpreting simulated and observed forced responses of consistent magnitude as positive evidence of model fidelity, since there is some evidence that this might arise partly from conditioning the model ensemble using historical observations of climate change (Huybers, 2010; Knutti, 2008).”
While it is obvious that analysis of individual forcings is important, I fail to see how it defends against researchers' bias toward finding an answer within the canonical IPCC range (2–4.5 K). (Cue for someone here to tell me that IPCC scientists don't have a bias. :)) There is, after all, still a huge range of values in the literature to choose from.
The Knutti (2008) paper argues that Kiehl (2007) has probably shown that the aerosol forcing is weaker than previously expected, although Knutti fails to draw the obvious conclusion, i.e. that this would imply lower climate sensitivity; the IPCC ZOD in turn fails to mention Knutti’s opinion at all. Huybers (2010) goes even further in suggesting that there is evidence that compensation between various feedbacks in the models may be the result of tuning during model development to find sensitivity within the expected range. Or to quote Peter Huybers,
"More plausible is that model development and evaluation leads to an implicit tuning of the parameters, as suggested by Cess et al. (1996). As another example, of the 414 stable model versions Stainforth et al. (2005) analyzed, six versions yielded a negative climate sensitivity. Those six versions were apparently subjected to greater scrutiny and were excluded because of nonphysical interactions between the model's mixed layer ocean and tropical clouds. Scrutinizing models that fall outside of an expected range of behavior, while reasonable from a model development perspective, makes them less likely to be included in an ensemble of results and, therefore, is apt to limit the spread of a model ensemble. In this sense, the covariance between the CMIP3 model feedbacks may be symptomatic of the uneven treatment of outlying model results."
In a very recent paper (Schwartz, 2012) it says,
“Examination of the relation between the values of Str [transient sensitivity] and Seq [equilibrium sensitivity] determined by this analysis and the twentieth century climate forcing used to infer the sensitivity from the observed increase in GMST [global mean surface temperature] … shows distinct anticorrelation; that is, a low forcing yields a high sensitivity, and vice versa. … The anticorrelation between inferred equilibrium sensitivity and forcing found here indicates that the only way that Earth’s equilibrium climate sensitivity could be as great as the central value of the IPCC estimate, ΔT2× = 3 K, would be for the total forcing (recall that the forcing corresponds to the period 1900 – 1990) to be about 0.8 W m-2. Such a low forcing, which is at the low end of the IPCC “very likely” range, would require a rather large negative aerosol forcing to offset the forcing, by the well mixed greenhouse gases…”.
Schwartz goes on to look at why related studies found much higher climate sensitivities. These studies were Gregory and Forster (2008) and Padilla et al. (2011). He writes,
"The sensitivities determined in those studies are somewhat to substantially greater than the values determined for the forcing data sets examined here …. Correspondingly, the total forcings over the twentieth century employed in these analyses were lower to considerably lower…".
In the case of Gregory and Forster, who find a climate sensitivity of 3.5 K, he points out that they used a forcing data set that was even lower than the low end of the 'very likely' range in the IPCC AR4.
So, I fail to see how Lindzen’s point is not perfectly valid and supported by the literature.
Good post, with a good coverage of the literature.
Alex – You raise a number of points that might be addressed individually, but here I’ll only address the “tuning” issue, because it seems to be a source of many misconceptions. I’ll also repost the link to the collide-a-scape page where Gavin Schmidt and Judith Curry discuss it. The most relevant comments are from about 334 to 378. The bottom line is that there is no tuning of models to make their trend simulations match observational data.
There have been suggestions that perhaps there was not explicit tuning, but rather a subtle, implicit form of tuning based on parameter choices made during model construction. For example, could the GISS modelers, faced with more than one realistic choice regarding aerosols, have picked the one they judged most likely to make their trend simulations best match observations? Unless Gavin is not telling the truth, the answer is no. There is no explicit tuning and no implicit tuning.
This doesn’t mean of course that modelers don’t make choices that affect model performance. What that discussion I linked to says is that those choices are based on a judgment of what choice best fits the available data, and not on what choice the modeler guesses might make the model trends “come out right”. Gavin gives specific examples of the sources used for aerosols in the GISS models. In addition, as I mentioned earlier, any attempt to guess would probably be unproductive, because making a particular parameter choice rarely gives modelers a clue as to how the model will behave in general. Modelers can’t make tweaks to have climate sensitivity come out the way they want, and since good model skill at trends requires a good balance between sensitivity and forcings, they therefore can’t tune the model to achieve that balance.
Either the Lindzen suggestion that aerosol adjustments are fudge factors is false, or the modelers are lying. I don't think the modelers are lying, and so it appears that Lindzen isn't telling the truth.
Fred,
I really don't believe that any models as complex as, and similar in nature to, the big climate models are free of implicit tuning. Anyone claiming otherwise without strong qualifications is telling untruths. Certainly many model builders have not understood this, but exactly those people are the most likely to draw erroneous conclusions concerning the effects of implicit tuning.
Pekka – Gavin Schmidt says there is no implicit tuning. If you disagree, you should write to him to explain why he is wrong, and if he responds, share the response with us.
It appears from the link I cited that there is no implicit tuning designed to improve the model simulations of trends. Until contrary evidence is presented, I have to assume that the experts who design models for a living know what they are talking about, and that claims for tuning are therefore wrong. Parameter choices done to get the best fit to existing climates are not tunings of this type.
Finally, in the dialog cited, there is a reference to a Hansen et al 2007 paper on forcings that includes a small section on inverse modeling of some aerosol choices. It appears that different levels of aerosol forcing in that model had only very minor effects on trend performance.
Fred,
My view is based on very generic thinking about the processes used in creating large models. Every single choice the modelers make while having any idea of its influence on the outcome involves implicit tuning. It is well known in many fields that the cumulative influence of these innumerable choices is large and that it's essentially impossible to tell what all its effects are. What I know about climate models tells me clearly that they must be influenced by these issues more than models in many other fields, where the issue is already severe enough.
The simple, well-known fact that many different models with significantly differing amounts of aerosol forcing agree more closely in their final outcome tells us that the effect of aerosols is one of those things that cannot be estimated from the success of the final results – at least until there are indisputable, explicit and independent reasons to show that all models with the "wrong" aerosol effect are irrelevant anyway.
There may be a point beyond which no subjective input is put into the models. For the stages of work beyond that point it may be possible to say that there's no implicit tuning. Up to that point it's always present, but by putting enough effort into studying the arguments for, and consequences of, the subjective choices it may be possible to get some rough hold on the size of the resulting uncertainty. Claiming that the problem does not exist is equivalent to admitting that everything is open and unknown.
Fred, “Cloud feedbacks were identified as a major source of uncertainty in climate model simulations of climate change more than 20 years ago and still remain so. In attempting to simulate the climate of the past century, climate modelers have been forced to adjust direct aerosol forcing in their models to compensate for climate sensitivity due to cloud feedbacks.”
http://www.atmos.washington.edu/~ackerman/Barcelona.html
You should straighten this guy out, he is teaching nonsense :)
Fred,
I add one piece of more specific evidence (although I don't remember the exact reference). Some time ago a paper was discussed here in which Hadley Centre modelers described how they are trying to gradually "de-tune" their models, i.e. get rid of the many kinds of tuning that have gone in to improve performance, and replace them with more equations based on fundamentals. They said this would worsen the agreement with some existing data, or over the short term, but that they must do it because the tuning may have a worse effect on the reliability of long-term projections. This is work in progress and will take long to complete. Even then much tuning will certainly remain.
Dallas – the misconception that aerosol forcing is adjusted to make the model simulations perform better is widespread, which is why it has achieved the status of myth in many quarters. The sources I linked to and the discussions show that it's a false claim. Either that or the experts who do this for a living are making false statements. Given that they provide direct evidence for the means they actually use to address aerosols, which doesn't involve choices based on how they will affect modeled trends, I expect they are telling the truth.
Some of the confusion arises because modelers do make choices. It’s just that they don’t make them with an eye to how they will affect the ability of the model to simulate temperature trends.
While there are many web myths in circulation, I think it's unfortunate that someone like Lindzen would help perpetuate this one. I believe this reflects careless thinking on his part rather than deliberate deception, but it's unhelpful in any case.
I also believe that since none of us here is nearly as knowledgeable about this as the modelers I’ve mentioned, it would be useful to have further input from them on the topic. The dialog I linked to, however, is a reasonable substitute in the meantime.
Fred, I think, is hung up on a semantic difference that is required for "communicating" in a way that makes things seem not circular. The desired semantic effect outweighs what every modeler knows. Not tuning parameters would be scientific malpractice. Fred, the errors are large because the problem is tremendously complex. Without tuning, we would be off by orders of magnitude. I've explained it as clearly as I can. As you, Fred, are fond of saying, base statements on the literature, NOT blog posts. Wilcox's book on turbulence is an excellent place to start.
David – I did look up the literature to confirm Gavin's statement. But again, since he knows more about this than you, I, or others who don't construct climate models, his statements are a good starting point, with the literature as further reinforcement.
There’s nothing semantic about it, David. Either the models are tuned by adjusting aerosols and other variables to improve their trend simulations, or they aren’t. It appears that they aren’t. They are tuned to get the basic starting climate right, but once that’s done, the model either does or doesn’t perform well on simulating trends, and if it doesn’t, it’s not tuned to make it do better.
Fred, I am not sure there is a misconception. While the models are not tuned on the fly, they are initially tuned to better match observation. The 1910 to 1940 period required strong aerosol and solar "tuning", which we have discussed in the past. Gavin stated that the 1910-1940 period was mainly solar and reduced volcanic aerosols. That is the assumption they made while setting up the model. I even noticed that a positive aerosol forcing, black carbon, was used at the end of that period.
As Pekka said, some assumptions have to be made since there are unknowns, which is effectively “tuning”, adjusting, tweaking or any other similar term. It is just part of the process.
I am not particularly sure why this is an issue. Skeptics just generally consider that the aerosol adjustments – or estimates, if you will – are overstated relative to the cloud feedback and CO2 forcing.
While the models are not tuned on the fly, they are initially tuned to better match observation.
No, they are not, if by observation, you mean the expected temperature trend. They are not tuned in order to get that right, which is one of the main points Gavin Schmidt makes, along with references to back it up.
Fred,
I’m not proposing that the models would be tuned adjusting aerosols, but I describe something which might well have happened. This is certainly highly simplified, but the basic idea is fully realistic.
1. Based on earlier analyses and their ideas of the most likely properties of the climate system, they conclude that a rather large influence of aerosols is likely and that the climate sensitivity is also relatively large.
2. Once that has been concluded, the input assumptions concerning aerosols are chosen, and other subjective choices are made consistently with them.
3. The resulting model behaves essentially in agreement with expectations and additional tuning of the model makes this agreement even better.
4. When this model is used in further testing it gives results which are largely confirmatory.
The point is that there was already a lot of knowledge available at the time of the first step, and that the modelers made certain choices at that step. They could have made other choices that have never been studied, and it's quite possible that the later steps would have been just as successful, but with the resulting model still quite different and the role of aerosols different as well. A fundamental problem is that it's impossible to prove, in general, that no other set of original choices would have led to successful further steps.
‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full
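The "structural instability" in the quoted passage can be illustrated with a toy chaotic system. A minimal sketch, with the Lorenz-63 equations standing in for a GCM and a one-percent parameter perturbation standing in for a parameterization choice:

```python
import numpy as np

def lorenz(state, r, s=10.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def run(r, n=50000, dt=0.001):
    """Crude forward-Euler integration from a fixed initial state."""
    state = np.array([1.0, 1.0, 1.0])
    xs = np.empty(n)
    for i in range(n):
        state = state + dt * lorenz(state, r)
        xs[i] = state[0]
    return xs

a = run(r=28.0)    # reference parameter value
b = run(r=28.28)   # 1% perturbation, standing in for a parameterization choice
print("trajectory correlation:", round(float(np.corrcoef(a, b)[0, 1]), 3))
print("long-run means:", round(float(a.mean()), 2), round(float(b.mean()), 2))
```

The two runs decorrelate as trajectories while their long-run statistics remain close; deciding which statistics survive which perturbations is precisely the ensemble-design question the quoted passage raises.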
This exchange of comments is growing rather long, and I’m not sure much more will come out of it without further input from professional climate model designers. They (or at least Gavin) state that when models are designed, the choices regarding aerosols and other relevant parameters are made on the basis of physics and the observed properties and concentrations of the aerosols, and are not based on assumptions about how the choices will affect the ability of the models to simulate temperature trends. In other words, the modeler does NOT say, “well if I input this level of forcing, the aerosol effect won’t be sufficient to make the model perform well, and so I’ll choose one of the other available options because I know we need substantial aerosol forcing to get the simulations right.”.
Unless the modelers are falsifying what they actually do, none of the tuning in models to get starting climates right is done with the object of making the simulation of trends from CO2 forcing come out right. It’s simply done as the best fit to the physics and observed climate properties (not trends).
This appears to invalidate claims by Lindzen and others that aerosol adjustments are used as fudge factors to improve model performance, but not being a climate modeler, I can’t add much to how the modelers describe what they do.
I’ll look forward to anything the modelers have to say here, but also repeat the recommendation to review the collide-a-scape dialog where this is discussed.
Fred, I think there is a subtle point being missed here. While Gavin may not call it tuning, it was assumed that 1910 to 1940 was natural, due to the change in volcanic aerosols, man-made aerosols and increasing solar, which at that time was based on the older Lean, Holt and Wang solar reconstructions. The initial estimates of those factors are what we consider tuning. Times have changed and the data quality has changed, so the initial estimates have changed. So should modelers "adjust" to improve the model output, or just assume that they hit it right the first time?
I think GISS has a new paper on solar cycle impact in the northern hemisphere http://wattsupwiththat.com/2012/02/29/giss-finally-concedes-a-significant-role-for-the-sun-in-climate/
One of the points is that natural internal oscillations amplify solar forcing changes. If that is true, it is not included in the models to my knowledge. Should the models be adjusted to consider the natural internal oscillations which may amplify the impact of solar variability on surface temperature?
Dallas – the 1910-1940 warming involved declining volcanism, some solar increases, and a significant contribution from CO2 and other anthropogenic ghgs. I don’t think this is relevant to alleged tuning of current GCMs to make their trend simulations accurate, which appears to be a misconception. It’s probable that some inverse modeling may have been done for that interval to get a better handle on the forcings, but that’s a different subject.
There are two possibilities concerning the aerosols.
The first is that their role is well understood based on empirical data and physics. Thus their influence is known without the help of the models. Then they can be included in the model based on this information.
The other possibility is that aerosols are not understood that well and modelers are forced to make assumptions about their properties. They already know how their choices will affect the resulting climate sensitivity when the model is developed to agree with the temperature history of the latest decades. Thus they make assumptions that they know will largely determine the climate sensitivity in their model.
Based on what is generally stated about the level of understanding, which of these choices is closer to the truth?
Are there really any more possibilities? If there are, I'm unable to figure out what they could be.
Parameter choices done to get the best fit to existing climates are not tunings of this type.
That's fair enough. The Standard Model of the particle zoo had, I think, 19 parameters a couple of decades ago, and grew to 23 a decade later; no idea what it is now. Data increases parameters and theory decreases them again, so it could have gone either up or down a parameter or three in the last decade.
What would be a reasonable number of parameters for a model of long term global land-sea climate, defined as everything slower than the solar cycles, both TSI and magnetic or Hale? (I’m assuming all parameters on which the model depends are counted; e.g. if changing the acceleration due to gravity, radius of the Earth, etc changes the model’s behavior then those should be counted.)
And should "reasonable" be a function of unexplained variance? An excellent question for our resident statistician. Matt, is there some general rule in statistics for a reasonable number of parameters as a function of anything, including unexplained variance (uv = 1 - r^2), or for uv as a function of the number of parameters?
If one could get the uv for such a model down to 1% with 6 parameters all told, .1% with 9 parameters, and .01% with 12 parameters (so 3 parameters for each decimal place of accuracy of fit), I’d call that a perfectly reasonable number to count as fitting. If the current models have 20 parameters or more however then unless it’s giving 6 decimal places of precision I’d be more inclined to call it tuning.
Where does Gavin Schmidt draw the line between reasonable fitting and unreasonable tuning?
Pekka – Modelers make choices, but unless they are not telling the truth, they don’t make those choices based on how they want climate sensitivity or temperature trends to come out, but on how their choices best fit the physics and existing observations.
Specifically regarding sensitivity, I don’t understand your point. How will a choice about aerosols affect the climate sensitivity to CO2 doubling that emerges from a model? Aerosols are not a significant feedback on CO2-mediated warming as far as I know, at least when Charney feedbacks are considered. For Earth System Sensitivity over multiple millennia, aerosols may play a feedback role, but that isn’t part of the standard sensitivity estimates.
Fred, You are repeating yourself. Tuning is essential, even of “forcings”, even in simple aerodynamic simulations. Pekka is right that implicit tuning is both necessary and standard practice. The only reason to say otherwise is for “communication of certainty”. You use as large a tuning suite as possible and hope for the best.
Fred,
I didn’t repeat one point from my earlier comment here.
Again I describe how things easily proceed. I don't make specific claims about the extent to which they have influenced climate models. The mechanisms are, however, very common, and any claim that their role is small should be based specifically on knowledge about these stages of the work.
I'm also led to simplify the argument the more I'm forced to explain what I mean.
The point is that the modelers must tune the model in the next steps to get it working reasonably well and that they have many opportunities for that.
Thus, choosing a small aerosol influence, the natural level of temperature is higher around 1960 and less climate sensitivity is needed; the additional tuning then creates a model of lesser sensitivity. The opposite is true if the aerosols are assumed to have a strong influence. The tuning I discuss above is not considered tuning, because it is done at an early stage of development, as it is already known at that point that the model will otherwise fail.
The statements about the absence of tuning refer to the latest stages of working with the models. Many choices are made earlier, at stages where the need for these choices is noticed. At the very early stage the modelers have more freedom of choice, as there are still many more opportunities to compensate for many of the consequences. Sometimes these early choices are made without much knowledge about their influence, but here we have choices whose consequences were already largely known. In such cases the choices are often made in a way that ultimately confirms what the modelers believe to be true, even if that belief is not on a strong basis.
What we know about the differences between climate models with respect to the role of aerosols appears to confirm that this is not only a theoretical worry, but a real problem in assessing the reliability of climate models.
David – You seem to be setting up a straw man. Everyone agrees tuning is necessary and is performed. The point is that models aren't tuned to make their simulations of trends come out right, contrary to what Lindzen alleges. I do hope one or more modelers stops by to confirm this, because it will dispel a myth about aerosols as "fudge factors". Unless the modelers are deliberately falsifying what they do, aerosols aren't adjusted – they aren't fudge factors.
Pekka – My earlier point remains. I don't see how making choices about aerosols affects the climate sensitivity to CO2 that emerges from models. In fact, if you read what Hansen and others say about this, they point out that their model has a particular sensitivity (e.g., 2.7 C), and that changing aerosols affects projected temperature. It doesn't affect the climate sensitivity to CO2, which doesn't depend on aerosols.
I have to say that I believe the modelers have the last word on this, and although I can’t continue here to restate what they say, they are worth listening to.
Fred
What they say is correct for their present models. What I claim is that making a different choice at an early stage in the process leads, through normal tuning practices, to a different model.
None of the statements from the modelers that you have relayed addresses this fundamental issue of model development. They all appear to apply only to situations where all those choices have already been made.
I wrote an independent comment before coming to this subthread. There I mentioned specifically that one of my main worries is related to the apparent ignorance of this issue.
In spite of these worries I'm not as skeptical of the model results as some others, including Judith, if I have interpreted her writings correctly. Most certainly I would like to have more information on these issues, and I really hope that the modelers don't avoid discussing them by limiting their comments to the present models, rather than also discussing how different the models might be if the development history had been different. (Different with respect to early assumptions and subsequent implicit tuning.)
I have to depart for a while. If anyone wants to go back to read the dialog between Gavin and Judith Curry, I think they will find that Gavin describes parameter choices in models as unrelated to how they will affect the ability of the models to simulate observed temperature responses to CO2 (or other forcings). Instead, the choices are based on the relevant physics and the properties of the particular item (e.g. aerosols). According to him, models are neither “retuned” after a run to make them perform better, nor are they “pretuned” before being tested with the goal of making them perform well.
If anyone has contrary evidence to indicate that he and others are not telling the truth, or that tuning for the purposes I mention has somehow “crept in unnoticed” at some stage, it hasn’t yet been presented here. I conclude that the Lindzen claim that aerosols are adjusted to make the model simulations come out right is false, but if some expert modelers can contribute further to this discussion, I’ll look forward to it.
Fred, Pekka’s description is correct. Tuning should be done as problems arise or new data is available. Not tuning based on your ideas about outcomes is foolish. The assertion you make is not credible, and if true would make me consider hiring a new batch of modelers.
Hi David – I think you may still be feeling the effects of last night's Cabernet. Seriously, please read what I've written (and what Gavin has written). It shows that aerosols are not adjusted as fudge factors, unless you think he is deliberately telling an untruth. I don't see much ambiguity in that claim, but readers should judge for themselves rather than merely going by the comments in this thread.
Fred,
Equations based on physics are the starting point for the models. The equations include conservation laws and other fundamental equations, like those describing the radiative interactions, thermodynamics and fluid dynamics. It is, however, not possible to solve anything realistic without additional input, like the various parameterizations of processes of smaller spatial scale or processes otherwise not covered by the fundamental equations. Furthermore, the discretization and related issues of numerical methods also influence the outcome.
Due to all these extra factors the modelers must make very many choices, and they make them in a way that is expected to lead to the best model according to their professional judgment. What the choices will be then depends on the situation in which they are made. If earlier choices have led too far in one direction, the later ones are made to compensate for that. Therefore changing the assumptions on aerosols at an early stage will influence later choices on other points. The choices affect each other in any normal model development process, but how much and in what way varies widely and depends on the goals and nature of the model development project.
Returning after a few hours away, and rereading my comments, I should apologize for the short-tempered tone of some of them. I think I was motivated by the sense that I was defending climate modelers against attacks on their veracity rather than simply describing my own views. None of us here knows as much about how climate models are designed as do the people who design them professionally, nor does Richard Lindzen. I take the modelers at their word when they state that the parameter choices (tuning) they make are designed for purposes that don’t include helping climate projections match observations. Also, in general, I take people at their word about their intentions in the absence of good evidence to think otherwise. Gavin was pretty unequivocal about the basis for parameter choices, and readers should visit his comments rather than make judgments based on mine.
I think Pekka, as always, has tried to consider all possible explanations for what goes on, and that’s appreciated. David Young is appreciated for his comments on fluid dynamics among other things, but here I thought he was being too resistant to the possibility that the modelers were acting appropriately, even though David is undoubtedly correct in some of his other concerns about model design.
Finally, even if Pekka is right about possible subtle biases creeping into the data on which the modelers subsequently base their choices, there is no question in my mind that Lindzen’s claim to the effect that aerosol forcing is adjusted as a “fudge factor” is false, and he should know better than to keep repeating it.
Fred, Pekka and I have done complex modeling. Climate models are largely complex fluid dynamics models. You are getting lost in semantics. Whether Gavin's very narrow assertion is true is independent of the truth of the assertion that aerosol models and forcings are "an essentially arbitrary adjustment factor". When models are built, and as they evolve, tuning is done. The aerosol forcing is 0.4 – 2.7 W/m2. How would a modeler make a choice based "solely on physics"? Someone claiming that is not very truthful or not very bright.
This whole attempt to discredit Lindzen seems to be too emotional to be purely scientific. It is quite possible that both he and Schmidt are roughly right. Schmidt is, however, leaving out information that people who do modeling know to be important.
David – I think you’re way way out of your depth if you think you can compare your knowledge of model construction to that of Gavin Schmidt. It truly makes you look foolish, and that won’t happen if you stay within your limits. I’m more patient than Gavin, but I can see why the people at RC might become exasperated enough to want to see no more of you despite the fact that you could say something useful.
Pekka has raised some interesting points. I'm not completely convinced by them, but his perspective is always worth considering. On the other hand, Lindzen's notion of aerosol adjustments as fudge factors is unequivocally false, unless you think the modelers are lying. There's no evidence they are, and good evidence they aren't.
If I were you, I would stop digging.
Fred, you are right back to the ignorant, authority-citing, grumpy Fred. You are ignorant of fluid dynamics and subgrid models. But I'm sure Schmidt knows more about turbulence. Get a life, Fred. If Gavin has read Wilcox, I'll stand corrected. If not, then you are very impolite in assuming others' knowledge is as limited as your own.
Pekka – You state, “It’s, however, not possible to solve anything realistic without additional input like various parameterizations of processes of smaller spatial scale or otherwise not covered by the fundamental equations.”
You also say, “If earlier choices have led too far in one direction the later ones are made to compensate for that. Therefore changing the assumptions on aerosols at an early stage will influence later choices on other points.”
I don’t think anyone disagrees with the first point, but I’m not sure whether you had something specific in mind with the second. Do you have an example of that happening in the way aerosol data have been handled?
It seems to me that some of the preceding discussion has been occurring at cross purposes. It’s argued that parameter choices must be made and will affect model performance. It’s also argued (e.g., by Gavin) that those choices are made independent of how they will affect the ability of model simulations to match observed trends.
These two arguments are not in conflict, but the second falsifies Lindzen's claim that aerosol forcing is adjusted as a "fudge factor" to make the simulations come out right. Even if some "assumptions… at an early stage" might have been made differently, Lindzen is still wrong in claiming aerosol forcing is adjusted for the purpose he claims, as long as no assumptions, parameter choices, or anything else affecting model simulations are made with the goal of influencing those simulations in a desired direction. Because Lindzen has a reputation as a respected scientist, for him to make false claims strikes me as more irresponsible than the same claims coming from people with no name recognition.
I'd still like to hear more from Gavin, Andy, or others, because they know much more about their intentions, and much more about climate model design, than you, David Young, I, Lindzen, or other relevant individuals; but the dialog in the collide-a-scape link cited above gives us a good idea of what they are likely to say. Here is the collide-a-scape link again, with the relevant discussion at about 334 to 378.
Guys, if it is any help, Gavin said that the models were not adjusted to fit observations "before 2000" and are not adjusted to match trends, but average conditions. They do get adjusted though.
“Some of the most interesting conclusions of the study include those relating to the Arctic. For example, we estimate that black carbon contributed 0.9 +/- 0.5ºC to 1890-2007 Arctic warming (which has been 1.9ºC total), making BC potentially a very large fraction of the overall warming there. We also estimated that aerosols in total contributed 1.1 +/- 0.8ºC to the 1976-2007 Arctic warming. This latter aerosol contribution to Arctic warming results from both increasing BC and decreasing sulfate, and as both were happening at once their contributions cannot be easily separated (unlike several earlier time periods we analyzed, when one increased while the other remained fairly constant). Though the uncertainty ranges are quite large, it can be useful to remember that the 95% confidence level conventionally used by scientists is not the only criteria that may be of interest. As the total observed Arctic warming during 1976-2007 was 1.5 +/- 0.3ºC, our results can be portrayed in many ways: there is about a 95% chance that aerosols contributed at least 15% to net Arctic warming over the past 3 decades, there is a 50% chance that they contributed about 70% or more, etc.”
http://www.realclimate.org/index.php/archives/2009/04/yet-more-aerosols-comment-on-shindell-and-faluvegi/#more-672
Hmmm? 1.1C +/-0.8C of warming in the Arctic from 1976-2007 possibly due to positive aerosol forcing; that might tend to de-emphasize CO2 radiant forcing a touch. I seem to recall that, with the exception of the Arctic, sensitivity to CO2 is rather small other than in the mid-latitude agricultural belt.
As Fred said, Lindzen “was” a respected scientist at one time. I wonder if he really has lost his mojo and gone Emeritus?
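The percentage claims in the quoted passage can be roughly sanity-checked. A minimal sketch, assuming (which the quote does not state) that the ±0.8 ºC is an approximately normal 95% interval, and ignoring the ±0.3 ºC uncertainty in the total:

```python
from statistics import NormalDist

# Treat the quoted 1.1 +/- 0.8 C aerosol contribution as normal, with
# the +/-0.8 assumed to be a 95% interval (an assumption, not stated).
aerosol = NormalDist(mu=1.1, sigma=0.8 / 1.96)
total_warming = 1.5   # C, observed 1976-2007 Arctic warming (its uncertainty ignored)

for frac in (0.15, 0.70):
    p = 1.0 - aerosol.cdf(frac * total_warming)
    print(f"P(aerosol share >= {frac:.0%}) ~ {p:.0%}")
```

This crude version returns roughly 98% and 55% rather than the quoted 95% and 50%, presumably because the original also propagated the uncertainty in the total warming.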
"Gavin said that the models were not adjusted to fit observations 'before 2000' and are not adjusted to match trends, but average conditions. They do get adjusted though."
Dallas – I think you succinctly stated the critical point. I bolded it to make clear the distinction between the different things adjustments are designed to match. “Average conditions” refers to the average climate behavior in the absence of a forced trend – i.e., the control climate. An example would be parameter choices made to ensure the seasons come out right, that the Sahara desert is dry, that the monsoons come on schedule, etc.
The post-1976 role of aerosols is somewhat unclear, but there is good evidence that declining cooling aerosols (e.g., sulfates) played a role, with perhaps black carbon also contributing (but probably not too much if overall aerosols were decreasing). In any case, this is one of the reasons why it's difficult to make attributions for the post-1976 interval. Post-1950 is clearer in supporting the dominant role of anthropogenic ghgs.
Fred, of course you won't respond to me directly, but your last emotional response is full of misrepresentations and ignorance. First, I left RC, they didn't leave me. Just ask Vaughan, Pekka, or MattStat why they post here and not at RC. It's because RC is a hypocritical place, censoring people they disagree with while posting very vile stuff from the peanut gallery. Also, RC is trying to control the message; that's their explicit purpose. What's the point of posting there? Why aren't you posting there, Fred?
Your assertions about my knowledge are odd. You are in fact far more ignorant of models of fluid dynamics (and climate is a very complex one of these) than Pekka or I. Whether Schmidt knows more than Pekka and I, I'm not completely sure. He is somewhat knowledgeable about the fundamentals, more so than most climate scientists. However, he has made some comments that are clearly wrong, even though perhaps they were not well considered. One that I recall was the claim that he had never heard anyone say that there were significant errors associated with Reynolds averaging. I understand why you haven't gone to some of the references I have suggested, and that's OK, Fred; even you have a contribution to make. But you should really stop the Gleick-like temper tantrums and the impugning of people's knowledge.
On the substance, it is quite possible for Schmidt's statement to be technically true and for Lindzen's statement to be operationally true. The easiest way to resolve this is for someone to tell me how you would set the aerosol subgrid model constants and forcings based on physics when the range of uncertainty is huge. Bear in mind that aerosol forcings vary a lot over time. The only scientific way to set them is to try to match some data that you have more confidence in, whether that is current climate, hindcasting, or whether they give a "realistic" sensitivity, etc. For the novices in the field, that's called "tuning". Virtually any other data is more accurate than aerosol forcing numbers, which are essentially unknown.
Further up in the thread there were numerous citations from the literature about some of the problems with the models by Chief and others. You of course ignored them, preferring to try to claim that Lindzen knows nothing about modeling, another claim based on ignorance.
Let me repeat the basic point about models in a concise form (in contrast to your typically long winded convoluted posts). All complex models require many choices in their construction. The better modelers make different choices and add terms when there are problems or new data comes to light. In virtually all cases of complex subgrid models, there are parameters that are essentially arbitrary and are “tuned” to match data. There is nothing at all wrong with this. It is the best we can do. In some cases, different parameters are used for different modeling situations. The fact that you make such a fuss to deny it is a bad sign Fred.
“Whether Schmidt knows more than Pekka and I [about climate models], I’m not completely sure.”
David – If you’re not sure, I guess you’re the only one.
Ok, so let me get this straight. You have no response to the substance but are into grumpy insulting Fred mode. Your Gleick is showing!! You of course must fraudulently insert words into my sentence that were not there. Fred, did you send that Heartland strategy memo to Gleick? Fred, you are getting desperate and you are really being a jerk.
@Fred: "Pekka – Modelers make choices, but unless they are not telling the truth, they don't make those choices based on how they want climate sensitivity or temperature trends to come out, but on how their choices best fit the physics and existing observations."
Fortunately for physics, Fred, there exist physicists that don’t think like you. (There are also physicists that do, but they play a rather different role and are unlikely Nobel material.)
A great example is Planck’s law for black body radiation. In 1900 Planck was confronted with two conflicting laws, each based on physics, namely the Rayleigh-Jeans law that worked great at low frequencies of radiation, and the Wien law that worked great at high frequencies.
Each law tended to infinity in the domain where the other law tended to zero. For laws of physics, that’s seriously messed up. If that’s not obvious to you then you shouldn’t be theorizing about radiation physics. There is no possible way of using least squares fitting to reconcile two laws that are inconsistent to that degree!
Planck had to invent something outside the known physics in order to reconcile these two absurdly inconsistent laws. Eventually he came up with a really cute little formula that brought the two laws together, but that had no physical explanation.
He then developed a version of statistical mechanics that explained his formula. In due course this explanation became the accepted physics underlying what was going on.
The key point here is that the formula came before the physics, the formula being Planck’s law. Planck did not simply fit to known physics, he invented physics.
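Vaughan's account is easy to verify numerically with the three standard formulas; nothing below is specific to this thread:

```python
import math

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8     # speed of light, m/s
T = 300.0       # K

def planck(nu):
    """Planck spectral radiance B(nu, T)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu):
    """Low-frequency limit: grows as nu**2 without bound."""
    return 2 * nu**2 * k * T / c**2

def wien(nu):
    """High-frequency limit: exponentially suppressed."""
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

for nu in (1e9, 1e12, 1e14):   # radio, far-IR, near-IR at T = 300 K
    print(f"nu={nu:.0e}  Planck={planck(nu):.3e}  "
          f"RJ={rayleigh_jeans(nu):.3e}  Wien={wien(nu):.3e}")
```

At 1 GHz, Planck and Rayleigh-Jeans agree to four figures while Wien is far too small; at 100 THz, Planck matches Wien while Rayleigh-Jeans is wildly too large. No least-squares blend of the two limits could do what the single Planck formula does.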
Substitute geophysics for physics and we have the Atlantic Multidecadal Oscillation, AMO. Unless the geophysical reasons underlying phenomena like the AMO are clear, modelers are winging it when they try to incorporate the AMO into their model. The idea that it is based on known geophysics is ludicrous. Only once we understand the AMO's mechanism can we say it is based on known geophysics. Until then there are all sorts of possible geophysical explanations, and any model that commits to one of them is simply flying on a wing and a prayer.
This is no small point given that the amplitude of the AMO oscillations is on the order of a tenth of a degree. In the grand scheme of long-term climate change, that amplitude can drown out a host of other thermal phenomena that we’d love to be able to see.
Fred, Gavin's tuning to average instead of to trend has more to do with the type of model than with a hard and fast rule. You may have missed it, but the IPCC discusses tuning in AR4, and it depends on the model complexity.
I included the bit on Arctic aerosol forcing, because that impacts the average which Gavin would tune his model to. The models would also be tuned to the lack of radiant forcing in the Antarctic and the tropics.
http://i122.photobucket.com/albums/o252/captdallas2/polesandtropicsRSS.png Or at least they should be, since they are on the verge of being falsified.
Since positive aerosol forcing is partially responsible for near 2C of warming in the Arctic, it is a tuning issue, because it is an issue.
Did Gavin happen to mention that Antarctic polar amplification is non-existent and that the warming in the Antarctic shown in GISStemp is likely an artifact of smearing? I doubt he would bring that up, but it appears to be one of the next shoes to fall, which might require some more "tuning".
@David Young: "All complex models require many choices in their construction. The better modelers make different choices and add terms when there are problems or new data comes to light."
But David, that was how the Ptolemaic theory evolved. Astronomers kept adding terms as new data (planets, longer observations) came to light.
The best modelers look for opportunities to simplify the model at hand, as the Copernican theory demonstrated for the Ptolemaic theory. Planck’s law demonstrated something similar for black body radiation, displacing what was at risk of evolving (as radiation physics matured) into a blend of Wien’s law at high frequencies, the Rayleigh-Jeans law at low, and an ad hoc piece in the middle that could have smoothly connected them to make a “Ptolemaic Planck’s law” had not Planck found his uniform law just as applied radiation physics was starting to feel the need.
Complexity can easily be an illusion. Sine waves, commonly encountered in nature, are specified by their period, phase, and amplitude, three parameters. And sums of waves also arise naturally. If you add three sine waves together the result can easily look inscrutably complex over any period shorter than the least common multiple of their three periods. That multiple will be finite when the periods are rational, but can be extremely large compared to the individual periods. For example lcm(13/15, 11/10, 7/6) = 1001 which is 858 times the longest period, 7/6, whence one must wait through many hundreds of cycles of the components to even start to detect any periodicity. Yet this seemingly non-periodic sum is modeled with only 9 parameters, and small rationals at that! Science might go for years modeling such a curve with 20 or 30 parameters while not getting as good predictive power as with the simpler and more accurate 9-parameter model.
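Vaughan's nine-parameter curve is easy to construct. A minimal sketch using his stated periods, with made-up phases and amplitudes:

```python
import math

# Three sinusoids: (period, phase, amplitude) -- nine parameters in all.
components = [(13 / 15, 0.3, 1.0), (11 / 10, 1.1, 0.8), (7 / 6, 2.0, 1.2)]

def signal(t):
    """Sum of three sine waves; looks aperiodic over windows << 1001."""
    return sum(a * math.sin(2 * math.pi * t / p + ph) for p, ph, a in components)

# lcm(13/15, 11/10, 7/6) = lcm(13, 11, 7) / gcd(15, 10, 6) = 1001 / 1 = 1001,
# so the sum repeats only every 1001 time units:
print(abs(signal(123.456) - signal(123.456 + 1001.0)) < 1e-9)   # True
```

Over any window much shorter than 1001 time units the output looks structureless, yet nine numbers specify it exactly.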
Furthermore once you’ve found the minimum number of parameters, there is a much greater chance that each term of the sum will correspond to a natural phenomenon, possibly unrecognized before, than if you artificially force every term of a 20-parameter model to the Procrustean bed of some known phenomenon.
New science is much more likely to be discovered by modelers who try to simplify their model without adhering to old science, whether physics, chemistry, geophysics, or whatever.
Vaughan Pratt: Matt, is there some general rule in statistics for a reasonable number of parameters as a function of anything, including unexplained variance (uv = 1 - r^2), or for uv as a function of the number of parameters?
There is a plethora of general rules, and they include the number of observations as well as the r^2, and the correlations of the parameter estimates (stability of the estimates).
However, data sets can be constructed to defeat any general rule, and in practice good models are selected after a thorough hashing out of all the issues, like here, and after determining which models are confirmed by other data and have correct predictions.
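One of the standard rules Matt alludes to is the Akaike Information Criterion, which trades unexplained variance against parameter count. A minimal sketch for least-squares fits, with hypothetical residuals loosely echoing Vaughan's example (lower AIC is better):

```python
import math

def aic(n, rss, k):
    """Akaike Information Criterion for a least-squares fit with k parameters."""
    return n * math.log(rss / n) + 2 * k

n = 120  # number of observations
# Hypothetical fits: residual sum of squares vs. number of parameters.
for k, rss in [(6, 1.00), (9, 0.10), (12, 0.09), (20, 0.085)]:
    print(f"k={k:2d}  rss={rss:.3f}  AIC={aic(n, rss, k):7.1f}")
```

Here the 12-parameter fit wins, and the 20-parameter fit loses despite its smaller residual, which is roughly Vaughan's line between reasonable fitting and tuning.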
Fred Moolten, quoting Dallas: "Gavin said that the models were not adjusted to fit observations 'before 2000' and are not adjusted to match trends, but average conditions. They do get adjusted though."
I think that it is impossible to tell from the published record how much tuning has occurred. It is seldom the case that authors publish exactly what they have done, partly due to page constraints, occasionally a self-delusion that a choice early on does not matter, at times a self-delusion that only parameter values that get the correct result are physically real — the list of large and small flaws is long. Fred has a confidence that no important tuning to get desired results has been done, at least not in the work of Gavin Schmidt and colleagues. Most of the rest of us who have more experience in modeling and publishing than Fred has are much more skeptical than he is.
On a previous thread I defended my use of “ad hoc” with reference to a post-prediction (post incorrect prediction) of a re-examination of the effects of aerosols in one model. I share Lindzen’s suspicion that there is more ad hoc fitting than what has been explicitly disclosed. This is one of those things on which I would like to be wrong.
The truest test of the models is the accuracy of their predictions. So far, none has been shown to be very accurate at making predictions. It is possible that they could be accurate over some long run while being inaccurate over the short run, but that has not been demonstrated either, and until it is demonstrated there is no reason to believe it.
Vaughan, I can't get this below your comment, so it's lower down. Your post on complexity is correct. What we really need in nonlinear systems is a simplifying theory that can explain things. I am a big fan of simpler models within their range of validation. One advantage of these is that they tend to be inexpensive to run, so they can be subjected to much more rigorous validation. Anyway, thanks for posting this insight.
Fred, this thread has become unreadable because of the constant recapitulation of a single talking point taken from a literal-minded, legalistic interpretation of something Lindzen may have said. What I've heard him say in the past is merely that each model uses a different value for the aerosols, and that given the lack of understanding, it can be viewed as an essentially arbitrary adjustment factor that can cancel most of the greenhouse forcing. That is far different from your prosecutor's focus on one interpretation. This is just so much focus on "atoms of scripture cast as dust before men's eyes" and not on the "main design." The fact of the matter is that there should be a lot more to the aerosol model than just the gross forcing. There is also the spatial distribution of the forcing, a critical input, and the subgrid model, which I assume must be pretty complex. But then again, given the level of ignorance, perhaps it's just a specified forcing. Let me say that the "real physics" must be very complex and involve such things as clouds, convection, etc.
As I summarized on the following modeling thread, those of us who have done complex modeling of systems similar to the climate system know that there are many serious problems having to do with tuning subgrid models and the other thousands of choices modelers make. The only way to rise above this nitpicking, vague-statements approach is for modelers to examine rigorously the sensitivity of results to these choices. That is my whole purpose in being interested in this: to try to show people that it needs attention.
The broader picture is a lot more important and actually involves trying to understand subgrid models.
David – If you hadn’t specifically addressed me, I wouldn’t add to this overlong thread. I’ve made clear the evidence I find convincing that Lindzen has been making false statements to the effect aerosols are adjusted to make models come out right. You may not be convinced. Readers can judge for themselves. I don’t know of further evidence to add, and I agree with you that there are other aspects to aerosol modeling that probably deserve more attention. I’ll leave it at that.
If climate science really understands the various factors and the relative weights of those factors, and if its models accurately represent the actual climate system, why would the models be inaccurate for near-term predictions?
A simple question to which none of those trusting climate models can provide an adequate answer. The accurate answer is that the system is not sufficiently understood.
+Lots
If your model can’t forecast a short time ahead, like next year, how can it possibly be expected to be right in 50 or 100 years?
@Latimer: "If your model can't forecast a short time ahead, like next year, how can it possibly be expected to be right in 50 or 100 years?"
Wow. I think you’ve hit the nail on the head here, Latimer. This seems to be the basic sceptic argument.
Without claiming its conclusion is right or wrong, one can at least see, as follows, that the reasoning leading to that conclusion is illogical.
Consider the religion whose deity is M*D (Maxwell’s Demon to you gentiles). In Chapter 7 of the Book of Reynolds we read “Each year M*D tosses a coin. Heads is hotter, tails is colder. M*D Himself cannot foretell the outcome of that toss. Climate hath no other driver but M*D.”
Long term, climate as governed by M*D is going to follow a random walk. While it will drift, it won’t drift rapidly, according to the nature of random walks. This makes it possible to bracket where temperature will be a century from now within reasonable error bounds.
Yet anyone selling a model that can forecast next year’s temperature is committing heresy by claiming greater clairvoyance even than M*D!
You may well not believe in M*D, Latimer. But do you still believe in your reasoning? (You did use the word “possibly”…)
Latimer, weather/climate prediction is certainly rife with wide error bands, but it has always been my understanding that the shorter the time span, the more unpredictable weather/climate is.
On the other hand, the longer the time span, the lesser the degree of unpredictability. The longer the time span the narrower becomes the error band – does it not?
The random walk argument is a powerful one, because it can also be used in a pinch to cover for all the chaotic parts of the model. Consider that chaotic motions can go in any direction, but if in the end they do follow what look like random trajectories, then those can be modeled as a random walk that reverts to a mean value (aka the Ornstein-Uhlenbeck process). The process will appear to walk randomly, but without a non-physical change in the free energy, the cumulative energy will remain what it was when it started.
The only events that can cause a sustained move away from the mean are external forcings such as GHG increases, albedo changes, and a few minor behaviors that act like triggers.
I follow this line of thinking because when all is said and done, the diagnosis will show the net energy change and any hidden sinks will be revealed. It might take decades, but I can follow along in my spare time.
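The distinction being drawn here can be made concrete. A minimal sketch, with an arbitrary relaxation rate standing in for the strength of the mean reversion:

```python
import random

random.seed(1)
n, theta = 10000, 0.02   # steps; theta is an assumed mean-reversion rate

rw = ou = 0.0
for _ in range(n):
    shock = random.gauss(0.0, 1.0)
    rw += shock                 # pure random walk: spread grows like sqrt(n)
    ou += -theta * ou + shock   # Ornstein-Uhlenbeck: pulled back toward zero

print(f"random walk endpoint: {rw:7.1f}  (typical spread sqrt(n) = {n ** 0.5:.0f})")
print(f"O-U endpoint:         {ou:7.1f}  (stationary spread ~ {(1 / (2 * theta)) ** 0.5:.0f})")
```

The pure walk wanders without bound, its spread growing as the square root of the number of steps, while the mean-reverting walk stays within a fixed band; on this picture, only a change in external forcing can shift its mean.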
But L*D is one up on M*D – because he can predict the toss.
Climate is not a random walk.
‘Most of the studies and debates on potential climate change have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. But recent and rapidly advancing evidence demonstrates that Earth’s climate repeatedly has shifted dramatically and in time spans as short as a decade. And abrupt climate change may be more likely in the future.’
Oh and Webby – let me give you a clue, otherwise you might remain clueless. The mean is halfway between a glacial and an interglacial. It last happened on a Tuesday.
Rob Starkey: If climate science really understands the various factors and relative weights of those factors and if their models were accurately representing the actual climate system, why would the models of the system be inaccurate for near term predictions?
It is possible that the model of the trend is correct (influence of CO2 change on temperature change), but that the model of the rest of the climate is unknown, and that the rest is cyclic and entered a “low” epoch of the cycle just after the predictions were made. If so, the temperature will shoot up again at a rate higher than the forecast rate, starting perhaps 2025, and by 2050 the temperature will be close to the model prediction.
Now back to your exact wording. “If climate science really understands the various factors … ,” then this won’t happen: short-term forecasts should be more accurate than long-term forecasts in that case. The lack of close agreement between temperature and forecast shows that “really [understanding] the various factors” does not characterize current knowledge.
Fred,
I appreciate your respectful tone and patience in replying to so many responses that make similar points.
You seem to have a lot of unjustified faith in the opinion of just one person – Gavin Schmidt. Aren’t the opinions of the people I cited – like Jeffrey Kiehl, Peter Huybers, Reto Knutti, Stephen Schwartz, not to mention the AR5 Chapter 10 authors – more likely to be neutral than those of a scientist like Gavin Schmidt, who runs an advocacy blog like RealClimate? I am not suggesting that Gavin is dishonest, but he is hardly neutral.
The GISS model is just one GCM out of more than 30 and would be the work of hundreds if not thousands of scientists and engineers. Moreover, Gavin himself has only been around 15 or 20 years – compared with Lindzen who has been around since the 1960s. Lindzen was contributing to GCM development in the 1970s. Why claim that Gavin should know more?
As Huybers points out, as does Kiehl, and as I saw Held point out too, many of the choices made in model development are simply undocumented. No one can claim to know whether or not there was tuning. Yet analyses such as Kiehl 2007, Knutti 2008, Huybers 2010 – even Dessler 2010, I noted – more or less prove that there has been tuning in the model development process.
Have you actually read the papers that I cite? They are now widely cited and discussed – especially the original paper by Kiehl.
Finally, Lindzen’s point is not explicitly about model tuning, so I think the points I made that you didn’t respond to may be more important.
Alex – I’ve read some of the papers, and I’ve quoted Kiehl. Gavin Schmidt has made the points I emphasized in many places, including the collide-a-scape site, but other modelers have made the same points (e.g., Jim Hansen and I think Andy Lacis). These people aren’t necessarily more expert than others in all aspects of climate, but they know much more than Lindzen and the others you mention about how climate models are constructed.
I don’t want to belabor the point, but there is no disagreement about the need for modelers to make choices. What Lindzen wrongly stated is that the adjustments (of aerosols) were made for the purpose of improving the match between simulation and observations – i.e., that they were fudge factors. I don’t believe any modeler has suggested anything but the opposite of that, and in the absence of evidence that the modelers are deliberately untruthful, I think we can conclude that Lindzen has no basis for that allegation, and shouldn’t make it.
Fred, how are the aerosol parameters and forcings set then? If it’s not to match observations, what else is there? The physics is essentially unknown according to the IPCC.
David – Please read the collide-a-scape exchanges, where Gavin goes into some detail about how aerosols are handled.
“the physics are unknown”?
What are you talking about, David? This seems to be a caricature of some of the sillier contrarian arguments that hold that if we don’t know everything, we know nothing.
Fred, you of course take single phrases out of context. Another Gleick tactic. The physics is very uncertain and is ESSENTIALLY unknown. Let’s see: 0.4–2.7 W/m2. The upper end of the range is higher than all GHG forcings and the lower bound is smaller than solar variations.
David Young, it is much worse than you say:
He didn’t take a phrase out of context. He fabricated a quote and attributed it to you.
Fred, it took me 30 seconds to find the single sentence in Schmidt’s very long dissertations that gives the method for setting aerosol forcings. They simply took the mean of the literature estimates, i.e., about 1.0 W/m2. That may explain why their sensitivity is 2.6 K, well below the IPCC mean. You know, Fred, you could have just said that if you really understood it. Just taking the “median” of the literature estimates is a punt when the range is so large and the understanding so low. But it’s certainly a legitimate way to do it. I do think that using it to match data would be a better method from a scientific point of view. That’s what is done in most cases by modelers. If something has an error bar of 160% of the median value, you treat it as somewhat adjustable within that range.
David – I think you should have read further. The models didn’t take that value as the median for aerosol forcing, but for the aerosol indirect effect. If you follow the references, it turns out that the value is based on evidence, not assumptions, including inverse modeling, and that if somewhat different values are tested, the effect on temperature change is small.
The justification for the choice is reasonable, but the point is somewhat irrelevant. Lindzen’s claim that aerosol forcing is adjusted to match trends is a false statement based on everything cited in the way of evidence, unless the modelers are being deliberately untruthful about how they designed their models.
“They simply took the mean of the literature estimates, i.e., about 1.0 W/m2. That may explain why their sensitivity is 2.6 K, well below the IPCC mean.”
David – Unless I misinterpreted your statement, you also don't seem to understand that the climate sensitivity they cite is not based on how the aerosols behave. Your statement linking the two suggests that you are not familiar enough with the concept of climate sensitivity, how it's derived, and the process by which it emerges from models.
What, Fred!! “it turns out that the value is based on evidence, not assumptions, including inverse modeling, and that if somewhat different values are tested, the effect on temperature change is small.” Are you telling me that they saw that the effect on temperature was small? What are they doing using tests against real data and looking at sensitivities of model outputs?? I thought it was set from first-principles physics!!
Fred, it’s late and past your bedtime. Suffice it to say that however the values and the subgrid models are set, they are essentially arbitrary adjustment parameters, just as Lindzen said. Each model uses a different value for the unknown.
Fred, if you had read the references earlier in the thread, there was one, I think, that examined the relationship between the aerosol forcing assumptions and the sensitivity. It’s late and I don’t have time to track it down. Perhaps someone else will. To assert that they are independent assumes that modelers don’t do “implicit” tuning to get a reasonable sensitivity. Something that I think is pretty likely.
Fred, Just to be clear. I enjoy arguing with you and like you. I can just imagine being on the patio with you smoking a cigar and enjoying a fine bottle of wine and arguing about these issues. I do get a little upset when you assume that I am ignorant of a field where I have quite a bit of expertise. And your idolization of Schmidt is somewhat odd. He is a good scientist who is perhaps too involved in “communicating” to control the message. Cheers.
David – the Kiehl reference is one I cited earlier and I explained why it doesn’t tell us anything about model adjustments to match trends. You can find my comment elsewhere in the thread.
Your other points have already been addressed as well, including the use of inverse modeling to arrive at the best estimate for a parameter. You should read those comments too. No one has ever claimed that one can derive aerosol effects from first principles without utilizing observational data. However, our knowledge of both the physics and the observed properties of aerosols is used for model inputs, and these inputs aren’t adjusted with the goal of arriving at a particular trend line.
It’s midnight here, so I’ll stop for now. Despite the heated discussion, I think I got something useful out of it. In particular, I think Pekka made a good point about the possibility that subtle biases can creep into mainstream assumptions, and when these are then used by modelers, the model itself can be biased. I don’t know whether that pertains to aerosols, but it’s a valid general point.
On the original and more specific question of whether, as Lindzen asserts, aerosol forcing is adjusted to make model trends match observations better, I think the evidence is unequivocal. Lindzen is wrong. Parameter choices in model development are made for a number of legitimate reasons, but not for the reason Lindzen claimed, and I think it’s unfortunate that he has continued to make that claim.
David – I wrote my last comment without having seen the gracious one you wrote ahead of mine. I too enjoy our discussions, and I have great respect for your knowledge. I will probably disagree with you often on matters where I think your knowledge is only part of the recipe for a good understanding, but it will still be worthwhile.
I’m a little bit less assertive than Fred about aerosol tuning, since some evidence exists to suggest that model aerosol parameters might have been conditioned on the modelers’ understanding of historical climate change. One cannot assume that modelers are completely ignorant of the existing literature on sensitivity or of observations, so choices can inherently be made on such a basis, even if unconsciously. His point that people have not played with aerosols as fudge factors to get observations right, etc., however, is correct. It’s also wrong to say none of the physics is known, though large uncertainties remain, particularly with cloud indirect effects.
It should be kept in mind that since the AR4, there have been a number of advances in monitoring and quantifying aerosol effects. There have been several measurement studies of aerosol effects, though these usually are not completely independent of modeling. It should also be kept in mind that the big issue is not necessarily how radiation interacts with aerosols, but understanding and monitoring the aerosol distribution, and the environment the aerosols are in, on a global scale. In fact, the time evolution of aerosol forcing is an even more uncertain quantity than the current aerosol forcing. When aerosol properties are known, there is skill in modeled vs. observed shortwave fluxes. In the AR5, new direct-effect RF results are based largely on simulations in AeroCom (an inter-comparison of many global aerosol models that includes extensive evaluation against measurements, such as AERONET, MODIS, and MISR data).
Regardless of any of this, it does not excuse Lindzen’s incorrect statements about aerosol treatment by modelers, nor does he get any credit for picking the very high end of the ~1-3 W/m2 uncertainty range in total RF (2010 relative to 1750). Note that the AR5 will also define a so-called ‘Adjusted Forcing’ (AF) that has a different definition than RF (allowing atmospheric and land temperature to adjust while ocean conditions are fixed), which has usefulness in aerosol discussions due to various semi-direct rapid responses, though this quantity is also largely uncertain. Regardless of how one feels about the ability of models to get aerosols down, no one would have gotten the impression from Lindzen’s talk that he carefully picked the extreme tail end of plausible forcing values to get the lowest sensitivity he could get, and then couldn’t even get into the transient vs. equilibrium issue.
This is inexcusable. As Andy Lacis mentioned, Lindzen is selling a good story, he is not selling objective science, or giving an honest representation of how the scientific community thinks about this topic.
@Fred What Lindzen wrongly stated is that the adjustments (of aerosols) were made for the purpose of improving the match between simulation and observations – i.e., that they were fudge factors. I don’t believe any modeler has suggested anything but the opposite of that, and in the absence of evidence the modelers are deliberately untruthful, I think we can conclude that Lindzen has no basis for that allegation, and shouldn’t make it.
Fred, your third sentence beginning “I think we can conclude” appears to be based on your second sentence, “I don’t believe any modeler has suggested anything but the opposite of [adjustments serve to improve the match between simulation and observations].”
I’d be fine with this with a really tiny edit: “we” –> “I”.
You have some gall attributing illogical reasoning to the rest of us. If you seriously believe the modelers have a clue about what aerosols have been doing since 1960, I would say it was time for Judith to open up a thread on that topic. (Or reopen it if we’ve already had at least one, I haven’t been keeping track.)
Can the modelers say what the effective altitude of “the aerosols” was between 1960 and 1980? Was it 2 km, 8 km, or 15 km? The first would heat the surface, the last would cool it. Is that what the models say? If not then I’d love to understand why not.
Vaughan – You objected to implications of illogicality, although they weren’t aimed at you, but I see some evidence of illogicality in your comment, in the form of non-sequiturs. The point I wanted to make in representing what the modelers state is that they don’t adjust aerosol inputs in order to make projected trends come out right. This is not the same as saying that aerosols are understood perfectly (nor that they are understood not at all). That’s where the non-sequiturs come in.
If the modelers don’t adjust aerosol inputs to improve performance, but merely handle aerosols on the basis of what is known about them, plus their observed concentrations and distribution, then Lindzen’s claim that aerosols are fudge factors is false, and I believe irresponsible.
I think others believe it to be wrong and irresponsible as well, so I probably should say we believe it to be wrong and irresponsible.
Fred
I believe you are absolutely incorrect in your assumption that modelers do not “tune” their models in regards to various aerosol forcings. That is exactly what they do in order to get the models to meet what they know about historically observed conditions.
Rob – Please see my recent comment #179684. Basically, you are suggesting that Gavin Schmidt is either a liar in stating that there’s no such tuning, or else that he doesn’t know what he’s talking about. Well, that’s fine, but don’t you think you owe it to him to say that to his face?
One way to resolve this is to contact Gavin and repeat to him what you’ve just stated, explaining why his statement is false. Then, if he responds, I hope you’ll share that with us here so that we can judge who knows more about the subject, who is telling the truth, and who is making false statements either through ignorance or design. I’m willing to assume ignorance rather than dishonesty in the absence of evidence to the contrary.
Alternatively, if you’re not willing to do that, perhaps the best thing is to avoid making definitive statements about the subject.
Fred: If you follow the references, it turns out that the value is based on evidence, not assumptions, including inverse modeling, and that if somewhat different values are tested, the effect on temperature change is small.
Could you explain what you understand by “inverse modeling”, and why that does not undercut your whole argument about the lack of tuning of free parameters? It could be something simple like the “inverse modeling” that is included in calibrating measurement instruments, or it could be just the kind of fudging that you claim is not there.
It appears that a good night’s sleep has seen the cranky Fred replaced with the careful and long winded Fred. I’m not sure which one I prefer.
So in the spirit of long winded posts, I think it will be good to put this Lindzen vs. Schmidt issue in perspective.
This business of modeling complex systems (in fluid dynamics, we do chemistry, multi-phase fluids, thermodynamics and forcings too) is still in its infancy. The issue that I think is underappreciated by climate scientists is how sensitive their results may be to modeling “choices.” Trust me on this, there are thousands of choices. Climate science is probably no worse than others in this area, but it does seem to be rare to systematically look at the sensitivity of results to these thousands of choices. Some of the simpler ones are easy to do, but it gets harder as the models get more complex. Believe it or not, there is a rigorous theory for calculating these sensitivities for systems of partial differential equations in a fast and systematic way. It is becoming more widely used in simple applications like aerodynamics or structural analysis, but even here the field is still dominated by codes that are too numerically sloppy for it to be applied in a meaningful way.
Once you start to apply this rigorous theory – and there is a big investment in code rewriting required to get to that point – you see all kinds of interesting and informative things. For example, you can actually use sophisticated optimization to determine parameters based on data. This is done all the time in geology, for example, where seismic data is used to infer underground properties.
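For readers unfamiliar with the idea, here is a deliberately toy version of “determining parameters based on data” by optimization – not any climate group’s actual procedure, and far simpler than the adjoint machinery for PDE systems described above; the model, parameters, and noise level are all invented:

```python
# Toy parameter estimation by least squares; an illustration of the idea only.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 50.0, 51)          # years
true_params = np.array([0.02, 0.3])     # hypothetical trend and cycle amplitude

def model(p, t):
    # A deliberately simple stand-in: linear trend plus a 10-year cycle.
    return p[0] * t + p[1] * np.sin(2.0 * np.pi * t / 10.0)

rng = np.random.default_rng(1)
obs = model(true_params, t) + 0.05 * rng.standard_normal(t.size)  # synthetic "data"

fit = least_squares(lambda p: model(p, t) - obs, x0=[0.0, 0.0])
print("estimated parameters:", fit.x)

# Local sensitivities near the optimum: how strongly the residuals respond
# to each parameter (columns of the Jacobian).
print("RMS sensitivity per parameter:", np.sqrt((fit.jac ** 2).mean(axis=0)))
```

The adjoint theory referred to above exists to compute such sensitivities for large PDE systems at roughly the cost of one extra model run, rather than one run per parameter.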
There is no evidence that I’ve seen that climate scientists are aware of this theory. That’s understandable since they have so many pressures to just make more runs and add more “physics” to their models.
In any case, I do think Fred would benefit from looking into Reynolds averaging, if he has the mathematical training to understand it, to get a better feeling for how subgrid models are constructed and tuned, how more terms are added over the years, and how the immense problems of validation and verification are handled (often not very well). It is fine to just repeat the words of others, but real understanding can enable you to go much further.
I still think that the focus on discrediting Lindzen is strange. Like any scientist, he is clearly wrong about some things. What is strange about it is that, I think, he has a perspective that could be very valuable to the field.
Whether aerosol models and forcings are “tuned” to match trends is a rather narrow issue without much relevance to the larger issue of model tuning and looking at sensitivity to these choices. By the way, Fred seems to have given us no insight into the aerosol interaction subgrid model itself, which surely must be complex and have lots of parameters. Tuning this model can have the same effect as tuning the aerosol forcings. So Schmidt’s comment may be technically true, but of no real significance. At least that is my suspicion, but I could be wrong on this.
The problem here is that the understanding of complex models is very difficult to acquire. I am constantly learning new things myself. The issue of the models is not well suited to the “communication of science” mode of operation. The communicator is inevitably rather ignorant of a lot of details in other parts of the models. However, the idea of sensitivity of results to inputs or choices is easier to understand. Then you can present a range of results that conveys uncertainty more effectively. It’s a constant problem: modeling is constantly used in industry and government, those doing the modeling have a vested interest in certain outcomes, and there is an incentive to present the results as more certain than they are. This is also true in medicine, even though there are more controls in place there and a wider recognition of the conflicts of interest.
The bottom line here is that whether Schmidt’s or Lindzen’s statements are narrowly true, false, or maybe half true is a very minor issue except to those, like Fred, involved in the climate war as combatants. My suspicion is that both Schmidt and Lindzen have a contribution to make. The larger point is that in fact there are serious problems with the way complex models are built, run, and their results conveyed. That explains the narrow focus on this largely irrelevant issue in this thread: it’s something we can argue about superficially rather than getting to the real issue, which requires a more serious and rigorous learning experience.
The whole debate about what constitutes tuning and what the intention of the modellers is, is just another semantic debate that has nothing to do with substance. If Joshua were here this thread would be three times as long, though Fred Moolten is doing his best to pinch hit.
That the adjustments are made is apparently not in dispute. Even how they are adjusted similarly does not seem to be the issue.
WHY they are adjusted seems to be the ball game here.
In my opinion, who cares? Modellers can’t model the climate to a degree sufficient to justify large scale policy changes yet anyway. If someone came up with a model where you could input data from any given period, and it produced a reasonable track of what actually happened thereafter, over numerous time periods, that would be of interest.
In other words, when they have a model into which they can input the initial conditions (as best we know them) of 1000, and a model run tracks reasonably well how the climate changed over the next 100-200 years, that would be of interest. But only if it also worked when you input initial data for 1500, 1750, 1000 BC, 1500 BC, etc.
But from everything I see, they haven’t even been able to model out 10-20 years from the present with any real accuracy, a period for which we have a much greater quantity and quality of data.
If the tuning regardless of motivation created models that were useful, and verifiable (kinda the same thing), then how they got there seems rather irrelevant.
I followed Gavin’s discussion of the issue on Collide-a-Scape, and I don’t remember a single skeptic or lukewarmer, at any level of sophistication, changing position based on the semantics. I don’t see the CAGW believers here doing any better job of it.
Gary – Please see the above discussions, where your points have already been addressed.
Fred,
Thanks for the condescending reading advice, but I had read the thread previously. Which is why I wrote the comment I wrote. I followed the discussion by Gavin at Collide-a-Scape, as I said, and I see nothing added by you to what he wrote there. In fact, he explained his position re models much more coherently, in my opinion.
You suffer from the same myopia Gavin did. After days of participating in open discussion on numerous issues on Kloor’s blog, Schmidt was amazed that others, Dr. Curry in particular, still disagreed with him.
Not because they didn’t understand what he said, not because they were pawns of big oil, but because they came to different conclusions after reviewing the same facts he did. It seemed a novel concept to him.
What he failed to understand, and you do as well, is that there is a great deal of subjectivity in coming to ultimate conclusions. He came from the perspective that those who disagreed with him either did not know what the facts were, were not sufficiently qualified to properly analyze the facts, or were simply too biased to realize their errors.
He was, quite simply, befuddled by Dr. Curry’s responses in particular, which met none of those stereotypes. You are in the same position. You’re just a lot more verbose about it.
GaryM, you say:
This is a common thing for Gavin. He did basically the same thing on the very same blog, back when Mann 2008 was criticized over the Tiljander issue. He repeatedly expressed confusion and amazement at the people who disagreed with him, though, as was noted at the time, he didn’t actually address what they were saying.
On an interesting note, he’s since admitted what they were saying was right (in a couple comments at RealClimate). He’s never retracted anything he said on the issue previously, and he’s never gone back to Kloor’s blog to say, “Hey guys, you were right.” In fact, he’s pretty much never discussed the topic again.
Gary,
Sorry, the relativistic viewpoint gets no points. A lot of interpretations of data come down to being subjective, but what is or is not done in the GISS model is not one of them. That our understanding of internal variability at the beginning of the 20th century does not reflect on attribution efforts for climate change in the latter half of the century is just a fact. Judith Curry did not understand the science, or her own logical fallacies, on both of those points. Some things are good to debate; others just require familiarity with what is done in a particular field. As it happens, Gavin works with the GISS model extensively and also works on implementing and understanding various solar reconstructions, for example. Other people don’t get a free license to make stuff up, even if some of the science is uncertain.
What he failed to understand, and you do as well, is that there is a great deal of subjectivity in coming to ultimate conclusions
One scientist’s subjectivity is another’s illogic. Maybe one day logic will have room for subjectivity, but as any Star Trek fan will tell you, that day is still well in the future on Vulcan. As well as in the faculty lounges of the physics departments of MIT, Harvard, Stanford, Caltech, Princeton, Chicago, etc.
No conclusion that admits subjectivity deserves the epithet “ultimate.”
OK penultimate.
My TV tells me that means “almost”.
Chris Colose,
“Maybe one day logic will have room for subjectivity….”
CAGW has nothing to do with logic. AGW, yes; CAGW, the obsession with taking control of the energy economy, no. The reason Gavin and his acolytes here cannot understand how others can reach conclusions different from theirs is that they refuse to acknowledge the political nature of so much of what they claim as science.
Why does Gavin defend to the death the dishonesty of the hockey stick, and proclaim its continued viability while simultaneously claiming it is irrelevant? Why does he admit that there is “tuning” of climate models as they diverge from actual data, but deny that the tuning is done to make the models better match the data?
In both cases, and in many other arguments in the climate debate, the reason has nothing to do with science. In this raucous political debate, the fear of conceding any dispute to the other side is sacrilegious. Particularly where every statistical jot and tittle in any opposing research is declaimed as evidence of the falsity of CAGW skepticism in its entirety.
“No conclusion that admits subjectivity deserves the epithet ‘ultimate.’”
Precisely, but CAGW is an “ultimate” conclusion. But for the need to win the political debate, CAGW advocates would not be fighting like Custer at Little Big Horn on virtually every hill in the climate debate, including the issue of how to characterize the tuning of climate models. All the battles of models and their tuning or validation, paleo climate reconstructions, whether there has been “statistically significant” warming in the last 15 years, etc., become boring and mundane, if you remove from the equation the threat of massive economic dislocation required to decarbonize the economy.
Almost all of the debates in “climate science” devolve into proxies of that “ultimate” decision, that CAGW advocates have all already made. To badly paraphrase the bard:
To decarbonize or not to decarbonize, that is the question. Whether tis nobler in the mind to suffer the slings and arrows of outrageous skepticism, or to take arms against a sea of skeptical arguments, and by opposing them, end the economy.
This debate has so much drama because of the massive political stakes. Constantly dressing up political arguments (such as how to describe the reason models are tuned) as “science” and “logic” does not change this.
GaryM: In my opinion, who cares? Modellers can’t model the climate to a degree sufficient to justify large scale policy changes yet anyway.
To me, the second sentence is key.
However, Lindzen did claim that model parameters have been tuned to provide a better match to the recent past, instead of being set from independent evidence, and if that is so (and especially if they tried many possible parameter values and reported only one or a few – a common practice), then there is even less reason to think that the forecasts might be reliable.
If the tuning regardless of motivation created models that were useful, and verifiable (kinda the same thing), then how they got there seems rather irrelevant.
I agree again for the long run. Modelers claim now to have models that are good for the long run, but if they based parts of the model (or parameter estimates) on recent data, they have most likely “overfit” the models to random variation (variation unrelated to the main trends and relationships), and there is less reason to think they’ll be good models for the future.
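The overfitting worry is easy to illustrate on synthetic data; nothing below comes from any actual GCM. The more flexible fit wins on the calibration period and loses badly on the holdout:

```python
# Overfitting illustration on synthetic data (hypothetical trend and noise).
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(60.0)                                     # "years"
x = t / 60.0                                            # rescaled for conditioning
series = 0.01 * t + 0.1 * rng.standard_normal(t.size)   # trend plus noise

train, test = t < 40, t >= 40   # calibrate on the first 40 years, hold out the rest

for degree in (1, 9):
    coef = np.polyfit(x[train], series[train], degree)
    pred = np.polyval(coef, x)
    print("degree %d: in-sample error %.3f, out-of-sample error %.3f" % (
        degree,
        np.abs(pred[train] - series[train]).mean(),
        np.abs(pred[test] - series[test]).mean()))
```

A model whose parameters were conditioned, even informally, on the recent record risks being in the position of the degree-9 fit: a better-looking hindcast and a less trustworthy forecast.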
Fred,
But I quoted the IPCC AR5 ZOD chapter 10 on tuning. The IPCC authors agree it is possible, at any rate, that models have been tuned in undocumented ways to reproduce the 20th century temperature record. There would need to be reasonable evidence before the IPCC would concede this much. Yet you say it is not possible because Gavin Schmidt, Jim Hansen, and Andy Lacis say it is impossible – all three being outspoken advocates on climate change action, and perhaps more importantly, are the most likely to be embarrassed by the discovery of tuning in models. I don’t think this is convincing.
Certainly, others have interpreted Kiehl’s widely cited paper as evidence of GCM tuning, e.g. Eduardo Zorita.
http://climateaudit.org/2007/12/01/tuning-gcms/
In any case, if you say there wasn’t tuning, then how do you explain what Kiehl, Reto Knutti, Peter Huybers, Andrew Dessler and others have observed?
Kiehl – “there is a clear inverse correlation between the forcing and the climate sensitivity”.
Huybers – the cloud feedbacks tend to compensate for the sum of all other feedbacks to keep climate sensitivity within the canonical IPCC range.
Dessler – models with positive LW feedback tend to have a negative SW feedback; models with negative LW feedback tend to have a positive SW feedback.
Peter Huybers looks at this question carefully and concludes that tuning is the most likely explanation. So what does he overlook if you insist that he is wrong?
– Huybers, P., 2010: Compensation between Model Feedbacks and Curtailment of Climate Sensitivity. Journal of Climate, 23, 3009-3018.
Donning an asbestos suit, let me re-ask the naive question:
Isn’t fiddling with models to get them to match observations a perfectly valid activity ?
Isn’t that just what Planck did, as per Vaughan’s comment – eventually coming up with a “really cute little formula that brought … laws together, but that had no physical explanation”. Which then presumably told people where to start digging to look for a physical explanation.
(btw I take fully the point that present-day models are nowhere near being “cute”. And are thus, inter alia, absolutely no basis whatever for imposing any new and massive economic and political burdens on the world).
Alex – the original question may have gotten lost in some of the discussion. It was whether, as Lindzen claims, aerosol forcing is adjusted to make model projections match observed trends. The answer is no, based on the best sources available – the description by the modelers of how they actually go about determining the aerosol input. I would recommend going back to the Schmidt/Curry collide-a-scape dialog for that discussion and references.
“Tuning” (parameter choices) is a necessity in models, but it is done for reasons other than to match trends with observations. Is there evidence to the contrary? Inferences drawn by others who are not modelers don’t constitute evidence, because the various correlations have many possible explanations.
To illustrate, I’ll use the most often cited example – the Kiehl 2007 reference. In your earlier comment, you quoted a ZOD statement: “Kiehl et al. (2007) finds that models with a larger sulphate aerosol forcing tend to have a higher climate sensitivity, such that the spread of their simulated 20th century temperature changes is reduced.” One reason it’s called a zero order draft is that it’s written before the errors are corrected, and the above is a big one, because Kiehl reported exactly the opposite – an inverse correlation between forcing and climate sensitivity in a subset of models chosen because of good matches to observations. It was the models with the lowest climate sensitivity that had the highest total forcing, and the aerosol forcing was positively correlated with total forcing (see Kiehl Figure 2). Apparently some of the other authors you cited got that wrong (e.g., Knutti).
Now this creates a problem, for a number of reasons, for anyone proposing that multiple modelers decided to “adjust” aerosols in hopes of making their projections perform better.
1. It’s not intuitively obvious why an inverse relationship should exist between high aerosol forcing and climate sensitivity in models that perform well, and therefore not obvious why modelers (who were unaware of Kiehl 2007) would adjust aerosols upward if their models emerged with a low climate sensitivity. Kiehl gives no explanation, and Knutti had the wrong explanation – he thought the high aerosol forcing reduced the total net forcing (positive minus negative) but Figure 2 shows the opposite. It’s not at all clear that the inverse relationship involves direct causality – for example, the relationship might in part reflect other factors including differences in ocean heat uptake. In any case, there was no reason for modelers to anticipate it and plan their aerosol forcing in advance.
2. Inverse modeling shows that reducing the cooling aerosol input causes the projected temperature trend to be magnified. Many of the claims based on Lindzen use this relationship to argue that aerosol forcing is adjusted upward to permit the observed trend to be as low as it was while preserving the modeler’s claim for high climate sensitivity. The Kiehl study shows the opposite – high sensitivity correlated with low aerosol forcing.
3. It’s almost universally understood that model climate sensitivity is a model output, not an input, and that modelers can’t dictate how it will come out. It is therefore unlikely that a modeler would know in advance what aerosol forcing to input based on the climate sensitivity that would later emerge.
4. As both Kiehl and Gavin note, models typically don’t enter aerosol forcing as a value, but let it emerge from the data on aerosols that they enter. See Gavin’s description of how this is done independent of any goal involving final magnitude or trend matching. Since he and others are the ones doing it, their description should be the accurate one unless they are deliberately untruthful.
5. The Kiehl study (and others) selected a subset of models that performed well in matching observed trends. If, for any reason, there is indeed an inverse relationship between a model’s climate sensitivity and aerosol forcing in that subset, it follows mathematically that a high sensitivity would be matched by a low aerosol forcing, but this is a property of the selection process. If all models, including those that performed poorly, were tested, there is no reason why the same inverse relationship should necessarily hold. In that sense, selection for good performance dictated the observed relationship, and attributing it to intent on the part of the modelers is unnecessary. They were simply the ones who happened to get it right, and the ones who got it wrong were not evaluated by Kiehl.
6. Most important is the question of truthfulness. Either modelers (Gavin, Hansen, etc.) are telling the truth when they say they don’t do any tuning, before or after a model is run, for the purpose of making its projection perform well, or they are telling untruths. The notion that a large number of modelers, independently or through conspiracy, do something different from what they claim is a serious charge. Implications of this type of untruthfulness by others who don’t design models shouldn’t be given credence in the absence of evidence for the claim. No observations that have been reported require that to be the case. Lindzen and others should refrain from suggesting this type of “fudging”. As best we know, it isn’t done.
Oops. In reviewing Kiehl, I found that my points 1 and 2, and my criticism of Knutti and the ZOD were wrong, because there was in fact a loose positive correlation between aerosol cooling and climate sensitivity. Therefore, a physical rationale does exist for Kiehl’s findings. However, the claim of deliberate adjustment can’t be justified, for reasons I give in points 3 through 6.
Fred et al – it would help to clarify, in the above, what is meant by a “large” aerosol forcing. Does that mean more negative, or less negative?
Also, OT, but you have probably seen Isaac Held’s latest. I would be interested in your opinion of my comment.
Bill – Your comment is very pertinent. Part of the problem I had was what appeared to me to be a misleading statement in Kiehl – “Figure 2 shows the correlation between total anthropogenic forcing and forcing due to tropospheric aerosols. There is a strong positive correlation between these two quantities”. Actually Figure 2 shows a negative correlation – higher forcing (negative forcing) from aerosols is negatively correlated with total forcing. Presumably, Kiehl intended to mean that an aerosol forcing that was less negative was positively correlated with total forcing, but it was confusing when that was expressed as a strength of aerosol forcing. If I had been less careless in reading the numbers on the x axis, I wouldn’t have misunderstood that. Kiehl also points out that “Some of the models used in these simulations employed only the direct effect, while others used both direct and indirect effects of aerosols, which makes a more detailed comparison of simulated aerosol forcing difficult.” I think it actually makes it impossible, because the magnitude of the indirect effect is significant.
I haven’t had a chance to look at Held yet.
Bill – A constant or near constant relative humidity in a warmer atmosphere means a higher specific humidity – i.e., more total water vapor. This would serve as a warming influence and constitute a positive feedback.
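That relationship is quantitative. Using Bolton’s standard approximation for saturation vapor pressure, constant relative humidity implies roughly 6–7% more water vapor per degree of warming at terrestrial temperatures:

```python
# Saturation vapor pressure via Bolton's (1980) approximation, T in deg C.
# At constant relative humidity, water vapor content scales with e_s(T).
import math

def sat_vapor_pressure_hpa(t_c):
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

for t in (0.0, 15.0, 30.0):
    e0 = sat_vapor_pressure_hpa(t)
    e1 = sat_vapor_pressure_hpa(t + 1.0)
    print("at %4.1f C: +1 K warming raises e_s by %.1f%%" % (t, 100 * (e1 / e0 - 1)))
```

Whether that extra vapor nets out as a positive feedback once clouds, convection, and rainfall respond is, as the replies below note, a separate question.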
Fred,
Well, when you get time, check out Held; it’s interesting. He has a pointed reply to my comment, and I’m fine with that, but I still think what I’m describing will happen and will have to be dealt with carefully.
This thread is getting unbearable, but your notes about the direct and indirect aerosol effects (of similar magnitude and sign in the GISS forcing time series) reminded me that I was going to say: While I am making no claim about tuning or lack of tuning, it does seem to me that comments along the lines of Pekka’s above are much more pertinent to the indirect effect, where it seems to me the estimates would be much harder to correlate with observational data (but not impossible).
Bill – Pekka made good points, but they can’t be used to excuse Lindzen’s claim that aerosols are adjusted for the purpose of making model projections come out right. The aerosol indirect effect is relevant, because it involves considerable uncertainty, as well as an extensive literature trying to narrow the plausible range. Has the magnitude of this effect been deliberately chosen in models with an eye toward making the models perform better? That would require choosing from the higher rather than the lower end of the estimated range in an effort to reconcile only modest observed warming with typical climate sensitivity estimates. This is the kind of claim Lindzen and others make – the aerosol forcing is chosen too high in order to make the models look good.
As an example of what is done, however, here is a quote from Gavin Schmidt on the issue of “tuning”: “However, Judy’s statement about model tuning is flat out wrong. Models are not tuned to the trends in surface temperature. The model parameter tuning done at GISS is described in Schmidt et al (2006) and includes no such thing. The model forcings used in the 20th Century transients were also not tuned to get the right temperature response. Aerosol amounts were derived from aerosol simulations using the best available emissions data. Direct effects were calculated simply as a function of the implied changes in concentrations, and the indirect effects were parameterised based on the median estimates in the aerosol literature (-1 W/m2 at 2000) (Hansen et al, 2005; 2007).”
If you look up the literature on the indirect effect (e.g., via Google Scholar), the range is extensive – from perhaps about -0.2 W/m^2 to more than -4 W/m^2. Much of the variation is toward the high end – i.e., above the median value. The choice of -1 W/m^2 is therefore conservatively low, such that even lower values within the range would have relatively minor effects on trend simulations, whereas higher values would make the models significantly underestimate observed warming trends. That choice is not one that would be made if the purpose were to prevent simulations from coming out too high. The more recent literature has begun to converge toward the -1 W/m^2 value, excluding the much stronger negative forcing, further justifying this choice based on evidence rather than “fudging”.
Of course, this requires us to believe first that Gavin is telling the truth, and second that he is correct when he asserts (elsewhere in the discussion) that he is unaware of any group that engages in tuning to match observed trends. At some point, someone might present evidence that these statements are false, but until that is done, no claim for fudging can be justified. It looks like many groups are simply trying to arrive at the best values they can, and the models are using those data simply in order to be as accurate as possible in the light of some uncertainty.
Fred
“The model forcings used in the 20th Century transients were also not tuned to get the right temperature response. Aerosol amounts were derived from aerosol simulations using the best available emissions data.”
The modelers would still be “allowed” to tailor the relative levels of each aerosol within the margin of error of the specific item without that statement being untruthful. There is a large margin of error in the estimated aerosol levels. In addition, the relative impact of each aerosol on the others and on the system as a whole can be (and I expect was) adjusted so that the models would meet the observational criteria that were available.
Because of line breaks in my above comment, it may not be clear that the lowest (weakest) end of the indirect aerosol forcing range is still negative, at minus 0.2 W/m^2. This would be consistent with the physical principles involved.
Rob – It now appears that you’re simply calling Gavin a liar when he says flatly that this kind of “tailoring” isn’t done – choices are not made with any intention to get the trends right. I urge you to contact him and explain your position and if he responds, share it with us.
Preferably, though, I think you should acknowledge that your claim is wrong, and that you had been misinformed on this topic. That would be honorable.
Fred,
So which do you think happens more – adjusting the model to match observations, or adjusting observations to match models?
Or perhaps you believe computer programs are born perfect and don’t need adjustments?
LOL – you’re funny.
After reviewing many of the comments in the several exchanges above, I thought I’d summarize my own perspective on what I believe we can say with confidence and what’s less certain. We can conclude confidently that Lindzen and others are wrong in claiming aerosols are adjusted to make model projections match observations.
A number of individuals (Pekka Pirila, Chris Colose, Alex Harvey) have suggested that despite the lack of intentional tuning for that purpose, some bias can creep into the literature so that the data that modelers use will act to make models look better than they are. I think the possibility is legitimate, but we also have to ask whether the evidence supports it. I don’t know all the evidence, but Gavin Schmidt, in discussing the GISS models, describes processes that seem fairly independent of that bias. In the case of forcings for which considerable uncertainty persists, such as indirect aerosol effects, the chosen values were conservative and would have introduced little or no favorable bias for the models. This example may not be representative, but it would be useful for anyone knowing of contrary examples to cite them. My impression at this point is that the problem may exist, but probably exerts only minor effects. We need more data on this.
A point has been raised that several studies show fairly good matches to observed trends despite significant variation in the way they arrived at those matches. For example, models with higher sensitivity exhibited stronger negative aerosol forcing, models with weaker cloud feedbacks exhibited stronger feedbacks of other types, and so on. Is this evidence for implicit, perhaps unconscious, tuning to get the right trends?
In the absence of direct evidence for tuning, I think the answer is probably no, because I think there is a good alternative explanation that requires no manipulation on the part of individual modelers – selection bias in the choice of models to look at. If a model is going to simulate trends well, it can do it in different ways. Some will do it with higher forcings, others with higher feedbacks, and so on, so that they differ from each other. What they have in common is that they are selected for getting the right answer, and that excludes models that don’t have the forcings or the feedbacks operate to give good results. If no models were excluded, would we still see strong forcings matched with weak feedbacks, or weak forcings with strong feedbacks? Presumably less so, because the models that perform poorly would probably fail to achieve that balance. This is tentative, because I can’t tell from the literature how much selection was actually imposed. Even so, it’s consistent with the reported results, and doesn’t require us to conclude that modelers have either unconsciously or dishonestly made choices to make their models perform better, while stating that they aren’t doing that. This too is something worth exploring further before drawing firm conclusions.
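That selection-bias explanation can be tested with a few lines of Monte Carlo. In the sketch below, every number is invented: sensitivity and aerosol forcing are drawn independently, so there is no tuning by construction, yet the subset of “models” matching the observed trend shows a strong inverse correlation between the two, purely as an artifact of the selection:

```python
# Monte Carlo sketch of selection bias; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each "model" draws a climate sensitivity (K per CO2 doubling) and an
# aerosol forcing (W/m^2, negative = cooling) independently: no tuning.
sensitivity = rng.uniform(1.5, 4.5, n)
aerosol = rng.uniform(-2.0, -0.2, n)

ghg_forcing = 2.5   # assumed GHG forcing, W/m^2
f2x = 3.7           # forcing per CO2 doubling, W/m^2
trend = sensitivity * (ghg_forcing + aerosol) / f2x   # crude equilibrium warming

observed = 0.8                              # assumed observed warming, K
selected = np.abs(trend - observed) < 0.1   # the "models that perform well"

print("corr(S, A), full ensemble:   %+.2f" % np.corrcoef(sensitivity, aerosol)[0, 1])
print("corr(S, A), selected subset: %+.2f" % np.corrcoef(sensitivity[selected],
                                                         aerosol[selected])[0, 1])
```

This does not prove the reported correlations arose this way; it only shows that selection alone is sufficient to produce them.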
Fred Moolten: This would serve as a warming influence and constitute a positive feedback.
That is assumed but not known. The increased water vapor could produce an increase in the rate at which heat is transferred from the surface and lower troposphere to the upper troposphere, an increase in cloudiness (negative feedback) and increase in rainfall.
It is known from CERES data that cloud cover is greater in the warmer months and lesser in the cooler months, so the possibility of the negative feedback that I described is concordant with extant data. Cloud formation and radiative/convective transfer of heat from lower to upper troposphere is discussed at Isaac Held’s blog, and there is much uncertainty about what would happen next if temperature or CO2 concentration increased.
@Fred Moolten,
First of all, I really respect the tone of your communication on this issue and understand that, although your own background is far from this topic (MD?), you have done quite a lot of reading. Despite all this, I have a strong gut feeling that this model tuning discussion, and probably the entire modelling discussion as a whole, is mostly outside your area of competence, despite all the literature you’ve gone through. In order to see what’s going on behind the curtains, which is highly relevant in interpreting and weighing the value of model outputs, you need a relevant maths, physics/engineering background and preferably some real-life numerical modelling experience. Of which, Fred Moolten, to my knowledge you really have none.
The fact is that current knowledge does not allow us to construct a computer model from first principles, with parameters initialized from precise satellite measurements; we really need a great deal of parametrization. These parameters are always subjective selections. The vast majority of these parameter vectors/matrices, equations, and their numerical solving methods are not based on first principles and/or direct measurements or values directly derived from such. The fact that we have the radiative part roughly correct does not mean much more than that we have a good start. Of course it needs to be pointed out that even a small imprecision in the radiative model alone could lead to a wildly different outcome over a longer time. I think this is clear to most, and already discussed to death in countless threads on this site alone.
My main point here is that the process by which models are initialized (both the initial state and state-invariant parameters) is just something you cannot possibly know by just reading journal articles. A claim made by one modeller (out of hundreds) from a single modelling group (out of dozens) does not change this fact, especially as this person is widely known as a very active, but not exactly unbiased, participant in related climate change discussions. I might be mistaken here, and will stand corrected if necessary, but this one statement by Schmidt seems to be your core argument against the “classic” sceptic claim of curve fitting.
I’m not implying any cover-up or conspiracy by stating that how models are initialized and parametrized is something not found in the literature, but rather stating the obvious. It is inevitably an imprecise and subjective process by nature, and one method of tackling the issue is to make several runs; unfortunately the massive scale (i.e., the number of grid cells, calculation steps and parameters) is so huge that subjective decisions about parameters need to be made.
Personally, for what it is worth, my assessment of the current predictive skill of GCMs remains very low, and this “tuning process” is indeed one of the primary reasons for my scepticism. Of course there are numerous other, perfectly valid reasons for suspicion, but they have been discussed by many, also in this thread, and need not be repeated here.
Anander – Thanks for your comment. You are correct that I have only an outsider’s knowledge of how models are designed, although I understand the basic principles pretty well. However, it’s important not to set up a straw man argument suggesting that I or anyone believes models are designed simply from first principles without parameterizations, and without testing against observational data to ensure that the parameters are as accurate as possible. Much of this is done to get the basic climatology right – seasons, latitudinal variation, winds, ocean currents, etc. In other words, tuning is an accepted reality in model design and initialization – that is not an issue.
The issue is whether they are tuned for the purpose of making the projected trends match observed trends. In particular, Lindzen has claimed that aerosol forcing is adjusted to make the projections come out right. You seem to be suggesting that Gavin Schmidt, in stating this isn’t true, in describing how it is actually done, and in indicating that he is unaware of any model group that does what Lindzen claims, is either being untruthful or ignorant of how the modeling community acts in general.
That’s possible, but I think it’s more likely that he is correct and that Lindzen’s claim that aerosols are adjusted to make the modeled trends match observed ones is false. He clearly knows much more about this than you or I, and his statements on this issue are unambiguous and have been made on more than one occasion when the issue has arisen. The only further way I think we could get more information is to contact him for additional input, or find other modelers who will confirm or contradict what he has to say.
This brings up another point, that I’ll make here rather than on the new thread on models that started yesterday. In my view, this further discussion of models and their virtues and limitations is severely limited if the only people discussing it are non-modelers. For that discussion to be more than an exchange of unverifiable opinions, some well-informed, some less so, the dialog should include one or more people who construct models for a living. I suppose it’s fantasy, but I would have loved to see participation by Jim Hansen (after he was asked not to discuss “death trains” or the pipeline from Canada but only model construction). Barring that, there are some other good people who have occasionally participated here who would be valuable, including Gavin Schmidt. Discussions with Gavin can get heated and contentious on various issues, but when it comes to constructing models, I have no doubt he will truthfully tell what he knows, and it would be informative.
Without someone like him, the discussion won’t be nearly as useful.
Fred Moolten, thank you for your reply. No strawman intended really.
Although my view on this tuning issue is generally more complex than the one e.g. Lindzen proposes, I see some truth in his interpretation, and on the other hand, it is totally understandable for a modeler, especially one as outspoken and active as Dr. Schmidt, to respond to this claim in the way he has done. Nevertheless, I see the real truth here as somewhere in between: in order to realistically be able to model the climate system, and for example just to keep the intermediate values within reasonable ranges, some (most probably very heavy) tuning is inevitably required. I’m quite sure this is something most people who have insight into the inner workings of the models very much agree on, but unfortunately this discussion has become so polarized that they would never say so in public – nobody wants to give any talking points to sceptics.
Generally speaking, and stepping a bit away from pure modelling: isn’t it so that varying aerosol forcing is anyway the most important official explanation of the 20th century temperature variations (post-WW2 especially)? If the current interpretation is shown to be false, there will be quite a lot to explain, and the most important hypotheses about 20th century climate variations would pretty much be sent back to the drawing board. This is my general understanding of the significance of this issue, but again I will stand corrected if it is entirely false.
And as you said, continuing the discussion about details without participation from the modellers actually doing the work becomes rather fruitless quite soon. There isn’t too much point in going into details, really.
By the way, effectively disputing the widespread sceptic legend, there are climate model codes available in the public domain (CESM, for example). I have found studying the real code behind these discussions rather insightful – as (we) say, the truth is always in the code; specifications (not that there is much to be found, at least in the traditional computer-engineering sense, for academic software) are just paper. Most of course don’t have the data, computing resources, (personal) time or even the skill to actually run these programs, but just reading the code will give you quite a lot of information that certainly won’t come up in the literature, for instance about the data structures and degrees of freedom involved.
Your call, Fred, is to an authority bound to an ideological site. But you might be right.
==============
@kim, fred
Could Gavin Schmidt survive the heat of a site where he doesn’t hold the keys of moderation in his hand? He hasn’t shown much desire to venture out of such a comfort zone in the past.
Difficult question? Zaparoonee and you’re gone.
“isn’t it so that the varying aerosol forcing is anyway the most important official explanation of the 20th-century temperature variations (post-WW2 especially)?”
“Official” or not, I agree, Anander, that aerosol cooling is accepted as a major factor in the “global dimming” from about 1950 to the late 1970s, and in the “global brightening” due to reduced aerosol negative forcing subsequently, at least up to about 2000, when some possible further aerosol increases have been suspected. That this is a valid phenomenon is not in doubt, with evidence from multiple regions and multiple time points – mainly in the Northern Hemisphere but also to a lesser extent in the SH. The dimming was associated with reduced surface solar radiation resulting from a reduced transmission of a given solar irradiance to the surface, and was seen in both all-sky and clear-sky conditions, excluding cloud changes as the only operative factor. The subsequent brightening partly but not completely reversed some of the cooling effects that preceded it.
Without the aerosol dimming, as you suggest, the warming post-1950 would have been expected to be greater based simply on GHGs and other warming factors, and so the aerosol effect masked part of the warming those factors would otherwise have produced.
Also, as you suggest, there is uncertainty about the magnitude of the negative forcing. This required efforts to arrive at the best input possible in models. It did not mean that modelers made choices designed to make the model projections fit the observed warming. The choices that Gavin Schmidt described for the GISS models were based on the aerosol data he had available, and were conservative rather than at the high end of plausible aerosol negative forcing values. If that is considered “tuning”, it was not tuning designed to ensure a favorable outcome in the projections, unless Gavin was not telling us the truth.
I wrote a couple of sentences with tortured syntax above. The dimming was the aerosol cooling effect. The subsequent brightening partially reversed the cooling and was in part due to reduced aerosols. The net anthropogenic aerosol effect for the entire post-1950 interval, based on the observational data, was cooling, although the post-1976 effect may have involved a warming.
Fred,
When we are discussing the logic in the role of aerosols, we must ask what was the basis for the estimate of the strength of the aerosol dimming. Was it really knowledge about the physical mechanism and amount of aerosols or was it determined from earlier analysis of temperature time series?
If the basis was the analysis of temperature time series then there is a circular argument: Aerosol dimming is determined from temperature time series and it’s used to reproduce the time series. When that is done it’s also to be expected that the model will be implicitly tuned to reproduce the climate sensitivity that was used in the earlier analysis of the aerosol dimming.
I don’t know what really happened. Thus the above tells what could have happened, not whether it really did. Showing that this is not the right explanation would require at least showing how other information available at the time could have told the strength of the dimming accurately and reliably enough.
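To make the potential circularity concrete, here is a deliberately toy calculation (Python; every number is invented for illustration, and this is emphatically not a claim about any actual modeling group’s procedure):

```python
# Toy illustration of the circularity worry: if aerosol forcing is backed out
# of the temperature record under an assumed sensitivity, a model re-using
# that forcing "reproduces" the record by construction.
lam = 0.8      # assumed sensitivity in K per (W/m^2) -- hypothetical value
dT_obs = 0.5   # observed warming over some interval, K -- illustrative
F_ghg = 1.5    # assumed greenhouse forcing over the interval, W/m^2

# Step 1: infer the aerosol forcing as the residual needed to match dT_obs
F_aer = dT_obs / lam - F_ghg

# Step 2: "simulate" with the inferred forcing -- agreement is guaranteed,
# and the implied sensitivity is the one assumed in step 1
dT_model = lam * (F_ghg + F_aer)
print(F_aer, dT_model)  # -0.875 W/m^2; dT_model equals dT_obs by construction
```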
“When we are discussing the logic in the role of aerosols, we must ask what was the basis for the estimate of the strength of the aerosol dimming. Was it really knowledge about the physical mechanism and amount of aerosols or was it determined from earlier analysis of temperature time series?”
Pekka – It appears primarily to be based on physical mechanisms and aerosol amounts as incorporated into the GISS Model E.
Fred,
That conclusion may be right, but it would be necessary to know more about the details of the model to really conclude. The paper tells a fair amount about physics that’s taken into account, but only a real specialist could tell what that really means.
The main reason for being a bit skeptical is the fact that other mainstream sources, including several other modelers and the IPCC reports, emphasize that the strength of the aerosol forcing is not known at all accurately. The overview of radiative forcings for the year 2005 in AR4 gives an uncertainty range for the direct effect of -0.9 .. -0.1 W/m2 and for the cloud albedo effect of -1.8 .. -0.3 W/m2. If there were really a well-justified physical understanding, how could the uncertainty ranges be so wide?
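(Adding those two AR4 ranges end to end gives a combined envelope of about -2.7 to -0.4 W/m2, the same span cited further down the thread. A naive combination, since independent uncertainties would not really combine by adding endpoints, but it shows the scale; a quick check in Python:)

```python
# Naive end-to-end sum of the two AR4 aerosol ranges quoted above (W/m^2);
# proper uncertainty propagation would not simply add the endpoints.
direct = (-0.9, -0.1)        # AR4 direct aerosol effect
cloud_albedo = (-1.8, -0.3)  # AR4 cloud albedo effect
lo, hi = direct[0] + cloud_albedo[0], direct[1] + cloud_albedo[1]
print(f"{lo:.1f} to {hi:.1f} W/m^2")  # -2.7 to -0.4
```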
Fred,
People from the consensus side of the argument interpret Lindzen’s statement as an accusation of fraud, or something close to it. I do not read it that way.
Lindzen is actually quoted as saying,
“The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.”
You have paraphrased this as,
“…aerosol forcing is adjusted to make model projections match observed trends.”
If you look carefully, that’s not an accurate paraphrase.
I would compare Lindzen’s statement with one of Kiehl’s statements,
“models with low climate sensitivity require a relatively higher total anthropogenic forcing than models with higher climate sensitivity.”
So what does Kiehl mean by “require”? I think it is either a physical requirement or an arbitrary requirement. No one is suggesting that there is a physical reason why forcing and sensitivity should compensate; so without such a reason you are left with only two other possibilities – sheer chance, which would be extraordinary; and an unconscious tuning in response to expectations of the model developers – which is less extraordinary.
I also note you find the tuning argument implausible because climate sensitivity is an emergent property of the models. Sometimes the forcing is too. So, I would direct you to a paragraph from Huybers:
“Covariance could also arise through conditioning the models. A dice game illustrates how this might work. Assume two 6-sided dice that are fair so that no correlation is expected between the values obtained from successive throws. But if throws are only accepted when the dice sum to 7, for example, then a perfect anticorrelation will exist between acceptable pairs (i.e., 1–6, 2–5, etc.). Now introduce a 12-sided die and require the three dice to sum to 14. An expected cross-correlation of −0.7 then exists between realizations of the 12-sided die and each of the 6-sided die, whereas the values of the two 6-sided dice have no expected correlation between them. The summation rule forces the 6-sided dice to compensate for the greater range of the 12-sided die. This illustrates how placing constraints on the output of a system can introduce covariance between the individual components. Note that this covariance can be introduced, albeit not diagnosed, without ever actually observing the individual values.”
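Huybers’s dice example is easy to check numerically. Here is a minimal Monte Carlo sketch (Python; the dice and the acceptance rule are exactly as described in the quote, the rest is just bookkeeping):

```python
import random

def accepted_throws(n=20_000, target=14):
    """Throw two 6-sided dice and one 12-sided die, keeping only throws
    that sum to `target` -- the conditioning step in Huybers's example."""
    d6a, d6b, d12 = [], [], []
    while len(d12) < n:
        a, b = random.randint(1, 6), random.randint(1, 6)
        c = random.randint(1, 12)
        if a + b + c == target:
            d6a.append(a); d6b.append(b); d12.append(c)
    return d6a, d6b, d12

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    return cov / (vx * vy) ** 0.5

a, b, c = accepted_throws()
print(f"corr(d12, d6) = {corr(c, a):+.2f}")  # ~ -0.71: forced compensation
print(f"corr(d6, d6)  = {corr(a, b):+.2f}")  # ~ +0.00 within sampling noise
```

The acceptance rule alone produces the anticorrelation; nothing about the individual dice has changed.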
In the case of climate models, models may have been accepted only when they reproduced aspects of the historical climate – in particular the surface temperature record. (Or, indeed, rejected if their sensitivity lay outside the Charney range of 1.5 – 4.5 K.)
(By the way, I put my own view more fully at Michel Crucifix’s blog –
http://mcrucifix.blogspot.com.au/2012/02/ahem-few-clarifications.html.)
Alex – Lindzen stated that aerosol forcing is “unknown” and that the models made an “arbitrary adjustment” to make them match observations. This is almost certainly false for reasons stated several times earlier. I also explained why it isn’t necessary to invoke “adjustments” to explain the Kiehl findings, which can be explained on probabilistic grounds involving selection. Since these comments are already in the thread, I won’t repeat them here, but I believe it will be possible to find additional support for them in a more thorough scrutiny of the models, and I’ll post further evidence as it emerges.
Fred, there are those of us who predicted that aerosols would be used as a bodge. Welikerocks saw it years ago, and I suspect Steve Fitzpatrick expected it, also.
It’s all about the albedo. Learn what is there, don’t just imagine what is convenient.
======================
Couple of points about the talk:
— the second half is aimed at MPs, not scientists. Hence the lack of references and “rigour”. Consider the audience.
— “Changes are not causal but rather the residue of regional changes.” This is the POV that climate is inherently regional, not global. Global numbers and effects are the sum and interaction of regional processes, in the main. That is, there are not “global changes” driving regional, but the reverse.
Completely agree with both of your points
Energy entering and manifesting itself in the system is a regional event as radiation doesn’t possess heat. So the changes are regional like ENSO which in turn triggers a cascade of events.
CO2 ppm is not the same at each test location — also takes time to migrate around the system.
Re CO2 levels, John, you might find the links here interesting. Callendar 1938 is the one who determined CO2 levels in the 19th century. Slocum 1955 showed how Callendar cherry-picked the data, arbitrarily leaving out over 1,000 measurements. Callendar’s 290 ppm is the accepted 19th century level. Slocum showed that it should have been 335 ppm, using the very same data. Slocum, playing nice, did everything but call Callendar a data fudger and a scientific fraud. But he left nothing to the imagination.
Callendar was trying to be a warmist alarmist, long before Hansen. And he was caught out by Slocum, but no one today knows the history, so they use the cherry-picked value.
Both papers can be found using Google Scholar.
Steve Garcia
CC says about Lindzen: “Im not saying “Lindzen is wrong because he’s boring and no one likes him” but rather pointing out that he has lost credibility in the community.” That’s the same old consensus argument, which is political, not scientific. CC, I bet you and others can come up with dozens of examples in many scientific fields where a scientist had lost credibility but turned out to be correct. Also, your assertion that Lindzen has lost credibility is itself suspect. Perhaps that is the case among your cohort of climate science buddies, but there are a lot of scientists out there, and I doubt that you have polled them. Also, what you consider boring, others may consider gravitas – very different from the rants or dismissive arrogance which characterize much of the debate.
The reason CC and other warmists say Lindzen has “lost credibility” is because (unlike CC’s “climate scientist”) he follows the Scientific Method. Unlike “climate scientists” he doesn’t hide his data and algorithms, cherry-pick tree ring series, turn varve data upside down, or engage in ‘pal review’.
To have “credibility” in CC’s eyes, you have do all the things Lindzen won’t do.
I’m not “in the community” but he loses credibility with me because I am capable of doing the simple calculations of forcings and temperature rise, and of spotting the flaw in the argument, which is to ignore the other forcings. He has been making this argument almost unchanged for years because, I guess, he knows he can get away with it with certain audiences.
Focussing on Chris’s complaint that he has lost credibility in the community is merely to avoid the discussion of the *reasons* why he has lost the credibility. Reasons being that his theory has not had any good support, his recent papers had some obvious flaws that were quickly spotted and he has a tendency for putting forward the same points to many meetings of non-experts without accounting for the fair criticisms his points have received.
Steve Milesworthy:
It would be interesting to learn what you imagine Lindzen’s “obvious flaws” are.
Steve : are you seriously saying Lindzen routinely ignores non-CO2 forcings ?? That claim sounds like drivel to me. Is your “obvious flaws” claim any better grounded, I wonder ?
Punksta, I did not say he “routinely ignores non-CO2 forcings”. I said he routinely gives presentations to non-experts in which he highlights a low sensitivity which is obtained if one ignores non-greenhouse gas – ie. aerosol and solar – forcings:
“If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C”
You should make sure you understand a point before you conclude it is “drivel”. I guess it means he has pulled the wool over your eyes.
An “obvious flaw” in his latest paper (Lindzen & Choi) was that you got the opposite result to his if you changed the range of his arbitrarily chosen sampling regions by as little as one month.
Steve, I retract my comment about your contribution being mere drivel. It’s positively disingenuous. How much wool is being pulled over people’s eyes by such an open statement ? Your claim of Lindzen’s dishonesty is itself just dishonest.
An “obvious flaw” in his latest paper (Lindzen & Choi) was that you got the opposite result to his if you changed the range of his arbitrarily chosen sampling regions by as little as one month.
Can you elaborate on this obvious waffle ?
Punksta,
I didn’t use the word “dishonest”. Stop putting words into my mouth.
“If one assumes…” directs an inexperienced audience to assume exactly that. For the statement not to be misleading it should be followed by a clear explanation that nobody seriously assumes that the statement is definitely correct, and that even if the aerosol inputs are “arbitrary” they are sizeable.
“Can you elaborate on this obvious waffle ?”
How is such a clear statement “obvious waffle”?
Steve
Oh do stop feigning innocence now, it’s pathetic.
And don’t be ridiculous – “If one assumes” is not an invitation to assume.
And since you accept the effect of aerosols, clouds etc is “arbitrary” – indeterminate as of now? – how can we also know they are “sizeable” ?
And as regards the waffle, what claim do you refer to ?
Punksta, you are looking all ways to pretend that Lindzen is “innocent” and assuming I am “feigning innocence”. But I have engaged with people who *have* been misled by Lindzen’s line so it is legitimate to point this out.
Rather than vent your frustration at the valid points I am putting to you, why not use your energy to investigate Lindzen’s claims.
And don’t be ridiculous – “If one assumes” is not an invitation to assume.
Steve M : [non-responsive]
And since you accept the effect of aerosols, clouds etc is “arbitrary” – indeterminate as of now? – how can we also know they are “sizeable” ?
Steve M : [non-responsive]
And as regards the ‘waffle’, what claim [of Lindzen’s] do you refer to ?
Steve M : [non-responsive]
Sometimes it is difficult to summon a desire to respond to someone who can’t follow a thread. I’m having to guess on how to italicise here – apologies if it doesn’t work:
[I]”And don’t be ridiculous – “If one assumes” is not an invitation to assume.”[/I]
Yes it is.
[I]”And since you accept the effect of aerosols, clouds etc is “arbitrary” –
indeterminate as of now? – how can we also know they are “sizeable” ?”[/I]
You misread for about the fifth time. I did not *accept* that the effect of aerosols is “arbitrary”.
[I]”And as regards the ‘waffle’, what claim [of Lindzen’s] do you refer to ?
Steve M : [non-responsive]”[/I]
This is very confusing, because it is you who is accusing me of “waffle”. I have not accused Lindzen of waffle.
Sometimes it is difficult to summon a desire to respond to someone who can’t follow a thread.
Well spotted – this is indeed the big problem with your posts.
And don’t be ridiculous – “If one assumes” is not an invitation to assume.”
SM: Yes it is.
Obviously not. If one assumes != Assume.
And since you accept the effect of aerosols, clouds etc is “arbitrary” –
indeterminate as of now? – how can we also know they are “sizeable” ?”
SM: You misread for about the fifth time. I did not *accept* that the effect of aerosols is “arbitrary”.
I did not misread – “arbitrary” is the word you actually used.
But, if you actually believe the effects of aerosols and clouds are settled science, do let us know these important finds.
“Waffle”. You made a vague claim about some claim of Lindzen’s being obviously false. Which one/s ?
Hint:
Use html tags for italics etc – e.g. <i>like this</i>.
OK. What you consider to be waffle was my reference to something that is clearly described in this section:
“The LC09 results are not robust.”
of:
http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/
I didn’t mean that your claim itself was waffle. I meant you were waffling as to what the claim is.
Give us a one paragraph summary so we can see if it’s worth following the RC link.
[QUOTE]The result one obtains in estimating the feedback by [Lindzen’s] method turns out to be heavily dependent on the endpoints chosen. In [Trenberth et al] we show that the apparent relationship is reduced to zero if one chooses to displace the endpoints selected in LC09 by a month or less. [/QUOTE]
The RC article includes a plot that compares Lindzen et al choices with Trenberth et al choices. The Trenberth choices look equally as reasonable or slightly more reasonable than the Lindzen choices and come up with a different result.
So the result is not “robust”.
There are a number of other problems listed though the others are more waffly ;)
Lindzen has apparently accepted that there were “obvious flaws”, given his follow-up paper in the “Asia-Pacific Journal of Atmospheric Sciences”. The abstract does not appear to claim it is rebutting the criticisms.
(Googling around, Lindzen seems to be moaning that JGR rejected the paper and that PNAS refused to use the reviewers he wanted (Will Happer and former colleague Dr. Chou) on this latter paper. Also
http://judithcurry.com/2011/06/10/lindzen-and-choi-part-ii/
)
I did not misread – “arbitrary” is the word you actually used.
But, if you actually believe the effects of aerosols and clouds are settled science, do let us know these important finds.
Missed this post earlier.
I said: “that even if the aerosol inputs are “arbitrary” they are sizeable.”
Note the “if”. You know, the “if” that would make it obviously conditional. I think you’ve sort of made my point about Lindzen’s “If one assumes…”
As it happens, arbitrary and sizeable is not that inconsistent when it is understood that they may cause both sizeable negative and positive forcing, and that the forcings are uncertain such that (in total) they could add up to a low number.
I’d like to repeat this from 25 minutes ago because it’s a real question. Has the following been well considered in the literature, with comparisons of Arctic and Antarctic carbon soot emissions and the albedo effect on temperature trends, or not? I’m hopeful that carbon soot emissions are an AGW forcing we might all agree on, thereby actually doing something positive despite the uncertainties about CO2 attribution and climate sensitivity-
Doug Allen | February 27, 2012 at 8:35 pm | Reply
Good points, but I think regional temperature trends are much more complex. The Antarctic, itself land covered by snow and ice and surrounded by mostly ocean, has warmed very little. The Arctic, on the other hand, is ice and snow, surrounded mainly by land. The Antarctic is far from centers of industry and soot emissions, and soot emissions fall out of the atmosphere fairly quickly, probably not crossing over the equator to any great extent. The Arctic is close to 90% of the world’s industry and receives a lot of the carbon soot fallout. I think the difference in albedo, from soot fallout, plus the positive feedback of albedo change when ice and snow become water, may explain in large part the differences in Arctic and Antarctic temperature trends and, by extension, the differences in northern hemisphere and southern hemisphere temperature trends. If I am wrong about this, Dr. Curry and others, give me some scientific studies and data that refute this or bring it into question. I have seldom seen this hypothesis considered, and it has a very strong bearing on the competing roles of CO2 and carbon soot emissions.
Doug, I would say that albedo plays a pretty major role and soot does impact albedo.
http://i122.photobucket.com/albums/o252/captdallas2/polesandtropicsRSS.png
Fred writes; “If you give a short presentation in general terms to a non-scientific audience, you can prove just about anything you want, with no-one to say you’re wrong. The reason that Lindzen’s perspective is not widely accepted within climate science resides in details that are not in the talk, and which an audience unfamiliar with climate data would be unable to judge in any case.”
I’m not understanding what your point is, Fred. Are you saying Lindzen shouldn’t be able to give talks to present his point of view? If not, what are you saying? Should Al Gore be allowed to speak? Ultimately this is about social policy, and like it or not we’re living in a democracy. You seem to be pining for some sort of egghead-ocracy whereby no one short of a Ph.D. in physics is allowed to vote.
So what would you suggest? How would you go about educating the great unwashed to your satisfaction?
As to no one being able to stand up and explain why Lindzen’s wrong, I can only say it’s rather a shame that no one on your side of things is willing to debate. It’s my understanding that Professor Lindzen has no qualms at all about facing those who disagree with him.
My response appeared below. Sorry it wasn’t nested under your comment, which had been my intention.
Fred, please don’t lose your patience. I enjoy your input and am capable of scrolling quickly over GaryM et al. and other intemperate people.
Pokerguy,
I am beginning to get a grasp on Fred’s bizarre world view.
For Fred, CAGW is a scientific fact (including the high probability of C). Therefore, anyone who says or writes anything inimical to CAGW is dishonest, because the only way you can disagree with CAGW is to outright lie, or lie by omission. Thus the WSJ graph Dr. Curry cited in an earlier thread that accurately depicts the IPCC’s warming predictions is dishonest because it doesn’t explain that the newer models are supposedly better than the older models. And Lindzen’s presentation to the House of Commons is dishonest because there are other CAGW talking points that, if Lindzen had discussed, would have proved how stupid Lindzen’s point is.
In other words, if you disagree with Fred on CAGW, you are either stupid or dishonest, and likely both.
Of course progressives think like that on virtually every issue, but for Fred it is an article of faith and a point of personal obsession.
Gary – Thanks for following my comments so diligently from thread to thread. I’m thinking of adding you to my list of favorite groupies.
Fred,
Thank you for using this opportunity to imitate Joshua. Keep reaching for the stars.
Gary – It’s the groupies who reach for the stars.
Fred,
Best comment of the thread – your “groupies reaching …”
Fred’s wit should never be under-rated.
While I tend to think Fred (and some others) have well thought out views, he (and others) tend to be resistant to the possibility that their views may need to be modified.
Scientists don’t even know how to deal with the complex climate science adequately.
We ‘laymen’ know that inside every complicated idea is a simple idea trying to escape. Once you ‘scientists’ have finally freed the ‘simple idea’ we will understand.
Think of the convoluted orbits of the planets and stars before someone figured out the earth was not the center of the universe.
pokerguy – Your questions, posted at 9:15 PM, were answered by me at 8:58 PM. Perhaps it took you more time than that to compose your comment, so I’m not accusing you of disregarding what I had written, but in any case, you can go back to my earlier response to Anteros for the relevant points.
Judith,
What is really being reflected in this debate is the age old debate between theorists and experimentalists except the theorists are today avoiding real world scrutiny and testing by substituting computer models.
If they had powerful computers in 1904, the plum pudding model of the atom of J. J. Thomson might still be being defended with wonderful results from tortured computer models. Geiger, Marsden and Rutherford would have been dismissed as sceptics, and the Royal Society would be saying there is a “consensus” and the science is “settled”, dismissing Rutherford as a mere upstart scientist from the colonies. Even then, Rutherford had to get Geiger and Marsden to do the experiment and then “interpret” it to minimise the fallout from demolishing the “settled science”.
I find the comments on this page interesting. So far, I have seen four people say Lindzen is wrong (not counting stefanthedenier). Oddly enough, none of them have discussed anything our host highlighted. Consider Pekka Pirilä:
Lindzen referred to the warming observed in a period of 150 years. Pirilä claims this is dishonest as most of that warming was (he says) observed in the last 75 years. This is an extremely weak basis for an accusation of dishonesty, and it’s the entirety of Pirilä’s response. We then have Chris Colose who begins with:
Colose begins by “poisoning the well.” Before discussing anything Lindzen says, he denigrates Lindzen. He then goes on to say things like:
This seems almost meaningful, except nothing Colose refers to is anything Curry highlighted. Instead, he refers to relatively obscure arguments which no average reader is likely to know about, research about or even care about. Instead of discussing the core arguments of the topic, Colose relies on dishonest rhetorical tricks and discussions of peripheral arguments. We then have Jim D, who says:
There is an implicit accusation of dishonesty here, but it is nowhere near as prominent as in the previous two posters. Unfortunately, Jim D’s comment seems to make no sense. Curry highlighted Lindzen saying we’ve seen almost one degree of warming, we’ve had almost a doubling of effective CO2 concentrations, and the planet’s sensitivity to such a doubling is about one degree. This is all perfectly consistent, yet Jim D comes up with radically different numbers, and he does so without providing any calculation or source. We then have Fred Moolten who offers the only reasonable disagreement on the page:
He doesn’t actually say why any of Lindzen’s points are wrong, but he explains the way in which they are (supposedly) wrong. This isn’t much, but it is something, and he offers it without any derogatory remarks. That makes it the best response offered on this page.
Ultimately, Lindzen’s presentation makes a number of very simple points which Judith Curry highlighted. Despite a number of people disputing them, nobody responded to them. I find that fascinating. To any uninformed viewer, there would be absolutely nothing on this page to indicate Lindzen’s position was wrong.
Denigrating the opposition is what Chris Colose and most AGW supporters do. It is their trademark. If someone starts out with an attack on the person and not the science, I know without even reading any of their points that they are an AGW supporter.
I don’t think your description is accurate. I’ve seen the same sort of behavior from people on both sides. In fact, I’ve probably been guilty of it myself. I understand why people do it, and I don’t think it is inherently wrong.
The problem comes when people attack a person without actually addressing any substantive points. I don’t care if Chris Colose or others make fun of Lindzen (or anyone else). I care that they do so while not contributing to the discussion at hand.
Quite frankly, I find it mind-boggling such simple points aren’t getting any substantive responses by people who disagree with them. If you can’t actually discuss simple points, why should anyone listen to you?
Brandon, though I have not yet offered anything substantive to this discussion, I often get the feeling that people don’t read Judith’s comments or certainly don’t take them up as a point of debate, except in cases of extreme agreement or disagreement.
Fred Moolten
I wrote: “The modelers would still be “allowed” to tailor the relative levels of each aerosol within the margin of error of the specific item without that statement being untruthful. There is a large margin of error in the estimated aerosol levels. In addition, the relative impact of each aerosol on the others and on the system as a whole can (and I expect were) adjusted so that the models would meet the observed criteria that were available.”
Now I acknowledge that I do not know much about programming a GCM, but I wrote what I did because it seemed like a very reasonable way for the modelers to develop their GCM‘s. The criteria the GCM’s are trying to accurately forecast are not these forcings so adjusting them in the past seemed a reasonable way to potentially increase accuracy in the hindcast.
You wrote I was wrong and wanted me to admit such.
Please look at what the IPCC said about model development. It looks like the IPCC is writing the same thing that I wrote, Fred: the modelers allowed the aerosol forcings to vary within the range of uncertainty.
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter8.pdf
“Models have been extensively used to simulate observed climate change during the 20th century. Since forcing changes are not perfectly known over that period (see Chapter 2), such tests do not fully constrain future response to forcing changes. Knutti et al. (2002) showed that in a perturbed physics ensemble of Earth System Models of Intermediate Complexity (EMICs), simulations from models with a range of climate sensitivities are consistent with the observed surface air temperature and ocean heat content records, if aerosol forcing is allowed to vary within its range of uncertainty.”
Fred- could it be more plain that you made a mistake?
Rob, what you quote is irrelevant to the point I made, which is that the modelers don’t tune aerosols to match observed trends. You should contact Gavin Schmidt as I suggested.
What you quote is what I have mentioned in several places above. Inverse modeling is often used to estimate the value of a parameter. It can be used to test different aerosol forcings to see which allows a model to best match observations, but as I mentioned, inverse modeling results are not used to make projections, which require forward modeling. The latter is done, as Gavin mentions, not by trying out different forcings to see which one performs best, but based on the criteria he describes in the passage I quoted from him, rather than on the results of an inverse-model exercise. I described some inverse modeling results in an earlier comment, including the observation that utilizing a somewhat smaller aerosol forcing changed temperature projections to only a minor extent (this is from one of the Hansen et al references).
You have acknowledged your lack of understanding of this issue. If you contact Gavin as I suggest, I’m confident he will confirm the points I attribute to him, and he can also explain why what modelers do for trend simulations is not the same as inverse modeling for parameter estimation, since he has experience with each.
OMG- Fred Moolten
You are completely incapable of admitting you are wrong.
Fred- why do you think the modelers allowed the aerosol forcings to vary within the range of uncertainty if it wasn’t to help the model perform better in meeting observed conditions?
Do you think they did it for fun?
Rob – I’m not sure what combination of stubbornness and ideological fervor prevents you from reading what other people write and trying to learn from it. I just finished describing the different uses of inverse modeling (where different parameter values are tested) and forward modeling, which is used to make projections without knowing in advance which values will perform best but must derive them from data and physical principles. The latter doesn’t involve trying out different values to see which works, and there is no tuning. You then ignored what I wrote and repeated your previous misconception.
Whether you want to understand or not is not a problem for me, Rob, because I can live with your remaining misinformed. It should be a problem for you, if you care to improve your understanding.
Incidentally, here is a nice paper on the use of inverse modeling for Cloud-Aerosol Interactions. It illustrates the principle, and I hope you’ll understand why it can’t be used for future projections when there is not yet an observational trend the different values can be compared with.
If Lindzen thinks the other GHGs apart from water vapor have more than doubled the effect of CO2, and that aerosols have not had any effect, he is running counter to the IPCC estimate that the other GHGs have had 50% of CO2’s effect and aerosols have about canceled this. Also, if the other GHGs are more than doubling the effect of CO2, the future warming is worse than we thought, but he has no support for this statement (and Judith questions it because she hasn’t seen these numbers before either). So I think he is just being alarmist here.
Jim D, I want to thank you for actually making a substantive response this time. You did not address much of what I said, but you did at least give a real point to discuss. First, you say:
According to the IPCC AR4, the forcing from CO2 (as of 2005) was 1.66 W/m^2. The total forcing from greenhouse gases was 2.63 W/m^2, meaning non-CO2 GHGs had ~58% of the impact of CO2. If you include solar irradiance, that goes up to 2.75 and ~66%. It then goes up to 2.82 if you include the stratospheric water vapor directly caused by methane, giving ~70%. Finally, if you include tropospheric ozone changes, you get a total forcing of 3.1 W/m^2, or a non-CO2 forcing of ~87%. If you use that proportion and update the CO2 forcing for 2012 levels, you get ~3.35 W/m^2 as your total forcing. That’s 90% of the generally accepted 3.7 W/m^2, so Lindzen’s comment is reasonably accurate.
The aerosol forcings from the IPCC AR4 are only -0.5 W/m^2. This is nowhere near the positive non-CO2 forcings, which total 1.44 W/m^2. Moreover, the error margins on aerosol forcings are 80%, meaning the IPCC says they could be as small as -0.1 W/m^2.
For the shortened version, there are positive forcings other than greenhouse gases. When you include those, the best IPCC estimate for aerosol forcing is less than half that of positive non-CO2 forcings. Moreover, that estimate has such wide error margins that it almost includes 0. These facts largely invalidate your response.
In actuality, the total forcings seen so far could be reasonably close to the forcing expected from a doubling of CO2. Lindzen’s comment was not precise, and it does rely upon some assumptions, but it is not unreasonable.
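For anyone who wants to check the arithmetic, it is easy to reproduce (Python; the inputs are the AR4-era values quoted above, and the last step uses the standard simplified log formula for CO2 forcing, so treat this as a reader’s check rather than an official accounting):

```python
import math

co2 = 1.66        # W/m^2, AR4 CO2 forcing as of 2005
steps = {
    "non-CO2 GHGs":       2.63,
    "+ solar irradiance": 2.75,
    "+ strat. H2O (CH4)": 2.82,
    "+ trop. ozone":      3.10,
}
for label, total in steps.items():
    print(f"{label:20s} non-CO2/CO2 = {(total - co2) / co2:.0%}")
# prints ~58%, ~66%, ~70%, ~87%, matching the comment above

# Updating CO2 for ~2012 concentrations with the simplified formula
co2_2012 = 5.35 * math.log(390 / 280)   # ~1.77 W/m^2
total_2012 = co2_2012 * (1 + 0.87)      # scale by the ~87% proportion
print(f"total ~ {total_2012:.2f} W/m^2, {total_2012 / 3.7:.0%} of a doubling")
# ~3.31 W/m^2 and ~90%, close to the ~3.35 quoted above
```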
Yikes. I just realized the table I referred to has two lines for aerosol forcing. This increases the central value to -1.2 W/m^2 (not -1.5 as Jim D claims upthread). This is somewhat bad for Lindzen’s position, but it is compensated by the fact the uncertainty increases a great deal.
The total uncertainty of aerosol forcings not only includes 0, but it even ranges as high as +0.3 W/m^2. That’s right. The aerosol forcings which are said to cancel out the non-CO2 forcings may actually contribute to them instead.
I apologize for missing that line in the table, and I apologize for the mistakes which crept into my comment because of it. However, the effect of including the line I missed only serves to strengthen Lindzen’s position.
Brandon, AR4 has versions of that forcing diagram that sum the bars up, and you see that the total is very similar to CO2 alone. This implies other GHGs tend to cancel aerosols. For Lindzen to make his case, he has to say what he thinks aerosols are doing. It is not just a model argument. There is a lot of physics that explains why aerosols raise the albedo not only of clear sky but also of clouds. People study this, measure it with satellites, write papers on it. It should be considered and not just dismissed as a model invention, if that is what Lindzen is doing. Maybe nobody at MIT is in that area of science, or he doesn’t talk to them, but he seems a bit isolated on this matter.
Eek. I need to learn to read charts better. I thought the range given in error estimates in that chart gave the total forcing, not the error margin. Silly mistake, I know. In my defense, I am the only person who has actually referred to the real numbers, so it’s not like I’m doing worse than anyone else.
Anyway, with that change, Lindzen’s position is not as simple to support. Even if you take the least damning estimations (for Lindzen) from the AR4, the total forcing from aerosols is -0.4 W/m^2. This means it isn’t consistent with 0, and it cannot simply be ignored. However, I believe Lindzen explained in his speech why he disagrees with those estimations. If so, he didn’t just ignore them. Beyond that, at the far end of the error margins given by the AR4, we’re still seeing 80% of the forcing expected from a doubling of CO2. This means his comment is still relatively reasonable even if we accept the AR4 estimates (though it would have more certainty than it should).
With all my mistakes corrected (I hope!), the central thrust of my response doesn’t change. Lindzen’s comment may not be a great answer, but it’s also not anywhere near as unreasonable as portrayed by Jim D.
Speaking of Jim D, may I ask why you’d refer to a visual diagram when I provided a direct link to the actual numbers? Why would you rely on estimates derived from reading a picture when you can see the actual values? Would you also explain how you can say this:
You say the aerosols are about as strong as CO2 and they tend to cancel out the “other GHGs.” For that to be true, the forcing from “other GHGs” would have to be approximately as strong as the forcing from CO2, something you directly disputed when you said:
You’ve changed your position from saying “other GHGs” have 50% the forcing of CO2 to 100% without any explanation. Is there one?
Jim D, what about the reference to the literature where modelers discuss how they make use of this adjustment factor? It’s in Lindzen’s FermiLab talk if you are interested. So, according to Jim D, just how do modelers choose their aerosol forcing, given the range in AR4 between -0.4 and -2.7 W/m2? Surely it makes a HUGE difference. Taking the upper value, there should be no warming at all!
David Young:
This is nonsense. Sure, aerosols cancel out all of the anthropogenic influences, but everyone knows the observed warming is due to natural fluctuations!
Sorry. I couldn’t resist.
Brandon, you can see the IPCC bar charts by just doing an image search of IPCC Forcing. These usually have numbers too. CO2 is near 1.6, the total is near 1.6, and the GHGs and aerosols are near 1 and -1. (OK, more than 50% of CO2). This is consistent with what I said. Lindzen doesn’t explain his aerosol view except as a way to get at modelers, not mentioning the people doing the aerosol observations.
Jim D –
Whether Lindzen is right or wrong about the aerosols, don’t you think someone should be challenging the models, to make them back up what they show? Shouldn’t the two sides then go at each other with the best conclusions winning? Or some third conclusions come out of it?
And if you do think that, don’t you think it would be in the spirit of open inquiry that the modelers let the challenger at least see what it IS that they are doing, so the other side can have an informed basis for getting to the root of the situation?
Are the modelers more interested in keeping the status of their models, or in getting at the truth of the matter? (I don’t mean truth here as final truth, but as a next step with a solid basis.)
Steve Garcia
David, yes, I am familiar with Kiehl’s words on this as a way to get the sensitivity down to values more consistent with observations. Without aerosols, the water vapor feedback makes the warming too strong to match the relatively weak observed warming of the later 20th century. They had no way to change the water vapor feedback, because that is basic water saturation physics, but aerosols were uncertain and generally reflective. This is the part of the model where there is least certainty, because of the detailed chemistry involved, and there are only general observations to support model parameterizations. This science is in a better state than it was only a decade ago, and improving with more research and observations.
Jim D, you have an annoying habit of not responding to what I say. For example, you say:
There is no doubt I can see those charts. In fact, I had them open in one tab when I typed my response. However, what I asked you was:
I didn’t ask why you were using a chart. I asked why you were using a chart when I gave you a link to the actual numbers. I asked why you would pick a chart over the numbers the chart is made from, and you respond by not answering anything I said. Instead, you just say the chart is readily available and continue to use it to estimate the actual values I gave a link to.
You do it again when you say:
Here you admit your earlier comment was wrong, but you don’t actually respond to my comment about it. Neither of these cases cause any real problems, but it is annoying to have you respond to me while mostly ignoring what I say. Anyway, apparently the main point you want to make is:
I haven’t looked at the entire presentation from Lindzen, so I don’t know whether or not he did explain his view on the aerosol issue. If not, he certainly should have. On the other hand, you didn’t even raise this point in your initial comment, and you basically didn’t respond to anything I said about that comment.
Perhaps Lindzen does need to do better, but apparently, so do his critics.
Although Lindzen is entitled to his interpretation of evidence, I don’t believe he’s entitled to misrepresent aerosol forcing as a fudge factor used to make model predictions conform to observations, and it’s unfortunate that myth has become a staple in some blogosphere discussions. The question is not whether different models use different aerosol forcings – they do – but whether aerosol forcings are adjusted to “tune” the models to the observed temperature trends – they aren’t.
Some of the problem is a misrepresentation of the Kiehl GRL paper. From among multiple models, Kiehl selected a subset that agreed fairly well with temperature observations, but with different climate sensitivities (climate sensitivity is an emergent property of models and not an input). He found an inverse relationship between sensitivity and aerosol forcing. This is unsurprising given that the subset was selected for good predictive skill. However, the inference that each model had been tuned is false, based on the descriptions of how the models were constructed and parameterized. If all models, rather than just those with the selected attributes (good match to observations but differing climate sensitivities) had been evaluated, there is no reason to expect the same result.
There are remaining uncertainties about aerosols, but non-negligible aerosol negative forcing is not one of them, and it seems to me that Lindzen’s perpetuation of the “fudge factor” myth is an impediment to attempts to focus discussion on how best to resolve the uncertainties.
Fred –
You say that climate sensitivity is an emergent property of models and not an input. That surprises me. Isn’t it the case that Jim Hansen’s predictions of 1988 were ‘based’ on a model with a climate sensitivity of 4.2C/2xCO2?
Similarly, didn’t the IPCC FAR specify that its prediction of 0.3deg per decade of warming was based on a model with a sensitivity of 2.5C/2xCO2 – and that the “limits of uncertainty” were two other models that used sensitivities of 1.5 and 4.5C/2xCO2?
Anteros – Climate sensitivity is an output that arises from model inputs including basic physics and known properties of CO2, water, hydrostatics, etc., plus parameterizations designed to match the properties of starting climates before a simulation of trends is attempted. The modelers don’t actually know what their model’s climate sensitivity will be when they input the relevant variables. Furthermore, the models are so complex that they can’t really tweak parameters with the expectation of changing it in a predictable way. That’s one of the reasons the sensitivity range is as broad as it is. There are many good sources describing this, and RC is one place to look (search for models), because Gavin Schmidt is an expert in this area. I don’t think the above is a matter of controversy within the science itself on the part of individuals who are intimately familiar with model construction.
When you quote model sensitivities, as you have done, you are referring to the outputs. In other words, Hansen’s early model emerged with a sensitivity of 4.2 C/CO2 doubling, but that figure wasn’t something he knew in advance.
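For what it is worth, one standard way a sensitivity is read off a finished run, rather than typed in beforehand, is a Gregory-style regression of top-of-atmosphere imbalance against warming. A synthetic sketch (Python; all numbers are invented purely for illustration, not any group’s actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
F = 3.7          # W/m^2, canonical forcing for doubled CO2
lam = 1.2        # W/m^2/K, the model's "true" feedback -- unknown in advance
dT = np.linspace(0.2, 2.8, 60)                   # warming as the run evolves
N = F - lam * dT + rng.normal(0, 0.1, dT.size)   # noisy TOA imbalance

# Fit N = intercept + slope * dT; sensitivity is where the fit crosses N = 0
slope, intercept = np.polyfit(dT, N, 1)
print(f"diagnosed sensitivity ~ {-intercept / slope:.1f} K per CO2 doubling")
# ~3.1 K here -- a number read off the run, not typed in beforehand
```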
The point I would like to emphasize, which I mentioned above, is that aerosols were deemed necessary in models because without them, the water vapor feedback was too strong to account for the recent temperature trend. If they could have tuned the water vapor feedback they may have tried, but the fact they didn’t is because it is defined by somewhat fundamental physics, like Clausius-Clapeyron, which you can’t change. Aerosols were considered to be generally reflective, especially sulphates (as seen from the measurable effects of Pinatubo, for example), so, no surprise, aerosols had the right properties to avoid the overwarming. However, they are complicated: emissions aren’t known accurately, chemistry has a way of converting aerosols, and their effects on clouds also lead to higher albedos but depend somewhat on details of cloud microphysics, so many factors confound the issue. Hence, since it can’t be derived from first principles the way radiation and thermodynamics can, a certain amount of ground-truth in observations is needed to constrain the chemistry. This could be called tuning, but really it is constraining a complex system. The aerosol forcing uncertainty bars shown by the IPCC reflect this.
Fred Moolten: From among multiple models, Kiehl selected a subset that agreed fairly well with temperature observations, but with different climate sensitivities (climate sensitivity is an emergent property of models and not an input). He found an inverse relationship between sensitivity and aerosol forcing.
thanks for the clarification.
1) Statement 2 of Slide # 3 can be shown to be completely false with a simple “back of the envelope” calculation.
2) The “work” used to compile this presentation was not peer-reviewed, would not pass a peer review, and is based on several false premises. No wonder Lindzen is usually dismissed in the AGW proponent crowd.
3) When, just below Statement 2 of Slide 3, Lindzen states “Given the above, the notion that alarming warming is ‘settled science’ should be offensive to any sentient individual, though to be sure, the above is hardly emphasized by the IPCC,” he is clearly attempting to bully an audience with little scientific knowledge. This is a typical AGW denialist strategy…a very unprofessional strategy at that.
Pierre, you make the fifth commenter to fit my description. Similarly to Jim D, you say:
Unfortunately, you do not explain this. This is particularly problematic as Lindzen clearly justifies that statement when he says:
You could argue his justification is wrong (perhaps by saying that supposed doubling didn’t happen). You could also argue a different reason for him being wrong (such as by saying more warming is “in the pipeline”). Instead, you simply dismiss that statement out-of-hand even though you don’t respond to any of the justification for his statement.
That you do this and then denigrate him means you clearly demonstrate what I discussed.
For Pierre’s sake ;-), if you can’t do this very simple calculation, you have no business commenting on GW, pro or con. Sorry! I’m accustomed to “debating” with deniers, rather than someone who is apparently open-minded.
Regarding Lindzen’s comment (that you provided) that “there has been a doubling of equivalent CO2 over the past 150 years,” this is simply not true. The baseline pre-industrial CO2 concentration is very widely accepted to be 280 ppm and we are now slightly above 390 ppm. A doubling of CO2 would be 560 ppm, so we are *only* about 40% above the pre-industrial CO2 concentration. So Lindzen is wrong on the comment you provided. The CO2 concentration a century ago is not far from the pre-industrial value. The temperature increase from a century ago is also just about 0.8˚C, and we know that there is still more temperature increase to come from the CO2 NOW in the atmosphere, even with only a 40% increase in CO2. Yes, this is more of a qualitative argument, but it is valid. Based on this empirical evidence, Lindzen must be wrong. A more recently calculated climate sensitivity (Schmittner et al, 2011) is 2.3˚C for a doubling of CO2, not too far from the IPCC’s best estimate of 3˚C, and within the IPCC’s likely range of 2–4.5˚C. The Schmittner paper cautions: “Our uncertainty analysis is not complete and does not explicitly consider uncertainties in radiative forcing due to ice sheet extent or different vegetation distributions. Our limited model ensemble does not scan the full parameter range, neglecting, for example, possible variations in shortwave radiation due to clouds.” It does become a problem in interpreting results from different research because of what the researchers have included as the cause of climate sensitivity.
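The back-of-the-envelope version of this dispute is easy to write down (Python; the log relation is the standard simplified forcing formula, and the final step makes exactly the attribution assumption in Lindzen’s statement 2: all observed warming assigned to greenhouse forcing, no lags):

```python
import math

C0, C = 280.0, 390.0                      # ppm: pre-industrial and ~present
print(f"increase: {(C - C0) / C0:.0%}")   # ~39%, the "about 40%" above

# In forcing terms, CO2 alone is about half a doubling so far
frac = math.log(C / C0) / math.log(2.0)
print(f"fraction of a CO2 doubling (forcing): {frac:.2f}")  # ~0.48

# If ALL of the ~0.8 C warming is attributed to that forcing, with no lag:
print(f"implied sensitivity: {0.8 / frac:.1f} C per doubling")  # ~1.7 C
```

Lindzen’s own figure comes out below 1 C because he counts all greenhouse gases, treating the forcing so far as close to a full doubling.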
Pierre, it’s disturbing you say people “have no business commenting on GW” if they can’t do a calculation which is nonsensical. Specifically, you say:
This comment shows a severe lack of understanding of what Lindzen said. This is extremely confusing as you even quoted what he actually said:
Lindzen did not merely discuss the change in CO2 concentrations. His comment covers all increases in greenhouse gases. If you’d like to argue he is wrong about that total increase, you can, but the simple fact is you’ve grossly misrepresented what he said. A mistake like that is understandable, but that you so grossly misrepresented such a simple point while making a claim as to who ought to be discussing matters is extremely disturbing.
Whether or not Lindzen’s comment was correct, it is clear you did not understand it before dismissing it. That is a bad sign.
Lindzen said what I quoted; it’s REAL simple. He also said what you quoted…real simple as well. If you don’t understand climate change issues, you should not comment. I apologized for the comment about you not being allowed to post. I retract that apology. I’m thrilled you’re posting at the judithcurry.com for idiots only site. Curry did not participate in the analysis of the BEST data and was apparently unable to contribute in any meaningful way. She’s not a significant player in the field of climate change. But she is qualified to start a Web site/blog to mislead the hopelessly naive wrt climate change.
Brandon – You might be interested in my comment above at 11:58pm.
In it I point to two papers, one which casts aspersions on the one which set the 19th century CO2 level at 290 ppm, when with the same data it should have been 335 ppm.
Steve Garcia
For Pierre’s sake , if you can’t do this very simple calculation, you have no business commenting on GW, pro or con.
Ignorance? Check!
Arrogance? Check!
Carry on.
Pierre, you say:
D’oh. I really am terrible with blockquotes. Oh well, my comment should still be easy enough to read.
feet2thefire, I’m not actually familiar with either of the papers you mentioned, but I also don’t think they’re particularly relevant. That issue has been examined by many papers since then, and I think time is better spent looking at them. Early results could have been obtained incorrectly, yet still be accurate due to luck. If the early paper you mention was wrong, I can ignore its conclusions, but I can’t simply ignore the conclusions of other papers just because they happen to be similar.
Mind you, it’d still be an interesting thing to learn about, and because of that, it’s worth reading them. I just don’t think flaws in papers from 60+ years ago are going to alter my understanding of things very much.
Brandon Shollenberger –
I’ve seen the data myself. And the data can’t change. It was taken back then and that’s it. They can’t have new data. There WAS no more taken. It’s not like there were lots of CO2 detectors back in 1880 and that Callendar missed them.
I invite you to look up the papers on Google Scholar and then read them. They aren’t that long, nor are they incomprehensible. You will see that Callendar left out all the data that didn’t fit the conclusion he came to. There were a LOT of them that he left out. Slocum’s work looks at the same data, and Slocum concludes that Callendar had no justification for excluding the data he left out. In any discipline that is called cherry picking, when it is a biased data set that remains.
Also at http://www.warwickhughes.com/icecore/, in Figure 2, Dr Zbigniew Jaworowski graphically shows the cherry picking of Callendar.
Steve Garcia
Brandon –
For blockquotes just put <blockquote> before and </blockquote> after. What is between them will be blockquoted. The after version simply has the “/” before the “b”. Just make sure of your spelling of “blockquote”.
Steve Garcia
Brandon – Hahaha – I screwed THAT up!…LOL
Crap! I inadvertently USED the tags instead of showing them, so they disappeared from my comment.
feet2thefire, you’re wrong when you say:
Ice cores provide records of atmospheric gases. Many have been drilled since the mid 1900s. This gives the new data you say can’t exist.
Also, I haven’t read those papers so I don’t know what periods their measurements cover, but it’s worth remembering CO2 levels were rising well before 1880. It’s possible the “correct” value given the data set used in them was higher than 290 because the atmospheric levels had risen above 290 by that time.
Brandon – Point taken about the ice cores. Jaworowski takes those to task, too, and he has dealt with many, MANY of those. He argues that the assumption that the gases in any layer are pristine is simply wrong, and he says why. But, yes, that data can be added. But don’t forget that Antarctica and Greenland are not very good representations of the rest of the world, especially Antarctica.
CO2 levels rising prior to 1880 is true, but probably in lock step with the massive aerosols, so any CO2>temp correlations have that BIG complicating factor mixed in, and driving temps down, if I am not mistaken.
Most of Callendar’s data was in Germany, which is significant. In the early 20th century they had 10,000 data points (several times what other data Callendar had for the 19th century) – and the average of them was 438! In the 19th century Germany’s still substantial data showed a level of 400. I have NO info on what environmental effect those levels produced.
Also, I’d be interested if more recent papers have at all used Callendar’s data, and if so if they used his cherry-picked set or all of it. It can’t be ignored. Can you point me to any papers, to save me time searching for them?
Steve Garcia
feet2thefire:
I’ve seen similar arguments before, but they are things the people drilling the cores take into consideration. I can’t say with certainty those arguments are wrong, but I don’t have any confidence in them as is. As for how representative cores may be, CO2 is a well mixed gas in the atmosphere. As long as there aren’t any sinks/sources influencing the area a sample is taken in, it should be fine. I believe that is the case for the ice cores used.
I wasn’t looking at any relationship between temperature and CO2 there. I was just pointing out CO2 levels in the 1880s would not be expected to be as low as in preindustrial times. I have no idea if you’re right about aerosols in that period, but it doesn’t impact my point.
I don’t know what factors would be involved in samples taken from Germany, but I’m positive it wouldn’t be representative of the globe as a whole. There is far too much vegetation and urbanization there (with no ocean winds to remove the impact) to get pristine samples.
I know the work underlying the major CO2 records doesn’t use that data. I’m not sure what other papers might do, but I don’t think it matters for the point we’re discussing. You can find the major CO2 measurements used here. I believe the most important of those for historical CO2 records is the Etheridge data set, primarily relying on the 1988 paper.
[blockquote]Lindzen said what I quoted; it’s REAL simple. He also said what you quoted…real simple as well. If you don’t understand climate change issues, you should not comment. I apologized for the comment about you not being allowed to post. I retract that apology.[/blockquote]
How arrogant. Just admit that you misinterpreted Lindzen, and that you claimed he (or commenters here) doesn’t understand GW because of your gross misinterpretation of what he said.
Also, you do not decide whom free speech applies to.
[blockquote]I’m thrilled you’re posting at the judithcurry.com for idiots only site. Curry did not participate in the analysis of the BEST data and was apparently unable to contribute in any meaningful way. She’s not a significant player in the field of climate change. But she is qualified to start a Web site/blog to mislead the hopelessly naive wrt climate change.[/blockquote]
This part of your comment just reinforces the fact that too many of the pro-AGW-scientists are arrogant intellectual thugs who will never admit they were wrong on something (because you are afraid that you will lose your ‘authority’ or what’s left of it).
You are not helping your cause with that attitude.
Pierre is right that there has not been a doubling of CO2 equivalent since 1750 – only about 76% of a doubling. The way it is worded, though, Lindzen may have included water vapor, which would put the equivalent forcing of all greenhouse gases at a doubling. Kind of sneaky, but possible.
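For anyone who wants to check that 76% figure, here is a back-of-the-envelope sketch in Python, assuming the standard simplified forcing expression dF = 5.35 ln(C/C0) W/m^2 and illustrative concentrations – the numbers are placeholders, not anything Pierre or Lindzen supplied:

import math

# Simplified CO2 radiative forcing: dF = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998).
F_2X = 5.35 * math.log(2.0)   # forcing for one full doubling, ~3.7 W/m^2

C0 = 280.0   # assumed preindustrial CO2, ppm
C = 392.0    # illustrative circa-2012 CO2, ppm

frac_co2_only = 5.35 * math.log(C / C0) / F_2X
print(f"CO2 alone: {frac_co2_only:.0%} of a doubling")        # ~49%

# Adding the other long-lived greenhouse gases (CH4, N2O, halocarbons) brings
# total forcing to roughly 2.8 W/m^2 – an assumed, illustrative value.
F_total = 2.8
print(f"CO2-equivalent: {F_total / F_2X:.0%} of a doubling")  # ~76%

On those assumed numbers, 2.8/3.7 lands right around the 76% mentioned above; the result is only as good as the forcing estimates fed in.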
Shaminism’s 1st Law, broadly translated by Kim somewhere (guilt and maidens):
Hey, man(n), catastrophe’s imminent: fire, famine, flood, flux, not to mention pestilence, and you’re to blame!
But we can save you.
Beth, I overlooked this bit of wisdom.
It is fantastic.
+10 to kim for writing it, and +10 for you catching it.
Lindzen makes the point that recent human activity has actually changed the average temperature of the planet by close to 1 degree C.
Every time I read that fact it really takes me aback that we can actually change the climate of a planet this size just by tossing molecules into the air. Wonders never cease to amaze.
Actually, he doesn’t. He says there has been close to one degree of warming, but he doesn’t say it was caused by human activity. This is made clear by comments like (emphasis mine):
Just saying.
Sorry, Lindzen said this in the first paragraph:
Then his last line in the presentation is:
indicating a lower bound he doesn’t want to go under, thus protecting his intellectual honesty. So he must have some value he believes in, with some error bars attached to that number.
I haven’t made temperature projections myself because I am still pulling together the pieces of the puzzle, but quite obviously people who have thought about this a long time, like Lindzen, think that humans are capable of changing the climate.
Fred writes: “The use of news articles and talks generally provokes a great deal of arguing, but I believe more actual understanding would emerge if we started with published articles or other legitimate sources of data such as material presented at meetings, and occasionally, Internet content from individuals not involved in partisan controversy. Dozens of potential starting points are published every week, so there’s no dearth of material for serious discussion, if serious discussion is a goal here in preference to argumentation.”
Fred, I’m trying hard, honest, but what does this even mean in a real-world sense? You argue that debates aren’t worthwhile because the non-scientists in the audience aren’t equipped to judge who has the more persuasive arguments. And talks like the one Lindzen gave are no good because there’s no one there who can point out the speaker’s errors. It seems to me we could fix that problem with debates, but you’ve already ruled those out.
So now you’re suggesting some sort of meetings in which “serious discussion” could take place. Presumably this serious discussion would be between the scientists. But that brings us back to the same problem: warmists will not even get into the same room with skeptics. (You still won’t tell me why this is so, by the way.) So who would be at these meetings of yours besides wall-to-wall warmists? And would these meetings be open to the public? I’m guessing no, because, as you’ve stated several times, the public is unable to comprehend what’s being talked about…
I meant serious discussion on this blog. My recommendation was to start with some actual data source rather than a news article or a talk to a political entity. It could be a journal article, a meeting report, a Web article outside the partisan wrangling (e.g., from Isaac Held’s blog), and then what follows would be what we ordinarily do here, except it would start on a sounder basis.
Fred,
“a Web article outside the partisan wrangling (e.g., from Isaac Held’s blog),”
I suspect that you will find considerable disagreement about what constitutes “outside the partisan wrangling”; some would argue that a fair amount of what is published in the field is nothing more than a continuation of partisan wrangling. I do agree that Lindzen’s talk covers too many subjects in too little detail to be discussed in a technical blog thread. Still, I agree with Judith that Lindzen makes a couple of fair points, specifically that the real disagreement is over feedbacks and net climate sensitivity, not the basic physics. The repeated “98% of scientists agree” argument grows tiresome, even while I am one of the 98%.
You noted above that there is some knowledge of aerosols. Well, perhaps, but limited. It is also true that different climate models do use substantially different levels of assumed aerosol effects, and that those assumed aerosol effects are inversely related to each model’s diagnosed sensitivity. So I think Lindzen is correct that climate models use aerosols as a fudge to more or less fit historical data.
But Lindzen’s most important point about models is that he sees them as having taken on an inappropriate role in climate science, with the focus on ‘validation’ rather than ‘testing against data’. I would be a bit more specific than Lindzen: any real validation of a model involves making accurate predictions about the future, over a significant period of time. By this measure, they appear to not be doing so well, and indeed, to be significantly over-predicting the temperature trajectory.
“By this measure, they appear to not be doing so well, and indeed, to be significantly over-predicting the temperature trajectory.”
What a quaint way of saying that the anthropogenic forcing of climate is over-predicting the temperature trajectory.
Or did they just include too much Sun or not enough clouds?
Honestly not trying to play up to our host, but, something like this?
http://www.sciencedaily.com/releases/2012/02/120227111052.htm
That takes away one of the big skeptic canards about recent snowy winters being a sign of no climate change. Judith should do a post on this.
Jim D | February 28, 2012 at 1:54 am |
‘That takes away one of the big skeptic canards about recent snowy winters being a sign of no climate change. Judith should do a post on this.’
Excepting, Antarctic sea ice has been increasing whilst Australia has had its coldest summer for decades.
I dunno Fred, I like picking out the technicalities from the bigger picture stuff and then getting into the details downthread.
Jim D, only in Orwellian language, spoken by warmists. Snowy winters are a sign of climate change (cooling).
Edim,
On the surface, I don’t think the hypothesis that melting Arctic ice pack due to warming could be causing an increase in NH snowfall is that far-fetched. I recall seeing comments discussing how this is one of the mechanisms by which climate readjusts. More water vapor in the NH leading to increased snowfall, ultimately leading to increasing ice pack and cooling temperatures.
timg56, I agree – it’s not that far-fetched. There’s something to it. Earth is very old and there have been many global warmings and coolings. Every warming so far has been followed by cooling and vice versa. No exception.
Pokerguy, yet mosh and McIntyre gladly walk into the den of denialists for the Heartland Institute conflags. And Scott Denning also if memory serves me right. So there are some who do not fear open debate/discussion.
Nor does Lindzen. In 2007 he and Michael Crichton (yes, that Michael Crichton) were on the skeptical side of a debate at MIT. View it at http://tiny.cc/3ncsn – Part 1 of 10 (about 90+ minutes altogether).
The audience was polled before and after, so as to score the debate. The skeptics picked up 35%, if I recall. The skeptics kept talking about the specifics of the science, while the pro-AGW side kept referencing authority. The latter was not a winning strategy.
Warmists might want to watch it – to see what not to do in a real live, fair debate, when the other side gets equal time.
Steve Garcia
I hope Lindzen got a guffaw or two when he showed the NASA/GISS data manipulation that yielded an additional 0.14 Kelvin/century.
QUOTE
We may not be able to predict the future, but in climate ‘science,’ we also can’t predict the past.
UNQUOTE
gc – he got a big roar of laughter for what you quoted!
The laughter continues.
http://www.realclimate.org/index.php/archives/2012/03/misrepresentation-from-lindzen/
You might like to look at another reason for laughing at this graph.
And you might also care to look at http://www.repealtheact.org.uk/blog/apology-from-prof-lindzen-for-howard-haydens-nasa-giss-data-interpretation-error
“Apology From Prof. Lindzen for Howard Hayden’s NASA-GISS Data Interpretation Error”
Here’s the video of the speech in two parts:
http://www.youtube.com/watch?v=Wy50yaBIDPE
Sorry, I see now that Paul in Sweden beat me by a few hours :)
Unhelpful comments Pierre. Show us the back of the envelope please. List a few of the false premises and tell us why you think they are false.
As Fred noted, I am sure the issue of models is complicated, but two observations are worth noting. The IPCC multi-model mean is diverging from observations, not converging.
However, the most effective slide, in my opinion, was the one showing the actual values used to calculate the average global anomaly. Plotted on a scale not designed to magnify differences, the average anomaly is shown for what it actually is: noise around the baseline that, in any other field of research, would be ignored, given variability in the data many times greater than the anomaly values themselves.
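To see the scale point concretely, here is a minimal matplotlib sketch. The series is synthetic – a small drift plus noise standing in for the slide’s data – so it illustrates only the presentation effect, not Lindzen’s actual numbers:

import numpy as np
import matplotlib.pyplot as plt

# Synthetic 'anomaly' series: small warming drift plus year-to-year noise.
rng = np.random.default_rng(0)
years = np.arange(1900, 2011)
anomaly = 0.007 * (years - years[0]) + rng.normal(0.0, 0.25, years.size)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Conventional presentation: the axis is zoomed to magnify tenths of a degree.
ax1.plot(years, anomaly)
ax1.set_ylim(-1.0, 1.5)
ax1.set_title("Zoomed axis")

# The same data on an axis spanning ordinary temperature variability.
ax2.plot(years, anomaly)
ax2.set_ylim(-20.0, 20.0)
ax2.set_title("Axis spanning +/-20 C")

for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Anomaly (C)")

plt.tight_layout()
plt.show()

The identical series looks dramatic in the left panel and vanishes into the axis in the right one; which presentation is ‘honest’ is exactly the point under dispute.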
Observations:
Lindzen bludgeons his audience with 58 slides, more than a non-expert audience can reasonably be expected to take in, which could be seen as an attempt to project an image of authority, even while telling the audience not to listen to appeals to authority.
At #2, he acknowledges that 2x CO2, in isolation, will cause around a 1 K temperature increase. At #17, he claims that there is no causal link between temperature anomalies and anthropogenic forcings. At #18, he goes back to saying that it is trivially true that man’s activities are contributing to warming. (Brian H’s explanation is silly. For instance, Hadley circulation is a global pattern that influences regional changes; Hadley cells expand in warmer climates and shrink in cooler ones.)
At #3, he makes the mistake of assuming that oceans warm, ice melts, and plant albedo changes all happen instantaneously in arriving at the estimate of less than 1 K per doubling.
At #4, “..subject to great uncertainty.”
Well, great uncertainty within confidence intervals. Never mind that Lindzen’s estimates are outside of those intervals. For instance, for Lindzen’s low climate sensitivity to be correct, negative feedbacks would have to be nearly as large as positive feedbacks. The large climate swings in the majority of paleoclimate studies indicate this is not the case.
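The algebra behind that point is short enough to sketch, assuming the usual linear feedback relation S = S0 / (1 - f), where S0 is the no-feedback response (about 1 K per doubling) and f is the net feedback factor; the f values below are illustrative, not measured:

# Linear feedback relation: equilibrium sensitivity S = S0 / (1 - f).
S0 = 1.0  # K per doubling, approximate no-feedback response

for f in (-0.5, 0.0, 0.5, 0.67):
    S = S0 / (1.0 - f)
    print(f"net feedback f = {f:+.2f} -> sensitivity ~ {S:.1f} K per doubling")

A sensitivity near 1 K (or below) requires f near zero or negative, i.e. negative feedbacks canceling the positive water-vapor and ice-albedo feedbacks almost exactly – which is what the large paleoclimate swings argue against.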
At #6, “Science is never incontrovertible.”
True, but it would help if there was some reference to what was claimed to be incontrovertible, and who claimed it. In the meantime, I’ll assume that Planck, Tyndall, et al, and the host of paleoclimate studies haven’t been proven completely wrong yet.
…
#20, an effective argument?
It trots out an analogy which doesn’t even apply and finds fault with it. That’s only effective if you are predisposed to believe the conclusion.
#28 “Our present approach of dealing with climate as completely specified by a single number, globally averaged surface temperature anomaly, that is forced by another single number, atmospheric CO2 levels, for example, clearly limits real understanding; ”
Uh, the conditions of the premise would limit understanding if they were true, but they are not. It attributes to others claims they are not actually making. I mean, how hard is it to open up an IPCC report and look at the table of positive and negative forcings/feedbacks? The globally averaged surface air temperature anomaly gets a lot of attention, but everyone with knowledge of heat capacity knows there is more going on. If anything, the typical climate scientist errs in assuming the audience knows more than it actually does; they tend to assume knowledge is common that isn’t.
“so does the replacement of theory by model simulation.”
Except the models are based on theory, and in fact are the only way of testing whether the attribution of effects in the theory is (approximately) correct. Any model parameter set (strengths of the attributed effects) which does not hindcast well can be rejected. To my knowledge, most of the ones with a low climate sensitivity have been.
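As a toy illustration of that kind of screening – a zero-dimensional energy-balance sketch with made-up forcing and noise, not any actual GCM test – one can reject parameter values whose hindcast misses the ‘observed’ record:

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2011)
forcing = np.linspace(0.0, 2.5, years.size)  # assumed forcing ramp, W/m^2

def ebm(sensitivity, tau=10.0):
    """Toy model: temperature relaxes toward sensitivity*forcing over tau years."""
    T = np.zeros(years.size)
    for i in range(1, years.size):
        T[i] = T[i - 1] + (sensitivity * forcing[i] - T[i - 1]) / tau
    return T

# Synthetic 'observations' generated with a known sensitivity plus noise.
obs = ebm(0.8) + rng.normal(0.0, 0.1, years.size)

# Screen candidate sensitivities (K per W/m^2) by hindcast error.
for lam in (0.2, 0.5, 0.8, 1.1):
    rmse = np.sqrt(np.mean((ebm(lam) - obs) ** 2))
    verdict = "keep" if rmse < 0.15 else "reject"
    print(f"sensitivity {lam:.1f}: RMSE {rmse:.2f} K -> {verdict}")

In this rigged example only values near the true 0.8 survive; real attribution studies are far more involved, but the reject-what-cannot-hindcast logic is the same.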
Ugh, Lindzen has been successful in wearing me out.
Slide 20 simplified: classic fallacy at work.
http://judithcurry.com/2012/02/27/lindzens-seminar-at-the-house-of-commons/#comment-177709
Yeah, yeah, modus ponens, modus tollens, and all that.
As I said, the argument does not apply in the current context.
Assertion: “the argument does not apply in the current context.” I don’t know what you think the “argument” or “current context” is, so I will not comment further here.
“For instance, for Lindzen’s low climate sensitivity to be correct, negative feedbacks would have to be nearly as large as positive feedbacks. The large climate swings in the majority of paleoclimate studies indicate this is not the case.”
That is only true if we know we know all the parameters. How sure are we about that? I mean, God knows what we’ve been missing. And another thing: is climate sensitivity in an ice age the same as between ice ages?