Lindzen’s Seminar at the House of Commons

by Judith Curry

Lindzen’s seminar presented last week at the House of Commons may be the most effective seminar he has given on global warming.

The pdf of Lindzen’s presentation is found [here].  The talk drew laudatory comments from unexpected quarters, such as Simon Carr of the Independent.

Let’s take a closer look at his presentation.

Slide 2:

Stated briefly, I will simply try to clarify what the debate over climate change is really about. It most certainly is not about whether climate is changing: it always is. It is not about whether CO2 is increasing: it clearly is. It is not about whether the increase in CO2, by itself, will lead to some warming: it should. The debate is simply over the matter of how much warming the increase in CO2 can lead to, and the connection of such warming to the innumerable claimed catastrophes. The evidence is that the increase in CO2 will lead to very little warming, and that the connection of this minimal warming (or even significant warming) to the purported catastrophes is also minimal. The arguments on which the catastrophic claims are made are extremely weak – and commonly acknowledged as such. They are sometimes overtly dishonest.

JC comment: well, I’m sure that got their attention.

From slide 3:

Here are two statements that are completely agreed on by the IPCC. It is crucial to be aware of their implications.

1. A doubling of CO2, by itself, contributes only about 1C to greenhouse warming. All models project more warming, because, within models, there are positive feedbacks from water vapor and clouds, and these feedbacks are considered by the IPCC to be uncertain.

2. If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C. The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.

Given the above, the notion that alarming warming is ‘settled science’ should be offensive to any sentient individual, though to be sure, the above is hardly emphasized by the IPCC.

JC comment:  #1 is the conventional thinking, although see previous posts on no-feedback sensitivity [here and here].  #2 is an oversimplification of how climate sensitivity is determined in the conventional way; for nonconventional thoughts expressed previously at Climate Etc., see [here and here].

Slide 4:

  • Carbon Dioxide has been increasing
  • There is a greenhouse effect
  • There has been a doubling of equivalent CO2 over the past 150 years
  • There has very probably been about 0.8 C warming in the past 150 years
  • Increasing CO2 alone should cause some warming (about 1C for each doubling)

JC comment:  “There has been a doubling of equivalent CO2 over the past 150 years.”  I’m not exactly sure what that means; perhaps “equivalent” also includes CH4, etc.?  This does not seem correct.  Also, about 1C for each doubling?  Apart from what the no-feedback sensitivity actually means, this sensitivity is not linear across multiple doublings of CO2.
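The point about how warming scales across doublings can be sketched numerically. The snippet below uses the commonly cited simplified forcing expression dF = 5.35 ln(C/C0) W/m^2 together with an assumed round no-feedback response of 0.3 C per W/m^2; both numbers are illustrative conventions, not figures taken from Lindzen’s slides.

```python
import math

def co2_forcing(c, c0):
    """Simplified CO2 radiative forcing (W/m^2): dF = 5.35 * ln(c / c0)."""
    return 5.35 * math.log(c / c0)

C0 = 280.0  # assumed pre-industrial CO2 concentration, ppm (illustrative)
for doublings in (1, 2, 3):
    c = C0 * 2 ** doublings
    f = co2_forcing(c, C0)
    # Forcing grows by a fixed increment per doubling, i.e. logarithmically
    # in concentration.
    print(f"{doublings} doubling(s): {f:.2f} W/m^2, ~{0.3 * f:.1f} C no-feedback")
```

Under this expression each doubling adds the same ~3.7 W/m^2 increment, so the response is logarithmic in concentration: warming per added ppm of CO2 declines as the concentration rises.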

Unfortunately, denial of the facts on the left has made the public presentation of the science by those promoting alarm much easier. They merely have to defend the trivially true points on the left; declare that it is only a matter of well-known physics; and relegate the real basis for alarm to a peripheral footnote – even as they slyly acknowledge that this basis is subject to great uncertainty.

JC comment:  this is a profound statement

Slide 6:

Quite apart from the science itself, there are numerous reasons why an intelligent observer should be suspicious of the presentation of alarm.

  1. The claim of ‘incontrovertibility.’ Science is never incontrovertible.
  2. Arguing from ‘authority’ in lieu of scientific reasoning and data or even elementary logic.
  3. Use of term ‘global warming’ without either definition or quantification.
  4. Identification of complex phenomena with multiple causes with global warming and even as ‘proof’ of global warming.
  5. Conflation of existence of climate change with anthropogenic climate change.

JC comment:  very good points, although #4 is not clearly stated.

Slide 7:

Some Salient Points:

1. Virtually by definition, nothing in science is ‘incontrovertible’ – especially in a primitive and complex field as climate. ‘Incontrovertibility’ belongs to religion where it is referred to as dogma.

2. As noted, the value of ‘authority’ in a primitive and politicized field like climate is of dubious value – it is essential to deal with the science itself. This may present less challenge to the layman than is commonly supposed.

JC comment:  generally good points, but I object to the last sentence in #2.  Scientists don’t even know how to deal with the complex climate science adequately.

Slide 10:

3. ‘Global Warming’ refers to an obscure statistical quantity, globally averaged temperature anomaly, the small residue of far larger and mostly uncorrelated local anomalies. This quantity is highly uncertain, but may be on the order of 0.7C over the past 150 years. This quantity is always varying at this level and there have been periods of both warming and cooling on virtually all time scales. On the time scale of from 1 year to 100 years, there is no need for any externally specified forcing. The climate system is never in equilibrium because, among other things, the ocean transports heat between the surface and the depths. To be sure, however, there are other sources of internal variability as well.

Because the quantity we are speaking of is so small, and the error bars are so large, the quantity is easy to abuse in a variety of ways.

JC comments:  good points.

Slide 16:

Compares global temperature time series for the periods 1895-1946 and 1957-2008. The trend and variability for the two periods are very similar (a strong argument against claims of an unprecedented rate of change), but there is no clear indication from the curves alone that the second period is overall warmer than the first.

Slide 17:

Some take away points of the global mean temperature anomaly record:

  • Changes are small (order of several tenths of a degree)
  • Changes are not causal but rather the residue of regional changes.
  • Changes of the order of several tenths of a degree are always present at virtually all time scales.
  • Obsessing on the details of this record is more akin to a spectator sport (or tea leaf reading) than a serious contributor to scientific efforts – at least so far.

JC comment: I don’t understand the second bullet.  I disagree with the last bullet; the details of the record, in terms of interannual and decadal variability, are of importance to people. The details obviously aren’t useful in supporting or refuting AGW – but don’t proponents then base their arguments on 50 years of data?

Slide 18:

4. The claims that the earth has been warming, that there is a greenhouse effect, and that man’s activities have contributed to warming, are trivially true and essentially meaningless in terms of alarm.

Nonetheless, they are frequently trotted out as evidence for alarm. 

JC comment:  this is the key point, and it isn’t made often enough

Slide 19:

Two separate but frequently conflated issues are essential for alarm:

1) The magnitude of warming, and

2) The relation of warming of any magnitude to the projected catastrophe.

Slide 20:

When it comes to unusual climate (which always occurs some place), most claims of evidence for global warming are guilty of the ‘prosecutor’s fallacy.’ For example this confuses the near certainty of the fact that if A shoots B, there will be evidence of gunpowder on A’s hand with the assertion that if C has evidence of gunpowder on his hands then C shot B.

However, with global warming the line of argument is even sillier. It generally amounts to something like if A kicked up some dirt, leaving an indentation in the ground into which a rock fell and B tripped on this rock and bumped into C who was carrying a carton of eggs which fell and broke, then if some broken eggs were found it showed that A had kicked up some dirt. These days we go even further, and decide that the best way to prevent broken eggs is to ban dirt kicking.
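The gunpowder example is the classic structure of the prosecutor’s fallacy, and it can be made concrete with a toy Bayes calculation (all probabilities below are invented for illustration): even when the evidence is near-certain given the cause, the probability of the cause given the evidence can be small if the cause is rare.

```python
def posterior(p_e_given_c, p_e_given_not_c, prior):
    """P(cause | evidence) via Bayes' rule for a binary cause."""
    num = p_e_given_c * prior
    return num / (num + p_e_given_not_c * (1.0 - prior))

# Shooting almost always leaves gunpowder residue...
p_residue_if_shooter = 0.99
# ...but residue can also come from hunting, fireworks, etc.
p_residue_if_innocent = 0.05
# Suppose only 1 person in 1000 in the candidate pool fired the shot.
prior_shooter = 0.001

p = posterior(p_residue_if_shooter, p_residue_if_innocent, prior_shooter)
print(f"P(shooter | residue) = {p:.3f}")  # small, despite P(residue | shooter) = 0.99
```

The fallacy is exactly the confusion of P(evidence | cause), which is 0.99 here, with P(cause | evidence), which comes out under 2% with these invented numbers.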

JC comment:  I think this is a very effective argument.

Slide 28:

Where do we go from here?

Given that this has become a quasi-religious issue, it is hard to tell. However, my personal hope is that we will return to normative science, and try to understand how the climate actually behaves. Our present approach of dealing with climate as completely specified by a single number, globally averaged surface temperature anomaly, that is forced by another single number, atmospheric CO2 levels, for example, clearly limits real understanding; so does the replacement of theory by model simulation.

JC comment:  I agree with the above statement

In point of fact, there has been progress along these lines and none of it demonstrates a prominent role for CO2. It has been possible to account for the cycle of ice ages simply with orbital variations (as was thought to be the case before global warming mania); tests of sensitivity independent of the assumption that warming is due to CO2 (a circular assumption) show sensitivities lower than models show; the resolution of the early faint sun paradox which could not be resolved by greenhouse gases, is readily resolved by clouds acting as negative feedbacks.

JC comment:  above statement reflects more certainty than we actually have, IMO

Slides 29-56:

Lindzen’s view of the science of climate, mostly from the perspective of a simple energy balance and feedback model.

Slides 57-58:

You now have some idea of why I think that there won’t be much warming due to CO2, and without significant global warming, it is impossible to tie catastrophes to such warming. Even with significant warming it would have been extremely difficult to make this connection.

Perhaps we should stop accepting the term, ‘skeptic.’ Skepticism implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the case over 20 years makes the case even less plausible as does the evidence from climategate and other instances of overt cheating.

In the meantime, while I avoid making forecasts for tenths of a degree change in globally averaged temperature anomaly, I am quite willing to state that unprecedented climate catastrophes are not on the horizon though in several thousand years we may return to an ice age.

JC summary:  Lindzen’s talk is in two parts.  The first part is very effective in pointing out the vacuousness of the defenses of AGW such as the 2010 Science letter signed by 250 members of the NAS and the 2010 letter from Cicerone and Rees.

The second half of the talk is Lindzen’s perspective on the science, which IMO has some good points but is overly simplistic.  To Lindzen’s credit, he doesn’t oversell his own perspective (although he seems extremely confident in it), but states this is “some idea of why I think.”   The significance of this is as a “second opinion” and a reasonably well argued perspective, as pointed out in the latest WSJ op-ed (as opposed to an appeal to consensus).   Lindzen’s perspective is not implausible, just as the IPCC perspective is not implausible (in the sense that neither is falsifiable at this point).  IMO both the IPCC and Lindzen are overconfident in the assessment of their perspectives; classic “competing certainties,” which means the uncertainty monster is lurking.

The reasons that I think Lindzen’s presentation is so persuasive to a public audience are:

1. Lindzen’s persona and appearance, which reek of scientific gravitas.

2.  His argument in the first half of the talk is very effective, taking down the public statements by the NAS folk.

3.  His scientific argument in the second half of the talk is  appealing in that it relies on data and theory (rather than models).

4.  He keeps policy and politics out of his scientific argument.

Your thoughts?

JC note:  I am currently in Boston, visiting MIT, returning to Atlanta Wed nite.  Hence my attention to the blog will be somewhat limited during this period.  I will try to moderate the comments on this thread for relevance.

1,483 responses to “Lindzen’s Seminar at the House of Commons”

  1. Josh was there

  2. The most obvious questionable trick that Lindzen uses in this presentation is concentrating, in several places, on the period of 150 years. As nobody thinks the first half of that period is strongly affected by anthropogenic influence, he effectively doubles the denominator and halves the average human contribution. I think this is done on purpose and is dishonest.

    • Fair point Pekka.

    • Pekka

      According to current wisdom, CO2 has had an effect since 1750. So looking back 150 years is reasonable, especially as that is within the era of global temperature records, as noted by GISS and Hadley.
      Tonyb

      • Tony –

        isn’t Pekka’s point that Lindzen is giving the impression that the CO2 effect is evenly spread out over 150 years? It is something that many of us like to insist is false, with graphs like this –

        I think Lindzen’s emphasis is a bit deceptive but very much part of the territory…

      • Anteros
        And when would you suggest for the starting point?

        http://www.vukcevic.talktalk.net/CO2-dBz.htm

      • vukcevic –

        A good question to which a sensible answer is that there are caveats to be made for any starting point. Post WW2 has some basis in reason, as it marked quite a significant change in emissions. 2nd half of the 20th century for similar reasons – arbitrary but not cherry-picking (from any point of view)

        Nothing is perfect, but I agree with your point that spreading a 0.7C temperature rise over 150 years is at least disingenuous.

      • Not really. The effect is lagged. The effect is logarithmic. And you really have to look at the sum of all forcings.

        The simple thing is that you can deduce very little by looking at the temperature series. The science tells you why you cannot deduce the effect by looking at relatively short time series.

        We did not figure out that GHGs warm the planet by looking at the temperature series. It’s rather elementary physics.

      • Anteros
        My point was and is: there is little correlation between CO2 and the historical temperature data. That is not to say that the CO2 effect doesn’t exist, but its magnitude is seriously overestimated and it can, for all practical purposes, be ignored.
        As far as temperature correlations are concerned, the best proxy available is the geomagnetic change, based on 400 years of records from the great maritime nations around the North Atlantic.
        All those who look through the narrow keyhole of science at the evolution of the historical temperature data are unlikely to see and understand the complexity of the three main players: the sun, the earth and the ocean.

        http://www.vukcevic.talktalk.net/CET-NAP-SSN.htm

        Study and understand the complexity of North Atlantic where you will find the true answer. See also:

        http://www.vukcevic.talktalk.net/CET-100-150-100.htm

        http://www.vukcevic.talktalk.net/CET-NVa.htm

    • 1) If you Google ‘CO2 concentration 150 years graph’, you will find it is a fairly common starting point. It permits an apples-to-apples comparison. (It happens to be a bit past the Dalton Minimum, which means it is a good starting point for a positive trend.)
      2) Given this, I suggest you owe Dr. Lindzen an apology for your last word.
      3) Further, Dr. Lindzen is to be congratulated for actually giving a starting point (date and CO2 concentration) for Arrhenius’s equation (logarithmic), which depends upon starting and ending concentration. E.g., less bang from the second dose. I have the impression that this is often left out of AGW claims.

    • One has to wonder where all that CO2 came from in the early 1700s.

      Every (even short) period of cold CETs was followed by a rapid temperature rise. Or maybe Europeans excessively burning firewood in the cold winters caused subsequent warming. That is a win-win proposition: not only did they keep themselves from freezing, but they ensured that all that CO2 kept them warm for the next few decades.
      I say 3 cheers for CO2.

      • I would hazard a guess that the introduction of the European earthworm to the forests/prairies caused all sorts of changes in North America. The density of the grassland shot up and scrubland became grassland. The albedo changes would have been quite impressive, and you would get a nice pulse of CO2/CH4 and N2O.

        http://www.mendeley.com/research/earthworminduced-n-mineralization-fertilized-grassland-increases-both-n2o-emission-cropn-uptake/

      • DocMartyn wrote:
        quote
        [] introduction of the European Earthworm to the Forests/Prairies caused all sorts of changes in North America. []The Albedo changes would have been quite impressive, and you would get a nice pulse of CO2/CH4 and N2O.[]
        unquote

        And a large dissolved silica pulse into the surrounding seas. More dissolved silica, more diatoms, fewer calcareous phytoplankton species, less CO2 pulldown, less light-isotope pulldown (diatoms are less isotope-discriminatory), a light isotope signal left in the air and CO2 levels rising. Fewer phytos, less DMS, less low-level cloud cover, more insolation, warming. Which all sounds familiar.

        Or something else no-one’s thought of.

        JF

    • I took Lindzen’s comparison as straightforward and clear and definitely not dishonest. You have to actually listen to what he said about the graphs to understand what he was getting at.

    • I disagree.

      Nobody thinks the first two-thirds of that 150-year period is strongly affected by anthro CO2, yet substantial warming occurred over that period. The period 1890-1945 demonstrated the same rate of warming as the subsequent 1945-2000 period, during which anthro effects are supposed to be dominant. If anything, reference to the longer period discounts the magnitude of natural warming and exaggerates the anthro influence.

      Shame on Lindzen for giving away the farm :)

      • simon abingdon

        What do you suggest might have caused the 1890-1945 warming which didn’t then cause the 1945-2000 warming? Perhaps the evidence for the 1945-2000 anthro effects being dominant is simply wrong. (Sensitivity overestimated and all that.)

      • The cause of the temp rise earlier in the 20th century is estimated to be the sum of solar and CO2 (fairly accurate indices), as well as a reduction in volcanic activity (much less solid data). The difference with the current regime is that solar and volcanic haven’t trended in the direction you’d expect to account for the warming.

        Climate sensitivity doesn’t have a place in this comparison, as climate responds to any external forcing, not just CO2. There are slight differences in ‘efficacy’, but that doesn’t impact for the purposes of comparing these two periods.

    • I also disagree that this would be a ‘trick’ or ‘dishonest’. He doesn’t claim the CO2 effect would be evenly distributed during that period. 150 years is a reasonable starting point, since that is the only period for which we have even remotely adequate measurements of GMTA. CO2 concentrations have also risen (exponentially) during that period, yet there is little or virtually no acceleration in the trend of GMTA over it.

      I would like to see Pekka’s response on what the others have responded.

    • The entire focus on the past “150 years” as important is part of warmist dogma. The point is nonsense, PP.

    • Pekka, your data is off. Atmospheric CO2 started rising quickly after WWII ended. Global temps have not risen that much from 1945 when CO2 really kicked into high gear. Lindzen could have chosen other time scales:

      * He could have shown that global temps rose quickly in the 1930s when CO2 was not rising quickly.

      * He could have shown global temps declining from 1945 to 1975 when atmospheric CO2 was rising quickly.

      Pekka, I think you are being unnecessarily critical and do a disservice to civil discourse by calling him dishonest. The data simply is not on the side of the warmers.

    • Pekka,

      I am not so sure. You say “nobody” thinks this, but the graphical appeal of the hockey stick certainly includes the early 20th century warming. There seem to be a couple of common arguments. One is the IPCC SPM statement, which Fred reminds us of the limitations “most” >50% and the time period of 1950s-2000s. The other is an argument made either implicitly or explicitly by An Inconvenient Truth among others, which is all too happy to plot the CO2 rise along with the temperature rise for at least the entire 20th century.

      The carbon emissions from fossil fuels up to 1935 were about 11.6% of the emissions up to 2010, according to the CDIAC data. The resulting increase of CO2 concentration over what it would have been without the emissions is only a slightly larger share of the increase up to now. Thus some 85% of the human influence through carbon has materialized during the second half of the period 1860-2010. Even over this period the influence has been uneven.

      I reacted to this, because a couple of slides appeared really misleading. The warming over this 150 year period was indeed discussed on those slides as if it had been uniform and dividing the temperature change over this period by the length of the period would provide a meaningful number. The worst case is on slide 10 where warming of 0.7C is coupled with 150 years. That’s the main reason for my reaction. It can also be noticed that reducing the period from 150 years to 100 years would reduce the denominator by a third but actually increase the temperature change.

      How convenient it would have been to use a period of 1000 years for the calculation, if good temperature data were available. The choice of 150 years is no better justified for a discussion of the strength of the human influence.

      The 150 years also appears on page 4, where the claim of a doubling of equivalent CO2 seems to exaggerate the denominator even excluding all aerosol effects. With all GHGs but excluding even direct aerosol effects the error is not large, but including estimated aerosol effects the error is again close to a factor of two (the estimates for aerosols have large error ranges).

      It’s always debatable, how far one can use cherry picking and other tricks to enhance own arguments before being called dishonest becomes fair. My view is that Lindzen exceeded that limit clearly.

      • Pekka, your point is taken. A couple caveats, one I noted above about the “happy coincidence” of early 20th century warming and CO2 increase, in graphics. The other – effects of land use change? Still, I agree that 0.7/150 is not helpful and if used in the context of “the warming is small, why should we care”, is misleading.

      • Pekka, nevermind what I said about land use. It does seem though, that radiative forcing from CO2 in say 1935 was a slightly higher % of 2010 than concentration or emissions, due to the logarithmic effect on temperature.

        Interestingly, looking at the GISS diagrams for forcings, what really stands out about the early 20th century warming is the apparent contribution of the lack of stratospheric aerosols.

      • I would still disagree on page 10. He is simply asserting ‘Global Warming’ as a quantity and correctly reports the observed change in this quantity over 150 years. The slide has nothing to do with the causes or anthropogenic effects. Of course he could also say how much GW we’ve seen over the last 100, 50, or 30 years, but I don’t see the trend being a serious point in this slide.

        However I might agree on your critique on page 4 where he claims the GHG-effect has increased equivalent of 2xCO2 during 150 years. How many watts per m^2 is the real number? Does he use this number just to keep it simple enough for the audience or does this really affect his conclusions? You say 2xCO2ekv is exaggerated, then you must know the real number?

      • Juho,

        On page 4 Lindzen notes that the numbers are not contested. That must mean that he considers the forcings listed in AR4 WG1 Figure SPM.2 on page 4 (or Figure TS.5 or 2.20, Figure 2.20(B) gives additional information on the sum) to be best estimates. Adding all positive GHG contributions gets close to the forcing of doubling CO2 but not quite to that, but subtracting then both direct and indirect aerosols ends up with half of that.

      • Bruce Cunningham

        An example of dishonesty, in my eyes, is the claim by many alarmists that the temperature rise has not stopped for the last 12-15 years or so. They use the mathematical trick of taking the average temperature for each decade (80’s, 90’s, 2000’s) and displaying them as a bar chart, thus showing that the 2000’s were the warmest. They then proclaim this as proof that the warming hasn’t stopped. I can’t help but believe that someone who tries such an approach takes anyone who will fall for such nonsense for a fool. I believe it shows just how little they think of the general public. A very sneering attitude.
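        The decade-averaging point can be illustrated with a toy anomaly series (all numbers below are invented): temperatures that rise steadily and then go completely flat still produce strictly increasing decadal means, so a “warmest decade” bar chart is compatible with no warming at all within the most recent decade.

```python
def anomaly(year):
    """Toy anomaly (C): rises 0.02 C/yr up to 2000, then stays exactly flat."""
    return 0.02 * (min(year, 2000) - 1980)

def decade_mean(start_year):
    """Mean anomaly over the ten years starting at start_year."""
    return sum(anomaly(y) for y in range(start_year, start_year + 10)) / 10.0

for start in (1980, 1990, 2000):
    print(f"{start}s mean anomaly: {decade_mean(start):.2f} C")

# The 2000s bar is the tallest even though the trend within the 2000s is zero.
assert anomaly(2009) == anomaly(2000)
```

        The decadal means step upward solely because of warming that happened before 2000; nothing about the bar chart speaks to whether warming continued within the last decade.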

        Lindzen’s graphs attempt to do what many in the debate would like to be able to do but cannot. That is, compare what temps would be if we weren’t emitting CO2 to what they are now that we are. The only way one can even attempt to do that is by using a time period in the past where little CO2 increase was present and comparing it to more recent times. Everyone knows this is not perfect, but it is the best that can be done while using actual thermometers to do the measuring instead of proxies. He uses the same X and Y axis scales in both graphs (avoiding a trick many use), and uses data readily available to anyone (avoiding another common trick). I don’t see this as dishonest just because someone didn’t choose the same time periods I would have. All analysis is biased to some extent or another. You just have to take the time to understand what someone is trying to say, and then decide if you think they are trying to “pull a fast one,” as we say here. I don’t see any dishonesty. Lindzen is very intelligent and knows he is a prime target. I don’t believe he would attempt anything he thought to be dishonest, as he knows that many equally intelligent people are looking very closely at everything he does, especially something he states in front of Parliament. They would be all over him.

        I have not heard Lindzen say anything dishonest. He doesn’t have to. He would be stupid to do so.

      • So your point is that the dishonesty lies in his dismissing the (hugely uncertain?) aerosol effects and concluding that the net radiative forcing has increased by as much as ~3.7 W/m^2?

        What I understood from his presentation is that these aerosol effects act as a ‘fudge factor’ in the GCMs, and that he is just trying to make a point about how they affect the feedback analysis.

      • Juho,

        There’s a 20% difference without aerosols, and there is certainly some aerosol effect in the same direction. Thus giving the impression that the data is not disputed is worse than misleading.

      • MattStat/MatthewRMarler

        Pekka Pirila: The carbon emissions from fossil fuels up to 1935 were about 11.6% ot the emissions up to 2010 according to the CDIAC data. The resulting increase of CO2 concentration from what it would have been without the emissions is only slightly larger share of the increase op to now. Thus some 85% of the human influence through carbon has materialized during the second half a the period 1860-2010. Even over this period the influence has been uneven.

        I think that it is a nearly hopeless exercise, on present evidence, to choose the “correct” starting date for evaluating the rate of change of the global mean temperature. Each time a model forecast (scenario or whatever) is published, the temperature changes that matter are those that occur subsequently. Starting a temperature graph at the end of the little ice age shows that recent change (post 1975) is not unusual compared to the whole change since LIA. Proponents of AGW and opponents of AGW choose different starting points in order to make points that they believe. Lindzen’s choice is as defensible as any one else’s choice, and certainly as defensible as anyone’s choice to focus on the post 1975 record.

      • OK, I think I got your point about the word ‘uncontested’. He probably should have included more uncertainty in his presentation overall.

        (Despite the lack of uncertainty, I still find his reasons for accepting the derogatory word ‘denier’ quite funny.)

    • 150 years seems a natural period to use as the most widely used global temperature data sets only go back to 150 years or so ago. The widely used HadCRUT global temperature series goes back to 1850, GISS and NOAA start a few decades later.
      Also, as the first decade or so of the twentieth century appears to have been unusually cold, and bias uncertainties in that period were particularly large (Brohan et al, 2006), using a 150 year period seems more appropriate than, as is often done, only starting from 1900.

      • What is natural, what is perhaps not quite natural but ok and what is dishonest is often not well defined.

        I maintain my view on this case. Fred had a strong view on picture of the WSJ op-ed, but I didn’t see that as he did. Most of the people writing on this site condemn the Hockey Stick pictures, but there are certainly also people who disagree on that.

        We can tell our impressions and opinions and argue on them, but for all these three cases it’s possible to present arguments for both sides. That’s possible as long as the data shown is not explicitly erroneous and the issue is about emphasizing the right points and giving the right impression.

    • I thought Lindzen used 150 years because that’s what the alarmists use. After all, their thesis of Man being a primary influence falls apart if CO2 or temperature rise starts before the Industrial Revolution. He’s not being disingenuous at all, he’s poking holes directly at the argument most often used by the alarmists.

  3. Lindzen represents the sound of reason.

  4. Hi Judy

    Were you just looking at the slides?

    I.e., you don’t quite get all of what he said.

    I.e., the CO2 equivalence DID include methane, etc.

    Why not watch the video of it and perhaps reconsider some of this blog post..

    2 parts are available at Climate Realists website

  5. The House of Commons is an important place, where policy is being
    made….they need understandable arguments….
    Let’s throw the “Likes of the Gleicks” out and get all the Lindzens in….
    rejoice everybody….finally scientific progress!….
    JS

    • You say that, but according to Delingpole there were only 2 MPs present at Lindzen’s talk. I was disgusted to hear this, but I suppose not really all that surprised.

      • Robinson: If so, it would be regrettable…..I guess, MPs do not
        like a marathon hammer show with 58 slides…..this scares people away….
        Better: Short and concise with 15 slides….an more Q&A…..my
        opinion….I don’t know why Dick decided otherwise….but slowly but surely…..
        JS

      • If it is like Congress then key staffers cover events like this. They have the technical knowledge.

      • David – for good or ill MPs have very little money for staffing, especially since a recent expenses scandal. So at a guess the 2 MPs were the two brave souls who have dared to ‘come out’ as openly sceptical of AGW. Sigh.

      • If Lindzen could produce a short (30 mins or less) version of his talk, with 10 really good and snappy slides that would each be understandable standalone (rather than just a lot of words that he reads out), the impact would be much greater. A picture paints a thousand words.

        He has a good ‘narrative’ to tell, but his ponderous way of doing so weakens rather than enhances it.

  6. Dr Curry –

    Is there much in the whole climate debate that isn’t relevant to this talk, and therefore this thread?

    In slide 19 Lindzen talks about the two necessary ingredients for climate panic – 1) The magnitude of warming and 2) The relation of warming of any magnitude to the projected catastrophe.

    I think this misses a third ingredient – the rapidity of warming. Whenever I argue for the extraordinary adaptiveness of both life in general and the human species in particular, I’m invariably told by some doom-endian that “it is the speed of the change that’s the problem”, which as far as I know is based on negative imagination and nothing else.

    Otherwise, I’m grateful for your comments, especially noting the similarity of confidence between Lindzen’s world view and that of the IPCC (to say nothing of the ultra-alarmists).

    • Norm Kalmanovitch

      On page 11 of the pdf of the talk is a graph of HadCRUT3 global temperature with the year 2002 depicted with a vertical line.
      The first thing to note is the temperature spike that occurred in 1998 because of El Niño conditions. Note that the global temperature dropped in 1999 to a level equivalent to 1997.
      The Mann “hockey stick” posted in the 2001 IPCC TAR ends in 1998, lengthening the blade of the hockey stick by about 50% and giving viewers of the Summary for Policy Makers of this report the false impression that global temperature was increasing far more rapidly than it actually was.
      Far more interesting: if you start at 2002 and project back with the best-fit straight line, you get the actual warming trend from 1979 to 2002.
      If you do the same thing starting at 2002 and drawing the best-fit straight line to the end of the data (July 2011 in this case), you get the cooling trend that started in 2002 and is still continuing today.
      If catastrophic warming as predicted by the climate models is actually going to happen, this cooling trend will first have to come to an end. None of those promoting AGW are willing or able to make this prediction, and until they do so, basing it on hard physical evidence instead of fabricated parameters input into climate models, we need to be more concerned about the current global cooling than about any global warming!
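The segment trends described above are ordinary least-squares fits over sub-periods of the series. A minimal sketch of the procedure, using a synthetic stand-in series (a real analysis would substitute the published HadCRUT3 monthly anomalies):

```python
import numpy as np

# Synthetic monthly anomaly series standing in for HadCRUT3 (deg C);
# a real analysis would load the published monthly data instead.
years = np.arange(1979, 2011.5, 1.0 / 12.0)
anoms = 0.015 * (years - 1979) + 0.1 * np.sin(years)

def trend_per_decade(t, y):
    """OLS slope of y against t, expressed in deg C per decade."""
    slope, _intercept = np.polyfit(t, y, 1)
    return slope * 10.0

# The two segments the comment describes: 1979-2002, then 2002 onward.
warming = trend_per_decade(years[years <= 2002], anoms[years <= 2002])
recent = trend_per_decade(years[years >= 2002], anoms[years >= 2002])
print(warming, recent)
```

Whether the post-2002 slope comes out negative depends entirely on the data and the endpoints chosen, which is precisely why short-segment trends are so sensitive to features like the 1998 El Niño spike discussed above.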

      • This is the GISS temperature anomaly and the rate from 1880 to present; rates taken over 16 years. The blue hatches are 16 years of rate averages.

        This is GISS and [CO2] 1881 to 2009 (1).
        GISS vs natural log ([CO2]), from which one can get the ‘climate sensitivity’ from the slope (2).
        Finally, if one removes the ‘climate sensitivity’, one only has to explain the big pyramid from 1907 to 1979 (3).

        You can get rid of the very recent post-1975 warming using CO2 increase, but you still can’t do anything about the 70-odd years from 1907.
        You could probably get a better fit plotting GISS versus the chicken population or the production of green paint.
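For readers wondering how a ‘climate sensitivity’ falls out of the slope in (2): because radiative forcing scales with the logarithm of concentration, regressing temperature on ln([CO2]) gives degrees per unit log, and multiplying by ln 2 converts that to degrees per doubling. A sketch with synthetic numbers (real GISS and CO2 series would be substituted):

```python
import numpy as np

# Hypothetical annual series standing in for GISS anomalies and [CO2];
# real data would be loaded from the published records.
co2 = np.linspace(290.0, 390.0, 130)              # ppm, roughly 1880-2009
temp = 0.8 * np.log(co2 / co2[0]) + 0.02          # deg C, synthetic

# Regress temperature on ln([CO2]); the slope is deg C per unit ln(CO2).
slope, _intercept = np.polyfit(np.log(co2), temp, 1)

# Sensitivity to a doubling is the slope times ln(2).
sensitivity = slope * np.log(2.0)
print(round(sensitivity, 2))   # ~0.55 with these synthetic numbers
```

With real series the regression also sweeps every other forcing into the slope, which is the commenter’s point about chicken populations and green paint: correlation with a monotonic series proves little by itself.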

  7. Any chance to talk with Dr. Lindzen?????

  8. I interpret “Changes are not causal but rather the residue of regional changes” as a way of saying that there is no such physical thing as a “global temperature” that gets changed by CO2 or other mechanisms: rather, the “global temperature anomaly” is the result of computations involving the temperature changes at a regional level.

    So if the world were made of two regions of the same area, one with a +5C temp change and the other with a -2C temp change, the “global temperature anomaly” would be +1.5C even though an increase of +1.5C has in effect happened nowhere, hence it could not have been “caused” by anything.
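Note the arithmetic: two equal-area regions at +5C and -2C average to +1.5C. The “global anomaly” in this example is just an area-weighted mean of regional changes, which can be sketched as:

```python
# Area-weighted global anomaly from regional anomalies (illustrative).
def global_anomaly(anomalies, areas):
    """Weighted mean of regional anomalies by fractional area."""
    total = sum(areas)
    return sum(a * w for a, w in zip(anomalies, areas)) / total

# Two equal-area regions: +5 C and -2 C average to +1.5 C,
# even though no region actually changed by 1.5 C.
print(global_anomaly([5.0, -2.0], [0.5, 0.5]))   # 1.5
```

This is the commenter’s point in miniature: the statistic summarizes a field of regional changes and need not correspond to any temperature change that happened anywhere in particular.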

    • Probably correct. Global temp anomaly is a statistic with a huge variance, not a measurement. BEST says something like 30% of stations show cooling. So it is not as though temp is simply being caused to go up.

      I also like his point about not focusing on the details, because I see the details as contradictory. Some data sources show warming post 2000 but some show none. Most show warming 1978-1997 but UAH shows none. According to these details we do not know when it has warmed and when not. Of course the question then becomes what exactly is science supposed to explain, if the data is contradictory?

      • “BEST says something like 30% of stations show cooling”

        Lucky that all the temperature proxies used in temperature reconstructions depend on the average temperature over a 100,000-square-mile grid cell, rather than the actual temperature in a particular locale.

      • David W.

        It has been a long time.

        Has a variance for the global temperature anomaly been calculated?

        If yes, can you point me to a description of how it was done?

      • “BEST says something like 30% of stations show cooling.”

        many people have misunderstood that chart. you are not the first.

    • Good points, but I think regional temperature trends are much more complex. The Antarctic, itself land covered by snow and ice and surrounded mostly by ocean, has warmed very little. The Arctic, on the other hand, is ice and snow surrounded mainly by land. The Antarctic is far from centers of industry and soot emissions, and soot emissions fall out of the atmosphere fairly quickly, probably not crossing the equator to any great extent. The Arctic is close to 90% of the world’s industry and receives a lot of the carbon soot fallout. I think the difference in albedo from soot fallout, plus the positive feedback of albedo change when ice and snow become water, may explain in large part the differences in Arctic and Antarctic temperature trends, and by extension the differences in northern hemisphere and southern hemisphere temperature trends. If I am wrong about this, Dr. Curry and others, give me some scientific studies and data that refute this or bring it into question. I have seldom seen this hypothesis considered, and it has a very strong bearing on the competing roles of CO2 and carbon soot emissions.

      • A+ question. I hope someone in the know can point to some literature on it.

      • Latimer Alder

        Simple question, but do the satellites orbiting both poles report very different albedos between the two? If so that could be strong evidence for Doug’s theory. If not, less so.

  9. I am feeling increasingly sorry for the warmists. I always cheer for the underdog.

    “A doubling of CO2, by itself, contributes only about 1C to greenhouse warming.”

    What’s the evidence for this? I don’t buy it. What does “by itself” mean? By radiation only? If yes, it doesn’t say much about the overall heat transfer and only overall heat transfer can contribute to any temperature change.

    • Are you saying you doubt that the earth is warming? You question the validity of 6,000 temperature measuring stations?

      • Markus Fitzhenry.

        There have been numerous discussions about UHI, as well as the veracity of adjustments, Ross; yes, many of those measurements are suspect.

      • Cut the Orwellian speak. Do you doubt that Earth is cooling on a multi-millennial time scale (~10 ka)?

    • This is one of the items that skeptics and warmists agree on. It is in the physics of it. I don’t focus on this, but believe that it has to do with the Stefan-Boltzmann law. If I am wrong on that, I am sure someone here will correct me.

      Steve Garcia

      • Well, I disagree. Stefan-Boltzmann law is about radiation (flux of energy radiating from a body) and its dependence on T. The heat transfer between Earth’s surface and atmosphere/space is multimodal – it involves radiation, convection and evaporation.

      • Okay. I said I might not be correct and asked for someone to set me straight if I had that attribution wrong. Thanks.
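For reference, the Stefan-Boltzmann connection behind the ~1C figure can be made concrete: linearizing F = σT⁴ gives dT = ΔF / (4σT³), and with the canonical ~3.7 W/m² forcing for doubled CO2 (a standard value, assumed here) and an effective emission temperature near 255 K, the no-feedback response comes out near 1C. A back-of-envelope sketch:

```python
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0          # effective emission temperature of Earth, K
DELTA_F = 3.7          # canonical forcing for doubled CO2, W m^-2 (assumed)

# Linearize F = sigma * T^4:  dF/dT = 4 * sigma * T^3,
# so the no-feedback warming is dT = dF / (4 * sigma * T^3).
dT = DELTA_F / (4.0 * SIGMA * T_EFF**3)
print(round(dT, 2))    # ~0.98 C with these values
```

This linearization describes the top-of-atmosphere radiative balance only; it says nothing about the convective and evaporative surface heat transfer Edim raises above.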

    • Edim,
      “A doubling of CO2, by itself, contributes only about 1C to greenhouse warming.”

      “By itself” means prior to any theoretical feedbacks. And 1C of warming is not considered problematic. The warmers hypothesize significant and disastrous positive feedbacks from water vapor, etc. – perhaps 3x or more leading to warming of 3-5C. Others, like Dr. Spencer, claim net negative feedbacks which dampen the warming – perhaps 0.5x leading to warming around 0.5C.
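The multipliers in this comment map onto the standard feedback algebra, in which a net feedback fraction f scales the no-feedback response by 1/(1-f). A small sketch (the f values below are chosen to reproduce the 3x and 0.5x figures in the comment, not taken from any assessment):

```python
# No-feedback warming for doubled CO2 (deg C), per the quoted statement.
dT0 = 1.0

def amplified(dT0, f):
    """Standard feedback algebra: total warming = dT0 / (1 - f),
    where f is the net feedback fraction (must satisfy f < 1)."""
    return dT0 / (1.0 - f)

# A net positive feedback f ~ 2/3 triples the response (IPCC-style);
# a net negative feedback f = -1 halves it (Spencer-style).
print(round(amplified(dT0, 2.0 / 3.0), 2))   # 3.0
print(round(amplified(dT0, -1.0), 2))        # 0.5
```

The entire sensitivity debate sketched in this thread reduces to the sign and size of f.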

      • Ron,

        I am not convinced of this ~1 °C per doubling of CO2. It seems to me that the multimodal heat transfer at the Earth’s surface is not modelled properly. Earth’s surface is free to cool by convection/evaporation, and together they cool the surface more than the surface radiation.

  10. My thoughts? Like a breath of fresh air and makes the likes of Peter Gleick seem like very sad little nutters wandering the streets with a sandwich board advertising the end of the world. And sadly that is just what they are; tiny, sad little human tragedies.

  11. I don’t want to draw attention away from Dr. Lindzen’s talk here, but to add to it by encouraging people to listen to another terrific presentation. Matt Ridley’s talk at http://tiny.cc/wml8j is short and sweet, and packed with info. He is one helluva speaker.

    Richard Lindzen is one of my heroes, so I don’t want to detract in any way from his excellent presentation. He is a Rock of Gibraltar as the voice of climate sanity, which he has always been. I’ve written to him on occasion, and he has always been a gentleman and a quiet voice in a discipline gone mad.

    Steve Garcia

  12. I was slightly less impressed by this than by Dr. David Evans piece ‘the sceptics case’ (http://wattsupwiththat.com/2012/02/26/the-skeptics-case/).

    Both covered much the same ground, but Lindzen’s was too long for politicians and other morons.

    If Lindzen and Evans could get together they could probably devise an exposition which every politico and opinion-former in the world should view.

  13. Judy, Slide 6 # 4 may mean the huge mass of peripheral studies which have used the hook of ‘global warming’ for the Gods of Grants, Ice Bear being the Polester Bear.
    =============

  14. Oral arguments regarding the US EPA’s Endangerment Finding begin today in Federal Court. The EPA’s science basis is the IPCC. The date for the science basis is 2007. There have been no science updates. Frozen in time, so to speak.

    One of several points for litigation is that the EPA did not do its own research to determine its science basis; it relies wholly on the IPCC. One of Lindzen’s points was that science is evolving and that what is currently known, and in particular the uncertainties, is much greater than five years ago; i.e., the more we know, the less certain we should be.

    The issue, as I see it, is that the science is not sufficiently known to enact public policy. Extravagant claims of catastrophe are not matched by any sign that the science is converging on a single likely scenario; rather, with more data, there is greater divergence now than ever before. Witness the current temperature hiatus, whatever its explanation, and the declining 0-to-700-meter ocean heat content. These are but two instances of increasing uncertainty, not more certainty.

    In my own experience, when I have had data diverging over time rather than converging, I have had to step back and realize that something profound and impactful is missing. Submit to the Journal of Irreproducible Results.

    Next hypothesis.

    EPA should have looked before it leaped.

    • Two or three years ago, when I went to EPA’s website for climate change I immediately noted that the only reference they list is the IPCC. I sent in a question asking why this was the case, since in every discipline I’ve gotten a degree in (3), I was taught that one should avoid relying on a single reference.

      Still waiting for a response.

  15. I completely disagree that Lindzen’s speech will have any impact outside brief blogospheric discussion. Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues; moreover, he’s been making the same low-sensitivity arguments (in various forms) for over a decade, and progress in the scientific literature and at academic conferences has been moving at a rapid pace with virtually no influence from Lindzen. His unmoving faith in low climate sensitivity is at odds with virtually every assessment of the issue that also uses more robust inferences from observations, as well as paleoclimatic constraints (see Knutti and Hegerl for a start).

    But it’s easy to see why his speech will have little influence. On many occasions, he steps well outside his expertise, and makes claims which experts in those areas already know full well, or which are completely wrong. For instance, he delves into planetary climate by talking about the faint young sun. There have been decades of work on this problem, and many subsequent criticisms of his lone paper on why high clouds can explain the faint young sun problem (e.g., by Goldblatt and Zahnle). When one includes internally consistent physics, no one has successfully explained the faint sun without invoking substantial help from greenhouse gases, and the high cloud feedback rests on rather crazy assumptions about the amount of high cloud cover (essentially 100%) and requires much thicker and colder clouds that are not considered plausible. It’s also based on unwarranted extrapolation from his “iris hypothesis” inferred from modern-day observations, which itself has been challenged by a number of papers for being overinflated.

    Lindzen also jumps into the Arctic community by letting us know that CO2 can’t imply weak summer temperature amplification. Of course, if he bothered to read the literature (e.g., Mark Serreze has some papers on the seasonality of the ice-albedo feedback), he’d know that this is in fact what models and observations predict, because the Arctic is generally pegged to the freezing point in areas of high melt.

    His line that “…is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments” is just too stupid to even acknowledge. Apparently Lindzen doesn’t think we should include such non-CO2 factors? If you include them, then they are artificial adjustments; if you don’t include them, then you’re a warmist who ignored everything non-CO2. How convenient.

    • Chris Colose –

      “His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations”

      Except the observations of global temperature. Give it up, Chris. Your models are dead, and nailing them to the f***ing perch won’t make things right anymore.

      • Markus Fitzhenry.

        I’ve sent Chris a dossier on ‘Tokyo Rose’, so he can increase his skills.

      • Marcus –

        Dragging Tokyo Rose in is priceless…

        “Hey, G.I.! Warmists are right! The Japanese Emperor has new clothes! Learn how to fly kamikaze, because the Earth’s oceans gonna boil over if you don’t. It up to you to save the planet! Destroying U.S.S. Carbon Dioxide is only true way to salvation! Only Gleick and Jones and Gore tell truth! You big handsome G.I.!” /snarc

        Steve Garcia

      • Chris, you’re young and smart and you obviously have a passion for the field in which you’ve chosen to make your career. So I’d be surprised if your mentors haven’t explained a few things to you about the value of civility in getting your career established. If they did perhaps you should re-examine what they said. You don’t have to brown up to somebody just because of their senior professional status, but if you want to challenge them perhaps the instant gratification of stabbing at them in the blogosphere isn’t the best way to do it. Making enemies in academia is easy enough without sliding into incivility.

      • You do understand that Tokyo Rose spoke English without an accent, don’t you? She was a native English speaker.

    • Markus Fitzhenry.

      Abusive ad hominem (also called personal abuse or personal attacks) usually involves insulting or belittling one’s opponent in order to attack his claim or invalidate his argument, but can also involve pointing out true character flaws or actions that are irrelevant to the opponent’s argument. This tactic is logically fallacious because insults and negative facts about the opponent’s personal character have nothing to do with the logical merits of the opponent’s arguments or assertions.

      • //”…but can also involve pointing out true character flaws or actions that are irrelevant to the opponent’s argument. This tactic is logically fallacious…”//

        No it’s not; it’s just a statement which may or may not be true. Even though (logically) it may be irrelevant, in practice it may serve as a template for assessing credibility. It’s appropriate to acknowledge that you don’t want Joe down the street, who was a high school dropout, to do heart surgery on you, even though that is not a logical argument about whether he has the theoretical capacity to do so. Similarly, I’m not saying “Lindzen is wrong because he’s boring and no one likes him” but rather pointing out that he has lost credibility in the community.

        Why this is the case is a separate matter, one that I only touched on in my post, but which has also been well documented elsewhere in the literature and is freely available for people to look at.

        However, my suspicion is that very few people are interested in an honest investigation of his feedback hypotheses and the subsequent interrogations into their robustness, but rather want to throw potshots at AGW (or me personally).

      • Markus Fitzhenry.

        However, my suspicion is that very few people are interested in an honest investigation of his feedback hypotheses and the subsequent interrogations into their robustness, but rather want to throw potshots at AGW

        Take your mouth over to the following discussion; I’ll talk to you there.

        ‘http://tallbloke.wordpress.com/2012/02/25/stephen-wilde-the-myth-of-backradiation/#comments

        You big mouths are actually scared of the knowledge sceptics have about feedback hypotheses. I’ve seen plenty of semantics from you, Chrissy, not much substance.

      • “he has lost credibility in the community.”

        It is the ‘community’ which is losing all credibility. Month by month, year by year, as the data comes in.

      • Basically, Chris Colose has hijacked this post and made it about HIM. Typical troll behavior, so that the points made by Dr Curry don’t get discussed.

        Too many people come and see the school-yard name-calling and decide not to participate in a discussion that isn’t rational, just he-said-she-said.

        And then mission accomplished: Don’t let there BE a discussion on the facts.

        Steve Garcia

      • Marcus [7:45 pm] “Marquess of Queensberry Rules have been thrown out.

        You can thank Gleick and his supporters for the rest of us taking the gloves off.”

        Oy VEY. Your guy defrauds, and your side says the rules of engagement have been broken, so you get to break out the Brown Shirts?

        Just how does THAT figure? Are you committable?

        Geez Louise.

        Steve Garcia

      • Similarly, I’m not saying “Lindzen is wrong because he’s boring and no one likes him” but rather pointing out that he has lost credibility in the community.

        But since – due to its rampant dishonesty and political bias – “the community” has rightly lost virtually all credibility – this is if anything a recommendation.

    • To Chris Colose:
      Lindzen has his own approach of some existing, somewhat halved or lowered
      climate sensitivity…..let him have his views, we do not have to agree….
      ….Important is that the Likes of the Gleicks are kept out of repeating
      their global CAGW Warmist nonsense….this would be much worse than your
      worry about his type of approach….important is that Skeptics of all
      colors get into the House of Commons….and by and by, the Warmist
      Gleicks will disappear from climate science…
      JS

    • Chrissy,

      I very confidently predict that you will never attain 8% of the knowledge and relevance of Dr. Lindzen. You don’t speak for the scientific community, you little twit. You are a consensus scientist wannabe. Now get out of here before somebody roughs you up and takes your little plastic sheriff’s badge.

      • Markus Fitzhenry.

        The problem, Don, is that the other 92% of his knowledge, imparted to unwitting students at St Albany’s, is rhetoric.

    • It’s interesting to see the fall back to ad hominems when I point out several of the flaws in Lindzen’s arguments.

      Really, what is the point in opening up discussion to people incapable of reason?

      • Your pointing out was ad hominem. You got back what you deserved. You are not here to discuss, but to scold. Take it elsewhere, junior. We already have our fair share of trolls. By the way, where’s josh? He looks an awful lot like gleicko, from the wire rimmed goggles down to the Birkenstock sandals. I wonder if there is a connection.

      • Read your post, disappointed by the reaction too. Looking forward to a more specific rebuttal on the points you make.

        To be fair, I’m guessing many didn’t read past, “Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues”. That may be your opinion, but it sets the tone too. Re-reading, the first half of each paragraph was fairly derogatory, the second half worth pursuing. I’m not suggesting that justifies the reaction, just pointing it out in case you weren’t aware how your own tone sounds to someone outside the debate.

      • robin,

        They are a very angry lot. They are losing, and desperate. And we don’t have to be nice to them. Go over to RealClimate and the other colose friendly blogs and see how they treat deniers.

      • Nor have you been very courteous to Dr. Curry of late. Presumably she’s lost all ‘credibility in the community’ (i.e., not one of us, the Team) as well.

        Eventually the number of scientists excluded from the ‘community’ will be larger than the ‘community’, and then what’s left of the ‘community’ can go and commune with only itself. Just like they already do at RC.

      • Markus Fitzhenry.

        Marquess of Queensberry Rules have been thrown out.

        You can thank Gleick and his supporters for the rest of us taking the gloves off.

      • robin,

        Why does Lindzen get a free pass, time after time, as the years pass by and he continues to talk nonsense? I pointed out, even if superficially, a couple of Lindzen’s scientific issues (and even some references people could pursue further). This is true even if you don’t like my tone (which I think is well deserved). I’m not particularly interested in making everyone happy. If people don’t want to have just a bit of investigative integrity, I don’t see why I need to supply all the scientific answers here, but if people have legitimate questions on what I said I’d be glad to pursue them.

        Regardless of whether you like my approach or not, the ultimate end result is that this will be of virtually no significance in the scientific community, and of only temporary interest in blogs and amongst people who don’t know better. Much like most blog discussions.

      • Like a Warmer is going to do anything but push Warmerism. Blah, blah, blah.

        Andrew

      • cui bono,

        Generally, I am very nice to people, and I don’t get hostile based on mere disagreement; hostility comes when I think an individual has lost the personal integrity to do objective science, familiarize themselves with what they talk about, and acknowledge criticisms of their work should they be valid. Same if they are just going to talk about the science. My problem with Lindzen is not that he proposed a negative-feedback ‘iris’ hypothesis; in fact this was a legitimate submission to the literature that prompted a lot of discussion in the academic community. It encouraged many subsequent theoretical and observational analyses with better datasets than Lindzen had available, along with people who specialized in those observational products.

        The problem began with Lindzen’s responses to those criticisms, which indicated that he had an unmoving stance on his position, even when others had shown that his proposed effect was greatly exaggerated, or even of the wrong sign. Even worse, were many of his indefensible statements in op-eds, talks, etc.

        With regard to Judith Curry, I originally liked what she wanted to do on this blog, such as discussing and expanding upon the uncertainties in climate science. Now, it has become a forum for glorifying any half-baked idea that is apparently “interesting.” Moreover, I think Judith Curry has significantly expanded the scope of what is ‘uncertain’ without actually familiarizing herself with the current science on those topics (such as solar-climate effects), even to the point of making things up. She’s free to run her blog how she wants, and people are free to like/not like it. I think that it is counterproductive to her original goal; you cannot improve understanding if you keep having to go back to basic textbook stuff and explaining why every nonsensical argument someone put on their blog is nonsensical.

      • Thank you, Chris, for a courteous reply.

        One of the reasons I like Dr. Curry’s blog is that it tries to question matters which you regard as “basic textbook stuff”. For example, the feedback multiplication. The textbooks put this at 3, but there seems to be no justification for it other than that it really was the number Hansen and co. first thought up back in the 1980s.

        Lindzen and others have a radically different figure, and those of us at the sidelines can’t help but notice that the models, which echo the threefold feedback, are not doing very well recently. Yet question this magic number, for whatever reason, and merry hell breaks loose.

        As for “current thinking” – whether Dr. Curry is on top of every twist and turn that tries to explain the increasingly glaring discrepancies betwixt models and nature I couldn’t say, but ‘current thinking’ is just that – ‘current’. It will change, and if you follow it slavishly, it will lead you a merry dance.

        Read some more about the history of science, especially the blind alleys and cul-de-sacs, the luminiferous ether and coal-fired suns, and you’ll get the idea. Science: always work in progress, and sometimes back to the drawing board. Or in Gleick’s case, go to jail, go directly to jail, do not pass go…

      • cui bono,

        Your statements are really the reason why it’s tough to take these conversations seriously. There have been countless papers and entire reports dedicated to the sensitivity issue, yet you claim that it’s all something Hansen made up with a simple model 20-30 years ago. Either you’re trying to trick me, or you’re just unaware of the multitude of papers on the subject. In the first case, it’s pointless, and in the second case, you need to show that you want to learn more. I am rather familiar with the science of climate sensitivity and the current methodologies used to assess it, and Lindzen’s has time and time again failed the test of being robust.

        The point about models is equally bad, since very few people who question the models on these blogs have even read about models or know what they are comparing. They haven’t consulted the people who build the models and have written extensively on them, or have improved on them over time. Usually, they are just very broad statements that give no indication as to what variable or timeframe or statistic (and in what model) they are even talking about. It’s tough to respond to such vague statements when entire reports have been written on modeling, where they are useful, which results are robust, what needs improvement, etc.

      • Don,
        Not much refraction in them thar Gleick goggles.
        My, my, pretentious and projectile.

      • Chris Colose: “Really, what is the point in opening up discussion to people incapable of reason?”
        Funny, Chris, that is exactly what I think of you. I have never seen you give an inch in your dogmatic theology or consider that you may be wrong on any of the issues.
        What do you think of the Evans paper referenced in other comments? I think it is a very concise and coherent piece. Of course I am sure you do not.

      • Chris,

        OK, then, back to the snark.

        I know enough to know there are many scientists who do not agree with a *3 multiplication. Some present good reasons for believing it is < 1. If you don't know this, you are seriously living in a thought-tight compartment.

        I don't want to consult with the numerous people who constructed the models (clever though they undoubtedly are). I am an 'end-user' of the models, and all I have to do is sit back and see whether they are getting things right. Looking at their projections vs. reality, they just aren't.

        You're asking me to disassemble a plasma TV and marvel at the thought and precision that went into constructing it. I, as a customer, and who incidentally paid for it, want to know why it isn't bloody working!

        Sadly it is now 2:15am here, so I'll retire to dream of models. Of a different kind….

      • Chris, I don’t have the expertise to debate you on the science, nor would I indulge in personal attacks, especially when I have no basis to do so. So I would hope that this thread reverts to a more moderate and considered approach than is apparent in what I’ve read so far.

      • Chris, I’m sure that you realize that Tamsin has a blog just starting, one that she hopes will take us neophytes to a better understanding of how models are constructed and what we may be able to learn from them. So if you need a quick reference to help people get up to speed on the modelling thang, just send them to –

        http://allmodelsarewrong.com/

      • Steve Milesworthy

        Thanks, Chris, for the context.

        Indeed Lindzen has been making the same argument about low sensitivity for a number of years. I remember being able to spot the flaws in his analysis of recent temperatures versus forcing changes some years ago (one can fit the analysis on the back of a fag packet).

      • Markus Fitzhenry

        No matter how much you care to badmouth Lindzen, he is indisputably correct on the predominant fact.

        ‘All models are wrong’

      • Chris I agree completely with your comments here – both on the scientific flaws in Lindzen’s talk (which I have yet to see much discussion on here – not a surprise), and on your disappointment in the route Judy’s blog has taken. I check in every so often out of curiosity as an EAS/Ga Tech grad, but am unimpressed. It’s good to see you commenting – there are people out there that appreciate your thoughts. Keep it up.

      • Chris,
        It is interesting that in your sophomoric pretense you are outraged when your rude, unprofessional and childish behavior is returned to you.

      • MattStat/MatthewRMarler

        Chris Colose: entire reports have been written on modeling, where they are useful, which results are robust, what needs improvement, etc.

        Which are the three most recent best such reports?

    • ‘His line … is just too stupid to even acknowledge’.
      And yet you do.

    • Chris, which paleoclimate constraints?

      http://www.realclimate.org/index.php/archives/2012/02/global-temperatures-volcanic-eruptions-and-trees-that-didnt-bark/

      Based on the paleoclimate constraints of the northern hemisphere, after considering the volcanic impact not only on the Little Ice Age but through the 20th century, a considerable amount of warming would appear to be expected – unless ice age is the norm.

      The majority of the post-1950 warming is in the northern high latitudes, which have considerable volcanic impact from northern high-latitude volcanoes. In fact, if one wanted to, one could make an excellent case that the “unknown” aerosol factor is VEI 4 and 5 eruptions primarily in the northern high latitudes, with the equatorial impact mainly due to large eruptions. Might have something to do with albedo sensitivity differences and land use changes.

      So the IPCC CO2 attribution of likely, as in 50% or greater, is looking shakier all the time.

      Of course if you want to switch to the southern hemisphere just for grins, you could compare this paleo recon, http://www.ncdc.noaa.gov/paleo/pubs/neukom2010/neukom2010.html to the Southern high latitude temperatures and find that paleo data is nearly as noisy as Antarctic temperature data.

      Kinda funny how in the southern hemisphere the dip circa 1860 is bigger than the 1816 dip. 1902 has a pretty good dip too. Of course, it is only tree rings.

    • Markus Fitzhenry.

      “When one includes internally consistent physics, no one has successfully explained the faint sun without invoking substantial help from greenhouse gases, and the high cloud feedback rests on rather crazy assumptions about the amount of high cloud cover (essentially 100%) and requires much thicker and colder clouds that are not considered plausible.”

      I’ll give you a hand Chris.

      An active sun alters the vertical temperature profile of the atmosphere especially at the poles so that the polar air masses shrink horizontally whilst the polar vortex intensifies vertically and the jets become more zonal. That results in less global cloudiness and more solar energy into the oceans. El Nino becomes stronger relative to La Nina and the troposphere warms.

      A less active sun does the opposite. Fits the observations perfectly.

    • Eric (Skeptic)

      Reading K&H08 tells me that paleoclimate evidence is not independent from modern evidence because it uses the same models to separate CO2 feedback from others (mainly dust and albedo). The other problem is that the base climate state is different and our weather pattern changes will be different. It means we could have higher or lower sensitivity than that calculated from the paleo data, but probably not the same.

    • Chris Colose writes more unsupportable conclusions.

      He writes: Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues

      Chris- What is the basis for your claim? Seems like an unsupportable hope on your part.

      Chris writes- “His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations, as well as paleoclimate constraints (see Knutti and Hegerl for a start).”

      Chris- your statement is untruthful. Observations do not support your opinion of high sensitivity, and you know, when you are being honest, that the paleoclimate record is only marginally reliable. Referencing someone’s paper is meaningless when it makes claims on the paleoclimate record that overstate the reliability of that record.

    • Chris,
      Thank you for demonstrating the definition of Sophomoric.

      http://www.merriam-webster.com/dictionary/sophomoric

      Hmmmm…….wannabe grad student vs. professor? Rude arrogant young blowhard vs. experience and wisdom? A tough call. Not.

    • Dr. Colossal, your last paragraph @ 6:05 PM is simply a mischaracterization of what Richard said, and then you descend from error into abuse.
      ==========

    • Chris, with respect to the ‘faint sun’ problem, you do know that the switch from a reducing to an oxidizing atmosphere began with the evolution of water-splitting rhodobacter, about two billion years ago.
      Do you know what the albedo of the planet was when the oceans were full of transition metal salts and the land was covered in metal sulphides?

      You think CO2 was the major cause of the Earth having liquid water, despite the Earth having a completely different biota, surface absorbance characteristics, ocean optical properties and very different types of clouds.

      You then wonder why the mainstream CAGW promoters are held in such contempt.

    • His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations …

      “Still it moves” and the temperatures refuse to follow the exaggerated sensitivity that others know “better”…

    • I completely disagree that Lindzen’s speech will have any impact outside brief blogospheric discussion. Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues;

      Open with the ad hom. Typical of you lot. The rest is likely ad pop gibberish, but I won’t know.

    • John Carpenter

      Chris, take a stroll over to the ‘Gleick’s Testimony’ thread to see how Andy Lacis handles a discussion with those who have an opposing view. Let him be a mentor to you. Here’s the start of his post….

      http://judithcurry.com/2012/02/26/gleicks-testimony-on-threats-to-the-integrity-of-science/#comment-177569

      Read all the replies made and the way he handles them. Take some notes too.

      • John Carpenter –

        To my dismay, I’ve been following this for most of the day.

        Chris is stuck in his mindset and has no capacity to hear anything that didn’t come from Hansen, Gore, Mann or CRU or any of their followers. By his definition, anyone that disagrees is wrong – end of discussion. Any fact that does not fit his understanding is misguided and erroneously derived. Anybody here who engages with him is talking to a brick wall. He is incapable of give and take and attempting to come to a mutual understanding. Those who disagree with him are, to him, only dumb clucks who never learned how to think properly and who must be educated by he who knows all.

        Steve Garcia

      • John Carpenter

        Steve, Chris could take a lesson from Dylan’s ‘My Back Pages’

        ‘I was so much older then, I’m younger than that now’

        He’ll understand that line in another 10 to 15 years if he is able to examine himself in a critical way, otherwise we just have another arrogant SOB looking to climb the ranks.

      • John Carpenter,
        Chris has that immunity to facts that only youthful arrogance can permit.
        By the way, here is a nice tweet exchange between Gleick and friends that puts context on his forgery:
        “Copner (Comment #92133)
        February 28th, 2012 at 11:06 am

        In case anybody missed it, a couple of threads back, I posted this retrospectively hilarious tweet sequence.

        Gleick was even warned (although not specifically as regards document forgery), that it wasn’t wise to use the phrase “anti-climate”

        Got to laugh.

        ——————————————————————
        Nate Lloyd ‏ @macbuckets

        @PeterGleick @stephenfry When you use terms like “anti-climate” you give the game away. #ScienceIsPolitics
        3:41 PM – 30 Jan 12 via TweetCaster for Android · Details

        ——————————————————————

        Peter Gleick Peter Gleick ‏ @PeterGleick

        @macbuckets @stephenfry Yes, “anti-science” might be better. Or worse. But #WSJ isn’t anti ALL science. Just climate science, apparently.
        9:21 PM – 30 Jan 12 via web · Details

        this is from Lucia’s blackboard, by the way.

    • Chris, You are getting a little bit cranky. You should not take Lindzen out of context. I’ve heard him several times and his ideas are much more qualified than you state.

      On the faint sun paradox, Lindzen points out that CO2 is an impossible hypothesis, saying that it requires 3 bars of CO2. His paper merely asked a question, viz., could you explain it with high thin cloud just in the tropics. To my knowledge there is never a claim that this was the sole or even the main mechanism.

      On the aerosols, he has some references to the literature quoting modelers. You must admit that a forcing that has an error bar equal to 200% of the median value, and a possible value close to 0, is pretty arbitrary. Those are IPCC numbers, incidentally. So, how do you think the modelers set these numbers?

      On the arctic, you know of course that most parts are not ice covered during the summer, so that the alleged rapid ice melting cannot be a big factor at least in July and August. In any case, Lindzen says “CO2 is not obviously a factor during the summer.”

      On the sensitivity, you must look at the IPCC AR4 summary forcing chart to see that total anthropogenic forcings, neglecting aerosols, are above 3 W/m2, I believe. He is trying to estimate the sensitivity to a doubling of CO2, i.e., 3.7 W/m2.
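The arithmetic behind this point can be sketched numerically. This is a hypothetical back-of-envelope illustration, not Lindzen's actual calculation: the forcing figures are rough AR4-era values chosen for the example, and ocean heat uptake is ignored (including it would raise both estimates).

```python
# Back-of-envelope implied sensitivity: if observed warming dT is attributed
# entirely to a net forcing dF, the implied warming per CO2 doubling is
# dT * F2X / dF. All numbers are rough, for illustration only.
F2X = 3.7              # W/m^2 forcing for a doubling of CO2
dT_obs = 0.8           # approximate observed 20th-century warming, deg C

f_anthro = 3.0         # total anthropogenic forcing, aerosols neglected (W/m^2)
f_with_aerosols = 1.6  # net forcing if a strong aerosol offset is assumed (W/m^2)

s_no_aerosol = dT_obs * F2X / f_anthro
s_aerosol = dT_obs * F2X / f_with_aerosols
print(f"implied sensitivity, aerosols neglected: {s_no_aerosol:.2f} C")
print(f"implied sensitivity, strong aerosol offset assumed: {s_aerosol:.2f} C")
```

Neglecting aerosols gives roughly 1 C per doubling, as in Lindzen's slide 3; assuming a large aerosol offset nearly doubles the implied value, which is exactly why the size of the aerosol forcing is so contested.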

    • When one includes internally consistent physics, no one has successfully explained the faint sun without invoking substantial help from greenhouse gases,

      The faint sun paradox is that as solar irradiance has increased, the Earth has cooled; GHGs are a constraint, not an explanation.

    • Chris,
      Seriously?? You are going to lecture Dr. Curry on what “most of the scientific community” thinks? I truly hope you get a scientific education someday, because you don’t have one yet. Every scientist knows expert opinion is worthless. Data is what matters, and Lindzen has the data on his side.

      Look at the data sometime, Chris. You will get an education.

    • “His unmoving faith in low climate sensitivity is at odds with virtually every assessment on the issue that also use more robust inferences from observations, as well as paleoclimatic constraints (see Knutti and Hegerl for a start). ”

      Then what IS the climate sensitivity, and how was it found to be that? Mind you, a small summary will be OK. I live under the impression that climate sensitivity is very hard to determine from paleohistoric sources. I’m willing to hear what is wrong with Lindzen’s theories, but I would like to hear more than “read so and so” as an argument. If it is so very clear, you should be able to summarize it for me.

      Mind you, while we are at it: Why is climate sensitivity a constant? That baffles me, as a layman. If I start to think about it climate sensitivity in an ice age could be completely different than in between ice ages, due to the fact that currently changes in ice cover result in far less albedo changes than during an ice age.

      • peeke,

        In not so brief summary (unfortunately this does not do any justice, which is why I asked people to read a few papers):

        1) Equilibrium climate sensitivity is very likely between 2-4.5 deg Celsius per 2xCO2, which unfortunately is not a narrow estimate, but values much smaller or much larger than that broad constraint have consistently failed a number of tests

        2) There are a number of ways that have been developed to look at the sensitivity issue. People have looked at the 20th century observed record, the response to volcanic eruptions, the solar cycle, the response of the net radiation budget to SST changes, etc. People have also looked at paleoclimate records from a number of different time periods, including the last millennium, the Last glacial Maximum, the Eocene, etc. Some of these things are useful at cutting off the low end estimate of sensitivity but not the high end. The response to volcanic eruptions for example rule out very low sensitivity values but, on their own, cannot rule out very large values. Others give rather broad constraints and you need to combine different lines of evidence to come up with a plausible range that can simultaneously satisfy a number of events within the degree of uncertainty in observation/proxy data, etc. Unfortunately, no single method can give a unique value of sensitivity for a number of different reasons (see below)

        3) Observational evidence alone cannot constrain climate sensitivity. This is because we do not know the total radiative forcing over the industrial era, and the rate of ocean heat uptake is questionable. This gives a distribution of sensitivity values that are all consistent with the observed climate. There have been a number of methods, for example, multi-model ensembles that sample the parameter and structural uncertainty across models and use observations or paleoclimate as a constraint to accept or reject sensitivity values which are possible. This is where further research needs to be developed, as it combines a lot of information at the model-obs-paleo interface and samples a large range of uncertainty and possible asymmetry between LGM sensitivity and 2xCO2 sensitivity for example.

        4) Lindzen has proposed a variety of negative feedback ideas – in the early 90s he thought the water vapor feedback could be negative, which he no longer defends. In 2001, he published his IRIS hypothesis. It was plausible, was well received by the academic community, and was investigated by cloud physicists, etc. I consider Lindzen more a theoretician than an observational specialist, so the people more familiar with the obs. products looked into it further, and different observational datasets were produced since that time to examine it (e.g., CERES). A number of problems with the IRIS hypothesis, including incorrect radiative properties, have been pointed out, and others have examined Lindzen’s observations of varied high cloud amount with SST in more detail and concluded that the variation responds more to changes in subtropical clouds than to changes in tropical convection, which reflects a meteorological forcing rather than an SST forcing (so that even if SST were fixed, Lindzen would still observe the anti-correlation upon which his theory is built).

        5) More recently, Lindzen and Spencer (among others) have looked at variations in the TOA energy balance and its relationship to SST changes, which in theory reflects the efficiency of the Planck restoring feedback. See my theoretical treatment of the water vapor feedback and runaway greenhouse for a gist of the principle

        http://skepticalscience.com/radiation.html

        However, deviations between trends in global mean SST and TOA radiation on decadal timescales are very large, and a number of people have shown that this reflects ENSO variability, as opposed to a forced trend over the timescale of a decade or shorter. This also requires using short and discontinuous satellite data, and the analysis needs to be of global scale. Simple models with no realistic ocean, no El Niño, and no hydrological cycle (as in Spencer and Braswell) make this approach even more unsuitable.

        6) I agree paleoclimate data are subject to limitations but several intervals in the past (like the LGM) have large signal to noise ratio because of the magnitude of change and forcing, and even within the error bars, a very low sensitivity cannot be considered an artifact of proxy interpretation. There are three fundamental ways climate sensitivity is derived from paleo-data: pure observations, observations with multi-model ensembles, or a perturbed-physics ensemble method using a single climate model. Only the first one must inherently assume the same sensitivity from one climate state to the other. I also agree that it is unlikely climate sensitivity is a constant, although for the LGM, the surface albedo feedback doesn’t necessarily need to be different because the ice sheets are treated as a forcing. Köhler et al. 2010 is a good reference on this.
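The point in (2) above – that individually broad lines of evidence must be combined to get a useful range – can be illustrated with a toy calculation. Everything here is invented for the demonstration: the likelihood shapes, modes, and spreads are stand-ins, not values from any actual study.

```python
import numpy as np

# Toy illustration: several individually broad likelihood constraints on
# climate sensitivity S (deg C per CO2 doubling), multiplied together,
# give a narrower combined range. All numbers are made up.
S = np.linspace(0.5, 10.0, 1000)
dS = S[1] - S[0]

def constraint(mode, spread):
    # A skewed (log-space Gaussian) likelihood: cuts off low sensitivity
    # sharply but allows a long high-sensitivity tail.
    return np.exp(-0.5 * ((np.log(S) - np.log(mode)) / spread) ** 2)

# Stand-ins for e.g. "volcanic response", "20th century", "LGM" evidence.
evidence = [constraint(3.0, 0.6), constraint(2.5, 0.5), constraint(3.5, 0.4)]

combined = np.prod(evidence, axis=0)
combined /= combined.sum() * dS        # normalize to a probability density

cdf = np.cumsum(combined) * dS
lo = S[np.searchsorted(cdf, 0.05)]
hi = S[np.searchsorted(cdf, 0.95)]
print(f"combined 90% range: {lo:.1f} to {hi:.1f} C per doubling")
```

Each individual constraint here spans many degrees; the product is considerably tighter, which is the sense in which no single method gives a unique value but the ensemble of methods narrows the range.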

      • and the rate of ocean heat uptake is questionable.
        Is it questionable because the Argo floats show only a very small amount of ‘heat uptake’, or is it questionable because the Argo floats seem to disagree markedly with ‘previous assumptions’ about ‘ocean heat uptake’?

        I’m always interested why a scientist would ‘question the data’ when it doesn’t fit a theory.
        Even the renowned Dr Hansen has concluded the ‘missing ocean heat’ doesn’t exist and decided that the impact of aerosols is much greater than previously thought.

        http://www.columbia.edu/~jeh1/mailings/2011/20110415_EnergyImbalancePaper.pdf

      • There are very good reasons to question the data (either ocean heat content measurements from older buoys, initial ARGO measurements, or satellite-derived products) and a number of people are working on that issue in great detail. It’s also appropriate to examine the models, and many of the AR4-generation ones tended to mix heat into the deep ocean too efficiently (I don’t know if this has changed for the CMIP5 generation models for the AR5). This has no effect on equilibrium climate sensitivity, but instead determines the expected observed warming at any point in time during the perturbed (and changing) state.

        But the ‘missing heat’ is, in fact, a difference between two observational datasets and has nothing to do with theoretical considerations (i.e., the apparent inconsistency between satellite and in situ ocean measurements). The difference is not considered statistically significant however (see Loeb et al., 2012, Nat. Geo). Other forcings will impact this too.

      • MattStat/MatthewRMarler

        Chris Colose: 1) Equilibrium climate sensitivity is very likely between 2-4.5 deg Celsius per 2xCO2, which unfortunately is not a narrow estimate, but values much smaller or much larger than that broad constraint have consistently failed a number of tests

        Do you mean “steady state climate sensitivity” instead of “equilibrium sensitivity”? This terminological mistake occurs a lot in these discussions, and though I think it’s usually benign, it isn’t always clear whether the writer really means “equilibrium” or “steady state”. As long as the sun is providing energy and there is a net flow of radiation in (short wave) and out (long wave), then the appropriate concept is “steady state” (though even that is only approximate.)

      • Full thermodynamic equilibrium is not the only type of equilibrium. Various partial equilibria also represent perfectly legitimate uses of the word, when the meaning is stated or clear from context, as it is here.

        It is certainly true that people often err by picking facts related to thermodynamic equilibrium and applying them to the stationary Earth system, but that’s not a problem for the definition of equilibrium climate sensitivity.

      • @Chris colose

        “I agree paleoclimate data are subject to limitations but several intervals in the past (like the LGM) have large signal to noise ratio because of the magnitude of change and forcing, and even within the error bars, a very low sensitivity cannot be considered an artifact of proxy interpretation.”

        Why not? I mean, really it can’t, or do you consider it unlikely?

        I remember Lindzen trying to prove a low sensitivity and making an error. The result some people mentioned when correcting that error was 1K/doubling. That is suspiciously close to no feedback at all.

      • harrywr2 said: “I’m always interested why a scientist would ‘question the data’ when it doesn’t fit a theory.”

        At the end of a seminar, I once heard a theorist snark: “Once again, the data is rejected by the theory.”

        What he meant is that theorists have no end of questions to put to experimenters and empiricists, and the theorists can usually think of some objection to the experimental protocol or the methods of measurement and/or analysis of naturally occurring data. That can turn into regressive, defensive science if it happens too much. On the other hand, if you are a data worker, you have to live with it to a great extent. But a progressive scientific program isn’t one that has to be throwing up objections to hypothesis failures more often than it is celebrating victories for novel predictions.

        This is just an instance of the Duhem-Quine problem, but a really important one.

    • Chris Colose, you wrote:

      “The point about models is equally bad, since very few people who question the models on these blogs have even read about models or know what they are comparing.” Would you accept the following criticism, made in the last few years, of the lack of good evidence as to global circulation models having reasonable predictive capabilities and embodying realistic climate sensitivities?

      “Much of the work has focused on evaluating the models’ ability to simulate the annual mean state, the seasonal cycle, and the inter-annual variability of the climate system, since good data is available for evaluating these aspects of the climate system. However good simulations of these aspects do not guarantee a good prediction. For example, Stainforth et al. (2005) have shown that many different combinations of uncertain model sub-grid scale parameters can lead to good simulations of global mean surface temperature, but do not lead to a robust result for the model’s climate sensitivity.
      A different test of a climate model’s capabilities that comes closer to actually testing its predictive capability on the century time scale is to compare its simulation of changes in the 20th century with observed changes. A particularly common test has been to compare observed changes in global mean surface temperature with model simulations using estimates of the changes in the 20th century forcings. The comparison often looks good, and this has led to statements such as: ”…the global temperature trend over the past century …. can be modelled with high skill when both human and natural factors that influence climate are included” (Randall et al., 2007). However the great uncertainties that affect the simulated trend (e.g., climate sensitivity, rate of heat uptake by the deep-ocean, and aerosol forcing strength) make this a highly dubious statement. For example, a model with a relatively high climate sensitivity can simulate the 20th century climate changes reasonably well if it also has a strong aerosol cooling and/or too much ocean heat uptake. Depending on the forcing scenario in the future, such models would generally give very different projections from one that had all those factors correct.”

      As you are no doubt aware, the “Randall et al., 2007” statement that the passage quoted above calls “highly dubious” comes from the complete Chapter 8, “Climate Models and Their Evaluation”, of IPCC AR4 WG1.
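The compensating-errors argument in the quoted passage can be made concrete with a toy zero-dimensional energy balance model. All numbers and forcing shapes below are invented for illustration; the point is only that a high-sensitivity model with strong aerosol cooling and a low-sensitivity model with weak aerosol cooling can produce similar simulated 20th-century warming, so a good 20th-century fit does not pin down sensitivity.

```python
# Toy energy balance model: C dT/dt = F_ghg(t) + F_aer(t) - lam * T,
# integrated with forward Euler. Forcing shapes and amplitudes are invented.
F2X = 3.7    # W/m^2 forcing per CO2 doubling
C = 10.0     # mixed-layer heat capacity, W yr m^-2 K^-1 (rough)

def simulate(sensitivity, aerosol_amp, years=150, dt=1.0):
    """Return warming (deg C) after `years` for a given equilibrium
    sensitivity (deg C per 2xCO2) and aerosol cooling amplitude (W/m^2)."""
    lam = F2X / sensitivity              # feedback parameter, W m^-2 K^-1
    T = 0.0
    for step in range(int(years / dt)):
        t = step * dt
        f_ghg = 2.5 * (t / years) ** 2           # accelerating GHG forcing ramp
        f_aer = -aerosol_amp * (t / years) ** 2  # aerosol cooling, same shape
        T += dt * (f_ghg + f_aer - lam * T) / C
    return T

warm_high = simulate(sensitivity=4.5, aerosol_amp=1.5)  # high S, strong aerosols
warm_low = simulate(sensitivity=1.5, aerosol_amp=0.0)   # low S, no aerosol offset
print(f"high-sensitivity run: {warm_high:.2f} C, low-sensitivity run: {warm_low:.2f} C")
```

Both configurations end up within a few hundredths of a degree of each other over the simulated century and a half, even though their sensitivities differ by a factor of three – which is the Kiehl 2007 / Stainforth et al. point in miniature.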

    • Chris,

      Regarding your point about Richard Lindzen losing credibility among climate scientists – how would you respond to the issue of climate scientists losing credibility with the public?

      The claims of effects from climate change are driving that loss of credibility, along with cries of persecution (funded by the evil fossil fuel industry) by some climate scientists. Whatever else you think of Dr Lindzen, he is on target with regard to this part of the debate. I could be Joe down the street and still see the failed science in studies like the recent one on Andean birds, where the conclusion of the researchers was that many of these populations may be at risk due to climate. The basis for this conclusion? Declining populations? Nope. Rather, that the range of their habitat had not shifted to the degree predicted by models. The researchers were surprised by this, even though they could document changes in temperature and other factors. So, because the birds were obviously too stupid to notice the threat of global warming, they were doomed because they weren’t moving fast enough.

      Guess I’m lucky I stopped at a Master’s and didn’t stick with becoming a “climate scientist”. Because in this instance I would have questioned a) the model and b) my hypothesis and assumptions, before I hit upon the conclusion that the birds are not adapting fast enough and are therefore at risk. This is exactly the sort of “science” that global warming / climate change has spawned. It has people like Dr Andy Lacis, who is far smarter than I am, making statements about how we “know” that as more CO2 and water vapor get taken up by the atmosphere, the system has increasing energy and therefore leads to more extreme climate events. Feel like directing me to the research which has identified the mechanisms by which this occurs? Or how about studies on the frequency and intensity of storms? I haven’t found any for the former, and most of what I’ve found on the latter pretty much says the opposite.

    • Chris,

      This is more emotive than scientific, and you say a number of things that are simply not true.

      1)

      You claim falsely that there have been “many subsequent criticisms of [Lindzen’s] lone paper [on the Faint Young Sun Paradox (FYSP)]”, and you cite Goldblatt and Zahnle 2011 (GZ11) as if it were one example out of many. In fact, GZ11 is the only paper that has disputed Rondanelli and Lindzen 2010 (RL10), and their criticisms have been answered. And surprisingly, you make no mention of the fact that Rosing et al. 2010 (No climate paradox under the faint early Sun, Nature, 464, 744–747, 2010) have also argued along similar lines as Rondanelli and Lindzen.

      Here are all the articles that cite RL10 on the FYSP.

      – Abe, Y., A. Abe-Ouchi, N.H. Sleep, and K.J. Zahnle, 2011: Habitable Zone Limits for Dry Planets, Astrobiology, 11(5): 443-460, doi:10.1089/ast.2010.0545.

      – Goldblatt, C. and K.J. Zahnle, 2011: Clouds and the Faint Young Sun Paradox, Clim. Past, 7, 203–220, doi:10.5194/cp-7-203-2011.

      – Rondanelli, R. and R.S. Lindzen: 2011, Comment on “Clouds and the Faint Young Sun Paradox” by Goldblatt and Zahnle (2011), Clim. Past Discuss., 7, 3577–3582.

      – Hasenkopf CA, Freedman MA, Beaver MR, Toon OB, Tolbert MA, 2011: Potential Climatic Impact of Organic Haze on Early Earth, Astrobiology, 11(2):135-49.

      – Hessler, A. M., 2011: Earth’s Earliest Climate. Nature Education Knowledge 2(12):6

      – Fairén, A., J. Haqq-Misra and C.P. McKay, 2012: Reduced albedo on early Mars does not solve the climate paradox under a faint young Sun, Astronomy & Astrophysics, doi:10.1051/0004-6361/201118527.

      Then there is the interactive discussion at Clim. Past. Discuss

      http://www.clim-past-discuss.net/7/3577/2011/cpd-7-3577-2011-discussion.html

      RC C1795: ‘Review of: Comment by Rondanelli & Lindzen on “Clouds and the Faint Young Sun Paradox” by Goldblatt & Zahnle (2011).’, Itay Halevy, 10 Nov 2011

      RC C1837: ‘Review of Rondanelli and Lindzen comment’, Jim Kasting, 11 Nov 2011

      SC C2120: ‘Reply to Comment on “Clouds and the Faint Young Sun Paradox” by Goldblatt and Zahnle (2011)’, Colin Goldblatt, 22 Dec 2011

      EC C2123: ‘Editor’s comment’, André Paul, 23 Dec 2011

      AC C2435: ‘Interactive comment on “Comment on “Clouds and the faint young sun paradox” by Goldblatt and Zahnle” by R. Rondanelli and R. S. Lindzen’, Roberto Rondanelli, 18 Jan 2012.

      I have read all these papers and the only authors who criticise RL10 are Goldblatt and Zahnle.

      2) You claim, echoing GZ, “the high cloud feedback rests on rather crazy assumptions about the amount of high cloud cover (essentially 100%)”. However, Rondanelli and Lindzen, in their response, point out that GZ have simply misunderstood the claim.

      3) You claim that “Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues”. I wonder if you would share how you know this? Are you claiming to have personally spoken to ‘most’ of the scientific community? Or are you repeating rumour? Or are you just making it up, as with point (1) above? It is easy to look at Lindzen’s most recent publications (published since 2010, say) and confirm that most of his results have been accepted by the community, including a number of papers on aspects of atmospheric aerosols and on problems simulating the atmospheric tides in GCMs.

      4) Your comments on the Arctic make it sound as though this is all settled science when in fact there is a controversy in the literature right now, and Lindzen is not the only participant.

      5) You claim, “His line that ‘…is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments’ is just too stupid to even acknowledge.”

      This is where you really ought to be careful. There is Kiehl 2007, Knutti 2008, Schwartz et al. 2010, Huybers 2010 and all of this has been cited and acknowledged in the AR5 ZOD, at least. Lindzen’s point essentially stands. There is also a paper by some of Lindzen’s MIT colleagues on the same matter. It may be less that Lindzen is “stupid” and more that you need to do a bit more reading.

      • Further to Alex Harvey’s point 5) [contra Colose] on model tuning, it would be well worth the reader’s time to return to an older thread on this blog, “CO2 no-feedback sensitivity: Part II” and review Richard S. Courtney’s absolute evisceration of Fred Moolten on this same issue. The relevant portion begins about 1/2 or a bit more down the thread with Courtney’s comment @ 12/15/2010 – 5:44 p.m.

      • Thanks Alex for sound science instead of Chris’ rhetoric. Evidence wins.

        Chris
        Lindzen threw down the gauntlet in Slide 16

        Just for fun: You’ve been told that earlier warming was natural but recent warming is due to man. Can you tell which is which?
        Global Average Temperature in Two Half Century Periods:
        Which is 1895-1946 (Nature); Which is 1957-2008 (Us?)

        Dare you take up his challenge?

        If you do have the courage to take up his challenge, perhaps you can enlighten us as to the difference between those temperatures.
        Then we welcome your erudite pontification on how the massive increase in CO2 during the second half of the century contributed to the difference between the two records, but not to the major increase seen in both records.

        Shall we await with bated breath?
        Or return to real science?

        While you are contemplating the massive warming that will be caused by the poor using coal to warm themselves and cook their food, perhaps you could consider the probability that there will even be an increase in light crude oil production in the foreseeable future. See
        “The World Oil Supply: Looming Crisis or New Abundance?” The video of the February 17, 2012 University of Wisconsin debate is now online. Ex-Shell CEO & Peak Oil Researcher Face Off over America’s Energy Future. Posted at “Citizens for Affordable Energy”.

        “Gasoline will hit $5 per gallon this year predicts John Hofmeister, former president of Shell Oil Company,”
        Perhaps you could explain the underlying economics as to why an abundance of oil will cause the price of gasoline to hit record levels.

        Furthermore, Jeff Brown (aka westexas) and Sam Foucher document how global Available Net (oil) Exports, after China and India, are already down 13% since 2005. Extrapolating current trends suggests NO available net oil exports in 19 years.

        How do you support catastrophic anthropogenic global warming (CAGW) by cutting US oil consumption in about half within 20 years at current trends?

    • Chris,

      Lindzen said:

      …is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments

      as opposed to: “shown to be consistent with observed warming by including known and quantified additional negative forcings from aerosols and solar variability”

      The fact that you put such a radically different spin on it says a lot more about your bias than anything else.

      Tip: If someone, particularly someone highly educated and experienced, appears to say something incredibly stupid, first check that you’ve heard them correctly before spouting off. Failure to do so could result in acute embarrassment on your part.

  16. As to Dr. Lindzen’s examples (slide 20), I think these are examples of the fallacy of “Affirming the Consequent”, a rhetorical device. Dr. Lindzen may be trying to point out that rhetorical fallacies have no place in scientific debate. (IMO, they are common in politics.)

  17. Markus Fitzhenry.

    AGU President’s message 27 February 2012
    We must remain committed to scientific integrity

    In doing so he compromised AGU’s credibility as a scientific society, weakened the public’s trust in scientists, and produced fresh fuel for the unproductive and seemingly endless ideological firestorm surrounding the reality of the Earth’s changing climate.”

    Birds of a feather ………

  18. Hands up, how many of you believe that the PLANET is warmer by 0,8C now than 150y ago?!

    1] if the troposphere warms up by 0,8C – would expand INSTANTLY by 100m. Cannot expand down into the soil – but expands upwards by 100m, into the stratosphere. That extra volume of oxygen + nitrogen can intercept appropriate EXTRA amount of coldness in 3,5 seconds to equalize – it takes few minutes that EXTRA coldness to fall down to the ground and equalize – then instantly O+N shrink to previous volume. Because if it stayed expanded for a whole day (24h) would have intercepted / redirected down enough extra coldness, to freeze ALL tropical rivers and lakes. I live in the tropics, trust me, I’m the most honest person on the planet; the rivers and lakes are not frozen = therefore: extra heat in the troposphere is not cumulative B] for the last 162y, not enough extra heat has accumulated to boil one chicken egg!!! Q: does it take 150y for oxygen + nitrogen to expand after warmed extra – or expansion is INSTANT?!?!?!

    2] the amount of data available from 1850 is less than 0,00000000000001% ESSENTIAL, to know the correct temperature. Comparing one unknown with another unknown is the ”mother of all lies”

    TRUTH: ”big city island heat” now exist, between 0,5C – 3C, depends on the size of individual big city has grown. That has made the air in those cities to expand > increased the troposphere upwards by 4-5m. That extra volume wastes the extra heat / intercepts extra coldness and is redirecting it down to the surface. Because the surface outside those ”big cities” is much larger. (including the surface of the oceans) – the extra coldness redirected made to be COLDER by 0,00000001C, outside the big cities. Overall temperature in the troposphere is exactly the same today as it was 1850. Unless the laws of physics are abolished by the governments and UN, Global warming is 101% lie!!! My formula is correct: EX>AE>ECI (Extra Heat > Atmosphere Expands > Extra Coldness Intercepts)

    Lowering the GLOBAL warming from few degrees to 0,8C is same as massaging the truth with the middle finger, instead off with the whole left hand. It’s the ”kicking and screaming on the way to the confession box”’ Not only Lindzen, but every Climate Activist will be asked the question: -”why were you avoiding / ignoring Stefan’s formulas”? Most prominent will be asked in a court of law / under oath. It’s prudent to prepare answers for that question now. Because the other 101 questions will follow.

    • Duh. Additional volume is not additional mass. UR babbling and confuzed.

      • Brian H | February 27, 2012 at 8:10 pm | Reply
        Duh. Additional volume is not additional mass. UR babbling and confuzed.

        Brian, no need for confusion. When you get warmer – instantly spread and stick your arm in a bucket of ice – you are same volume / SAME MASS – but the extra heat released by your arm and swapped for extra coldness will equalize the temperature in your body. You would be imitating what the troposphere does. Stick to the laws of physics – you can’t go wrong. Where the troposphere expands upwards, when gets warmer for any reason; is much colder than ice in your bucket. Cheers

      • “coldness”? physics? Man, yuze confuze. There is no such thing as “coldness”. Only heat, in varying degrees from 0 therms on up. It spreads by various means. Eventually it will be spread evenly everywhere, at which point nothing more will happen. Ever.
        ;p

      • Brian H….. You are proving my point again; that you Swindlers have NOTHING solid – only looking for salvation in confusion. Engineers that produce refrigerators and stoves; they don’t need to say: ”your freezer is 230k warm; or ”your oven should be turned to 310k (Kelvin) to make a roast. Only Desperadoes in shonky climatology cannot read / understand the gauge in their fridges and ovens.

        I had already to defend myself from a similar bull-artist like you, by pointing to him that is no darkness, only lack of photons; but normal people call it ”dark” at night… Brian, when you come up with drivel; it’s a real proof; that you are scared from the truth – you are suffering from ”truth phobia” 2] when you pick on my misspelling; it’s your own admission that: all my proofs are correct. Thanks for your approval, Brian

    • Vaughan Pratt

      @stefanthedenier trust me, I’m the most honest person on the planet;

      Then you should have called yourself Diogenes instead of “the denier.” Logic requires both the honest and the dishonest to deny that they are dishonest. Can you imagine either an honest or dishonest person saying “I am a liar?” That’s the celebrated Liar Paradox!

      • Vaughan Pratt | March 1, 2012 at 2:35 am | ”no honest or dishonest person say that he is a liar”

        Wrong, Vaughan, WRONG!!! When a person states that he knows exactly ”the GLOBAL temperature” —that is admission that he is a liar. Because nobody has ever monitored the GLOBAL temp; on one small hill are 600 different variations in temp and change every 10minutes. Planet’s temp is not same everywhere as in human body. When one states that the planet is warmer year by ”0,02C, that is shouting loud and clear that the psycho is a shameless con artist / dishonest / liar. Their / yours ”admissions IN WRITING that they / you are a liar” are numerous. I’m writing a book about swindlers like you.

        P.s everything I prove, can be replicated / proven, right now. Small example: The hottest point is always closest to the ground – when gets warmer, for any reason – VERTICAL WINDS INCREASE!!! You talk about thermodynamics / convection – but don’t include Stefan’s / my formulas; because they prove beyond any reasonable doubt; that you people are lying INTENTIONALLY. I can prove most of the Swindler’s lies are lies; the only wrong proven about my proofs is that I misspell and have limited English vocabulary. Picking on my misspelling, is Swindler’s admission that all the rest I have is correct

      • Vaughan Pratt

        because they prove beyond any reasonable doubt; that you people are lying INTENTIONALLY

        Stefan, don’t get angry, get rich. If your method of establishing a person’s intent really does work as you claim, you’d better patent it. The justice system would be your first serious customer. Establishing intent in a court of law has long been an outstanding open problem, usually left to a jury to argue over.

        You’ll go far in this world with an invention like that. Got any others up your sleeve?

      • @ Vaughan Pratt | March 5, 2012 at 12:07 pm | Stefan, don’t get angry, get rich.

        Vaughn, see, you can tell truth; your tongue didn’t brake of. I seldom argue against truth, no matter how bad that truth is. But you are my friend, I will make an exception. Beware from people that have being fleeced, when they find out the truth – they will make you to look funny without testicles. Just be careful and… don’t book accommodation on Mt. Ararat; but in central American jungle; where they can’t find you. Cheers

    • Rob Starkey

      Wouldn’t existing satellite measurements provide either additional support for or refute your idea?

      • Rob Starkey | March 1, 2012 at 7:24 pm |

        Rob, satellites takes “occasionally” ”TWO DIMENSIONAL” infrared photos. Satellite will not tell the difference between: ”if is 1,2m layer of 20C temp and below of 500m layer of 16C – OR if is 500m layer of 20C and 10mm of 16C. Using satellites for monitoring global warmth is the mother of all lies. For a start, regarding climate, NASA is only above IPCC, in the phony GLOBAL warming sewer.

        I presume that is no need to explain to you that: 500m layer of air can contain more heat than 15cm. But for NASA that is a taboo… . Because: if is everything good ahead – NASA’s budget goes down by half —- if big catastrophes are ahead – NASA’s budget quadruples instead… they are not stupid…

  19. Anteros | February 27, 2012 at 6:38 pm |
    vukcevic –

    ” Post WW2 has some basis in reason, as it marked quite a significant change in emissions. 2nd half of the 20th century for similar reasons – arbitrary but not cherry-picking (from any point of view)”

    Hmmm … It seems to me that the emissions from the international military-industrial complex must have been significant during WWII.

    I have no insight as regards the amount of GHGs produced by high explosive material and related collateral damage, but it must be significant compared to the post-WWII era.

    • I was thinking the very same thing while I was typing “Post WW2” but not having any info either, I let it pass :).

      However, I suppose in the back of my mind was the graph of tonyb’s that gives the impression that WW2 had less of an emissions impact than we guessed – http://wattsupwiththat.files.wordpress.com/2011/11/tbrown_figure3.png

      I think one reason sceptics lean to 1940ish and warmists choose 1950 is because of the peak of temperature around 1940. Not very edifying in either case but that’s life..

    • Bombed out factories stop producing CO2 when the fires go out. Countries at war don’t trade. Military production robs domestic industry of man power and scarce raw materials. Total economic output drops even as war production increases.

      • JJ – “Countries at war don’t trade.”

        What a silly, silly comment. You cannot possibly be serious. During war factories are more active than ever. You have NO concept or knowledge of history, to make an ill-informed statement like that. That is like saying, “Black is actually white.” Wars are won by logistics. Supplies – bullets, cannons, planes, tanks, rifles, uniforms, helmets, bombs – whoever doesn’t manufacture or buy those LOSES. Ever heard of Lend-Lease? Unrestricted submarine warfare? The Lusitania? England – and Russia, too – would have lost WWII without American supply ships. As in manufacturing. As in factories. Ever heard of strategic bombing? It was the effort to STOP the German factories. KZ Dachau had about 200 satellite camps – all factories, and all situated to avoid bombing. Factories, factories, factories. Whichever side keeps theirs going, they usually win.

        Steve Garcia

      • Steve Garcia,

        I have heard of lend lease. I have also heard of rationing. I understand that GM and BMW were making lots of trucks and tanks during the war years. I also understand that they weren’t making many cars during that same period. I also understand that rationing ended and car production dramatically increased after the war.

        I understand that WWII was in large part an industrial competition. I also understand that this means that materials production, transport, and manufacturing were therefore high value targets. I have heard of strategic bombing. I also wonder why you think it was not effective at the assigned task. I also wonder how much CO2 Dresden produced, once the embers cooled.

        Two posters wonder aloud about the consensus regarding CO2 levels ca WWII. I offered a potential explanation for that more or less accepted fact. Without so much as a single fact or figure in support, you launch into a nasty diatribe replete with name calling.

        What a vile, petty little man.

      • Not a vile and petty man. You made a completely incorrect statement,

        Countries at war don’t trade.

        which showed complete ignorance of the facts you now say you knew.

        Your ignorant statement deserved no respect. If you knew those facts you should never have stated what you did. If you don’t want people pointing out your errors, please refrain from making them.

        Vile and petty? Here is your reply to Pierre:

        For Pierre’s sake , if you can’t do this very simple calculation, you have no business commenting on GW, pro or con. Ignorance? Check! Arrogance? Check! Carry on.

        You have a pathetic double standard. You can dish it out, but you can’t take it.

        Steve Garcia

      • Brandon Shollenberger

        JJ, I think you ought to read Steve Garcia’s comment again. You claim it was filled (replete) with name calling, and yet, it doesn’t even have a single instance of such. The closest he comes to calling you names is when he says:

        What a silly, silly comment. You cannot possibly be serious. During war factories are more active than ever. You have NO concept or knowledge of history, to make an ill-informed statement like that. That is like saying, “Black is actually white.”

        You could argue he mocked you, but there is no name calling. Heck, most of that is discussing your comment, not you. The only thing he said about you was you “have NO concept or knowledge of history,” and that isn’t name calling. Heck, it isn’t even really insulting you given how wrong your comment was. Speaking of, you seem to now acknowledge that trading does happen in countries at war, meaning you acknowledge your comment was wrong.

        Anyway, I highly recommend you reread his comment rather than seemingly agree with his point while calling him a “vile, petty little man” when he hasn’t called you anything.

      • Brandon –

        I did not see your reply when I responded to JJ. Thanks. I specifically did NOT call him names. In my response, I used the word “ignorant” in the dictionary meaning of the term, as in not having knowledge about a thing.

        I did not see a retraction of his “not knowledgeable” statement, though he now claims he does know those things. I am actually glad he acknowledged the real history and that he is not as lacking as I’d thought. But like you said, he didn’t admit his error.

        I do love these kinds of blogs, where I can find reasonable people with whom things can be discussed. I recall one thread on here where hundreds of comments by “disagreers” were made and all were very respectful.

        In fact, JJ, I had no intention of insulting you, even if you were some dumb cluck. But I could not let your statement pass without comment. It was just flat out wrong. So, my sincere apologies.

        But when you make a really incorrect statement, don’t be surprised if someone calls you out on it. And you be cool, too.

        Peace to both of you.

        Steve Garcia

      • Vaughan Pratt

        What a silly, silly comment. You cannot possibly be serious.

        What a silly silly silly comment. You are obviously not serious.

  20. Here is a known fact from the geologic record. The last time CO2 was as high as it is now was 20 million years ago and sea level was 200 feet higher than today. We have no idea how fast the ice is going to melt, but we do know that sea level has had instances of very rapid rise in the last 15,000 years. We do know that the rate of sea level rise is now accelerating. Is pointing out these facts alarmist?

    • Something doesn’t add up, rossi. Is the extra 200 feet of sea level hiding in the deep ocean bottoms, along with the missing heat?

      • Markus Fitzhenry.

        Don, It’s worse than we thought.

      • We can’t account for the missing 200 feet of sea water, and it’s a travesty that we can’t.

        Maybe it’s in the deep ocean, hiding with the missing heat?

      • If you’re prepared to look closely, I think you’ll find both the 200 feet of sea level and all the missing heat in Kevin Trenberth’s underpants.

      • Markus Fitzhenry.

        No good Anteros. ‘Felicity’ had are really good look and couldn’t find any heat down there either.

        I think it has actually gone to the nether.

      • Gary M: Best line I’ve read all day!

    • Well, stating an untruth is alarmist. Have a look at the record – the slope of sea level rise has been constant for a very, very long time, but has actually *decreased* in recent years, not increased. See

      http://sealevel.colorado.edu/

      You may be looking at a less authentic source, or maybe none at all.

    • Yes, doubling CO2 is taking us back to Cretaceous levels (pre 65 million years ago). There were no ice caps in the Cretaceous, it was all in increased sea level. Furthermore, a second doubling (barely possible by using all fossil fuels) gets us to Jurassic, and a third doubling (not possible thankfully without the help of volcanoes) gets us to Triassic. These were increasingly warm periods in paleoclimate. I find this very interesting as a lesson on where we stand in the long view.
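
      [Editor’s note: For scale, the three doublings mentioned above can be put in ppm terms. A minimal sketch, assuming the commonly used ~280 ppm pre-industrial baseline; that baseline value is my assumption, not the commenter’s.]

```python
PREINDUSTRIAL_PPM = 280  # assumed pre-industrial CO2 baseline, ppm

# Concentrations after one, two, and three doublings of the baseline
levels = [PREINDUSTRIAL_PPM * 2 ** n for n in range(1, 4)]
print(levels)  # [560, 1120, 2240]
```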

      • Your POV doesn’t take into account, nor even raise awareness of, other potential climate factors that could have led to warmer conditions 65M years ago. It is not at all a given that CO2 was the culprit!

        You may want to start with plate tectonics: Two events in the past 35M years have very likely contributed to substantial planetary cooling, namely the establishment of the Drake Passage (30-35M yrs ago) and more recently the closing off of Isthmus of Panama (5M yrs ago). Both events have impacted ocean circulation patterns and each is suspected of contributing to the formation of the North and South polar ice caps, respectively.

        Until you account for these two events (and possibly others), it is pretty difficult to assign blame to Cretaceous CO2 levels for the lack of polar ice caps back then.

        You may also be aware of Jan Veizer’s benthic foraminifera studies, which indicate tropical sea surface temperatures over the Phanerozoic eon (past 500M yrs or so) have been remarkably stable and do not appear to be correlated with the geological record of prevailing CO2 concentrations at the time (e.g. GEOCARB III).

      • Paul, the global temperature cannot be affected by just ocean circulations, and the Cretaceous was warm enough to prevent ice caps which had not existed since the Permian 250 million years ago, which coincidentally was the last time CO2 had low values like now, and just prior to the probably volcanically-induced climate change leading to the Permian-Triassic extinction event and the high Triassic CO2 levels. The sun was even a little cooler in these previous periods, so higher CO2 alone won’t explain all the differences and may underestimate them if the sun isn’t accounted for.

      • If a change to the ocean circulation patterns leads to the establishment of a permanent last ice sheet (as happened with the one that formed over Antarctica), this impacts the overall planetary bond albedo, and that will have an impact on the global temperature. Indeed, one can see significant global cooling visible in the geological record just around the time of Antarctica’s initial glaciation.

        It isn’t just about ocean circulation: Ice caps tend to form in the presence of large land masses at the poles. When the continents are not near the poles, we have seen little in the way of glaciation take place.

        However, we did see a deep glaciation occur some 300M yrs ago (the “Gondwanan Ice Age”), when CO2 levels were some 15 times current levels. :-)

      • that should read: “permanent _large_ ice sheet” in the first para of my reply above. Sorry!

      • Actually, it was the Ordovician Glaciation I was thinking of (420M yrs ago) where CO2 levels were approximately 15 times today’s… mea culpa!

        Coincided with a large super continent at the South pole…

      • @ Jim D | February 27, 2012 at 11:25 pm |: Paul, the global temperature cannot be affected by just ocean circulations,

        Jim D, GLOBAL temperature doesn’t get effected by ocean circulation, global temp is always the same. Ocean circulation effects / CONTROLS the climate. When circulation increases / decreases = improves / deteriorates the climate on many places. Nothing to do with the PHONY global warming. I’m glad that I can be of some help for you; unfortunately, as a ”closed parashoot brains”’ medical help is more appropriate for you

      • The sun only needs to be a few percent fainter, as it was in previous glaciations, to permit ice at the poles even with ten times as much CO2. The Ordovician was also possibly after a period of declining CO2 implying cooling, but temperature and CO2 estimates back then are a bit fuzzy.
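
        [Editor’s note: The comparison Jim D makes here can be sketched with back-of-envelope numbers. This is only an illustration: the 5.35 coefficient is the standard simplified CO2 forcing approximation, and the solar constant and albedo are assumed round figures.]

```python
import math

S0 = 1361.0    # present-day solar constant, W/m^2 (assumed)
ALBEDO = 0.3   # planetary bond albedo (assumed)

def solar_forcing_change(fraction_fainter):
    """Drop in globally averaged absorbed sunlight for a fainter sun."""
    return S0 * fraction_fainter * (1 - ALBEDO) / 4

def co2_forcing(concentration_ratio):
    """Simplified CO2 radiative forcing: 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(concentration_ratio)

print(round(solar_forcing_change(0.04), 1))  # ~9.5 W/m^2 for a 4% fainter sun
print(round(co2_forcing(10.0), 1))           # ~12.3 W/m^2 for 10x CO2
```

        On these crude numbers the two forcings are of comparable magnitude, which is why the faintness of the ancient sun matters to the argument.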

      • @ Jim D | February 28, 2012 at 1:03 am

        Jim D, the sun doesn’t get fainter – look at the size of it. They tell you that the sun gets fainter; but that is misleading. Yesterday was created some big sun-flares; sunlight comes here in 8 minutes, no delays – you will see that is not going to be warmer. Only damages to some electronics; but because the temp is controlled by O+N expanding / shrinking INSTANTLY in change of temp – expanding when warmed / shrinking when cooled extra = overall is same temp always. Stop worrying!!! Tell this to people that brainwashed you – make them to worry.

        They are lying, because they have many megaphones like you / they know that megaphone need only battery, but no brains necessary . Be happy, let the big Swindlers worry

      • Does this mean that I may be able to take the kids to see real dinosaurs in a park?

      • Paul, you are 90% correct; Jim D is 101% WRONG!!!

        For a start, for Jim D to state that: ”the sun was little bit cooler 500m years ago”… the man doesn’t know what the word ”shame” means, it’s a symbol of power madness… Yes Paul, H2O controls the climatic changes 100%, on many different ways. Big changes happened in closing the gap between south / north Americas, opening Bering straights, opening Gibraltar straights. Is not just the opening / closing by itself. But that changes the directions of currents, which effects changes places far, far away. The shonky science in the past never used facts and common sense – they were pining the climatic changes on solar activity, which is 101% wrong… then they become CO2 + methane molesters / jihadists.

        All you need is to compare Brazil V Sahara’s climate. For the jihadists those two places have SAME climate… because of same amount of CO2, same solar / galactic influences. Most of the ”climatologist” are a big city swindlers… Paul, if you drive from east to west coast of USA, or Australia; in one week you will encounter 50 different climates. Is it that 50 GLOBAL warming happened in that week, or was it more or less HO2 present in particular area?! People that cannot understand that: climate becomes more extreme without water / milder temperature CLIMATE, with lots of water present; are ”premeditated mas murderers” They are blaming ”water vapor” for the phony GLOBAL warming. If the truth is known, by building extra dams to prevent floods and droughts; not only loss of lives would be prevented; but extra water on the land = less dry heat created, more moisture in the air is for regular rain + day / night temp closer + more raw material for renewal of ice on polar caps and glaciers and lots of other benefits.

        Their massive drivel silences the truth, but the truth always wins on the end. It’s all on my website and in my book. Sophisticated swindlers cannot change the laws of physics. There is NO such a thing as GLOBAL warming, or GLOBAL ICE AGE. When part of the planet gets warmer than normal – other part MUST get INSTANTLY colder than normal. Oxygen + nitrogen regulate the temperature overall to be the same every day of every month, year and millenia. Proven already ”beyond any reasonable doubt”

    • Sounds like strong evidence against a CO2-temperature link. CO2 up but no sea, see.

    • Cagw_skeptic99

      How many years of declining sea level are required to falsify the claim that sea level rise is accelerating? Most of the true believers have stopped making the sea level claim because it just calls attention to the recent measurements that sea level has been falling, not rising at an accelerating rate.

      • Cagw_skeptic99 –

        You should know by now that nothing in all this is *ever* falsifiable. Nor designed to be. If sea level doesn’t rise, this will be explained by tweaking the Models, blessings and peace be upon them.

      • Well, I have been informed, all the water that’s missing is apparently flooding Australia. Apparently, as Australia is at the bottom of the Earth the rainwater doesn’t flow back to the sea, but sits around doing nothing, like evaporating or anything.

    • Is pointing out these facts alarmist? Yes, Ross, as you’ve done it, it is. Read Lindzen’s remarks. Pointing out sea level rise is trivially true, but making the leap from that to 200 feet is alarmist.

    • Ross, you are wrong that the rate of sea level rise is accelerating. In the Houston paper it is decelerating, and others concur with that.

    • Ross,
      Why do you think the two are connected?

    • Vaughan Pratt

      Something doesn’t add up, rossi. Is the extra 200 feet of sea level hiding in the deep ocean bottoms, along with the missing heat?

      Obviously not, as your tone indicates.

      But has it occurred to you that it could hide on top of Ellesmere Island, Greenland, and Antarctica? Do you have a physics-based reason why this could not happen?

  21. This material is largely recycled from previous talks, so we don’t have anything new to address in it. Lindzen stays clear of the last 30 years for good reason. Had he calculated how much warming his 1 C sensitivity would have given, it would have been less than half of what was observed. He then would have had to say where he thought the rest came from, which he has no idea of, at least that he has spoken about. For 1900-2000, his expected warming would have been near 0.35 C, only half of what actually occurred, even with the negative effect of aerosols that he doesn’t believe in (somewhat in a minority there). When he says that evidence suggests lower sensitivity, he is referring to his own study of tropical west Pacific clouds during ENSO cycles with cloud-forcing changes that he infers apply globally to the CO2 effect somehow. He found this inference hard to publish: tropical Pacific clouds – fair to present but disputed by other later studies, global application to CO2 – a step too far.
    His last sentence is ironic, in that it expresses certainty in his own low-warming prediction, despite his earlier caution about listening to people who say things are incontrovertible, presumably meaning besides himself. Apart from that, global warming is incontrovertible in the temperature record and that is the only sense where I have seen the term applied in an official statement by any scientists.
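
    [Editor’s note: Jim D’s 1900–2000 arithmetic follows from the logarithmic forcing assumption. A minimal sketch; the ppm endpoints are approximate values supplied for illustration, not figures from his comment.]

```python
import math

def co2_warming(sensitivity_per_doubling, c0_ppm, c1_ppm):
    """No-feedback warming for a CO2 change, assuming logarithmic forcing."""
    return sensitivity_per_doubling * math.log(c1_ppm / c0_ppm) / math.log(2)

# Roughly 295 ppm in 1900 and 370 ppm in 2000 (approximate values)
print(round(co2_warming(1.0, 295, 370), 2))  # ~0.33 C with a 1 C sensitivity
```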

    • Jim, see my response to Colose upthread. Basically, Lindzen is looking at all forcings excluding the very uncertain aerosols. In AR4, the summary forcing diagram shows those forcings to be north of 3 W/m2, pretty close to the alleged value for a doubling of CO2, the magic sensitivity forcing. I would urge you to look in the same diagram at the error bar on the aerosol forcings. Lindzen also has some references to modeling papers where the aerosol “adjustment” is explained along Lindzen’s lines.

      • The aerosols are uncertain but centered on -1.5 W/m2. How does he justify ignoring a first-order term like this? Just acknowledging a central value throws his sensitivity out of the window. He might even have heard of global dimming which occurred during the greatest part of the aerosol growth from 1950-1980. It just seems irrational to say no effect without reasoning it out.
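
        [Editor’s note: The point about the aerosol central value can be made numerically. A hedged sketch: the forcing and warming figures below are illustrative round numbers, and the calculation ignores ocean heat uptake, so it understates true equilibrium sensitivity.]

```python
F_2X = 3.7    # canonical forcing per CO2 doubling, W/m^2
DT_OBS = 0.8  # approximate 20th-century warming, C

def implied_sensitivity(net_forcing):
    """Sensitivity per doubling implied by observed warming and net forcing."""
    return DT_OBS / net_forcing * F_2X

ghg_only = 3.0                 # GHG forcing alone, W/m^2 (illustrative)
with_aerosol = ghg_only - 1.5  # subtract the central aerosol estimate

print(round(implied_sensitivity(ghg_only), 1))      # ~1.0 C per doubling
print(round(implied_sensitivity(with_aerosol), 1))  # ~2.0 C per doubling
```

        Including the aerosol central value roughly doubles the implied sensitivity, which is the sense in which ignoring it throws the low-sensitivity inference out of the window.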

      • The global dimming / brightening record produces a number of problems for the models in the AR4, such as Romanou 2007; e.g. Ohmura 2009:

        Global solar irradiance showed a significant fluctuation during the last 90 years. It increased from 1920 to the 1940–1950s, thereafter it decreased toward the late 1980s. In the early 1990s 75% of the globe indicated an increasing trend of solar irradiance, while the remaining area continued to lose solar radiation. The regions with continued dimming are located in areas with high aerosol content. The magnitudes of the variation are estimated at +12, −8 and +8 W m−2 for the first brightening, the dimming, and the recent brightening periods, respectively.

        Observations from surface actinometric stations in the South Pacific have a number of confounding attributes; e.g. Wild:

        evidence for a decrease of SD from the 1950s to 1990 and a recovery thereafter was also found in the Southern Hemisphere at the majority of 207 sites in New Zealand and on South Pacific Islands [Liley, 2009].

        Liley [2009] pointed out that the dimming and brightening observed in New Zealand is unlikely related to the direct aerosol effect, since aerosol optical depth measurements showed too little aerosol to explain the changes. On the basis of sunshine duration measurements he argued that increasing and decreasing cloudiness could have caused dimming and brightening at the New Zealand sites.

        Hatzianastassiou 2011 show that observations in the 21st century also constrain the so-called understanding, e.g.:

        An overall global dimming (based on coastal, land and ocean pixels) is found to have taken place on the Earth under all-sky conditions, from 2001 to 2006, arising from a stronger solar dimming in the SH (delta SSR = -3.84 W m-2 or -0.64 W m-2/yr) and a slight dimming in NH (delta SSR = -0.65 W m-2 or -0.11 W m-2/yr), thus exhibiting a strong inter-hemispherical difference. Dimming is observed over land and ocean in the SH, and over oceans in the NH, whereas a slight brightening occurred over NH land, with the SSR tendencies being larger in the SH than in the NH land and ocean areas.

        The Southern Hemisphere has undergone significant dimming due to a larger increase in cloud cover than in the NH, which has dominated the slight dimming from increased aerosols. The indicated SSR dimming of the Southern Hemisphere at the beginning of this century demonstrates that much remains to be learned about the responsible physical processes and climatic role of cloud and aerosol feedbacks.

      • The largest aerosol effect is that on cloud albedo, especially over oceans, so perhaps it makes sense that it is seen in the SH. Anyway, why doesn’t Lindzen talk about any of this?

      • The aerosols over ocean in the SH are mostly natural, i.e. biological; over land, where we have good optical resolution (e.g. lidar), the counts are mostly negligible, i.e. an insignificant anthropogenic contribution; e.g. Liley 2009.

      • Jim D; You are not just WRONG, but back to front on everything also.

        1] ”Aerosols” are used exclusively for confusing Smarties like you. Aerosols have no influence on temperature. Aerosols, helium, ozone, are into the stratosphere – they don’t circulate up and down to exchange more / less heat; that is the job for oxygen + nitrogen. Aerosols cannot warm the stratosphere / stratospheric temperature is always the same. Because the diameter of the earth’s orbit is 30 light minutes big. The velocity the earth travels trough that coldness is incomprehensible for the shonky science only. Whatever you have learned from the book for brainwashing, is preventing you to see anything regarding climate in proper prospective.

        If you clear the mud from your head, you will be able to see clearly that the ”drivel with confidence” people like you, Brandon Shollenberger and others are saying wouldn’t make sense to an earthworm. Nothing personal, just friendly advice; try to think for yourself, instead of using the crap dished out by the propaganda establishment, or the book for brainwashing (created by amateur climatologists / geologists in the past 100y). Jim, CO2 absorbs more heat than O+N, but CO2 absorbs much more coldness than O+N at night. THOSE 2 FACTORS CANCEL EACH OTHER!!!! Only a Flat Earther believes there is 24h of sunlight on the whole planet, think about it… Some day you will have to justify the lies that you are spreading, even though others invented them.

      • stephanthedenier,

        +1

      • Vaughan Pratt

        Aerosols, helium, ozone, are into the stratosphere – they don’t circulate up and down to exchange more / less heat; that is the job for oxygen + nitrogen.

        Hard to imagine anyone more clueless about atmospheric pollution.

        Los Angeles had its brown cloud in the 1960s and into the 70s. Now India and China have their brown clouds. These are nowhere near the stratosphere. Check your facts first.

      • @ Vaughan Pratt | March 1, 2012 at 2:45 am |
        Aerosols, helium, ozone, are into the stratosphere – they don’t circulate up and down to exchange more / less heat; that is the job for oxygen + nitrogen.
        Hard to imagine anyone more clueless about atmospheric pollution.
        Los Angeles had its brown cloud in the 1960s and into the 70s. Now India and China have their brown clouds. These are nowhere near the stratosphere. Check your facts first.

        Vaughan, the brown cloud is from SOOT, CO (carbon monoxide) and SO2; NOT from aerosol, helium, ozone. Inefficient burning of fossil fuel, because of depleted oxygen in those areas you mentioned. Proves that badmouthing the creation of new methane is one of the biggest crimes. Only creation of NEW methane reverses the damages / IMPROVES THE OXYGEN LEVEL IN THE ATMOSPHERE. All proven already. You Vaughan are a big part of that crime, jihadists against CO2, CH4. I hope you will get appropriate penalties, for what you deserve, not more or less

  22. Paul in Sweden

    Campaign to Repeal the Climate Change Act – Prof Richard S. Lindzen Seminar (Global Warming: How to approach the science) held at the House of Commons Committee Rooms Westminster, London on the 22nd February 2012

    THE CLIMATE CHANGE ACT RECONSIDERED – PART 2 of 2 (credits re-edited) – A public meeting held in the UK House of Commons

    Part 2 of the House of Commons session that Lindzen participated in contained the relevant energy policy discussions.

    • Paul, Just to note that the 2nd video “Climate Change Act Reconsidered-2.mov” was from the previous meeting at Westminster on 30th November 2011 and not the Prof Lindzen meeting.

      For the CO2 advocates out there – the evidence? I was at both, and so have first-hand observation.

  23. Leonard Weinstein

    Ross Cann,
    Your comment, “that the rate of sea level rise is now accelerating”, is very interesting. This is especially so since the sea level was in fact dropping over the last few years, and is only slightly rising in the most recent period. The average rate over the last 5 or so years is near zero. How do you get an acceleration out of that?

    • Leonard; Because of the movement of the tectonic plates – they buckle, so in some places it appears that the water is rising, while other places, such as some atolls / islands, are sinking. We should be grateful for it. Why?

      If that wasn’t happening in the past… with the high erosion from the hills by water and winds, not one speck of dust by now would have been dry!!! My conservative calculation says: ”there is enough water on the planet to cover ALL the soil by 1.9 km of water.” Not because of CO2 or any phony GLOBAL warming, but because of the amount of water. In my book I have 3.5 pages on that subject. The only reason we have dry lands is because other places are sinking – that gives people the idea of ”sea rising / falling”.

      Another phenomenon: 80% of all the water in every sea and ocean combined is below 4C. Water below 4C shrinks when it warms up and expands when it gets colder. Experiment: put a bottle of seawater at 4C in your freezer – by the time it cools by a few degrees, the bottle will explode. Then fill a bottle with seawater at zero degrees and warm it by 3-4 degrees – the water will shrink by 5-6%. If that was a 1km-deep ocean, it should shrink a lot WHEN THE WATER IS GETTING WARMER!!! They are not just wrong, but back to front on most of the subjects. When part of a tectonic plate is sinking, it gives an illusion that the seawater is rising. Illusion is not science – but it is used by lots of shonky scientists as factual.

  24. The problem as I see it is that, to use a cliche, the devil is in the details. If you give a short presentation in general terms to a non-scientific audience, you can prove just about anything you want, with no-one to say you’re wrong. The reason that Lindzen’s perspective is not widely accepted within climate science resides in details that are not in the talk, and which an audience unfamiliar with climate data would be unable to judge in any case.

    Although I wouldn’t be vehement about it, I tend to agree with Chris Colose and Jim D on many of the specifics. I find it particularly unfortunate that Lindzen seems to cling to the argument that aerosol cooling is just a fudge factor invented to make the mainstream arguments fit the observations, an argument that sometimes seems to have achieved mythical status in some blogosphere commentary. We don’t know everything about aerosols during the twentieth century, but we do know a lot, including evidence for their potent “global dimming” effects from about 1950 through the late 1970s. The legitimate grounds for discussion and disagreement would remain within the boundaries of the magnitude of aerosol effects, but not with the claim those effects were negligible or non-existent.

    This has actually been discussed fairly extensively in many past and recent threads, and interested readers might want to go back to look at those discussions and visit the relevant references to studies by Martin Wild, Gregory and Forster, and analyses by Isaac Held, among others. The relationship of aerosol forcing to model projections has also been discussed, and in addition to the above items, AR4 WG1 Chapter 9 is worth revisiting, even though the language is sometimes dense and ambiguous. There are uncertainties to be sure, but not at the level implied by Lindzen.

    • Fred –

      Are you saying that Lindzen shouldn’t give talks at the House of Commons?

      Is it much different from Al Gore touring the world giving talks to tens if not hundreds of thousands of people – with ‘information’ that Rich Muller at least describes as all either misleading, exaggerated or wrong?

      Lindzen is giving his opinion – it’s a talk, not a scientific paper, and if, as Chris Colose so obnoxiously puts it, his ideas are “too stupid to acknowledge”, then how is this talk going to affect anything?

      • Anteros – I’m not saying Lindzen shouldn’t talk to the House of Commons.

        On the other hand, by posting his talk here, Dr. Curry appears to be making its scientific content the focus for discussion, and her comments seem to support that inference. This troubles me because the scientific content of the talk was too general (as I mentioned above) to be dissected as a scientific talk as opposed to a politically oriented one, and so what we’re left with are simply the topics in the talk, for us to discuss on some other basis than what Lindzen said to the House of Commons.

        That’s fine, except for a few problems. First, the number of topics was far too great to address adequately in a blog thread. Second, and more important in my opinion, there exist numerous informative scientific publications addressing each of these topics, any of which would be a good starting point for fruitful discussion. It therefore concerns me that this blog has recently, and often, been using news articles and public talks as a basis for climate science discussion rather than scientific communications replete with specific data. The latter do come up at times, but disappointingly seldom recently, in my perception.

        The use of news articles and talks generally provokes a great deal of arguing, but I believe more actual understanding would emerge if we started with published articles or other legitimate sources of data such as material presented at meetings, and occasionally, Internet content from individuals not involved in partisan controversy. Dozens of potential starting points are published every week, so there’s no dearth of material for serious discussion, if serious discussion is a goal here in preference to argumentation.

      • Fred –

        I take your point and don’t really disagree with you.

        However, what you’d like to see on this blog is, I think, very different from that desired by the majority. There are plenty of places where the recent literature of climate science can be discussed – I don’t think that is what Dr Curry’s blog is about, for the most part.

        It’s more to do with multiple different approaches and perspectives – perhaps many of which you find a bit trivial or superficial. It has its merits, though, for people looking to explore things other than consensus climate science.

        I think you’re right about Lindzen’s talk – it was perhaps too broad and discursive to be a useful blog subject – though I’m glad to have the links. Perhaps it would have been better for Dr Curry to excerpt 3 or 4 of the most interesting/convincing/contentious points and examine them in some detail. Or even just one!

        I think maybe I’m nearer the other end of the spectrum to yourself – I tend to get most interested here when the subject moves towards the history and philosophy of science and perhaps the psychology of our beliefs about the future. Less specifically of the ‘Climate’, and more of the ‘Etc’, even though I do have an interest in quite a few of the pure science topics.

      • Anteros – Thanks for your thoughtful response. I agree that it’s reasonable to have a mix of technical and non-technical subjects here, even if I personally would like to see the proportion shifted a bit more to the technical, which are currently in the minority. My only real complaint involves the use of non-scientific sources to launch a discussion of scientific topics at the technical level. If we’re going to discuss how aerosol forcing is handled, for example (a technical topic), I would prefer to start with recent data on this issue rather than a casual (and I believe inaccurate) remark by Lindzen suggesting that aerosols have been used simply to make predictions match observations. The same applies to newspaper articles as the basis for claims that models haven’t predicted recent temperature trends – a topic more complicated than implied by the news article.

        If the topic is philosophical or social, then sources relevant to that topic, including the popular media, would be perfectly reasonable.

      • Brandon Shollenberger

        Anteros, you say:

        I think you’re right about Lindzen’s talk – it was perhaps too broad and discursive to be a useful blog subject – though I’m glad to have the links. Perhaps it would have been better for Dr Curry to excerpt 3 or 4 of the most interesting/convincing/contentious points and examine them in some detail. Or even just one!

        I disagree. Lindzen’s talk was definitely broad, and it would be unreasonable to expect anyone to respond to everything he said in a single comment. However, it would be easy to respond to an individual point, or even several points, Curry highlighted. That it would be practically impossible to discuss everything Lindzen covered at the same time in no way makes it impossible to have valuable discussions of things he said.

      • Anteros –

        Coining a strategy from the warmistas (playing the authority card), Lindzen doesn’t just have AN opinion. As the Department head at MIT, his opinion carries magnitudes more weight than even an informed opinion, much less merely “his opinion.”

        If the warmistas pretend that the Climate Head at MIT doesn’t have an opinion worth listening to, it says a lot more about the warmistas and their closed minds than it does about Lindzen. Yet they’ve tried to marginalize him since day one, pretending among themselves that they don’t see or hear him. I imagine they all have their eyes closed and fingers in their ears, going “Lalalalalalala” to keep from hearing him.

        Steve Garcia

      • Fred, Lindzen has a reference for the aerosol adjustment factor remark. It was made by a modeler in a refereed paper. Lindzen is good at this kind of thing, paying careful attention to detail. You should consider it as a technique for finding out the truth behind the science.

      • Fred,

        I got the distinct impression that Dr Curry pretty much skipped over the detailed science part of his presentation. The point under discussion, as I get it, has to do with how the debate is being framed, in particular the “doom” aspects that get promoted.

    • Fred,
      I share your wish for a higher proportion of technical discussion on this blog. But I’m afraid that I have a more cynical view than you of the (ab)use of aerosol forcing by climate scientists who wish to claim the climate sensitivity is relatively high.

      For instance, Hansen concluded last year that AOGCMs were mixing heat into the ocean too fast, and that must mean that their aerosol forcings were too low, since otherwise they would have produced too high a twentieth century warming. He ignored the possibility that the models’ climate sensitivity might be too high, because he is certain that about 3C is the correct value. What happens next? Surprise, surprise, the GISS model aerosol forcings are changed to make them much more negative – now rising to -2.4 W/m^2 in 2010!

      And a peer reviewed study a year or two ago found a significant negative correlation between GCMs’ climate sensitivity and their aerosol forcings. This clearly implied that modellers were altering their aerosol forcings so as to bring their model projections into line with other models (and hence the IPCC 2-4.5C sensitivity range).

      Aerosol forcing has in fact been tightly constrained by studies that estimated it simultaneously with climate sensitivity, using temperature measurements for several latitude bands. For instance, based on multiple temperature measurements, Forest et al. (2006) estimated total (direct + indirect) anthropogenic aerosol forcing in the 1980s (when it was probably at its highest) as -0.5 W/m^2, with a 5-95% range of -0.75 to -0.13 W/m^2 (relative to pre 1860 levels). That is way lower than the total forcing, of the order of -1.5 W/m^2 or higher, usually claimed by those who believe that climate sensitivity is high (3C or above). Even the IPCC’s own best estimate of total aerosol forcing is only -1.0 W/m^2 (change from 1750 to 2005, Fig. 2.20 AR4 WG1 report).
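      Nic’s point, that the sensitivity inferred from observed warming depends strongly on the aerosol forcing one assumes, can be sketched with a zero-dimensional energy balance. All constants in the Python snippet below are round illustrative assumptions (3.7 W/m^2 per CO2 doubling, 0.8C of observed warming, 2.6 W/m^2 of greenhouse-gas forcing, 0.5 W/m^2 of ocean heat uptake), not values taken from Forest et al. or any other study mentioned here:

```python
# Zero-dimensional energy-balance sketch: how the climate sensitivity
# inferred from observed warming depends on the assumed aerosol forcing.
# All constants are illustrative assumptions, not published estimates.

F2X = 3.7     # W/m^2, canonical forcing per CO2 doubling
DT_OBS = 0.8  # C, rough observed warming over the period considered
F_GHG = 2.6   # W/m^2, assumed greenhouse-gas forcing
UPTAKE = 0.5  # W/m^2, assumed ocean heat uptake (unrealized warming)

def inferred_sensitivity(aerosol_forcing):
    """Equilibrium sensitivity (C per CO2 doubling) implied by the observed
    warming, given a (negative) aerosol forcing in W/m^2."""
    net_forcing = F_GHG + aerosol_forcing - UPTAKE
    return F2X * DT_OBS / net_forcing

for f_aer in (-0.5, -1.0, -1.5):
    print(f"aerosol forcing {f_aer:+.1f} W/m^2 -> inferred sensitivity "
          f"{inferred_sensitivity(f_aer):.1f} C")
```

      Under these assumptions, moving the aerosol forcing from -0.5 to -1.5 W/m^2 raises the inferred sensitivity from under 2C to almost 5C, which is the crux of the disagreement.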

      • “And a peer reviewed study a year or two ago found a significant negative correlation between GCMs’ climate sensitivity and their aerosol forcings. This clearly implied that modellers were altering their aerosol forcings so as to bring their model projections into line with other models (and hence the IPCC 2-4.5C sensitivity range).”

        Nic – I don’t think you understand how models are constructed. The study I assume you refer to was Kiehl 2007 (not one or two years ago), and there is no reason to believe that it has anything to do with modelers altering their forcings to match observations. For more on this, please see my earlier comment on the claims about aerosols as “fudge factors”. However, for a more expert source, you should consult people who actually construct models for a living. One of them, Andy Lacis, has been commenting on this blog, and you can also contact Gavin Schmidt or read the RC description of model construction.

        On the other hand, if you have direct knowledge of aerosol forcing being prescribed in a model (as opposed to derived), and as a means of making the projections “come out right”, you should post the evidence in detail. I think you will find that to be a myth. There are still problems with getting aerosols right, but repeating myths about their use to match projections with observations won’t help solve them.

        I don’t think this has anything to do with how “cynical” one is, but simply about how knowledgeable one is in knowing how models are made.

      • Fred, You are right about the Kiehl study: I should have said a few years ago, not a year or two ago. A minor point.

        I realise that in many cases aerosol forcings are derived internally in GCMs; I didn’t imply otherwise. But the derived forcings can be changed by altering the relevant adjustable parameters, to make the model results more in line with what the modeller thinks they should be. I have not seen anything to convince me that it is a myth that this is done. To quote from Bender (2008) “A note on the effect of GCM tuning on climate sensitivity”:

        “At present, climate models are tuned to achieve agreement with observations. This means that parameter values that are weakly restricted by observations are adjusted to generate good agreement with observations for those parameters that are better restricted…”

        That statement fits the aerosol forcing case perfectly. Naturally, modellers would rationalise and defend adjustments that they make.

        Nic – Two comments. First, as you point out, parameters are often tuned to observations within the limits of the underlying physics, but this is to ensure that they correctly simulate climate in its control state, without any imposed forcing from CO2 or another variable. The model must correctly simulate seasonality, latitudinal differences, air and ocean circulation, and other attributes. Having done that, the modeler then “forces” the climate with the factor of interest, e.g. CO2, and asks how well it simulates the trend in comparison with observations. If it does well, that’s good. If it doesn’t, that’s too bad, but the model is not then tuned to match the trend, either by changing aerosol forcing or other inputs. The notion that models are tuned to make their simulations “come out right” is one of those enduring myths that keeps surfacing like the Loch Ness Monster, no matter how many times it’s shot down.

        The above refers to simulations used as projections. Models can also be used to better define the values of parameters of interest, by “inverse modeling”, in which various values of the parameters are tested to determine which best permits the model to match observations. Note, though, that a model simulation performed for this purpose is not then cited as an example of how well models make projections. The model simulations referred to, for example, in the projections cited in AR4 WG1 Chapter 9 are examples of forward rather than inverse modeling.

        Second, and probably more important, the people to ask if you want further confirmation of this don’t include me, with my outsider’s knowledge, but rather the ones who construct climate models for a living. In particular, you should contact Gavin Schmidt, because Gavin is now accustomed to hearing this claim, and to explaining how aerosols are actually incorporated into the models, along with links to actual model details. I’m sure there are others who could do the same, but I’m most familiar with Gavin’s explanation on this topic.

      • Fred and Nic, As someone who is very familiar with turbulence models, I can say the process is not that objective. Terms are added all the time to better correlate with specific cases. Tuning to match the current climate almost guarantees worsening the correlation in some other situations. The problem is that with the aerosol forcing at 1.5 ± 1.1 W/m2, there is no rational way to set it except by matching observations. This is what Lindzen is referring to.

        I would argue that any other method is even worse.

      • David – It’s important to distinguish the parameter tuning needed to establish a good simulation of control climates – something done routinely – from tuning designed to make a simulated trend better match the observed trend. The latter isn’t done.* In particular, aerosol forcing isn’t adjusted to make the simulations better match observations. Again, I think Gavin Schmidt would respond to direct inquiries on this matter – they needn’t be on RC or part of some ongoing debate. I also know there’s some archived RC material on this, but I don’t remember exactly where to find it.

        *More precisely, in previous discussions, including one with Dr. Curry on a different blog, he stated that he is unaware of any models where that has been done, and that includes the GISS models he has worked on.

      • It is good when discussions evolve to a technical level like this.

        I agree with Fred that it is important to distinguish between forward and inverse approaches to aerosol understanding, and the implications of each to, say, attribution. For example, the claim by the Curry and Webster uncertainty paper that inverse aerosol estimates represented a circular argument to the attribution problem was just wrong.

        However, there is in fact a large degree of inverse correlation between model estimates of climate sensitivity and aerosol forcing, at least up to CMIP3-generation models; there are multiple interpretations in the literature, but because of possible conditioning of model ensembles to historical climate change, it is not appropriate to view the agreement in simulated and observed time-evolution of global surface temperature as a formal attribution. This, however, was not the basis for attribution in AR4, and this point will be emphasized even more in AR5. Formal attribution doesn’t concern the amplitude of simulated change, but the patterns of various forcings in time and space. Amplitudes are determined by regressions, and model tuning has no significant impact on the detectability of a variety of forcings.

      • The distinction between cases used for parameter setting and actual “real” simulations is artificial, and perhaps exists only in the minds of the modelers themselves. From an operational point of view, over time more data accumulates and the number of “tuning” cases increases. The problem then becomes more and more challenging, because in the case of turbulence modeling the models are in fact much poorer than the “users” of those models realize, or more accurately poorer than they are willing to admit. I see no evidence that it’s any different in climate science. There are only 2 possibilities:

        1. You add more terms and thus more tunable parameters to be able to fit more data.

        2. You accept a high level of error for cases other than those you used for tuning.

        I can’t show you here the data on turbulence models. There are literally hundreds, not counting many forms of each one. The better modelers set their constants based on cases where analytical solutions are available, for example an infinite flat plate in incompressible flow with no pressure gradients, a very special case. Relatively small differences in parameters can make differences of 20% in total forces, a very large difference, for even simple cases that are different than the “tuning” cases. The problem for climate science is that there are no simple cases for which analytical solutions are available. There is no alternative but to admit that your subgrid model parameters MUST depend on numerical artifacts and parameters, not a good situation. This is yet another form of circular reasoning.
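        The calibration hazard described above can be illustrated with a toy closure; the functional form and numbers below are invented for illustration and are not an actual turbulence model. Two parameter choices that fit a single tuning case identically can still disagree substantially far from that case:

```python
# Toy illustration of parameter tuning against a single canonical case.
# The closure form C * x**p is invented for illustration only.

def closure(x, C, p):
    """Hypothetical subgrid term with amplitude C and exponent p."""
    return C * x ** p

# Calibrate C so the closure matches a reference value at the tuning case
# x = 1.  Note that ANY exponent p fits this single case equally well.
x_tune, y_tune = 1.0, 1.0

for p in (0.20, 0.25):            # two nearby parameter choices
    C = y_tune / x_tune ** p      # both reproduce the tuning case exactly
    y_far = closure(100.0, C, p)  # ...but they differ away from it
    print(f"p = {p:.2f}: prediction at x = 100 is {y_far:.2f}")
```

        Both choices reproduce the tuning case exactly, yet at x = 100 they differ by roughly 25%, the same order as the 20% force differences mentioned above.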

        Climate models must use turbulence models, perhaps called subgrid models by the modelers, but the range of scales is much larger than, for example, for an aerodynamic problem. The prospect that they are even remotely accurate is nil. And yet Gavin Schmidt said to me that he had “never heard of Reynolds’ averaging as a significant source of error.” That’s totally understandable, it’s not his field, so one would expect him to rely on people outside of climate science. Who are they?

        My question to you Fred and you Chris, is what other technique would be appropriate for setting the aerosol forcings as a function of time? Should it be based on prejudice or a desire to make the sensitivity turn out a particular way?

        In any case, if the modelers are actually aware of the facts and data, they would realize that the subgrid models (and aerosol models can be considered one of these) can have a huge impact and are in fact pretty badly wrong if you stray far from the cases used to set them. The kind of tuning Lindzen talks about is far more “scientific” than the alternatives in my view. Basically, more and more data enables you to hopefully expand the range of applicability of the subgrid models. However, there is no guarantee of this. In other regimes, the assumption that the terms should be combined linearly has little justification and is based more on hope than science. How do you know the functional form of the terms is correct? The answer to this is usually that in a particular case, data seems to be reasonably accurately matched using this functional form. There are sometimes simple analytic theories that can be used, usually of very limited applicability.

        Anyway, I get tired of people who have no knowledge of subgrid models talking about them and how they are used and tuned. Climate scientists so far as I have been able to determine are merely users of these models and don’t understand their underlying “theory” such as it is.

        By the way, how is tuning an aerosol model any different than tuning the forcing scenario? They seem equivalent to me.

      • Chief Hydrologist

        ‘Extensive experience over several decades shows that computational atmospheric and oceanic simulation (AOS) models can be devised to plausibly mimic the space–time patterns and system functioning in nature. Such simulations provide fuller depictions than those provided by deductive mathematical analysis and measurement (because of limitations in technique and instrumental-sampling capability, respectively), albeit with less certainty about their truth.

        AOS models are widely used for weather, general circulation, and climate, as well as for many more isolated or idealized phenomena: flow instabilities, vortices, internal gravity waves, clouds, turbulence, and biogeochemical and other material processes. However, their solutions are rarely demonstrated to be quantitatively accurate compared with nature. Because AOS models are intended to yield multifaceted depictions of natural regimes, their partial inaccuracies occur even after deliberate tuning of discretionary parameters to force model accuracy in a few particular measures (e.g., radiative balance for the top of the atmosphere; horizontal mass flux in the Antarctic Circumpolar Current).’

        ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full

        These are hollow men we are ‘debating’ – TS Eliot
        ….
        Between the idea
        And the reality
        Between the motion
        And the act
        Falls the Shadow

        For Thine is the Kingdom

        Between the conception
        And the creation
        Between the emotion
        And the response
        Falls the Shadow

        Life is very long

        Between the desire
        And the spasm
        Between the potency
        And the existence
        Between the essence
        And the descent
        Falls the Shadow

        It seems an impossible task to bring these people into a reasoned discourse – it is all shadow. It is a descent into madness – and of course they can’t see it. So why debate? We need to talk past these people and address the market place of ideas directly.

        We should be confident because we are right and they are just hollow men with an empty narrative.

        Robert I Ellison
        Chief Hydrologist

      • One other thing, Chris and Fred. Subgrid models of turbulence use the doctrine that turbulent fluid has an effective viscosity higher than the laminar fluid, in some cases substantially higher. Thus, the models add dissipation, the ever-present devil destroying the accuracy of simulations. In fact, of course, the subgrid models are too dissipative, resulting in excessive damping of the dynamics. Rather like the leapfrog filter used in climate models that adds deadly dissipation to correct a well-known issue with the leapfrog method, well known since I was in graduate school (and that was a long time ago). There is ample evidence that the models use the very best methods of the 1960’s. Pekka agrees about this, incidentally. Controlling dissipation is critical to accuracy in any numerical simulation. Chris, I suggest you look up Runge-Kutta and Backward Differentiation schemes so you can straighten out the modelers. The problem here is that excessive dissipation produces exactly the outcome that Schmidt claims is the validation of the “doctrine of the attractor”, viz., that the models are totally wrong when integrated for a week, but if integrated for 100 years give a climate that “looks reasonable” and always seem to get the same statistics. This is circular reasoning if I have ever seen it.

      • Some of David Young’s recent comments are interesting, and I’m always glad to learn from his expertise in fluid dynamics. On the other hand, the topic has strayed a bit from the original. To get back to that, it’s simply worth noting that models aren’t tuned, by aerosol forcing adjustments or anything else, to make their projected trends match observed trends. The notion, which Lindzen seems to have promoted, that aerosol adjustments are used as “fudge factors” is incorrect.

        It will be worth getting further input from Andy Lacis or Gavin Schmidt, if they stop by, because they can not only describe the details of how aerosol forcing is in fact handled by models, but can also link to descriptions of model architecture to reinforce the point. In the meantime, if I can find an earlier discussion of this topic by Gavin, I’ll link to it.

      • Fred, The question is how is “tuning the aerosol forcings” any different than “tuning the aerosol subgrid model”? I claim they are probably mathematically equivalent. Regardless of the modelers’ rationalizations, the mathematics is correct. If you allow me to tune the aerosol model, I can generate any forcing you want. If that weren’t the case, the subgrid model would be wrong. Bear in mind that the error bar is 200% of the median value. If you allow me to tune the constants in a turbulence model, I can get virtually any answer you want. You know Fred, you are using words that describe the process the modelers go through, not the mathematical effects of what they are doing.

      • David – I guess I don’t really understand your point. The point I was making is that once the models are run and generate a trend, the modeler doesn’t go back and tweak parameters so that if it’s run again, it will match the trend better. In other words, it isn’t tuned to make it “come out right”. I thought I had made that clear, but maybe I didn’t.

        When it comes to constructing the model, a number of tunable parameters are adjusted so that the model can simulate the control climate – seasons, latitudinal differences, as well as some of the fluid dynamics you’re familiar with (although Gavin points out the tunable number is small). These adjustments must remain within the boundaries of what is physically and observationally plausible. However, this doesn’t guarantee that the model will simulate a CO2 forcing well, nor does it tell the modeler what the climate sensitivity of the model will be, and in fact, there is no way for the modeler to make the sensitivity come out to be some desired value.

        The result is that the modeler can’t dictate how skillful the model will be in predicting trends, and if it isn’t skillful, the modeler can’t do further tuning to fix that. Lindzen’s suggestion that models adjust aerosol forcing to make the modeled trends match the observed trends is false.

        I do, however, suggest, that further discussion would benefit from input coming from people who construct models for a living.

      • Fred,
        There’s a lot of discussion by Gavin in the comments in this thread, particularly in his comments to Judith Curry

        http://www.collide-a-scape.com/2010/08/03/the-curry-agonistes/

      • Also, maybe I’m belaboring the point, but remember that Kiehl showed that for good trend simulation, the ratio of model climate sensitivity to aerosol forcing should remain within certain limits. However, as I mentioned, the modeler has no idea what climate sensitivity will emerge from his/her model. Even if the modeler wanted to fit aerosol forcing to sensitivity, he or she wouldn’t know how to do it.

        But again, I’m hoping for some input from Gavin, Andy, or others.

      • Fred, You are missing the point. The distinction between tuning runs and “real” runs is totally artificial. If modelers are doing their job, for which I as a taxpayer am paying them a lot of money, they are constantly including more cases in their tuning runs. If they aren’t, they are using unscientific prejudice to set parameters. Trust me on this, subgrid models are a pseudo-scientific area where rigor is left behind and dogma prevails. The results are only to be believed within the range of the tuning runs.

      • Chief Hydrologist

        The problem with models and parameters is more than a tuning issue. Even if tuned to several observed variables, as James McWilliams notes above, the Navier-Stokes solutions continue to diverge into the future – for which observations of course do not exist.

        This is because – with the best will in the world – the input parameters are not constrained sufficiently to constrain the exponential divergence of plausible solutions. We don’t know enough, to the precision required, to constrain the equations. This is the deterministic chaotic nature of the equations – an understanding of which is needed to understand climate models.
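        The exponential divergence described here is the standard Lorenz-63 demonstration. A minimal sketch (forward-Euler, purely illustrative):

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of Lorenz-63 (demo only; a real
    integration would use a higher-order scheme)."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # perturb one coordinate by 1e-8
max_sep = 0.0
for _ in range(5000):  # integrate both copies to t = 50
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, float(np.linalg.norm(a - b)))
# max_sep has grown from 1e-8 to the scale of the attractor itself
```

        An initial difference of 10^-8 grows to the size of the attractor; no feasible tightening of the initial conditions removes this behaviour.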

        The extent of divergence – or irreducible imprecision, in the terms of McWilliams – can only be estimated from a systematically designed family of models. That is, the models are run repeatedly with various combinations of feasible initial and boundary conditions and formulations – and because the number of possible combinations is very large, systematic evaluation of irreducible imprecision is lacking in practice.

        Consequently the plausibility of the solution is determined on the basis of – wait for it – ‘a posteriori solution behavior’. That’s right folks – they pull it out of their arses.

        Robert I Ellison
        Chief Hydrologist

      • It’s getting late and my wine glass needs refilling with my Januik Cabernet, 23rd best wine in the world in 2011 according to Wine Spectator. Let me just say that a good reference here is Wilcox’s book on turbulence modeling. All the problems are laid out. The problem here is that practitioners of “colorful fluid dynamics” or “continuous fraud and deceit” or “climate modeling” are usually totally ignorant of these considerations.

      • Chris – Thanks for the link. It was exactly what I was searching for. You can start at about comment 334 to read the exchange between Gavin and Judith Curry. Anyone who reads that and remains unconvinced that there is no tuning to make model trends match observations must, I believe, have a mind set in cement. The tuning is a myth, and I would hope that exchange will settle it in the eyes of open-minded readers, because the myth is one that is often repeated in the blogosphere, and gets in the way of legitimate discussions of model design and the role of aerosols.

      • Chief, Your quotes from the literature are very illuminating. I think you and I agree about most of the important points. Now if we could just get the “pissants” to see the light.

        Best,

      • However, there is in fact a large degree of inverse correlation between model estimates of climate sensitivity and aerosol forcing, at least up to CMIP3-generation models; there are multiple interpretations in the literature, but because of possible conditioning of model ensembles to historical climate change, it is not appropriate to view the agreement in simulated and observed time-evolution of global surface temperature as a formal attribution.

        The CMIP3 models were incorrect (Ohmura 2009), i.e. regarding early surface brightening.

        The models have a wide spread under both clear- and all-sky conditions. They fail to capture observed decadal variations, due to reduced degrees of freedom, as seen in independent surface observations.

        Wild and Schmuki 2011 are critical of the models and their application, e.g.:

        The inability of climate models to simulate the full extent of decadal-scale variability is not just seen in SSR as documented in the present study, but also in other simulated climate elements such as the tropical top of atmosphere radiation budget (Wielicki et al. 2002), tropical precipitation (Allan and Soden 2007), the hydrological cycle in general (Wild and Liepert 2010), soil moisture (Li et al. 2007) and surface temperature/diurnal temperature range (Wild 2009b). Of course these elements may not be entirely independent, and misrepresentation of decadal variations in one of these, such as the SSR discussed here, may strongly impact the simulation of others. Further work is necessary to disentangle to what extent these underestimated decadal variations are due to an underestimation of forced or unforced climate variability.

        The inability of current GCMs to reproduce observed decadal scale variations does not imply that climate change scenarios (which typically target at more extended timescales) are biased. On these longer, multi-decadal to centennial timescales comparison with observations show good agreement where feasible, despite suppressed decadal variations (e.g. IPCC 2007; Wild 2009b). However, the shortcomings discussed here may have implications for shorter-term climate projections up to a few decades ahead where these strong decadal variations may dominate.

        Indeed, Gavin’s and Chris’s arguments are incorrect. The difficulties with the surface radiation budget have led a number of researchers to suggest changes; the recommendations (from GEB) include:

        • The prominent picture of the Global Energy Balance in the IPCC report needs substantial revision. Particularly the surface flux estimates need to be revisited, and uncertainty ranges should be added to all components.
        • A continued and expanded operation and maintenance of a well calibrated network of long term surface radiation stations is required to provide direct observations and anchor sites for satellite-derived products and climate model validation, as well as for the detection of important changes in the radiation fields either not detectable by satellites or anticipated by models. The basic measurements include the four primary components (up and down, longwave and shortwave irradiance) with high temporal resolution (minute values) and known accuracy (BSRN accuracy standards).
        • These high accuracy observation sites should be expanded to under-represented regions of the globe (such as many low latitude areas) and particularly oceans where alternate or modified observational strategies might be necessary

      • Fred, I looked at that thread between Gavin and Judith. I must say that this definition of tuning is quite narrow. It is clear that tuning was not done to match the surface temperature record, nor to get a particular sensitivity, but in comment 338, Gavin does say that they do try to match the -1 W/m2 aerosol indirect (cloud) effect based on Hansen’s median estimate of this effect. This matching might be regarded as a tuning of some sort.

      • maksimovich

        The AR4 models typically underestimated the degree of decadal surface solar radiation variations, probably largely due to uncertainties in global emission inventories and indirect effects on clouds

      • Chief Hydrologist

        ‘A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere temperatures and winds, cloud heights, precipitation, and sea level pressure. Data–model comparisons continue, however, to highlight persistent problems in the marine stratocumulus regions.’ Schmidt et al 2006

        The models need to successfully mimic nature – this is especially the case where fundamental physics is uncertain or measurement limitations exist – clouds and sulphates for instance. The fundamental principle of modelling is to make successful comparisons with empirical data – and that occurs by way of adjustment of parametrised inputs. It is typical warminista nonsense to suggest otherwise.

        While models are a perfectly reasonable means of exploring the physics of the system, that by no means implies they can mimic such a complex system as Earth’s climate from first principles. Or that they have any worth at all in prediction, for the reasons given above.

        Robert I Ellison
        Chief Hydrologist

      • Chief Hydrologist

        Chris Colose,

        I note also in Schmidt et al 2006 a reference to persistent problems in the marine stratocumulus regions. Most amusing.

        Robert I Ellison
        Chief Hydrologist

      • Chris Colose.

        Indeed, Hatzianastassiou 2011 found that in the 21st century SH SSR changed by -3.84 W m-2, i.e. -0.64 W m-2/yr, versus -0.11 W m-2/yr in the NH.

        As clouds are the predominant problem in the SH, confidence in Hansen’s assumptions is low.

      • @ Chief Hydrologist | February 28, 2012 at 9:56 pm |
        Chief, do those models say which horse is going to win the Melbourne Cup in 2100? It would be much easier to predict the Cup winner than the exact climate 82 years from now, because many more factors influence the CONSTANTLY changing climate.

      • David Young

        Fred, I note that the Lord Gavin has not come to your rescue on this thread despite your desperate pleas. Gavin’s a smart guy, but he has sold his soul to the idea of “communication of science”, a jealous god who generally rips his votaries to shreds. I would suggest that there are a lot of other scientists who understand models at least as well as he does. Not that I claim to be superior to him, but you know, science is about testing your mettle against other scientists. By the way, we need you to weigh in on Judith’s latest post on models. Fred, where are you?

    • Actually Fred, Petr Chylek has done some good work on aerosols and shown them to have far less cooling impact than the IPCC would like to admit. The data is on Lindzen’s side.

    • Fred,

      Please see my response to Chris Colose:

      http://judithcurry.com/2012/02/27/lindzens-seminar-at-the-house-of-commons/#comment-178867

      Also, I would like to ask for your opinion about the aerosol question. Lindzen is quoted above by Judith as saying,

      “If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C. The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.”

      Judith says that this “is an oversimplification of how climate sensitivity is determined in the conventional way”. But is it? How can climate sensitivity be estimated without estimates of aerosol and solar forcing entering at some point?

      The AR5 ZOD Chapter 10 says,

      “The analysis of individual forcings is important, because only if forcings are estimated individually, can fortuitous cancellation of errors be avoided. Such a cancellation of errors between climate sensitivity and the magnitude of the sulphate forcing in models may have led to an underestimated spread of climate model simulations of the 20th century (Kiehl, 2007; Knutti, 2008)”.

      Later,

      “Knutti (2008) and others argue that the agreement between observed 20th century global mean temperature and temperature changes simulated in response to anthropogenic and natural forcings, should not in itself be taken as an attribution of global mean temperature change to human influence. Kiehl et al. (2007), Knutti (2008) and Huybers (2010) identify correlations between forcings and feedbacks across ensembles of earlier generation climate models which they argue are suggestive that parameter values in the models have been chosen in order to reproduce 20th century climate change. For example Kiehl et al. (2007) finds that models with a larger sulphate aerosol forcing tend to have a higher climate sensitivity, such that the spread of their simulated 20th century temperature changes is reduced. Stainforth et al. (2005) find that the spread of climate sensitivity in the CMIP3 models is smaller than the spread derived by perturbing parameters across plausible ranges in a single model, even after applying simple constraints based on the models’ mean climate. Schwartz et al. (2007) demonstrate that the range of simulated warming in the CMIP3 models is smaller than would be implied by the uncertainty in radiative forcing.”

      “Since in standard detection and attribution analyses the amplitude of the responses to various forcings is estimated by regression, the possible tuning of models to reproduce 20th century global mean temperature changes will have almost no effect on the detectability of the various forcings. Similarly this will have almost no effect on estimates of future warming constrained using a regression of observed climate change onto simulated historical changes. The spatial and temporal patterns of temperature changes simulated in response to the various forcings would be hard to tune in a model development setting, and it is these which form the basis of most detection and attribution analyses. Nonetheless, these results do suggest some caution in interpreting simulated and observed forced responses of consistent magnitude as positive evidence of model fidelity, since there is some evidence that this might arise partly from conditioning the model ensemble using historical observations of climate change (Huybers, 2010; Knutti, 2008).”

      While it is obvious that analysis of individual forcings is important, I fail to see how it defends against the bias of the researchers to find an answer within the canonical IPCC range (2 – 4.5 K). (Cue for someone here to tell me that IPCC scientists don’t have a bias. :)) Because, there is still a huge range of values in the literature to choose from.

      The Knutti (2008) paper argues that Kiehl (2007) has probably shown that the aerosol forcing is weaker than previously expected, although Knutti fails to draw the obvious conclusion, i.e. that this would imply lower climate sensitivity; the IPCC ZOD in turn fails to mention Knutti’s opinion at all. Huybers (2010) goes even further in suggesting that there is evidence that compensation between various feedbacks in the models may be the result of tuning during model development to find sensitivity within the expected range. Or to quote Peter Huybers,

      “More plausible is that model development and evaluation leads to an implicit tuning of the parameters, as suggested by Cess et al. (1996). As another example, of the 414 stable model versions Stainforth et al. (2005) analyzed, six versions yielded a negative climate sensitivity. Those six versions were apparently subjected to greater scrutiny and were excluded because of nonphysical interactions between the model’s mixed layer ocean and tropical clouds. Scrutinizing models that fall outside of an expected range of behavior, while reasonable from a model development perspective, makes them less likely to be included in an ensemble of results and, therefore, is apt to limit the spread of a model ensemble. In this sense, the covariance between the CMIP3 model feedbacks may be symptomatic of the uneven treatment of outlying model results.”

      In a very recent paper (Schwartz, 2012) it says,

      “Examination of the relation between the values of Str [transient sensitivity] and Seq [equilibrium sensitivity] determined by this analysis and the twentieth century climate forcing used to infer the sensitivity from the observed increase in GMST [global mean surface temperature] … shows distinct anticorrelation; that is, a low forcing yields a high sensitivity, and vice versa. … The anticorrelation between inferred equilibrium sensitivity and forcing found here indicates that the only way that Earth’s equilibrium climate sensitivity could be as great as the central value of the IPCC estimate, ΔT2× = 3 K, would be for the total forcing (recall that the forcing corresponds to the period 1900 – 1990) to be about 0.8 W m-2. Such a low forcing, which is at the low end of the IPCC “very likely” range, would require a rather large negative aerosol forcing to offset the forcing, by the well mixed greenhouse gases…”.

      Schwartz goes on to look at why related studies found much higher climate sensitivities. These studies were Gregory and Forster (2008) and Padilla et al. (2011). He writes,

      “The sensitivities determined in those studies are somewhat to substantially greater than the values determined for the forcing data sets examined here …. Correspondingly, the total forcings over the twentieth century employed in these analyses were lower to considerably lower…”.

      In the case of Gregory and Forster, who find a climate sensitivity of 3.5 K, he points out that they used a forcing data set that was even lower than the low end of ‘very likely’ range in the IPCC AR4.

      So, I fail to see how Lindzen’s point is not perfectly valid and supported by the literature.
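      The compensation Kiehl and Schwartz describe can be made concrete with a zero-dimensional sketch (all numbers are illustrative; ocean heat uptake and internal variability are ignored): two very different pairings of sensitivity and aerosol forcing reproduce essentially the same century-scale warming:

```python
F2X = 3.7    # W/m^2 per CO2 doubling (standard value)
F_GHG = 2.6  # assumed well-mixed GHG forcing for the period (W/m^2)

def toy_warming(sensitivity_K, f_aerosol):
    """Zero-dimensional toy: dT = S * (F_ghg + F_aer) / F2x.
    Numbers are purely illustrative."""
    return sensitivity_K * (F_GHG + f_aerosol) / F2X

# Two very different 'worlds' that both yield roughly 0.8 K:
low = toy_warming(1.1, -0.1)   # low sensitivity, weak aerosol cooling
high = toy_warming(3.0, -1.6)  # high sensitivity, strong aerosol cooling
```

      Without an independent constraint on the aerosol term, the observed warming alone cannot distinguish the two worlds, which is exactly the anticorrelation reported in the literature quoted above.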

      • Good post, with a good coverage of the literature.

      • Alex – You raise a number of points that might be addressed individually, but here I’ll only address the “tuning” issue, because it seems to be a source of many misconceptions. I’ll also repost the link to the collide-a-scape page where Gavin Schmidt and Judith Curry discuss it. The most relevant comments are from about 334 to 378. The bottom line is that there is no tuning of models to make their trend simulations match observational data.

        There have been suggestions that perhaps there was not explicit tuning, but rather a subtle, implicit form of tuning based on parameter choices made during model construction. For example, could the GISS modelers, faced with more than one realistic choice regarding aerosols, have picked the one they judged most likely to make their trend simulations best match observations? Unless Gavin is not telling the truth, the answer is no. There is no explicit tuning and no implicit tuning.

        This doesn’t mean of course that modelers don’t make choices that affect model performance. What that discussion I linked to says is that those choices are based on a judgment of what choice best fits the available data, and not on what choice the modeler guesses might make the model trends “come out right”. Gavin gives specific examples of the sources used for aerosols in the GISS models. In addition, as I mentioned earlier, any attempt to guess would probably be unproductive, because making a particular parameter choice rarely gives modelers a clue as to how the model will behave in general. Modelers can’t make tweaks to have climate sensitivity come out the way they want, and since good model skill at trends requires a good balance between sensitivity and forcings, they therefore can’t tune the model to achieve that balance.

        Either the Lindzen suggestion that aerosol adjustments are fudge factors is false, or the modelers are lying. I don’t think the modelers are lying; I suspect Lindzen is mistaken rather than dishonest, but either way his claim appears to be false.

      • Fred,

        I really don’t believe that there are any models as complex as, and of a similar nature to, the big climate models without implicit tuning. Anyone who claims otherwise without strong qualifications is telling untruths. Certainly very many model builders have not understood this, and exactly those are the most likely to draw erroneous conclusions concerning the effects of implicit tuning.

      • Pekka – Gavin Schmidt says there is no implicit tuning. If you disagree, you should write to him to explain why he is wrong, and if he responds, share the response with us.

        It appears from the link I cited that there is no implicit tuning designed to improve the model simulations of trends. Until contrary evidence is presented, I have to assume that the experts who design models for a living know what they are talking about, and that claims for tuning are therefore wrong. Parameter choices done to get the best fit to existing climates are not tunings of this type.

        Finally, in the dialog cited, there is a reference to a Hansen et al 2007 paper on forcings that includes a small section on inverse modeling of some aerosol choices. It appears that different levels of aerosol forcing in that model had only very minor effects on trend performance.

      • Fred,

        My view is based on very generic thinking about the processes used in creating large models. Every single choice that the modelers make while having any idea of its influence on the outcome involves implicit tuning. It is well known in many fields that the ultimate influence of these innumerable choices is large and that it’s essentially impossible to tell what all its effects are. What I know about the climate models tells me clearly that they must be influenced by these issues more than models in many other fields where the issue is already severe.

        The simple, well-known fact that there are many different models, with significantly differing amounts of forcing by aerosols, which nevertheless agree better in the final outcome, tells us that estimating the effect of aerosols is one of those things that cannot be based on the success of final results – at least until there are indisputable, explicit and independent reasons to show that all models with the “wrong” aerosol effect are irrelevant anyway.

        There may be a point beyond which no subjective input is put into the models. For the stages of work beyond this point it may be possible to say that there’s no implicit tuning. Up to that point it’s always present, but by putting enough effort into studying the arguments and consequences of the subjective choices it may be possible to get some rough hold on the size of the resulting uncertainty. Claiming that the problem does not exist is equivalent to admitting that all is open and unknown.

      • Fred, “Cloud feedbacks were identified as a major source of uncertainty in climate model simulations of climate change more than 20 years ago and still remain so. In attempting to simulate the climate of the past century, climate modelers have been forced to adjust direct aerosol forcing in their models to compensate for climate sensitivity due to cloud feedbacks.”

        http://www.atmos.washington.edu/~ackerman/Barcelona.html

        You should straighten this guy out, he is teaching non-sense :)

      • Fred,

        I add one piece of more specific evidence (although I don’t remember the exact reference). Some time ago a paper was discussed here in which Hadley Centre modelers discussed how they are trying to gradually “de-tune” their models, i.e. get rid of many types of tuning that have gone in to improve performance, and replace them with more equations based on fundamentals. They told how that will worsen the agreement with some existing data over the short term, but that they must do it, because the tuning may have a worse effect on the reliability of long-term projections. This is work in progress and will take long to complete. Even then much tuning will certainly remain.

      • Dallas – the misconception that aerosol forcing is adjusted to make the model simulations perform better is widespread, which is why it has achieved the status of myth in many quarters. The sources I linked to and the discussions show that it’s a false claim. Either that, or the experts who do this for a living are making false statements. Given that they provide direct evidence for the means they actually use to address aerosols, which doesn’t involve choices based on how they will affect modeled trends, I expect they are telling the truth.

        Some of the confusion arises because modelers do make choices. It’s just that they don’t make them with an eye to how they will affect the ability of the model to simulate temperature trends.

        While there are many web myths in circulation, I think it’s unfortunate that someone like Lindzen would help perpetuate this one. I believe this reflects careless thinking on his part rather than deliberate deception, but it’s unhelpful in any case.

        I also believe that since none of us here is nearly as knowledgeable about this as the modelers I’ve mentioned, it would be useful to have further input from them on the topic. The dialog I linked to, however, is a reasonable substitute in the meantime.

      • Fred, I think, is hung up on a semantic difference that is required for “communicating” in a way that makes things seem not circular. The desired semantic effect outweighs what every modeler knows. Not tuning parameters would be scientific malpractice. Fred, the errors are large because the problem is tremendously complex. Without tuning, we would be off by orders of magnitude. I’ve explained it as clearly as I can. As you, Fred, are fond of saying, base statements on the literature, NOT blog posts. Wilcox’s book on turbulence is an excellent place to start.

      • David – I did look up the literature to confirm Gavin’s statement. But again, since he knows more about this than you, I, or others who don’t construct climate models, his statements are a good starting point, with the literature as further reinforcement.

        There’s nothing semantic about it, David. Either the models are tuned by adjusting aerosols and other variables to improve their trend simulations, or they aren’t. It appears that they aren’t. They are tuned to get the basic starting climate right, but once that’s done, the model either does or doesn’t perform well on simulating trends, and if it doesn’t, it’s not tuned to make it do better.

      • Fred, I am not sure there is a misconception. While the models are not tuned on the fly, they are initially tuned to better match observations. The 1910 to 1940 period required strong aerosol and solar “tuning”, which we have discussed in the past. Gavin stated that the 1910–1940 warming was mainly solar plus reduced volcanic aerosols. That is the assumption they made while setting up the model. I even noticed that a positive aerosol forcing, black carbon, was used at the end of that period.
        As Pekka said, some assumptions have to be made since there are unknowns, which is effectively “tuning”, adjusting, tweaking or any other similar term. It is just part of the process.

        I am not particularly sure why this is an issue. Skeptics just generally consider that the aerosol adjustments, or estimates if you will, are overstated relative to the cloud feedback and CO2 forcing.

      • While the models are not tuned on the fly, they are initially tuned to better match observation.

        No, they are not, if by observation, you mean the expected temperature trend. They are not tuned in order to get that right, which is one of the main points Gavin Schmidt makes, along with references to back it up.

      • Fred,

        I’m not proposing that the models are tuned by adjusting aerosols, but I will describe something which might well have happened. This is certainly highly simplified, but the basic idea is fully realistic.

        1. Based on earlier analyses and their ideas of the most likely properties of the climate system, they conclude that a rather large influence of aerosols is likely and that the climate sensitivity is also relatively large.

        2. When that has been concluded, the input assumptions concerning aerosols are chosen and other subjective choices are made consistently.

        3. The resulting model behaves essentially in agreement with expectations and additional tuning of the model makes this agreement even better.

        4. When this model is used in further testing it gives results which are largely confirmatory.

        The point is that there was already a lot of knowledge available at the time of the first step, and that the modelers made certain choices at that step. They could have made other choices that have never been studied, and it’s quite possible that the later steps would have been just as successful, but the resulting model would still be quite different, and the role of aerosols different as well. A fundamental problem is that it’s impossible to prove generally that no other set of original choices would lead to successful further steps.

      • Chief Hydrologist

        ‘Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures. Where precision is an issue (e.g., in a climate forecast), only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’ http://www.pnas.org/content/104/21/8709.full
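        McWilliams’ prescription, estimating irreducible imprecision from a systematically designed family of models, can be caricatured in a few lines (the response function and parameter ranges are invented for illustration only):

```python
import random

def toy_response(feedback, forcing):
    """Hypothetical toy response: dT = forcing / (lambda0 - feedback),
    with lambda0 a no-feedback restoring strength (W/m^2/K)."""
    lambda0 = 3.2
    return forcing / (lambda0 - feedback)

random.seed(0)
# A 'systematically designed family': sample plausible-but-nonunique
# parameter choices and look at the spread of outcomes.
ensemble = [toy_response(random.uniform(0.5, 2.5),
                         random.uniform(3.0, 4.4))
            for _ in range(1000)]
spread = max(ensemble) - min(ensemble)  # irreducible-imprecision estimate
```

        The spread of the ensemble, not any single “tuned” member, is the honest statement of what the family of plausible models implies.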

      • This exchange of comments is growing rather long, and I’m not sure much more will come out of it without further input from professional climate model designers. They (or at least Gavin) state that when models are designed, the choices regarding aerosols and other relevant parameters are made on the basis of physics and the observed properties and concentrations of the aerosols, and are not based on assumptions about how the choices will affect the ability of the models to simulate temperature trends. In other words, the modeler does NOT say, “well if I input this level of forcing, the aerosol effect won’t be sufficient to make the model perform well, and so I’ll choose one of the other available options because I know we need substantial aerosol forcing to get the simulations right.”

        Unless the modelers are falsifying what they actually do, none of the tuning in models to get starting climates right is done with the object of making the simulation of trends from CO2 forcing come out right. It’s simply done as the best fit to the physics and observed climate properties (not trends).

        This appears to invalidate claims by Lindzen and others that aerosol adjustments are used as fudge factors to improve model performance, but not being a climate modeler, I can’t add much to how the modelers describe what they do.

        I’ll look forward to anything the modelers have to say here, but also repeat the recommendation to review the collide-a-scape dialog where this is discussed.

      • Fred, I think there is a subtle point being missed here. While Gavin may not call it tuning, it was assumed that 1910 to 1940 was natural and due to the change in volcanic aerosols, man-made aerosols and increasing solar, which at that time was based on the older Lean, Holt and Wang solar reconstructions. The initial estimates of those factors are what we consider tuning. Times have changed and the data quality has changed, so the initial estimates have changed. So should modelers “adjust” to improve the model output, or just assume that they hit it right the first time?

        I think GISS has a new paper on solar cycle impact in the northern hemisphere http://wattsupwiththat.com/2012/02/29/giss-finally-concedes-a-significant-role-for-the-sun-in-climate/

        One of the points is that natural internal oscillations amplify solar forcing changes. If that is true, it is not, to my knowledge, included in the models. Should the models be adjusted to consider the natural internal oscillations which may amplify solar variability’s impact on surface temperature?

      • Dallas – the 1910–1940 warming involved declining volcanism, some solar increases, and a significant contribution from CO2 and other anthropogenic ghgs. I don’t think this is relevant to alleged tuning of current GCMs to make their trend simulations accurate, which appears to be a misconception. It’s probable that some inverse modeling may have been done for that interval to get a better handle on the forcings, but that’s a different subject.

      • There are two possibilities concerning the aerosols.

        The first is that their role is well understood based on empirical data and physics. Thus their influence is known without the help of the models. Then they can be included in the model based on this information.

        The other possibility is that aerosols are not understood that well and modelers are forced to make assumptions about their properties. They already know how their choices will affect the resulting climate sensitivity once the model is developed to agree with the temperature history of recent decades. Thus they make assumptions that they know will largely determine the climate sensitivity of their model.

        Based on what is generally stated about the level of understanding, which of these choices is closer to the truth?

        Are there really any more possibilities? If there are, I’m unable to figure out what they could be.

      • Parameter choices done to get the best fit to existing climates are not tunings of this type.

        That’s fair enough. The Standard Model of the particle zoo had I think 19 particles a couple of decades ago, and grew to 23 a decade later, no idea what it is now. Data increases parameters and theory decreases them again so it could have gone either up or down a parameter or three in last decade.

        What would be a reasonable number of parameters for a model of long term global land-sea climate, defined as everything slower than the solar cycles, both TSI and magnetic or Hale? (I’m assuming all parameters on which the model depends are counted; e.g. if changing the acceleration due to gravity, radius of the Earth, etc changes the model’s behavior then those should be counted.)

        And should “reasonable” be a function of unexplained variance (uv = 1 – r2)? An excellent question for our resident statistician. Matt, is there some general rule in statistics for a reasonable number of parameters as a function of anything including unexplained variance (uv = 1 – r2), or uv as a function of number of parameters?

        If one could get the uv for such a model down to 1% with 6 parameters all told, .1% with 9 parameters, and .01% with 12 parameters (so 3 parameters for each decimal place of accuracy of fit), I’d call that a perfectly reasonable number to count as fitting. If the current models have 20 parameters or more however then unless it’s giving 6 decimal places of precision I’d be more inclined to call it tuning.

        Where does Gavin Schmidt draw the line between reasonable fitting and unreasonable tuning?
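        The parameters-versus-unexplained-variance tradeoff raised above can be made concrete with a toy numerical experiment (a hypothetical sketch using polynomial fits to synthetic data; it illustrates the statistical point only and says nothing about any actual GCM):

        ```python
        # Toy illustration: unexplained variance uv = 1 - r^2 as a function of
        # parameter count, using polynomial fits of increasing order.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 200)
        y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

        def unexplained_variance(n_params):
            # n_params coefficients = polynomial of degree n_params - 1
            coeffs = np.polyfit(x, y, n_params - 1)
            resid = y - np.polyval(coeffs, x)
            return float(resid.var() / y.var())

        for k in (2, 4, 6, 8):
            print(k, round(unexplained_variance(k), 4))
        ```

        Each added parameter buys a smaller reduction in uv; where diminishing returns set in is exactly the “reasonable fitting versus tuning” judgment call posed in the comment.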

      • Pekka – Modelers make choices, but unless they are not telling the truth, they don’t make those choices based on how they want climate sensitivity or temperature trends to come out, but on how their choices best fit the physics and existing observations.

        Specifically regarding sensitivity, I don’t understand your point. How will a choice about aerosols affect the climate sensitivity to CO2 doubling that emerges from a model? Aerosols are not a significant feedback on CO2-mediated warming as far as I know, at least when Charney feedbacks are considered. For Earth System Sensitivity over multiple millennia, aerosols may play a feedback role, but that isn’t part of the standard sensitivity estimates.

      • Fred, You are repeating yourself. Tuning is essential, even of “forcings”, even in simple aerodynamic simulations. Pekka is right that implicit tuning is both necessary and standard practice. The only reason to say otherwise is for “communication of certainty”. You use as large a tuning suite as possible and hope for the best.

      • Fred,

        I didn’t repeat one point from my earlier comment here.

        Again I describe how things easily proceed. I don’t make specific claims about the extent to which they have influenced climate models. The mechanisms are, however, very common, and any claim about their small role should be based specifically on knowledge about these stages of the work.

        I’m also led to simplify the argument the more I’m forced to explain what I mean.

        The point is that the modelers must tune the model in the next steps to get it working reasonably well and that they have many opportunities for that.

        Thus, if little aerosol influence is chosen, the natural level of temperature is higher around 1960 and less climate sensitivity is needed; the additional tuning then creates a model of lesser sensitivity. The opposite is true if the aerosols are assumed to have a strong influence. The tuning that I discuss above is not considered tuning, because it is done at an early stage of development, as it’s known already at that point that the model will fail otherwise.

        The statements about the absence of tuning refer to the latest stages of working with the models. Many choices are made earlier, at stages where the need for these choices is noticed. At the very early stage the modelers have more freedom of choice, as there are still many opportunities to compensate for their consequences. Sometimes these early choices are made without much knowledge about their influence, but here we have choices whose consequences were already largely known. In such cases the choices are often made in a way that ultimately confirms what the modelers believe to be true, even if that belief is not on a strong basis.

        What we know about the differences between different climate models with respect to the role of aerosols appears to confirm that this is not only a theoretical worry, but a real problem in assessing the reliability of climate models.

      • David – You seem to be setting up a straw man. Everyone agrees tuning is necessary and is performed. The point is that models aren’t tuned to make their simulations of trends come out right, contrary to what Lindzen alleges. I do hope one or more modelers stops by to confirm this, because it will dispel a myth about aerosols as “fudge factors”. Unless the modelers are deliberately falsifying what they do, aerosols aren’t adjusted as fudge factors.

        Pekka – My earlier point remains. I don’t see how making choices about aerosols affects the climate sensitivity to CO2 that emerges from models. In fact, if you read what Hansen and others say about this, they point out that their model has a particular sensitivity (e.g., 2.7 C), and that changing aerosols affects projected temperature. It doesn’t affect the climate sensitivity to CO2, which doesn’t depend on aerosols.

        I have to say that I believe the modelers have the last word on this, and although I can’t continue here to restate what they say, they are worth listening to.

      • Fred

        What they say is correct for their present models. What I claim is that making a different choice at an early stage in the process leads, through normal tuning practices, to a different model.

        None of the statements of the modelers that you have cited addresses this fundamental issue of model development. They all appear to apply only to situations where all those choices have already been made.

        I wrote an independent comment before coming to this subthread. There I mentioned specifically that one of my main worries is related to the apparent ignorance of this issue.

        In spite of these worries I’m not as skeptical of the model results as some others, including Judith, if I have interpreted her writings correctly. Most certainly I would like to have more information on these issues, and I really hope that the modelers don’t avoid discussing them by limiting their comments to those applying only to the present models, rather than also discussing how different the models might be if the development history had been different. (Different with respect to early assumptions and subsequent implicit tuning.)

      • I have to depart for a while. If anyone wants to go back to read the dialog between Gavin and Judith Curry, I think they will find that Gavin describes parameter choices in models as unrelated to how they will affect the ability of the models to simulate observed temperature responses to CO2 (or other forcings). Instead, the choices are based on the relevant physics and the properties of the particular item (e.g. aerosols). According to him, models are neither “retuned” after a run to make them perform better, nor are they “pretuned” before being tested with the goal of making them perform well.

        If anyone has contrary evidence to indicate that he and others are not telling the truth, or that tuning for the purposes I mention has somehow “crept in unnoticed” at some stage, it hasn’t yet been presented here. I conclude that the Lindzen claim that aerosols are adjusted to make the model simulations come out right is false, but if some expert modelers can contribute further to this discussion, I’ll look forward to it.

      • Fred, Pekka’s description is correct. Tuning should be done as problems arise or new data is available. Not tuning based on your ideas about outcomes is foolish. The assertion you make is not credible, and if true would make me consider hiring a new batch of modelers.

      • Hi David – I think you may still be feeling the effects of last night’s Cabernet. Seriously, please read what i’ve written (and what Gavin has written). It shows that aerosols are not adjusted as fudge factors, unless you think he is deliberately telling an untruth. I don’t see much ambiguity in that claim, but readers should judge for themselves rather than merely going by the comments in this thread.

      • Fred,

        Equations based on physics are the starting point for the models. The equations include conservation laws and other fundamental equations, like those describing the radiative interactions, thermodynamics and fluid dynamics. It’s, however, not possible to solve anything realistic without additional input, like various parameterizations of processes of smaller spatial scale or otherwise not covered by the fundamental equations. Furthermore, the discretization and related issues of numerical methods also influence the outcome.

        Due to all these extra factors the modelers must make very many choices, and they do them in a way that is expected to lead to the best model based on their professional judgment. What the choices will be then depends on the situation in which they are made. If earlier choices have led too far in one direction, the later ones are made to compensate for that. Therefore changing the assumptions on aerosols at an early stage will influence later choices on other points. The choices affect each other in any normal model development process, but how much and how they affect each other varies widely and depends on the goals and nature of the model development project.

      • Returning after a few hours away, and rereading my comments, I should apologize for the short-tempered tone of some of them. I think I was motivated by the sense that I was defending climate modelers against attacks on their veracity rather than simply describing my own views. None of us here knows as much about how climate models are designed as do the people who design them professionally, nor does Richard Lindzen. I take the modelers at their word when they state that the parameter choices (tuning) they make are designed for purposes that don’t include helping climate projections match observations. Also, in general, I take people at their word about their intentions in the absence of good evidence to think otherwise. Gavin was pretty unequivocal about the basis for parameter choices, and readers should visit his comments rather than make judgments based on mine.

        I think Pekka, as always, has tried to consider all possible explanations for what goes on, and that’s appreciated. David Young is appreciated for his comments on fluid dynamics among other things, but here I thought he was being too resistant to the possibility that the modelers were acting appropriately, even though David is undoubtedly correct in some of his other concerns about model design.

        Finally, even if Pekka is right about possible subtle biases creeping into the data on which the modelers subsequently base their choices, there is no question in my mind that Lindzen’s claim to the effect that aerosol forcing is adjusted as a “fudge factor” is false, and he should know better than to keep repeating it.

      • Fred, Pekka and I have done complex modeling. Climate models are largely complex fluid dynamics models. You are getting lost in semantics. Whether Gavin’s very narrow assertion is true is independent of the truth of the assertion that aerosol models and forcings are “an essentially arbitrary adjustment factor.” When models are built, and as they evolve, tuning is done. The aerosol forcing is 0.4–2.7 W/m2. How would a modeler make a choice based “solely on physics”? Someone claiming that is either not very truthful or not very bright.

        This whole attempt to discredit Lindzen seems to be too emotional to be purely scientific. It is quite possible that both he and Schmidt are roughly right. Schmidt is, however, leaving out information that people who do modeling know to be important.

      • David – I think you’re way way out of your depth if you think you can compare your knowledge of model construction to that of Gavin Schmidt. It truly makes you look foolish, and that won’t happen if you stay within your limits. I’m more patient than Gavin, but I can see why the people at RC might become exasperated enough to want to see no more of you despite the fact that you could say something useful.

        Pekka has raised some interesting points. I’m not completely convinced by them, but his perspective is always worth considering. On the other hand, Lindzen’s notion of aerosol adjustments as fudge factors is unequivocally false, unless you think the modelers are lying. There’s no evidence they are, and good evidence they aren’t.

        If I were you, I would stop digging.

      • Fred, You are right back to the ignorant authority-citing grumpy Fred. You are ignorant of fluid dynamics and subgrid models. But I’m sure Schmidt knows more about turbulence. Get a life, Fred. If Gavin has read Wilcox, I’ll stand corrected. If not, then you are very impolite in assuming others’ knowledge is as limited as your own.

      • Pekka – You state, “It’s, however, not possible to solve anything realistic without additional input like various parameterizations of processes of smaller spatial scale or otherwise not covered by the fundamental equations.”

        You also say, “If earlier choices have led too far in one direction the later ones are made to compensate for that. Therefore changing the assumptions on aerosols at an early stage will influence later choices on other points.”

        I don’t think anyone disagrees with the first point, but I’m not sure whether you had something specific in mind with the second. Do you have an example of that happening in the way aerosol data have been handled?

        It seems to me that some of the preceding discussion has been occurring at cross purposes. It’s argued that parameter choices must be made and will affect model performance. It’s also argued (e.g., by Gavin) that those choices are made independent of how they will affect the ability of model simulations to match observed trends.
        These two arguments are not in conflict, but the second falsifies Lindzen’s claim that aerosol forcing is adjusted as a “fudge factor” to make the simulations come out right. Even if some “assumptions… at an early stage” might have been made differently, Lindzen is still wrong in claiming aerosol forcing is adjusted for the purpose he claims, as long as no assumptions, parameter choices, or anything else affecting model simulations are made with the goal of influencing those simulations in a desired direction. Because Lindzen has a reputation as a respected scientist, for him to make false claims strikes me as more irresponsible than the same claims coming from people with no name recognition.

        I’d still like to hear more from Gavin, Andy, or others, because they know much more about their intentions and much more about climate design than you, David Young, I, Lindzen, or other relevant individuals, but the dialog in the collide-a-scape link cited above gives us a good idea of what they are likely to say. Here is the collide-a-scape link again, with the relevant discussion at about 334 to 378.

      • Guys, if it is any help, Gavin said that the models were not adjusted to fit observations “before 2000” and are not adjusted to match trends, but to average conditions. They do get adjusted, though.

        “Some of the most interesting conclusions of the study include those relating to the Arctic. For example, we estimate that black carbon contributed 0.9 +/- 0.5ºC to 1890-2007 Arctic warming (which has been 1.9ºC total), making BC potentially a very large fraction of the overall warming there. We also estimated that aerosols in total contributed 1.1 +/- 0.8ºC to the 1976-2007 Arctic warming. This latter aerosol contribution to Arctic warming results from both increasing BC and decreasing sulfate, and as both were happening at once their contributions cannot be easily separated (unlike several earlier time periods we analyzed, when one increased while the other remained fairly constant). Though the uncertainty ranges are quite large, it can be useful to remember that the 95% confidence level conventionally used by scientists is not the only criteria that may be of interest. As the total observed Arctic warming during 1976-2007 was 1.5 +/- 0.3ºC, our results can be portrayed in many ways: there is about a 95% chance that aerosols contributed at least 15% to net Arctic warming over the past 3 decades, there is a 50% chance that they contributed about 70% or more, etc.”

        http://www.realclimate.org/index.php/archives/2009/04/yet-more-aerosols-comment-on-shindell-and-faluvegi/#more-672

        Hmmm? 1.1C +/-0.8C of warming in the Arctic from 1976-2007 possibly due to positive aerosol forcing, that might tend to de-emphasize CO2 radiant forcing a touch. I seem to recall with the exception of the Arctic, sensitivity to CO2 is rather small other than the mid latitude agricultural belt.
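        As an aside, the probability statements in the quoted passage can be reproduced roughly under one simple assumption of mine (not stated in the paper): that the ±0.8 range is a Gaussian 95% interval and the 1.5 C net warming is taken as exact.

        ```python
        # Sketch: reproduce the quoted probabilities, assuming (my assumption)
        # that 1.1 +/- 0.8 C is a Gaussian 95% confidence interval.
        from statistics import NormalDist

        aerosol = NormalDist(mu=1.1, sigma=0.8 / 1.96)  # aerosol contribution, C
        net_warming = 1.5                               # observed Arctic warming, C

        p_at_least_15pct = 1 - aerosol.cdf(0.15 * net_warming)
        p_at_least_70pct = 1 - aerosol.cdf(0.70 * net_warming)
        print(round(p_at_least_15pct, 2))  # close to the quoted "about 95%"
        print(round(p_at_least_70pct, 2))  # close to the quoted "about 50%"
        ```

        The numbers come out consistent with the quote, which suggests the authors were making exactly this kind of back-of-envelope Gaussian calculation.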

        As Fred said, Lindzen “was” a respected scientist at one time. I wonder if he really has lost his mojo and gone Emeritus?

        “Gavin said that the models were not adjusted to fit observations ‘before 2000’ and are not adjusted to match trends, but to average conditions. They do get adjusted, though.”

        Dallas – I think you succinctly stated the critical point. I bolded it to make clear the distinction between the different things adjustments are designed to match. “Average conditions” refers to the average climate behavior in the absence of a forced trend – i.e., the control climate. An example would be parameter choices made to ensure the seasons come out right, that the Sahara desert is dry, that the monsoons come on schedule, etc.

        The post-1976 role of aerosols is somewhat unclear, but there is good evidence that declining cooling aerosols (e.g., sulfates) played a role, with perhaps black carbon also contributing (but probably not too much if overall aerosols were decreasing). In any case, this is one of the reasons why it’s difficult to make attributions for the post-1976 interval. Post-1950 is clearer in supporting the dominant role of anthropogenic ghgs.

      • Fred, Of course you won’t respond to me directly, but your last emotional response is full of misrepresentations and ignorance. First, I left RC, they didn’t leave me. Just ask Vaughan, Pekka, or MattStat why they post here and not at RC. It’s because RC is a hypocritical place, censoring people they disagree with while posting very vile stuff from the peanut gallery. Also, RC is trying to control the message; that’s their explicit purpose. What’s the point of posting there? Why aren’t you posting there, Fred?

        Your assertions about my knowledge are odd. You are in fact far more ignorant of models of fluid dynamics (and climate is a very complex one of these) than Pekka or I. Whether Schmidt knows more than Pekka and I, I’m not completely sure. He is somewhat knowledgeable about the fundamentals, more so than most climate scientists. However, he has made some comments that are clearly wrong, even though perhaps they were not well considered. One that I recall was the claim that he had never heard anyone say that there were significant errors associated with Reynolds’ averaging. I understand why you haven’t gone to some of the references I have suggested, and that’s OK Fred, even you have a contribution to make, but you should really stop the Gleick-like temper tantrums and impugning of people’s knowledge.

        On the substance, it is quite possible for Schmidt’s statement to be technically true and for Lindzen’s statement to be operationally true. The easiest way to resolve this is for someone to tell me how you would set the aerosol subgrid model constants and forcings based on physics when the range of uncertainty is huge. Bear in mind that aerosol forcings vary a lot over time. The only scientific way to set them is to try to match some data that you have more confidence in, whether that is current climate, hindcasting, or whether they give a “realistic” sensitivity, etc. For the novices in the field, that’s called “tuning”. Virtually any other data is more accurate than aerosol forcing numbers, which are essentially unknown.

        Further up in the thread there were numerous citations from the literature about some of the problems with the models by Chief and others. You of course ignored them, preferring to try to claim that Lindzen knows nothing about modeling, another claim based on ignorance.

        Let me repeat the basic point about models in a concise form (in contrast to your typically long-winded convoluted posts). All complex models require many choices in their construction. The better modelers make different choices and add terms when there are problems or new data comes to light. In virtually all cases of complex subgrid models, there are parameters that are essentially arbitrary and are “tuned” to match data. There is nothing at all wrong with this. It is the best we can do. In some cases, different parameters are used for different modeling situations. The fact that you make such a fuss to deny it is a bad sign, Fred.

      • “Whether Schmidt knows more than Pekka and I [about climate models], I’m not completely sure.”

        David – If you’re not sure, I guess you’re the only one.

      • Ok, so let me get this straight. You have no response to the substance but are into grumpy insulting Fred mode. Your Gleick is showing!! You of course must fraudulently insert words into my sentence that were not there. Fred, did you send that Heartland strategy memo to Gleick? Fred, you are getting desperate and you are really being a jerk.

      • Vaughan Pratt

        @Fred: “Pekka – Modelers make choices, but unless they are not telling the truth, they don’t make those choices based on how they want climate sensitivity or temperature trends to come out, but on how their choices best fit the physics and existing observations.”

        Fortunately for physics, Fred, there exist physicists that don’t think like you. (There are also physicists that do, but they play a rather different role and are unlikely Nobel material.)

        A great example is Planck’s law for black body radiation. In 1900 Planck was confronted with two conflicting laws, each based on physics, namely the Rayleigh-Jeans law that worked great at low frequencies of radiation, and the Wien law that worked great at high frequencies.

        Each law tended to infinity in the domain where the other law tended to zero. For laws of physics, that’s seriously messed up. If that’s not obvious to you then you shouldn’t be theorizing about radiation physics. There is no possible way of using least squares fitting to reconcile two laws that are inconsistent to that degree!

        Planck had to invent something outside the known physics in order to reconcile these two absurdly inconsistent laws. Eventually he came up with a really cute little formula that brought the two laws together, but that had no physical explanation.

        He then developed a version of statistical mechanics that explained his formula. In due course this explanation became the accepted physics underlying what was going on.

        The key point here is that the formula came before the physics, the formula being Planck’s law. Planck did not simply fit to known physics, he invented physics.
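        For readers who want to see the two limits concretely, here is a quick numerical check (standard textbook formulas for spectral radiance; nothing here is specific to the climate discussion):

        ```python
        # Planck's law interpolates between the Rayleigh-Jeans law (low
        # frequency) and Wien's law (high frequency).
        import math

        h = 6.62607015e-34   # Planck constant, J s
        k = 1.380649e-23     # Boltzmann constant, J/K
        c = 2.998e8          # speed of light, m/s

        def planck(nu, T):
            return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

        def rayleigh_jeans(nu, T):
            return 2 * nu**2 * k * T / c**2

        def wien(nu, T):
            return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

        T = 300.0
        low, high = 1e9, 1e14   # Hz: h*nu/kT is ~1.6e-4 and ~16 respectively
        print(planck(low, T) / rayleigh_jeans(low, T))   # ~1: RJ regime
        print(planck(high, T) / wien(high, T))           # ~1: Wien regime
        ```

        Neither limiting law could have been least-squares-adjusted into the other, which is Vaughan’s point: the reconciling formula required new physics, not curve fitting.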

        Substitute geophysics for physics and we have the Atlantic Multidecadal Oscillation, AMO. Unless the geophysical reasons underlying phenomena like the AMO are clear, modelers are winging it when they try to incorporate the AMO into their model. The idea that it is based on known geophysics is ludicrous. Only once we understand the AMO’s mechanism can we say it is based on known geophysics. Until then there are all sorts of possible geophysical explanations, and any model that commits to one of them is simply flying on a wing and a prayer.

        This is no small point given that the amplitude of the AMO oscillations is on the order of a tenth of a degree. In the grand scheme of long-term climate change, that amplitude can drown out a host of other thermal phenomena that we’d love to be able to see.

      • Fred, Gavin’s tuning to average instead of to trend has more to do with the type of model than with a hard-and-fast rule. You may have missed it, but the IPCC discusses tuning in AR4, and it depends on the model complexity.

        I included the bit on Arctic aerosol forcing, because that impacts the average which Gavin would tune his model to. The models would also be tuned to the lack of radiant forcing in the Antarctic and the tropics.

        http://i122.photobucket.com/albums/o252/captdallas2/polesandtropicsRSS.png Or at least they should be, since they are on the verge of being falsified.

        Since positive aerosol forcing is partially responsible for the nearly 2C of warming in the Arctic, it is a tuning issue, because it is an issue.

        Did Gavin happen to mention that Antarctic polar amplification is nonexistent and the warming in the Antarctic shown in GISTEMP is likely an artifact of smearing? I doubt he would bring that up, but it appears to be one of the next shoes to fall, which might require some more “tuning”.

      • Vaughan Pratt

        @David Young: All complex models require many choices in their construction. The better modelers make different choices and add terms when there are problems or new data comes to light.

        But David, that was how the Ptolemaic theory evolved. Astronomers kept adding terms as new data (planets, longer observations) came to light.

        The best modelers look for opportunities to simplify the model at hand, as the Copernican theory demonstrated for the Ptolemaic theory. Planck’s law demonstrated something similar for black body radiation, displacing what was at risk of evolving (as radiation physics matured) into a blend of Wien’s law at high frequencies, the Rayleigh-Jeans law at low, and an ad hoc piece in the middle that could have smoothly connected them to make a “Ptolemaic Planck’s law” had not Planck found his uniform law just as applied radiation physics was starting to feel the need.

        Complexity can easily be an illusion. Sine waves, commonly encountered in nature, are specified by their period, phase, and amplitude, three parameters. And sums of waves also arise naturally. If you add three sine waves together the result can easily look inscrutably complex over any period shorter than the least common multiple of their three periods. That multiple will be finite when the periods are rational, but can be extremely large compared to the individual periods. For example lcm(13/15, 11/10, 7/6) = 1001 which is 858 times the longest period, 7/6, whence one must wait through many hundreds of cycles of the components to even start to detect any periodicity. Yet this seemingly non-periodic sum is modeled with only 9 parameters, and small rationals at that! Science might go for years modeling such a curve with 20 or 30 parameters while not getting as good predictive power as with the simpler and more accurate 9-parameter model.
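        Vaughan’s three-sine example is easy to verify directly. In the sketch below only the periods come from his comment; the amplitudes and phases are my own arbitrary choices.

        ```python
        import math

        # Three sine components, 9 parameters total (period, amplitude, phase).
        periods = (13/15, 11/10, 7/6)
        amps = (1.0, 0.7, 1.3)      # illustrative amplitudes
        phases = (0.2, 1.1, 2.5)    # illustrative phases

        def f(t):
            return sum(a * math.sin(2 * math.pi * t / p + ph)
                       for a, p, ph in zip(amps, periods, phases))

        # lcm(13/15, 11/10, 7/6) = lcm(13, 11, 7) / gcd(15, 10, 6) = 1001
        for t in (0.0, 0.37, 2.9):
            print(abs(f(t + 1001) - f(t)))   # ~0: 1001 is a common period
        print(abs(f(0.5 + 7/6) - f(0.5)))    # not ~0: no single-period repeat
        ```

        Over any window much shorter than 1001 time units the signal looks aperiodic, even though a 9-parameter model describes it exactly.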

        Furthermore once you’ve found the minimum number of parameters, there is a much greater chance that each term of the sum will correspond to a natural phenomenon, possibly unrecognized before, than if you artificially force every term of a 20-parameter model to the Procrustean bed of some known phenomenon.

        New science is much more likely to be discovered by modelers who try to simplify their model without adhering to old science, whether physics, chemistry, geophysics, or whatever.

      • MattStat/MatthewRMarler

        Vaughan Pratt: Matt, is there some general rule in statistics for a reasonable number of parameters as a function of anything including unexplained variance (uv = 1 − r^2), or uv as a function of number of parameters?

        There is a plethora of general rules, and they include the number of observations as well as the r^2, and the correlations of the parameter estimates (stability of the estimates.)

        However, data sets can be constructed to defeat any general rule, and in practice good models are selected after a thorough hashing out of all the issues, like here, and after determining which models are confirmed by other data and have correct predictions.

      • MattStat/MatthewRMarler

        Fred Moolten, quoting Dallas: “Gavin said that the models were not adjusted to fit observation “before 2000” and are not adjusted to match trends, but average conditions. They do get adjusted though.”

        I think that it is impossible to tell from the published record how much tuning has occurred. It is seldom the case that authors publish exactly what they have done, partly due to page constraints, occasionally a self-delusion that a choice early on does not matter, at times a self-delusion that only parameter values that get the correct result are physically real — the list of large and small flaws is long. Fred has a confidence that no important tuning to get desired results has been done, at least not in the work of Gavin Schmidt and colleagues. Most of the rest of us who have more experience in modeling and publishing than Fred has are much more skeptical than he is.

        On a previous thread I defended my use of “ad hoc” with reference to a post-prediction (post incorrect prediction) of a re-examination of the effects of aerosols in one model. I share Lindzen’s suspicion that there is more ad hoc fitting than what has been explicitly disclosed. This is one of those things on which I would like to be wrong.

        The truest test of the models is in the accuracy of their predictions. So far, none has been shown to be very accurate at making predictions. It is possible that they could be accurate over some long run while being inaccurate over the short run, but that has not been demonstrated either, and until it is demonstrated there is no reason to believe it.

      • David Young

        Vaughan, I can’t get this below your comment, so it’s lower down. Your post on complexity is correct. What we really need in nonlinear systems is a simplifying theory that can explain things. I am a big fan of simpler models within their range of validation. One advantage of these is that they tend to be inexpensive to run, so they can be subjected to much more rigorous validation. Anyway, thanks for posting this insight.

      • David Young

        Fred, This thread has become unreadable because of the constant recapitulation of a single talking point taken from a literal-minded, legalistic interpretation of something Lindzen may have said. What I’ve heard him say in the past is merely that each model uses a different value for the aerosols, and that given the lack of understanding, it can be viewed as an essentially arbitrary adjustment factor that can cancel most of the greenhouse forcing. That is far different from your prosecutor’s focus on one interpretation. This is just so much focus on “atoms of scripture cast as dust before men’s eyes” and not on the “main design.” The fact of the matter is that there should be a lot more to the aerosol model than just the gross forcing. There is also the spatial distribution of the forcing, a critical input, and the subgrid model, which I assume must be pretty complex. But then again, given the level of ignorance, perhaps it’s just a specified forcing. Let me say that the “real physics” must be very complex and involve such things as clouds, convection, etc.

        As I summarized on the following modeling thread, those of us who have done complex modeling of systems similar to the climate system know that there are many serious problems having to do with tuning subgrid models and the other thousands of choices modelers make. The only way to rise above this approach of nitpicking and uninformative vague statements is for modelers to examine rigorously the sensitivity of results to these choices. That’s my whole purpose for being interested in this: to try to show people that it needs attention.

        The broader picture is a lot more important and actually involves trying to understand subgrid models.

      • David – If you hadn’t specifically addressed me, I wouldn’t add to this overlong thread. I’ve made clear the evidence I find convincing that Lindzen has been making false statements to the effect that aerosols are adjusted to make models come out right. You may not be convinced. Readers can judge for themselves. I don’t know of further evidence to add, and I agree with you that there are other aspects of aerosol modeling that probably deserve more attention. I’ll leave it at that.

    • If climate science really understands the various factors and relative weights of those factors and if their models were accurately representing the actual climate system, why would the models of the system be inaccurate for near term predictions?

      A simple question that none of those trusting climate models can provide an adequate answer to. The accurate answer is the system is not sufficiently understood.

      • +Lots

        If your model can’t forecast a short time ahead, like next year, how can it possibly be expected to be right in 50 or 100 years?

      • Vaughan Pratt

        @Latimer: “If your model can’t forecast a short time ahead, like next year, how can it possibly be expected to be right in 50 or 100 years?”

        Wow. I think you’ve hit the nail on the head here, Latimer. This seems to be the basic sceptic argument.

        Without claiming its conclusion is right or wrong, one can at least see that the reasoning leading to that conclusion is illogical as follows.

        Consider the religion whose deity is M*D (Maxwell’s Demon to you gentiles). In Chapter 7 of the Book of Reynolds we read “Each year M*D tosses a coin. Heads is hotter, tails is colder. M*D Himself cannot foretell the outcome of that toss. Climate hath no other driver but M*D.”

        Long term, climate as governed by M*D is going to follow a random walk. While it will drift, it won’t drift rapidly, according to the nature of random walks. This makes it possible to bracket where temperature will be a century from now within reasonable error bounds.

        Yet anyone selling a model that can forecast next year’s temperature is committing heresy by claiming greater clairvoyance even than M*D!

        You may well not believe in M*D, Latimer. But do you still believe in your reasoning? (You did use the word “possibly”…)

      • Peter Davies

        Latimer, weather/climate prediction is certainly rife with wide error bands, but it has always been my understanding that the shorter the time span, the more unpredictable weather/climate is.

        On the other hand, the longer the time span, the lesser the degree of unpredictability. The longer the time span, the narrower the error band becomes – does it not?

      • The random walk argument is a powerful one because it can also be used in a pinch to cover for all the chaotic parts of the model. Chaotic motions can go in any direction, but if in the end they follow what look like random trajectories, then those can be modeled as a random walk that reverts to a mean value (aka the Ornstein-Uhlenbeck process). The process will appear to walk randomly, but without a non-physical change in the free energy, the cumulative energy will remain what it was when it started.

        The only events that can cause a sustained departure from the mean are external forcings such as GHG increases, albedo changes, and a few minor behaviors that act like triggers.

        I follow this line of thinking because when all is said and done, the diagnosis will show the net energy change and any hidden sinks will be revealed. It might take decades, but I can follow along in my spare time.
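        The contrast between a pure random walk and the mean-reverting (Ornstein-Uhlenbeck) process mentioned above can be sketched in a few lines. This discrete AR(1) toy is my own illustration, with made-up parameter values, not any climate model:

```python
import random

def simulate(theta, sigma, steps, x0=0.0, seed=42):
    """Discrete Ornstein-Uhlenbeck (AR(1)) update:
    x_{t+1} = x_t - theta * x_t + sigma * noise.
    theta = 0 degenerates to a pure random walk."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        x += -theta * x + sigma * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

ou = simulate(theta=0.2, sigma=1.0, steps=5000)  # mean-reverting
rw = simulate(theta=0.0, sigma=1.0, steps=5000)  # pure random walk

# The mean-reverting path stays bounded near zero;
# the pure walk wanders far from its starting point.
print(max(abs(v) for v in ou), max(abs(v) for v in rw))
```

With the same noise sequence, the mean-reverting path's excursions stay bounded (stationary variance σ²/(1 − (1 − θ)²)), while the pure walk's spread grows like the square root of the number of steps.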

      • Chief Hydrologist

        But L*D is one up on M*D – because he can predict the toss.

        Climate is not a random walk.

        ‘Most of the studies and debates on potential climate change have focused on the ongoing buildup of industrial greenhouse gases in the atmosphere and a gradual increase in global temperatures. But recent and rapidly advancing evidence demonstrates that Earth’s climate repeatedly has shifted dramatically and in time spans as short as a decade. And abrupt climate change may be more likely in the future.’

      • Chief Hydrologist

        Oh and Webby – let me give you a clue – otherwise you might remain clueless . The mean is halfway between a glacial and an interglacial. It last happened on a Tuesday.

      • MattStat/MatthewRMarler

        Rob Starkey: If climate science really understands the various factors and relative weights of those factors and if their models were accurately representing the actual climate system, why would the models of the system be inaccurate for near term predictions?

        It is possible that the model of the trend is correct (influence of CO2 change on temperature change), but that the model of the rest of the climate is unknown, and that the rest is cyclic and entered a “low” epoch of the cycle just after the predictions were made. If so, the temperature will shoot up again at a rate higher than the forecast rate, starting perhaps 2025, and by 2050 the temperature will be close to the model prediction.

        Now back to your exact wording. “If climate science really understands the various factors … ,” then this won’t happen: short-term forecasts should be more accurate than long-term forecasts in that case. The lack of close agreement of temperature to forecast shows that “really [understanding] the various factors ” does not characterize current knowledge.

    • Fred,

      I appreciate your respectful tone and patience in replying to so many responses that make similar points.

      You seem to have a lot of unjustified faith in the opinion of just one person – Gavin Schmidt. Aren’t the opinions of people I cited – like Jeffrey Kiehl, Peter Huybers, Reto Knutti, Stephen Schwartz, and not to mention the AR5 Chapter 10 authors – more likely to be neutral than a scientist like Gavin Schmidt who runs an advocacy blog like RealClimate? I am not suggesting that Gavin is dishonest, but he is hardly neutral.

      The GISS model is just one GCM out of more than 30 and would be the work of hundreds if not thousands of scientists and engineers. Moreover, Gavin himself has only been around 15 or 20 years – compared with Lindzen who has been around since the 1960s. Lindzen was contributing to GCM development in the 1970s. Why claim that Gavin should know more?

      As Huybers points out, as do Kiehl and Held, many of the choices made in model development are simply undocumented. No one can claim to know whether or not there was tuning. Yet analyses such as Kiehl 2007, Knutti 2008, Huybers 2010 – even Dessler 2010, I noted – more or less prove that there has been tuning in the model development process.

      Have you actually read the papers that I cite? They are now widely cited and discussed – especially the original paper by Kiehl.

      Finally, Lindzen’s point is not explicitly about model tuning, so I think the points I made that you didn’t respond to may be more important.

      • Alex – I’ve read some of the papers, and I’ve quoted Kiehl. Gavin Schmidt has made the points I emphasized in many places, including the collide-a-scape site, but other modelers have made the same points (e.g., Jim Hansen and I think Andy Lacis). These people aren’t necessarily more expert than others in all aspects of climate, but they know much more than Lindzen and the others you mention about how climate models are constructed.

        I don’t want to belabor the point, but there is no disagreement about the need for modelers to make choices. What Lindzen wrongly stated is that the adjustments (of aerosols) were made for the purpose of improving the match between simulation and observations – i.e., that they were fudge factors. I don’t believe any modeler has suggested anything but the opposite of that, and in the absence of evidence the modelers are deliberately untruthful, I think we can conclude that Lindzen has no basis for that allegation, and shouldn’t make it.

      • Fred, how are the aerosol parameters and forcings set then? If it’s not to match observations, what else is there? The physics is essentially unknown according to the IPCC.

      • David – Please read the collide-a-scape exchanges, where Gavin goes into some detail about how aerosols are handled.

      • “the physics are unknown”?

        What are you talking about, David? This seems to be a caricature of some of the sillier contrarian arguments that hold that if we don’t know everything, we know nothing.

      • Fred, you of course take single phrases out of context. Another Gleick tactic. The physics is very uncertain and is ESSENTIALLY unknown. Let’s see: 0.4–2.7 W/m2. The upper range is higher than all GHG forcings and the lower bound is smaller than solar variations.

      • Brandon Shollenberger

        David Young, it is much worse than you say:

        Fred, You of course take single phrases out of context.

        He didn’t take a phrase out of context. He fabricated a quote and attributed it to you.

      • Fred, It took me 30 seconds to find the single sentence in Schmidt’s very long dissertations that gives the method for setting aerosol forcings. They simply took the mean of the literature estimates, i.e., about 1.0 W/m2. That may explain why their sensitivity is 2.6 K, well below the IPCC mean. You know, Fred, you could have just said that if you really understood it. Just taking the “median” of the literature estimates is a punt when the range is so large and the understanding so low. But it’s certainly a legitimate way to do it. I do think that using it to match data would be a better method from a scientific point of view. That’s what is done in most cases by modelers. If something has an error bar of 160% of the median value, you treat it as somewhat adjustable within that range.

      • David – I think you should have read further. The models didn’t take that value as the median for aerosol forcing, but for the aerosol indirect effect. If you follow the references, it turns out that the value is based on evidence, not assumptions, including inverse modeling, and that if somewhat different values are tested, the effect on temperature change is small.

        The justification for the choice is reasonable, but the point is somewhat irrelevant. Lindzen’s claim that aerosol forcing is adjusted to match trends is a false statement based on everything cited in the way of evidence, unless the modelers are being deliberately untruthful about how they designed their models.

      • “They simply took the mean of the literature estimates, i.e., about 1.0 W/m2. That may explain why their sensitivity is 2.6 K, well below the IPCC mean.”

        David – Unless I misinterpreted your statement, you also don't seem to understand that the climate sensitivity they cite is not based on how the aerosols behave. Your statement linking the two suggests that you are not familiar enough with the concept of climate sensitivity, how it's derived, and the process by which it emerges from models.

      • sorry the italics weren’t closed

      • What, Fred!! “It turns out that the value is based on evidence, not assumptions, including inverse modeling, and that if somewhat different values are tested, the effect on temperature change is small.” Are you telling me that they saw that the effect on temperature was small? What are they doing using tests against real data and looking at sensitivities of model outputs?? I thought it was set from first-principles physics!!

        Fred, it’s late and past your bedtime. Suffice it to say that however the values and the subgrid models are set, they are essentially arbitrary adjustment parameters, just as Lindzen said. Each model uses a different value for the unknown.

      • Fred, If you had read the references earlier in the thread, there was one that examined the relationship between the aerosol forcing assumptions and the sensitivity I think. It’s late and I don’t have time to track it down. Perhaps someone else will. To assert that they are independent assumes that modelers don’t do “implicit” tuning to get a reasonable sensitivity. Something that I think is pretty likely.

      • Fred, Just to be clear. I enjoy arguing with you and like you. I can just imagine being on the patio with you smoking a cigar and enjoying a fine bottle of wine and arguing about these issues. I do get a little upset when you assume that I am ignorant of a field where I have quite a bit of expertise. And your idolization of Schmidt is somewhat odd. He is a good scientist who is perhaps too involved in “communicating” to control the message. Cheers.

      • David – the Kiehl reference is one I cited earlier and I explained why it doesn’t tell us anything about model adjustments to match trends. You can find my comment elsewhere in the thread.

        Your other points have already been addressed as well, including the use of inverse modeling to arrive at the best estimate for a parameter. You should read those comments too. No one has ever claimed that one can derive aerosol effects from first principles without utilizing observational data. Our knowledge of both the physics and the observed properties of aerosols is used for model inputs, but these inputs aren’t adjusted with the goal of arriving at a particular trend line.

        It’s midnight here, so I’ll stop for now. Despite the heated discussion, I think I got something useful out of it. In particular, I think Pekka made a good point about the possibility that subtle biases can creep into mainstream assumptions, and when these are then used by modelers, the model itself can be biased. I don’t know whether that pertains to aerosols, but it’s a valid general point.

        On the original and more specific question of whether, as Lindzen asserts, aerosol forcing is adjusted to make model trends match observations better, I think the evidence is unequivocal. Lindzen is wrong. Parameter choices in model development are made for a number of legitimate reasons, but not for the reason Lindzen claimed, and I think it’s unfortunate that he has continued to make that claim.

      • David – I wrote my last comment without having seen the gracious one you wrote ahead of mine. I too enjoy our discussions, and I have great respect for your knowledge. I will probably disagree with you often on matters where I think your knowledge is only part of the recipe for a good understanding, but it will still be worthwhile.

      • I’m a little bit less assertive than Fred about aerosol tuning, since some evidence exists to suggest that model aerosol parameters might have been conditioned on the modelers’ understanding of historical climate change. One cannot assume that modelers are completely ignorant of the existing literature on sensitivity or observations, so choices can inherently be made, even if unconsciously, on a basis such as that. His point that people have not played with aerosols as fudge factors to get observations right, etc., however, is correct. It’s also wrong to say none of the physics is known, though large uncertainties remain, particularly with cloud indirect effects.

        It should be kept in mind that since the AR4, there have been a number of advances in monitoring and quantifying aerosol effects. There have been several measurement studies of aerosol effects, though these usually are not completely independent of modeling. It should also be kept in mind that the big issue is not necessarily how radiation interacts with aerosols, but understanding and monitoring the aerosol distribution and the environment they are in on a global scale. In fact, the time evolution of aerosol forcing is an even more uncertain quantity than the current aerosol forcing. When aerosol properties are known, there is skill in modeled vs. observed shortwave fluxes. In the AR5, new direct effect RF results are based largely on simulations in AeroCom (an inter-comparison of many global aerosol models that includes large evaluation against measurements, such as AERONET, MODIS, and MISR data).

        Regardless of any of this, it does not excuse Lindzen’s incorrect statements about aerosol treatment by modelers, nor does he get any credit for picking the very high end of the ~1-3 W/m2 uncertainty range in total RF (2010 relative to 1750). Note that the AR5 will also define a so-called ‘Adjusted Forcing’ (AF) that has a different definition than RF (allowing atmospheric and land temperature to adjust while ocean conditions are fixed), which has usefulness in aerosol discussions due to various semi-direct rapid responses, though this quantity is also largely uncertain. Regardless of how one feels about the ability of models to get aerosols down, no one would have gotten the impression from Lindzen’s talk that he carefully picked the extreme tail end of plausible forcing values to get the lowest sensitivity he could get, and then couldn’t even get into the transient vs. equilibrium issue.

        This is inexcusable. As Andy Lacis mentioned, Lindzen is selling a good story, he is not selling objective science, or giving an honest representation of how the scientific community thinks about this topic.

      • Vaughan Pratt

        @Fred: “What Lindzen wrongly stated is that the adjustments (of aerosols) were made for the purpose of improving the match between simulation and observations – i.e., that they were fudge factors. I don’t believe any modeler has suggested anything but the opposite of that, and in the absence of evidence the modelers are deliberately untruthful, I think we can conclude that Lindzen has no basis for that allegation, and shouldn’t make it.”

        Fred, your third sentence beginning “I think we can conclude” appears to be based on your second sentence, “I don’t believe any modeler has suggested anything but the opposite of [adjustments serve to improve the match between simulation and observations].”

        I’d be fine with this with a really tiny edit: “we” –> “I”.

        You have some gall attributing illogical reasoning to the rest of us. If you seriously believe the modelers have a clue about what aerosols have been doing since 1960, I would say it was time for Judith to open up a thread on that topic. (Or reopen it if we’ve already had at least one, I haven’t been keeping track.)

        Can the modelers say what the effective altitude of “the aerosols” was between 1960 and 1980? Was it 2 km, 8 km, or 15 km? The first would heat the surface, the last would cool it. Is that what the models say? If not then I’d love to understand why not.

      • Vaughan – You objected to implications of illogicality, although they weren’t aimed at you, but I see some evidence of illogicality in your comment, in the form of non-sequiturs. The point I wanted to make in representing what the modelers state is that they don’t adjust aerosol inputs in order to make projected trends come out right. This is not the same as saying that aerosols are understood perfectly (nor that they are understood not at all). That’s where the non-sequiturs come in.
        If the modelers don’t adjust aerosol inputs to improve performance, but merely handle aerosols on the basis of what is known about them, plus their observed concentrations and distribution, then Lindzen’s claim that aerosols are fudge factors is false, and I believe irresponsible.
        I think others believe it to be wrong and irresponsible as well, so I probably should say we believe it to be wrong and irresponsible.

      • Rob Starkey

        Fred
        I believe you are absolutely incorrect in your assumption that modelers do not “tune” their models in regards to various aerosol forcings. That is exactly what they do in order to get the models to meet what they know about historically observed conditions.

      • Rob – Please see my recent comment #179684. Basically, you are suggesting that Gavin Schmidt is either a liar in stating that there’s no such tuning, or else that he doesn’t know what he’s talking about. Well, that’s fine, but don’t you think you owe it to him to say that to his face?

        One way to resolve this is to contact Gavin and repeat to him what you’ve just stated, explaining why his statement is false. Then, if he responds, I hope you’ll share that with us here so that we can judge who knows more about the subject, who is telling the truth, and who is making false statements either through ignorance or design. I’m willing to assume ignorance rather than dishonesty in the absence of evidence to the contrary.

        Alternatively, if you’re not willing to do that, perhaps the best thing is to avoid making definitive statements about the subject.

      • MattStat/MatthewRMarler

        Fred: If you follow the references, it turns out that the value is based on evidence, not assumptions, including inverse modeling, and that if somewhat different values are tested, the effect on temperature change is small.

        Could you explain what you understand by “inverse modeling”, and why that does not undercut your whole argument about the lack of tuning of free parameters? It could be something simple like the “inverse modeling” that is included in calibrating measurement instruments, or it could be just the kind of fudging that you claim is not there.
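        For concreteness, “inverse modeling” in the simple calibration sense Marler asks about can be sketched as choosing the parameter value that best reproduces observations. Everything below is an invented toy (model, parameter names, and data), not any GCM procedure:

```python
import math
import random

def model(forcing, t):
    """Toy exponential-relaxation response; stands in for a model
    whose single parameter `forcing` we want to infer."""
    return forcing * (1.0 - math.exp(-0.1 * t))

# Synthetic "observations": true parameter plus small noise.
rng = random.Random(0)
true_forcing = 2.5
times = [float(t) for t in range(1, 51)]
obs = [model(true_forcing, t) + 0.05 * rng.gauss(0.0, 1.0) for t in times]

# Inverse step: pick the forcing minimizing the squared misfit to obs.
candidates = [f / 100 for f in range(100, 401)]  # grid over 1.00 .. 4.00
best = min(candidates,
           key=lambda f: sum((model(f, t) - o) ** 2
                             for t, o in zip(times, obs)))
print(best)
```

The point of contention in the thread is exactly where this shades from legitimate calibration against observations into tuning toward a desired trend; the code only shows the mechanics, not the intent.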

      • David Young

        It appears that a good night’s sleep has seen the cranky Fred replaced with the careful and long-winded Fred. I’m not sure which one I prefer.
        So in the spirit of long-winded posts, I think it will be good to put this Lindzen vs. Schmidt issue in perspective.

        This business of modeling complex systems (in fluid dynamics, we do chemistry, multi-phase fluids, thermodynamics and forcings too) is still in its infancy. The issue that I think is underappreciated by climate scientists is how sensitive their results may be to modeling “choices.” Trust me on this, there are thousands of choices. Climate science is probably no worse than others in this area, but it does seem to be rare to systematically look at the sensitivity of results to these thousands of choices. Some of the simpler ones are easy to do, but it gets harder as the models get more complex. Believe it or not, there is a rigorous theory for calculating these sensitivities for systems of partial differential equations in a fast and systematic way. It is becoming more widely used in simple applications like aerodynamics or structural analysis, but even here the field is still dominated by codes that are too numerically sloppy for it to be applied in a meaningful way.

        Once you start to apply this rigorous theory, and there is a big investment in code rewriting required to get to that point, you see all kinds of interesting and informative information. For example, you can actually use sophisticated optimization to determine parameters based on data. This is done for example all the time in geology, where seismic data is used to infer underground properties.
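        For readers unfamiliar with what a parameter sensitivity is, here is a minimal finite-difference sketch (not the fast adjoint theory David Young refers to, which computes all sensitivities at once); the model and parameter values are invented for illustration:

```python
import math

def model(params, t):
    """Toy relaxation model: response approaches `forcing` at rate `rate`.
    Purely illustrative -- not any actual climate model."""
    forcing, rate = params
    return forcing * (1.0 - math.exp(-rate * t))

def sensitivity(f, params, i, t, h=1e-6):
    """Central finite-difference sensitivity of f to parameter i."""
    up = list(params); up[i] += h
    dn = list(params); dn[i] -= h
    return (f(up, t) - f(dn, t)) / (2 * h)

p = (3.0, 0.1)
# Analytically, d(model)/d(forcing) at t = 10 is 1 - e^(-1) ≈ 0.632
print(sensitivity(model, p, 0, 10.0))
```

One finite-difference pass like this costs two model runs per parameter, which is why it becomes impractical for thousands of choices and why adjoint methods, which get all the sensitivities for roughly the cost of one extra run, matter.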

        There is no evidence that I’ve seen that climate scientists are aware of this theory. That’s understandable since they have so many pressures to just make more runs and add more “physics” to their models.

        In any case, I do think Fred would benefit from looking into Reynolds averaging, if he has the mathematical training to understand it, to get a better feeling for how subgrid models are constructed and tuned, how more terms are added over the years, and how the immense problems of validation and verification are handled (often not very well). It is fine to just repeat the words of others, but real understanding can enable you to go much further.

        I still think that the focus on discrediting Lindzen is strange. Like any scientist, he is clearly wrong about some things. What is strange about it is that I think he has a perspective that could be very valuable to the field.

        Whether aerosol models and forcings are “tuned” to match trends is a rather narrow issue without much relevance to the larger issue of model tuning and looking at sensitivity to these choices. By the way, Fred seems to have given us no insight into the aerosol interaction subgrid model itself, which surely must be complex and have lots of parameters. Tuning this model can have the same effect as tuning the aerosol forcings. So Schmidt’s comment may be technically true, but of no real significance. At least that is my suspicion, but I could be wrong on this.

        The problem here is that the understanding of complex models is very difficult to acquire. I am constantly learning new things myself. The issue of the models is not well suited to the “communication of science” mode of operation. The communicator inevitably is rather ignorant of a lot of details in other parts of the models. However, the idea of sensitivity of results to inputs or choices is easier to understand. Then you can present a range of results that conveys uncertainty more effectively. It’s a constant problem: modeling is constantly used in industry and government, those doing the modeling have a vested interest in certain outcomes, and there is an incentive to present the results as more certain than they are. This is also true in medicine, even though there are more controls in place there and a wider recognition of the conflicts of interest.

        The bottom line here is that whether Schmidt’s or Lindzen’s statements are narrowly true, false, or half true is a very minor issue except to those, like Fred, involved in the climate war as combatants. My suspicion is that both Schmidt and Lindzen have a contribution to make. The larger point is that in fact there are serious problems with the way complex models are built, run, and their results conveyed. This explains the narrow focus on this largely irrelevant issue in this thread: it’s something we can argue about superficially rather than getting to the real issue, which requires a more serious and rigorous learning experience.

    • The whole debate about what constitutes tuning and what the intention of the modellers is, is just another semantic debate that has nothing to do with substance. If Joshua were here this thread would be three times as long, though Fred Moolten is doing his best to pinch hit.

      That the adjustments are made is apparently not in dispute. Even how they are adjusted likewise does not seem to be the issue.

      WHY they are adjusted seems to be the ball game here.

      In my opinion, who cares? Modellers can’t model the climate to a degree sufficient to justify large scale policy changes yet anyway. If someone came up with a model where you could input data from any given period, and it produced a reasonable track of what actually happened thereafter, over numerous time periods, that would be of interest.

      In other words, when they have a model in which they can input the initial conditions (as best we know them) of 1000, and a model run tracks reasonably well how the climate changed over the next 100-200 years, that would be of interest. But only if it also worked when you input initial data for 1500, 1750, 1000 BC, 1500 BC, etc.

      But from everything I see, they haven’t even been able to model out 10-20 years from the present with any real accuracy, a period for which we have a much greater quantity and quality of data.

      If the tuning regardless of motivation created models that were useful, and verifiable (kinda the same thing), then how they got there seems rather irrelevant.
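      The hindcast test described above can be sketched mechanically: initialize from several different historical start dates and demand skill at every one of them, not just one favorable period. Everything below is a toy illustration with invented data and a hypothetical `toy_model`; no real GCM works this way.

```python
# A toy version of the hindcast test: a model earns trust only if, initialized
# from many different historical start dates, it tracks what actually happened
# afterwards. The "observations" and toy_model are invented stand-ins.
import random

random.seed(0)

# Synthetic "observed" record: a slow linear trend plus weather noise.
obs = [0.01 * t + random.gauss(0.0, 0.1) for t in range(300)]

def toy_model(initial_value, n_steps, trend=0.01):
    """Hypothetical forecast: persist the initial state plus an assumed trend."""
    return [initial_value + trend * k for k in range(1, n_steps + 1)]

def hindcast_rmse(start, horizon):
    """Initialize from the observed state at `start` and score the forecast
    against what the record actually did over the next `horizon` steps."""
    forecast = toy_model(obs[start], horizon)
    actual = obs[start + 1 : start + 1 + horizon]
    return (sum((f - a) ** 2 for f, a in zip(forecast, actual)) / horizon) ** 0.5

# The key demand: skill must hold across many initialization dates.
scores = [hindcast_rmse(s, horizon=50) for s in (0, 50, 100, 150, 200)]
print([round(s, 3) for s in scores])
```

      A model that only matched one start date well while failing the others would flunk this kind of test, which is the substance of the objection.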

      I followed Gavin’s discussion of the issue on Collide-a-Scape, and I don’t remember a single skeptic or lukewarmer, at any level of sophistication, changing position based on the semantics. I don’t see the CAGW believers here doing any better job of it.

      • Gary – Please see the above discussions, where your points have already been addressed.

      • Fred,

        Thanks for the condescending reading advice, but I had read the thread previously, which is why I wrote the comment I wrote. I followed the discussion by Gavin at Collide-a-Scape, as I said, and I see nothing added by you to what he wrote there. In fact, he explained his position re models much more coherently, in my opinion.

        You suffer from the same myopia Gavin did. After days of participating in open discussion on numerous issues on Kloor’s blog, Schmidt was amazed that others, Dr. Curry in particular, still disagreed with him.

        Not because they didn’t understand what he said, not because they were pawns of big oil, but because they came to different conclusions after reviewing the same facts he did. It seemed a novel concept to him.

        What he failed to understand, and you do as well, is that there is a great deal of subjectivity in coming to ultimate conclusions. He came from the perspective that those who disagreed with him either did not know what the facts were, were not sufficiently qualified to properly analyze the facts, or were simply too biased to realize their errors.

        He was, quite simply, befuddled by Dr. Curry’s responses in particular, which met none of those stereotypes. You are in the same position. You’re just a lot more verbose about it.

      • Brandon Shollenberger

        GaryM, you say:

        You suffer from the same myopia Gavin did. After days of participating in open discussion on numerous issues on Kloor’s blog, Schmidt was amazed that others, Dr. Curry in particular, still disagreed with him.

        This is a common thing for Gavin. He did basically the same thing on the very same blog, back when Mann2008 was criticized over the Tiljander issue. He repeatedly expressed confusion and amazement at the people who disagreed with him, though as was noted at the time, he didn’t actually address what they were saying.

        On an interesting note, he’s since admitted what they were saying was right (in a couple comments at RealClimate). He’s never retracted anything he said on the issue previously, and he’s never gone back to Kloor’s blog to say, “Hey guys, you were right.” In fact, he’s pretty much never discussed the topic again.

      • Gary,

        Sorry, the relativistic viewpoint gets no points. A lot of interpretations of data come down to being subjective, but what is or is not done in the GISS model is not one of them. That our understanding of internal variability in the beginning of the 20th century does not reflect on attribution efforts for climate change in the later half of the century is just a fact. Judith Curry did not understand the science or her own logical fallacies in both of those points. Some things are good to debate, others just require familiarity with what is done in a particular field. As it happens, Gavin works with the GISS model extensively and also works on implementing and understanding various solar reconstructions, for example. Other people don’t get a free license to make stuff up, even if some of the science is uncertain.

      • Vaughan Pratt

        What he failed to understand, and you do as well, is that there is a great deal of subjectivity in coming to ultimate conclusions

        One scientist’s subjectivity is another’s illogic. Maybe one day logic will have room for subjectivity, but as any Star Trek fan will tell you, that day is still well in the future on Vulcan. As well as in the faculty lounges of the physics departments of MIT, Harvard, Stanford, Caltech, Princeton, Chicago, etc.

        No conclusion that admits subjectivity deserves the epithet “ultimate.”

      • OK penultimate.

        My TV tells me that means “almost”.

      • Chris Colose,

        “Maybe one day logic will have room for subjectivity….”

        CAGW has nothing to do with logic. AGW yes, CAGW, the obsession with taking control over the energy economy no. The reason Gavin and his acolytes here cannot understand why others can reach different conclusions from theirs, is that they refuse to acknowledge the political nature of so much of what they claim as science.

        Why does Gavin defend to the death the dishonesty of the hockey stick, and proclaim its continued viability while simultaneously claiming it is irrelevant? Why does he admit that there is “tuning” of climate models as they diverge from actual data, but deny that the tuning is done to make the models better match the data?

        In both cases, and in many other arguments in the climate debate, the reason has nothing to do with science. In this raucous political debate, the fear of conceding any dispute to the other side is sacrilegious. Particularly where every statistical jot and tittle in any opposing research is declaimed as evidence of the falsity of CAGW skepticism in its entirety.

        “No conclusion that admits subjectivity deserves the epithet ‘ultimate.’”

        Precisely, but CAGW is an “ultimate” conclusion. But for the need to win the political debate, CAGW advocates would not be fighting like Custer at Little Big Horn on virtually every hill in the climate debate, including the issue of how to characterize the tuning of climate models. All the battles of models and their tuning or validation, paleo climate reconstructions, whether there has been “statistically significant” warming in the last 15 years, etc., become boring and mundane, if you remove from the equation the threat of massive economic dislocation required to decarbonize the economy.

        Almost all of the debates in “climate science” devolve into proxies of that “ultimate” decision, that CAGW advocates have all already made. To badly paraphrase the bard:

        To decarbonize or not to decarbonize, that is the question. Whether tis nobler in the mind to suffer the slings and arrows of outrageous skepticism, or to take arms against a sea of skeptical arguments, and by opposing them, end the economy.

        This debate has so much drama because of the massive political stakes. Constantly dressing up political arguments (such as how to describe the reason models are tuned) as “science” and “logic” does not change this.

      • MattStat/MatthewRMarler

        GaryM: In my opinion, who cares? Modellers can’t model the climate to a degree sufficient to justify large scale policy changes yet anyway.

        To me, the second sentence is key.

        However, Lindzen did claim that model parameters have been tuned to provide a better match to the recent past, instead of from independent evidence, and if that is so (and especially if they used many possible parameter values and reported only 1 or a few — a common practice) then there is even less reason to think that the forecasts might be reliable.

        If the tuning regardless of motivation created models that were useful, and verifiable (kinda the same thing), then how they got there seems rather irrelevant.

        I agree again for the long run. Modelers claim now to have models that are good for the long run, but if they based parts of the model (or parameter estimates) on recent data, they have most likely “overfit” the models to random variation (variation unrelated to the main trends and relationships), and there is less reason to think they’ll be good models for the future.
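        The overfitting worry can be demonstrated on synthetic data: give a model enough free parameters tuned against a short recent record and it absorbs the noise, so its out-of-sample forecasts degrade. The trend, noise level, and polynomial stand-in below are all invented for illustration; nothing here is a GCM.

```python
# Overfitting illustration: a flexible fit to a short noisy record forecasts
# worse than a simple one, because its extra parameters capture the noise.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 100)     # normalized "time"
truth = 2.0 * x                    # the real underlying trend
obs = truth + rng.normal(0.0, 0.5, x.size)

# Fit on the "recent past", score on the held-out later period.
train, test = slice(0, 60), slice(60, 100)

def forecast_rmse(degree):
    """Fit a polynomial of the given degree on the training window and
    return its RMS error against the underlying truth on the later period."""
    coeffs = np.polyfit(x[train], obs[train], degree)
    pred = np.polyval(coeffs, x[test])
    return float(np.sqrt(np.mean((pred - truth[test]) ** 2)))

simple = forecast_rmse(1)    # two parameters: little room to fit noise
overfit = forecast_rmse(12)  # thirteen parameters: fits the noise instead
print(f"out-of-sample RMSE, degree 1: {simple:.2f}, degree 12: {overfit:.2f}")
```

        The in-sample fit of the high-degree polynomial looks better than the straight line, which is exactly why in-sample agreement with recent data is weak evidence of forecast skill.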

    • Alex Harvey

      Fred,

      But I quoted the IPCC AR5 ZOD chapter 10 on tuning. The IPCC authors agree it is possible, at any rate, that models have been tuned in undocumented ways to reproduce the 20th century temperature record. There would need to be reasonable evidence before the IPCC would concede this much. Yet you say it is not possible because Gavin Schmidt, Jim Hansen, and Andy Lacis say it is impossible – all three being outspoken advocates on climate change action, and perhaps more importantly, are the most likely to be embarrassed by the discovery of tuning in models. I don’t think this is convincing.

      Certainly, others have interpreted Kiehl’s widely cited paper as evidence of GCM tuning, e.g. Eduardo Zorita.

      http://climateaudit.org/2007/12/01/tuning-gcms/

      In any case, if you say there wasn’t tuning then how do you explain the observations that Kiehl, Reno Knutti, Peter Huybers, Andrew Dessler and others have observed?

      Kiehl – “there is a clear inverse correlation between the forcing and the climate sensitivity”.

      Huybers – the cloud feedbacks tend to compensate for the sum of all other feedbacks to keep climate sensitivity within the canonical IPCC range.

      Dessler – models with positive LW feedback tend to have a negative SW feedback; models with negative LW feedback tend to have a positive SW feedback.

      Peter Huybers looks at this question carefully and concludes that tuning is the most likely explanation. So what does he overlook if you insist that he is wrong?

      – Huybers, P., 2010: Compensation between Model Feedbacks and Curtailment of Climate Sensitivity. Journal of Climate, 23, 3009-3018.

      • Donning an asbestos suit, let me re-ask the naive question :
        Isn’t fiddling with models to get them to match observations a perfectly valid activity ?
        Isn’t that just what Planck did as per Vaughan’s comment – eventually coming up with a “really cute little formula that brought … laws together, but that had no physical explanation”. Which then presumably told people where to start digging to look for a physical explanation.

        (btw I take fully the point that present-day models are nowhere near being “cute”. And are thus, inter alia, absolutely no basis whatever for imposing any new and massive economic and political burdens on the world).

      • Alex – the original question may have gotten lost in some of the discussion. It was whether, as Lindzen claims, aerosol forcing is adjusted to make model projections match observed trends. The answer is no, based on the best sources available – the description by the modelers of how they actually go about determining the aerosol input. I would recommend going back to the Schmidt/Curry collide-a-scape dialog for that discussion and references.

        “Tuning” (parameter choices) is a necessity in models, but it is done for reasons other than to match trends with observations. Is there evidence to the contrary? Inferences drawn by others who are not modelers don’t constitute evidence, because the various correlations have many possible explanations.

        To illustrate, I’ll use the most often cited example – the Kiehl 2007 reference. In your earlier comment, you quoted a ZOD statement: “Kiehl et al. (2007) finds that models with a larger sulphate aerosol forcing tend to have a higher climate sensitivity, such that the spread of their simulated 20th century temperature changes is reduced.” One reason it’s called a zero order draft is that it’s written before the errors are corrected, and the above is a big one, because Kiehl reported exactly the opposite – an inverse correlation between forcing and climate sensitivity in a subset of models chosen because of good matches to observations. It was the models with the lowest climate sensitivity that had the highest total forcing, and the aerosol forcing was positively correlated with total forcing (see Kiehl figure 2). Apparently some of the other authors you cited got that wrong (e.g., Knutti).

        Now this creates a problem, for a number of reasons, for anyone proposing that multiple modelers decided to “adjust” aerosols in hopes of making their projections perform better.

        1. It’s not intuitively obvious why an inverse relationship should exist between high aerosol forcing and climate sensitivity in models that perform well, and therefore not obvious why modelers (who were unaware of Kiehl 2007) would adjust aerosols upward if their models emerged with a low climate sensitivity. Kiehl gives no explanation, and Knutti had the wrong explanation – he thought the high aerosol forcing reduced the total net forcing (positive minus negative) but Figure 2 shows the opposite. It’s not at all clear that the inverse relationship involves direct causality – for example, the relationship might in part reflect other factors including differences in ocean heat uptake. In any case, there was no reason for modelers to anticipate it and plan their aerosol forcing in advance.

        2. Inverse modeling shows that reducing the cooling aerosol input causes the projected temperature trend to be magnified. Many of the claims based on Lindzen use this relationship to argue that aerosol forcing is adjusted upward to permit the observed trend to be as low as it was while preserving the modeler’s claim for high climate sensitivity. The Kiehl study shows the opposite – high sensitivity correlated with low aerosol forcing.

        3. It’s almost universally understood that model climate sensitivity is a model output, not an input, and that modelers can’t dictate how it will come out. It is therefore unlikely that a modeler would know in advance what aerosol forcing to input based on the climate sensitivity that would later emerge.

        4 . As both Kiehl and Gavin note, models typically don’t enter aerosol forcing as a value, but let it emerge from the data on aerosols that they enter. See Gavin’s description as to how this is done independent of any goal involving final magnitude or trend matching. Since he and others are the ones doing it, their description should be the accurate one unless they are deliberately untruthful.

        5. The Kiehl study (and others) selected a subset of models that performed well in matching observed trends. If, for any reason, there is indeed an inverse relationship between a model’s climate sensitivity and aerosol forcing in that subset, it follows mathematically that a high sensitivity would be matched by a low aerosol forcing, but this is a property of the selection process. If all models, including those that performed poorly, were tested, there is no reason why the same inverse relationship should necessarily hold. In that sense, selection for good performance dictated the observed relationship, and attributing it to intent on the part of the modelers is unnecessary. They were simply the ones who happened to get it right, and the ones who got it wrong were not evaluated by Kiehl.

        6. Most important is the question of truthfulness. Either modelers (Gavin, Hansen, etc.) are telling the truth when they say they don’t do any tuning before or after a model is run for the purpose of making its projection perform well, or they are telling untruths. The notion that a large number independently, or through conspiracy, do something different from what they claim is a serious charge. The fact that others who don’t design models have implied this type of untruthfulness shouldn’t be given credence in the absence of evidence for their claim. No observations that have been reported require that to be the case. Lindzen and others should refrain from suggesting this type of “fudging”. As best we know, it isn’t done.

      • Oops. In reviewing Kiehl, I found that my points 1 and 2, and my criticism of Knutti and the ZOD were wrong, because there was in fact a loose positive correlation between aerosol cooling and climate sensitivity. Therefore, a physical rationale does exist for Kiehl’s findings. However, the claim of deliberate adjustment can’t be justified, for reasons I give in points 3 through 6.

      • Fred et al – – it would help to clarify in the above, what is meant by a “large” aerosol forcing. Does that mean more negative, or less negative?

        Also, OT, but you have probably seen Isaac Held’s latest. I would be interested in your opinion of my comment.

      • Bill – Your comment is very pertinent. Part of the problem I had was what appeared to me to be a misleading statement in Kiehl – “Figure 2 shows the correlation between total anthropogenic forcing and forcing due to tropospheric aerosols. There is a strong positive correlation between these two quantities”. Actually Figure 2 shows a negative correlation – higher forcing (negative forcing) from aerosols is negatively correlated with total forcing. Presumably, Kiehl intended to mean that an aerosol forcing that was less negative was positively correlated with total forcing, but it was confusing when that was expressed as a strength of aerosol forcing. If I had been less careless in reading the numbers on the x axis, I wouldn’t have misunderstood that. Kiehl also points out that “Some of the models used in these simulations employed only the direct effect, while others used both direct and indirect effects of aerosols, which makes a more detailed comparison of simulated aerosol forcing difficult.” I think it actually makes it impossible, because the magnitude of the indirect effect is significant.

      • I haven’t had a chance to look at Held yet.

      • Bill – A constant or near constant relative humidity in a warmer atmosphere means a higher specific humidity – i.e., more total water vapor. This would serve as a warming influence and constitute a positive feedback.
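        The arithmetic behind this statement can be checked on the back of an envelope with the Clausius-Clapeyron relation; the constants below are standard textbook values, and the 70% relative humidity is an arbitrary illustrative choice.

```python
# Constant relative humidity in a warmer atmosphere implies more total water
# vapor, because saturation vapor pressure grows roughly 7% per kelvin.
# Back-of-envelope check, not a model calculation.
import math

def saturation_vapor_pressure(T_kelvin):
    """Approximate Clausius-Clapeyron integration from the triple point (Pa)."""
    L = 2.5e6    # latent heat of vaporization, J/kg
    Rv = 461.5   # gas constant for water vapor, J/(kg K)
    e0, T0 = 611.0, 273.15
    return e0 * math.exp((L / Rv) * (1.0 / T0 - 1.0 / T_kelvin))

rh = 0.7  # relative humidity held fixed (illustrative value)
e_cool = rh * saturation_vapor_pressure(288.0)  # ~15 C
e_warm = rh * saturation_vapor_pressure(289.0)  # one kelvin warmer
increase = e_warm / e_cool - 1.0
print(f"vapor pressure up {increase:.1%} per kelvin at constant RH")
```

        Since the actual vapor pressure rises by the same fraction as the saturation value when RH is fixed, the warmer atmosphere holds more water vapor, which is the positive feedback in question.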

      • Fred,

        Well when you get time check out Held,it’s interesting, he has a pointed reply to my comment, and I’m fine with that, but I still think what I’m describing will happen and will have to be dealt with carefully.

        This thread is getting unbearable, but your notes about the direct and indirect aerosol effects (of similar magnitude and sign in the GISS forcing time series) reminded me that I was going to say: While I am making no claim about tuning or lack of tuning, it does seem to me that comments along the lines of Pekka’s above are much more pertinent to the indirect effect, where it seems to me the estimates would be much harder to correlate with observational data (but not impossible).

      • Bill – Pekka made good points, but they can’t be used to excuse Lindzen’s claim that aerosols are adjusted for the purpose of making model projections come out right. The aerosol indirect effect is relevant, because it involves considerable uncertainty, as well as an extensive literature trying to narrow the plausible range. Has the magnitude of this effect been deliberately chosen in models with an eye toward making the models perform better? That would require choosing from the higher rather than the lower end of the estimated range in an effort to reconcile only modest observed warming with typical climate sensitivity estimates. This is the kind of claim Lindzen and others make – the aerosol forcing is chosen too high in order to make the models look good.

        As an example of what is done, however, here is a quote from Gavin Schmidt on the issue of “tuning”: ”However, Judy’s statement about model tuning is flat out wrong. Models are not tuned to the trends in surface temperature. The model parameter tuning done at GISS is described in Schmidt et al (2006) and includes no such thing. The model forcings used in the 20th Century transients were also not tuned to get the right temperature response. Aerosol amounts were derived from aerosol simulations using the best available emissions data. Direct effects were calculated simply as a function of the implied changes in concentrations, and the indirect effects were parameterised based on the median estimates in the aerosol literature (-1 W/m2 at 2000) (Hansen et al, 2005; 2007).”.

        If you look up the literature on the indirect effect (e.g., via Google Scholar), the range is extensive – from perhaps about -0.2 W/m^2 to more than -4 W/m^2. Much of the variation is toward the high end – i.e., above the median value. The choice of -1 W/m^2 is therefore conservatively low, such that even lower values within the range would have relatively minor effects on trend simulations, whereas higher values would make the models significantly underestimate observed warming trends. That choice is not one that would be made if the purpose were to prevent simulations from coming out too high. The more recent literature has begun to converge toward the -1 W/m^2 value, excluding the much stronger negative forcing, further justifying this choice based on evidence rather than “fudging”.

        Of course, this requires us to believe first that Gavin is telling the truth, and second that he is correct when he asserts (elsewhere in the discussion) that he is unaware of any group that engages in tuning to match observed trends. At some point, someone might present evidence that these statements are false, but until that is done, no claim for fudging can be justified. It looks like many groups are simply trying to arrive at the best values they can, and the models are using those data simply in order to be as accurate as possible in the light of some uncertainty.

      • Rob Starkey

        Fred
        “The model forcings used in the 20th Century transients were also not tuned to get the right temperature response. Aerosol amounts were derived from aerosol simulations using the best available emissions data.”
        The modelers would still be “allowed” to tailor the relative levels of each aerosol within the margin of error of the specific item without that statement being untruthful. There is a large margin of error in the estimated aerosol levels. In addition, the relative impact of each aerosol on the others and on the system as a whole can be (and I expect was) adjusted so that the models would meet the observational criteria that were available.

      • Because of line breaks in my above comment, it may not be clear that the lowest (weakest) end of the indirect aerosol forcing range is still negative, at minus 0.2 W/m^2. This would be consistent with the physical principles involved.

      • Rob – It now appears that you’re simply calling Gavin a liar when he says flatly that this kind of “tailoring” isn’t done – choices are not made with any intention to get the trends right. I urge you to contact him and explain your position and if he responds, share it with us.

        Preferably, though, I think you should acknowledge that your claim is wrong, and that you had been misinformed on this topic. That would be honorable.

      • David Springer

        Fred,

        So which do you think happens more – adjusting the model to match observations or adjusting observations to match models?

        Or perhaps you believe computer programs are born perfect and don’t need adjustments?

        LOL – you’re funny.

      • After reviewing many of the comments in the several exchanges above, I thought I’d summarize my own perspective on what I believe we can say with confidence and what’s less certain. We can conclude confidently that Lindzen and others are wrong in claiming aerosols are adjusted to make model projections match observations.

        A number of individuals (Pekka Pirila, Chris Colose, Alex Harvey) have suggested that despite the lack of intentional tuning for that purpose, some bias can creep into the literature so that the data that modelers use will act to make models look better than they are. I think the possibility is legitimate, but we also have to ask whether the evidence supports it. I don’t know all the evidence, but Gavin Schmidt, in discussing the GISS models, describes processes that seem fairly independent of that bias. In the case of forcings for which considerable uncertainty persists, such as indirect aerosol effects, the chosen values were conservative and would have introduced little or no favorable bias for the models. This example may not be representative, but it would be useful for anyone knowing of contrary examples to cite them. My impression at this point is that the problem may exist, but probably exerts only minor effects. We need more data on this.

        A point has been raised that several studies show fairly good matches to observed trends despite significant variation in the way they arrived at those matches. For example, models with higher sensitivity exhibited stronger negative aerosol forcing, models with weaker cloud feedbacks exhibited stronger feedbacks of other types, and so on. Is this evidence for implicit, perhaps unconscious, tuning to get the right trends?

        In the absence of direct evidence for tuning, I think the answer is probably no, because I think there is a good alternative explanation that requires no manipulation on the part of individual modelers – selection bias in the choice of models to look at. If a model is going to simulate trends well, it can do it in different ways. Some will do it with higher forcings, others with higher feedbacks, and so on, so that they differ from each other. What they have in common is that they are selected for getting the right answer, and that excludes models that don’t have the forcings or the feedbacks operate to give good results. If no models were excluded, would we still see strong forcings matched with weak feedbacks, or weak forcings with strong feedbacks? Presumably less so, because the models that perform poorly would probably fail to achieve that balance. This is tentative, because I can’t tell from the literature how much selection was actually imposed. Even so, it’s consistent with the reported results, and doesn’t require us to conclude that modelers have either unconsciously or dishonestly made choices to make their models perform better, while stating that they aren’t doing that. This too is something worth exploring further before drawing firm conclusions.
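        The selection-bias mechanism described above is easy to demonstrate numerically: draw toy “models” with independent random sensitivities and forcings, keep only those whose product lands near an “observed” trend, and an inverse correlation appears among the survivors without anyone tuning anything. All numbers below are invented for illustration.

```python
# Selection bias demo: sensitivity and forcing are drawn independently, so the
# full population is uncorrelated; conditioning on "matches observations"
# induces an inverse correlation in the selected subset.
import random

random.seed(1)

models = [(random.uniform(1.5, 4.5),    # toy climate sensitivity
           random.uniform(0.5, 2.5))    # toy net forcing, arbitrary units
          for _ in range(5000)]

observed = 4.0  # toy "observed trend" that good models must roughly reproduce
selected = [(s, f) for s, f in models if abs(s * f - observed) < 0.4]

def correlation(pairs):
    """Pearson correlation between the two coordinates of a list of pairs."""
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

print(f"all models: r = {correlation(models):+.2f}")   # near zero
print(f"selected:   r = {correlation(selected):+.2f}")  # strongly negative
```

        In this sketch the survivors show high sensitivity paired with low forcing purely because that is what survives the selection, which is the alternative to intent offered above.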

      • MattStat/MatthewRMarler

        Fred Moolten: This would serve as a warming influence and constitute a positive feedback.

        That is assumed but not known. The increased water vapor could produce an increase in the rate at which heat is transferred from the surface and lower troposphere to the upper troposphere, an increase in cloudiness (negative feedback) and increase in rainfall.

        It is known from CERES data that cloud cover is greater in the warmer months and lesser in the cooler months, so the possibility of the negative feedback that I described is concordant with extant data. Cloud formation and radiative/convective transfer of heat from lower to upper troposphere is discussed at Isaac Held’s blog, and there is much uncertainty about what would happen next if temperature or CO2 concentration increased.

      • @Fred Moolten,

        First of all, I really respect your tone in your communication on this issue and have understood that, although your own background is way off this topic (MD?), you have done quite a lot of reading. Despite all this, I have a strong gut feeling that this model tuning, and probably the entire modelling discussion as a whole, is mostly outside your area of competence, despite all the literature you’ve gone through. In order to see what’s going on behind the curtains, which is highly relevant in interpreting and weighing the value of model outputs, you need to have the relevant maths and physics/engineering background, and preferably some real-life numerical modelling experience. Of which, Fred Moolten, to my knowledge you really have none.

        The fact is that current knowledge does not allow us to construct a computer model from first principles, with parameters initialized from precise satellite measurements; we really need a great deal of parametrization. These parameters are always subjective selections. The vast majority of these parameter vectors/matrices, equations, and their numerical solving methods are not based on first principles and/or direct measurements or values directly derived from such. The fact that we have the radiative part roughly correct does not mean much more than that we have a good start. Of course, it needs to be pointed out that even a small imprecision in the radiative model alone could lead to a wildly different outcome over longer time. I think this is clear to most, and already discussed to death in countless threads on this site alone.

        My main point here is that the process by which models are initialized (both the initial state and state-invariant parameters) is just something you cannot possibly know by just reading journal articles. A claim made by one modeller (out of hundreds) of a single modelling group (out of dozens) does not change this fact, especially as this person is widely known as a very active, but not exactly unbiased, participant in related climate change discussions. I might be mistaken here, and will stand corrected if necessary, but this one statement by Dr. Schmidt seems to be your core argument against the “classic” sceptic claim of curve fitting.

        I’m not implying any cover-up or conspiracy by stating that how models are initialized and parametrized is something not found in the literature, but rather stating the obvious. It is inevitably an imprecise and subjective process by nature, and one method of tackling the issue is to make several runs; unfortunately the massive scale (i.e. number of grid cells, calculation steps and number of parameters) is so huge that subjective decisions about parameters need to be made.

        Personally, for what it is worth, my assessment of the current predictive skill of GCMs remains very low, and this “tuning process” is indeed one of the primary reasons for my scepticism. Of course there are numerous other, perfectly valid reasons for suspicion, but they have been discussed by many, also in this thread, and need not be repeated here.

      • Anander – Thanks for your comment. You are correct that I have only an outsider’s knowledge of how models are designed, although I understand the basic principles pretty well. However, it’s important not to set up a straw man argument suggesting that I or anyone believes models are designed simply from first principles without parameterizations, and without testing against observational data to ensure that the parameters are as accurate as possible. Much of this is done to get the basic climatology right – seasons, latitudinal variation, winds, ocean currents, etc. In other words, tuning is an accepted reality in model design and initialization – that is not an issue.

        The issue is whether they are tuned for the purpose of making the projected trends match observed trends. In particular, Lindzen has claimed that aerosol forcing is adjusted to make the projections come out right. You seem to be suggesting that Gavin Schmidt, in stating this isn’t true, in describing how it is actually done, and in indicating that he is unaware of any model group that does what Lindzen claims, is either being untruthful or ignorant of how the modeling community acts in general.

        That’s possible, but I think it’s more likely that he is correct and that Lindzen’s claim that aerosols are adjusted to make the modeled trends match observed ones is false. He clearly knows much more about this than you or I, and his statements on this issue are unambiguous and have been made on more than one occasion when the issue has arisen. The only further way I think we could get more information is to contact him for additional input, or find other modelers who will confirm or contradict what he has to say.

        This brings up another point, that I’ll make here rather than on the new thread on models that started yesterday. In my view, this further discussion of models and their virtues and limitations is severely limited if the only people discussing it are non-modelers. For that discussion to be more than an exchange of unverifiable opinions, some well-informed, some less so, the dialog should include one or more people who construct models for a living. I suppose it’s fantasy, but I would have loved to see participation by Jim Hansen (after he was asked not to discuss “death trains” or the pipeline from Canada but only model construction). Barring that, there are some other good people who have occasionally participated here who would be valuable, including Gavin Schmidt. Discussions with Gavin can get heated and contentious on various issues, but when it comes to constructing models, I have no doubt he will truthfully tell what he knows, and it would be informative.

        Without someone like him, the discussion won’t be nearly as useful.

      • Fred Moolten, thank you for your reply. No strawman intended really.

        Although my view on this tuning issue is generally more complex than the one e.g. Lindzen proposes, I see some truth in his interpretation; on the other hand, it is entirely understandable for a modeler, especially one as outspoken and active as Dr. Schmidt, to respond to this claim the way he has done. Nevertheless, I think the real truth here lies somewhere in between – in order to realistically model the climate system, and for example just to keep the intermediate values within reasonable ranges, some (most probably very heavy) tuning is inevitably required. I’m quite sure this is something most people with insight into the inner workings of the models would very much agree on, but unfortunately this discussion has become so polarized that they would never say so in public – nobody wants to give any talking points to sceptics.

        Generally speaking, and stepping a bit away from pure modelling: isn’t varying aerosol forcing in any case the most important official explanation of the 20th-century temperature variations (post-WW2 especially)? If the current interpretation is shown to be false, there will be quite a lot to explain, and the most important hypotheses about 20th-century climate variations would pretty much be sent back to the drawing board. This is my general understanding of the significance of this issue, but again I will stand corrected if this is entirely false.

        And as you said, continuing the discussion about details without participation from modellers actually doing the work becomes rather fruitless quite soon. There isn’t much point in going into details, really.

        By the way – and effectively disputing a widespread sceptic legend – there are climate model codes available in the public domain (CESM, for example). I have found studying the real code behind these discussions rather insightful; as we say, the truth is always in the code, and specifications (not that there is much to be found for academic software, at least in the traditional computer-engineering sense) are just paper. Most people, of course, don’t have the data, computing resources, (personal) time or even the skill to actually run these programs, but just reading the code gives you a great deal of information that certainly won’t come up in the literature, for instance about the data structures and degrees of freedom involved.

      • Your call, Fred, is to an authority bound to an ideological site. But you might be right.
        ==============

      • Latimer Alder

        @kim, fred

        Could Gavin Schmidt survive the heat of a site where he doesn’t hold the keys of moderation in his hand? He hasn’t shown much desire to venture out of such a comfort zone in the past.

        Difficult question? Zaparoonee and you’re gone.

      • “isn’t it so, that the varying aerosol forcing anyway the most important official explanation of the 20th century temperature variations (post-WW2 especially), “

        “Official” or not, I agree, Anander, that aerosol cooling is accepted as a major factor in the “global dimming” from about 1950 to the late 1970s, and in the “global brightening” due to reduced aerosol negative forcing subsequently, at least up to about 2000, when some possible further aerosol increases have been suspected. That this is a valid phenomenon is not in doubt, with evidence from multiple regions and multiple time points – mainly in the Northern Hemisphere but also to a lesser extent in the SH. The dimming was associated with reduced surface solar radiation resulting from a reduced transmission of a given solar irradiance to the surface, and was seen in both all-sky and clear-sky conditions, excluding cloud changes as the only operative factor. The subsequent brightening partly but not completely reversed some of the cooling effects that preceded it.

        Without the aerosol dimming, as you suggest, the warming post-1950 would have been expected to be greater based simply on GHGs and other warming factors, and so the aerosol effect masked part of that expected warming.

        Also, as you suggest, there is uncertainty about the magnitude of the negative forcing. This required efforts to arrive at the best input possible in models. It did not mean that modelers made choices designed to make the model projections fit the observed warming. The choices that Gavin Schmidt described for the GISS models were based on the aerosol data he had available, and were conservative rather than at the high end of plausible aerosol negative forcing values. If that is considered “tuning”, it was not tuning designed to ensure a favorable outcome in the projections, unless Gavin was not telling us the truth.

      • I wrote a couple of sentences with tortured syntax above. The dimming was the aerosol cooling effect. The subsequent brightening partially reversed the cooling and was in part due to reduced aerosols. The net anthropogenic aerosol effect for the entire post-1950 interval, based on the observational data, was cooling, although the post-1976 effect may have involved a warming.

      • Fred,

        When we are discussing the logic in the role of aerosols, we must ask what was the basis for the estimate of the strength of the aerosol dimming. Was it really knowledge about the physical mechanism and amount of aerosols or was it determined from earlier analysis of temperature time series?

        If the basis was the analysis of temperature time series then there is a circular argument: Aerosol dimming is determined from temperature time series and it’s used to reproduce the time series. When that is done it’s also to be expected that the model will be implicitly tuned to reproduce the climate sensitivity that was used in the earlier analysis of the aerosol dimming.

        I don’t know what really happened. Thus the above tells what could have happened, not whether it really did. Showing that this is not the right explanation would require at least that it’s shown, how other information could at the time tell the strength of the dimming accurately and reliably enough.
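        Pekka’s circularity can be made concrete with a toy calculation. This is a sketch of my own – the numbers and the simple linear forcing-response relation are illustrative assumptions, not any modeling group’s actual procedure: if the aerosol forcing is inferred as the residual needed to reconcile an assumed sensitivity with the observed record, then a model fed that forcing “reproduces” the record by construction, and the agreement carries no information.

```python
# Toy illustration (my own construction) of the circularity described above:
# infer aerosol forcing as a residual, then "validate" against the same record.

lam = 0.8                       # assumed sensitivity parameter, K per (W/m^2) -- hypothetical
f_ghg = [0.5, 1.0, 1.5, 2.0]    # assumed GHG forcing series, W/m^2 -- hypothetical
dT_obs = [0.1, 0.3, 0.4, 0.7]   # "observed" warming, K -- made-up numbers

# Step 1: infer the aerosol forcing as whatever residual reconciles
# the assumed sensitivity with the observed temperatures.
f_aer = [dt / lam - fg for dt, fg in zip(dT_obs, f_ghg)]

# Step 2: run the "model" with that inferred forcing.
dT_model = [lam * (fg + fa) for fg, fa in zip(f_ghg, f_aer)]

print(f_aer)     # the inferred (negative) aerosol forcing series
print(dT_model)  # matches dT_obs exactly -- by construction, not by skill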

      • “When we are discussing the logic in the role of aerosols, we must ask what was the basis for the estimate of the strength of the aerosol dimming. Was it really knowledge about the physical mechanism and amount of aerosols or was it determined from earlier analysis of temperature time series?”

        Pekka – It appears primarily to be based on physical mechanisms and aerosol amounts as incorporated into the GISS Model E.

      • Fred,

        That conclusion may be right, but it would be necessary to know more about the details of the model to conclude it firmly. The paper tells a fair amount about the physics that’s taken into account, but only a real specialist could tell what that really means.

        The main reason for being a bit skeptical is the fact that other mainstream sources, including several other modelers and the IPCC reports, emphasize that the strength of the aerosol forcing is not known at all accurately. The overview of radiative forcings for the year 2005 in AR4 gives an uncertainty range for the direct effect of -0.9 to -0.1 W/m², and for the cloud albedo effect of -1.8 to -0.3 W/m². If there really were a well-justified physical understanding, how could the uncertainty ranges be so wide?
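        To put those ranges in perspective, here is a trivial back-of-envelope interval calculation of my own (the comparison with AR4’s CO2 forcing of about +1.66 W/m² for 2005 is my addition, and the simple endpoint sum is only a crude bound, not a proper uncertainty propagation):

```python
# Back-of-envelope interval arithmetic (my own, not from the thread) on the
# AR4 (2005) aerosol forcing uncertainty ranges quoted above, in W/m^2.
direct = (-0.9, -0.1)        # direct aerosol effect
cloud_albedo = (-1.8, -0.3)  # cloud albedo effect

# Crude worst-case bound: just add the interval endpoints.
total = (direct[0] + cloud_albedo[0], direct[1] + cloud_albedo[1])
width = total[1] - total[0]

print(total)   # roughly (-2.7, -0.4): combined range spans nearly a factor of 7
print(width)   # roughly 2.3 W/m^2 of spread

# For comparison, AR4 puts the CO2 forcing for 2005 at about +1.66 W/m^2,
# so the aerosol uncertainty alone is wider than the entire CO2 forcing.
```

        The point of the arithmetic is Pekka’s: a term whose plausible range is wider than the CO2 forcing itself leaves a great deal of latitude in any model that includes it.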

    • Alex Harvey

      Fred,

      People from the consensus side of the argument interpret Lindzen’s statement as an accusation of fraud, or something close to it. I do not read it that way.

      Lindzen is actually quoted as saying,

      “The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments.”

      You have paraphrased this as,

      “…aerosol forcing is adjusted to make model projections match observed trends.”

      If you look carefully, that’s not an accurate paraphrase.

      I would compare Lindzen’s statement with one of Kiehl’s statements,

      “models with low climate sensitivity require a relatively higher total anthropogenic forcing than models with higher climate sensitivity.”

      So what does Kiehl mean by “require”? I think it is either a physical requirement or an arbitrary requirement. No one is suggesting that there is a physical reason why forcing and sensitivity should compensate; so without such a reason you are left with only a few other possibilities – sheer chance, which would be extraordinary; and an unconscious tuning in response to expectations of the model developers – which is less extraordinary.

      I also note you find the tuning argument implausible because climate sensitivity is an emergent property of the models. Sometimes the forcing is too. So, I would direct you to a paragraph from Huybers:

      “Covariance could also arise through conditioning the models. A dice game illustrates how this might work. Assume two 6-sided dice that are fair so that no correlation is expected between the values obtained from successive throws. But if throws are only accepted when the dice sum to 7, for example, then a perfect anticorrelation will exist between acceptable pairs (i.e., 1–6, 2–5, etc.). Now introduce a 12-sided die and require the three dice to sum to 14. An expected cross-correlation of −0.7 then exists between realizations of the 12-sided die and each of the 6-sided die, whereas the values of the two 6-sided dice have no expected correlation between them. The summation rule forces the 6-sided dice to compensate for the greater range of the 12-sided die. This illustrates how placing constraints on the output of a system can introduce covariance between the individual components. Note that this covariance can be introduced, albeit not diagnosed, without ever actually observing the individual values.”

      In the case of climate models, models may have been accepted only when they reproduced aspects of the historical climate – in particular the surface temperature record – or, indeed, rejected if their sensitivity lay outside the Charney range of 1.5–4.5 K.
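      Huybers’ dice game is easy to verify numerically. Below is a quick Monte Carlo sketch of my own (not Huybers’ code): throw two 6-sided dice and one 12-sided die, keep only throws summing to 14, and measure the correlations among the accepted values. The conditioning alone produces an anticorrelation of about −0.7 between the 12-sided die and each 6-sided die, while the two 6-sided dice remain uncorrelated:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
a6, b6, d12 = [], [], []
while len(a6) < 20000:
    a = random.randint(1, 6)
    b = random.randint(1, 6)
    c = random.randint(1, 12)
    if a + b + c == 14:          # the conditioning (selection) step
        a6.append(a); b6.append(b); d12.append(c)

print(round(corr(d12, a6), 2))   # close to -0.7: selection induces anticorrelation
print(round(corr(a6, b6), 2))    # close to 0: the two 6-sided dice stay uncorrelated
```

      This is Huybers’ point in miniature: constraining the *sum* (here, the historical temperature record; in the dice game, 14) induces covariance between components that were never individually tuned against each other.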

      (By the way, I put my own view more fully at Michel Crucifix’s blog –

      http://mcrucifix.blogspot.com.au/2012/02/ahem-few-clarifications.html.)

      • Alex – Lindzen stated that aerosol forcing is “unknown” and that the models made an “arbitrary adjustment” to make them match observations. This is almost certainly false for reasons stated several times earlier. I also explained why it isn’t necessary to invoke “adjustments” to explain the Kiehl findings, which can be explained on probabilistic grounds involving selection. Since these comments are already in the thread, I won’t repeat them here, but I believe it will be possible to find additional support for them in a more thorough scrutiny of the models, and I’ll post further evidence as it emerges.

      • Fred, there are those of us who predicted that aerosols would be used as a bodge. Welikerocks saw it years ago, and I suspect Steve Fitzpatrick expected it, also.

        It’s all about the albedo. Learn what is there, don’t just imagine what is convenient.
        ======================

  25. Couple of points about the talk:
    — the second half is aimed at MPs, not scientists. Hence the lack of references and “rigour”. Consider the audience.
    — “Changes are not causal but rather the residue of regional changes.” This is the POV that climate is inherently regional, not global. Global numbers and effects are the sum and interaction of regional processes, in the main. That is, there are not “global changes” driving regional, but the reverse.

    • Completely agree with both of your points

      Energy entering and manifesting itself in the system is a regional event as radiation doesn’t possess heat. So the changes are regional like ENSO which in turn triggers a cascade of events.

      CO2 ppm is not the same at each test location — also takes time to migrate around the system.

      • Re CO2 levels, John, you might find the links here interesting. Callendar 1938 is the one who determined CO2 levels in the 19th century. Slocum 1955 showed how Callendar cherry-picked the data, arbitrarily leaving out over 1,000 measurements. Callendar’s 290 ppm is the accepted 19th century level. Slocum showed that it should have been 335 ppm, using the very same data. Slocum, playing nice, did everything but call Callendar a data fudger and a scientific fraud. But he left nothing to the imagination.

        Callendar was trying to be a warmist alarmist, long before Hansen. And he was caught out by Slocum, but no one today knows the history, so they use the cherry-picked value.

        Both papers can be found using Google Scholar.

        Steve Garcia

  26. CC says about Lindzen: “Im not saying “Lindzen is wrong because he’s boring and no one likes him” but rather pointing out that he has lost credibility in the community.” That’s the same old consensus argument, which is political, not scientific. CC, I bet you and others can come up with dozens of examples in many scientific fields where a scientist had lost credibility but turned out to be correct. Also, your assertion that Lindzen has lost credibility is itself suspect. Perhaps that is the case among your cohort of climate science buddies, but there are a lot of scientists out there, and I doubt that you have polled them. Also, what you consider boring, others may consider gravitas – very different from the rants or dismissive arrogance which characterize much of the debate.

    • Andrew Russell

      The reason CC and other warmists say Lindzen has “lost credibility” is because (unlike CC’s “climate scientist”) he follows the Scientific Method. Unlike “climate scientists” he doesn’t hide his data and algorithms, cherry-pick tree ring series, turn varve data upside down, or engage in ‘pal review’.

      To have “credibility” in CC’s eyes, you have do all the things Lindzen won’t do.

    • Steve Milesworthy

      I’m not “in the community” but he loses credibility with me because I am capable of doing the simple calculations of forcings and temperature rise, and spot the flaw in the argument which is to ignore the other forcings. He has been making this argument almost unchanged for years because, I guess, he knows he can get away with it with certain audiences.

      Focussing on Chris’s complaint that he has lost credibility in the community is merely to avoid the discussion of the *reasons* why he has lost the credibility. Reasons being that his theory has not had any good support, his recent papers had some obvious flaws that were quickly spotted and he has a tendency for putting forward the same points to many meetings of non-experts without accounting for the fair criticisms his points have received.

    • Steve Milesworthy:
      It would be interesting to learn what you imagine Lindzen’s “obvious flaws” are.

    • Steve : are you seriously saying Lindzen routinely ignores non-CO2 forcings ?? That claim sounds like drivel to me. Is your “obvious flaws” claim any better grounded, I wonder ?

      • Steve Milesworthy

        Punksta, I did not say he “routinely ignores non-CO2 forcings”. I said he routinely gives presentations to non-experts in which he highlights a low sensitivity which is obtained if one ignores non-greenhouse gas – ie. aerosol and solar – forcings:

        “If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C”

        You should make sure you understand a point before you conclude it is “drivel”. I guess it means he has pulled the wool over your eyes.

        An “obvious flaw” in his latest paper (Lindzen & Choi) was that you got the opposite result to his if you changed the range of his arbitrarily chosen sampling regions by as little as one month.

      • Steve, I retract my comment about your contribution being mere drivel. It’s positively disingenuous. How much wool is being pulled over people’s eyes by such an open statement? Your claim of Lindzen’s dishonesty is itself just dishonest.

        An “obvious flaw” in his latest paper (Lindzen & Choi) was that you got the opposite result to his if you changed the range of his arbitrarily chosen sampling regions by as little as one month.

        Can you elaborate on this obvious waffle ?

      • Steve Milesworthy

        Punksta,

        I didn’t use the word “dishonest”. Stop putting words into my mouth.

        “If one assumes…” directs an inexperienced audience to assume exactly that. For the statement not to be misleading it should be followed by a clear explanation that nobody seriously assumes that the statement is definitely correct, and that even if the aerosol inputs are “arbitrary” they are sizeable.

        “Can you elaborate on this obvious waffle ?”

        How is such a clear statement “obvious waffle”?

      • Steve
        Oh do stop feigning innocence now, it’s pathetic.

        And don’t be ridiculous – “If one assumes” is not an invitation to assume.

        And since you accept the effect of aerosols, clouds etc is “arbitrary” – indeterminate as of now? – how can we also know they are “sizeable” ?

        And as regards the waffle, what claim do you refer to ?

      • Steve Milesworthy

        Punksta, you are looking all ways to pretend that Lindzen is “innocent” and assuming I am “feigning innocence”. But I have engaged with people who *have* been misled by Lindzen’s line so it is legitimate to point this out.

        Rather than vent your frustration at the valid points I am putting to you, why not use your energy to investigate Lindzen’s claims.

      • And don’t be ridiculous – “If one assumes” is not an invitation to assume.
        Steve M : [non-responsive]

        And since you accept the effect of aerosols, clouds etc is “arbitrary” – indeterminate as of now? – how can we also know they are “sizeable” ?
        Steve M : [non-responsive]

        And as regards the ‘waffle’, what claim [of Lindzen’s] do you refer to ?
        Steve M : [non-responsive]

      • Steve Milesworthy

        Sometimes it is difficult to summon a desire to respond to someone who can’t follow a thread. I’m having to guess on how to italicise here – apologies if it doesn’t work:

        [I]”And don’t be ridiculous – “If one assumes” is not an invitation to assume.”[/I]

        Yes it is.

        [I]”And since you accept the effect of aerosols, clouds etc is “arbitrary” –
        indeterminate as of now? – how can we also know they are “sizeable” ?”[/I]

        You misread for about the fifth time. I did not *accept* that the effect of aerosols is “arbitrary”.

        [I]”And as regards the ‘waffle’, what claim [of Lindzen’s] do you refer to ?
        Steve M : [non-responsive]”[/I]

        This is very confusing, because it is you who is accusing me of “waffle”. I have not accused Lindzen of waffle.

      • Sometimes it is difficult to summon a desire to respond to someone who can’t follow a thread.
        Well spotted – this is indeed the big problem with your posts.

        And don’t be ridiculous – “If one assumes” is not an invitation to assume.”
        SM: Yes it is.

        Obviously not. If one assumes != Assume.

        And since you accept the effect of aerosols, clouds etc is “arbitrary” –
        indeterminate as of now? – how can we also know they are “sizeable” ?”
        SM: You misread for about the fifth time. I did not *accept* that the effect of aerosols is “arbitrary”.

        I did not misread – “arbitrary” is the word you actually used.
        But, if you actually believe the effects of aerosols and clouds are settled science, do let us know these important finds.

        “Waffle”. You made a vague claim about some claim of Lindzen’s being obviously false. Which one/s ?

        Hint:
        Use html tags for italics etc

      • Steve Milesworthy

        OK. What you consider to be waffle was my reference to something that is clearly described in this section:

        “The LC09 results are not robust.”

        of:

        http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/

      • I didn’t mean that your claim itself was waffle. I meant you were waffling as to what the claim is.
        Give us a one paragraph summary so we can see if it’s worth following the RC link.

      • Steve Milesworthy

        [QUOTE]The result one obtains in estimating the feedback by [Lindzen’s] method turns out to be heavily dependent on the endpoints chosen. In [Trenberth et al] we show that the apparent relationship is reduced to zero if one chooses to displace the endpoints selected in LC09 by a month or less. [/QUOTE]

        The RC article includes a plot that compares Lindzen et al’s choices with Trenberth et al’s choices. The Trenberth choices look equally reasonable or slightly more reasonable than the Lindzen choices, and come up with a different result.

        So the result is not “robust”.

        There are a number of other problems listed though the others are more waffly ;)

        Lindzen has apparently accepted that there were “obvious flaws” in a follow up paper in the “Asia-Pacific Journal of Atmospheric Sciences”. The abstract does not appear to claim it is rebutting the criticisms.

        (Googling around, Lindzen seems to be moaning that JGR rejected the paper and that PNAS refused to use the reviewers he wanted (Will Happer and former colleague Dr. Chou) on this latter paper. See also http://judithcurry.com/2011/06/10/lindzen-and-choi-part-ii/ )

      • Steve Milesworthy

        I did not misread – “arbitrary” is the word you actually used.
        But, if you actually believe the effects of aerosols and clouds are settled science, do let us know these important finds.

        Missed this post earlier.

        I said: “that even if the aerosol inputs are “arbitrary” they are sizeable.”

        Note the “if”. You know, the “if” that would make it obviously conditional. I think you’ve sort of made my point about Lindzen’s “If one assumes…”

        As it happens, arbitrary and sizeable is not that inconsistent when it is understood that they may cause both sizeable negative and positive forcing, and that the forcings are uncertain such that (in total) they could add up to a low number.

  27. I’d like to repeat this from 25 minutes ago because it’s a real question. Has the following been well considered in the literature, with comparisons of Arctic and Antarctic carbon soot emissions and the albedo effect on temperature trends, or not? I’m hopeful that carbon soot emissions are an AGW forcing we might all agree on, thereby actually doing something positive despite the uncertainties about CO2 attribution and climate sensitivity.
    Doug Allen | February 27, 2012 at 8:35 pm | Reply
    Good points, but I think regional temperature trends are much more complex. The Antarctic, itself land covered by snow and ice and surrounded mostly by ocean, has warmed very little. The Arctic, on the other hand, is ice and snow, surrounded mainly by land. The Antarctic is far from centers of industry and soot emissions, and soot emissions fall out of the atmosphere fairly quickly, probably not crossing over the equator to any great extent. The Arctic is close to 90% of the world’s industry and receives a lot of the carbon soot fallout. I think the difference in albedo from soot fallout, plus the positive feedback of albedo change when ice and snow become water, may explain in large part the differences in Arctic and Antarctic temperature trends and, by extension, the differences in Northern Hemisphere and Southern Hemisphere temperature trends. If I am wrong about this, Dr. Curry and others, give me some scientific studies and data that refute this or bring it into question. I have seldom seen this hypothesis considered, and it has a very strong bearing on the competing roles of CO2 and carbon soot emissions.

  28. Fred writes; “If you give a short presentation in general terms to a non-scientific audience, you can prove just about anything you want, with no-one to say you’re wrong. The reason that Lindzen’s perspective is not widely accepted within climate science resides in details that are not in the talk, and which an audience unfamiliar with climate data would be unable to judge in any case.”

    I’m not understanding what your point is, Fred. Are you saying Lindzen shouldn’t be able to give talks to present his point of view? If not, what are you saying? Should Al Gore be allowed to speak? Ultimately this is about social policy, and like it or not we’re living in a democracy. You seem to be pining for some sort of egghead-ocracy whereby no one short of a Ph.D. in physics is allowed to vote.

    So what would you suggest? How would you go about educating the great unwashed to your satisfaction?

    As to no one being able to stand up and explain why Lindzen’s wrong, I can only say it’s rather a shame that no one on your side of things is willing to debate. It’s my understanding that Professor Lindzen has no qualms at all about facing those who disagree with him.

    • My response appeared below. Sorry it wasn’t nested under your comment, which had been my intention.

      • Steve Milesworthy

        Fred, please don’t lose your patience. I enjoy your input and am capable of scrolling quickly over GaryM et al. and other intemperate people.

    • Pokerguy,

      I am beginning to get a grasp on Fred’s bizarre world view.

      For Fred, CAGW is a scientific fact (including the high probability of C). Therefore, anyone who says or writes anything inimical to CAGW is dishonest, because the only way you can disagree with CAGW is to outright lie, or lie by omission. Thus the WSJ graph Dr. Curry cited in an earlier thread that accurately depicts the IPCC’s warming predictions is dishonest because it doesn’t explain that the newer models are supposedly better than the older models. And Lindzen’s presentation to the House of Commons is dishonest because there are other CAGW talking points that, if Lindzen had discussed, would have proved how stupid Lindzen’s point is.

      In other words, if you disagree with Fred on CAGW, you are either stupid or dishonest, and likely both.

      Of course progressives think like that on virtually every issue, but for Fred it is an article of faith and a point of personal obsession.

  29. Scientists don’t even know how to deal with the complex climate science adequately.

    We ‘laymen’ know that inside every complicated idea is a simple idea trying to escape. Once you ‘scientists’ have finally freed the ‘simple idea’ we will understand.

    Think the convoluted orbits of the planets and stars before someone figured out the earth was not the center of the universe.

  30. pokerguy – Your questions, posted at 9:15 PM, were answered by me at 8:58 PM. Perhaps it took you more time than that to compose your comment, so I’m not accusing you of disregarding what I had written, but in any case, you can go back to my earlier response to Anteros for the relevant points.

  31. Judith,

    What is really being reflected in this debate is the age old debate between theorists and experimentalists except the theorists are today avoiding real world scrutiny and testing by substituting computer models.

    If they had had powerful computers in 1904, the plum pudding model of the atom of J. J. Thomson might still be being defended with wonderful results from tortured computer models. Geiger, Marsden and Rutherford would have been dismissed as sceptics, and the Royal Society would be saying there is a “consensus” and the science is “settled”, dismissing Rutherford as a mere upstart scientist from the colonies. Even then, Rutherford had to get Geiger and Marsden to do the experiment and then “interpret” it to minimise the fallout from demolishing the “settled science”.

  32. Brandon Shollenberger

    I find the comments on this page interesting. So far, I have seen four people say Lindzen is wrong (not counting stefanthedenier). Oddly enough, none of them have discussed anything our host highlighted. Consider Pekka Pirilä:

    The most obvious questionable trick that Lindzen made in this presentation is concentrating in several places on the period of 150 years. As nobody thinks that the first half of that period is strongly affected by anthropogenic influence he effectively doubles the denominator and halves the average human contribution. I think this is done by purpose and is dishonest.

    Lindzen referred to the warming observed in a period of 150 years. Pirilä claims this is dishonest as most of that warming was (he says) observed in the last 75 years. This is an extremely weak basis for an accusation of dishonesty, and it’s the entirety of Pirilä’s response. We then have Chris Colose who begins with:

    I completely disagree that Lindzen’s speech will have any impact outside brief blogospheric discussion. Most of the scientific community, even at MIT, no longer thinks Lindzen has any credibility left on climate science issues…

    Colose begins by “poisoning the well.” Before discussing anything Lindzen says, he denigrates Lindzen. He then goes on to say things like:

    But it’s easy to see why his speech will have little influence. On many occasions, he steps well outside his expertise, and makes claims which experts in those areas already know full well or are completely wrong…

    This seems almost meaningful, except nothing Colose refers to is anything Curry highlighted. Instead, he refers to relatively obscure arguments which no average reader is likely to know about, research about or even care about. Instead of discussing the core arguments of the topic, Colose relies on dishonest rhetorical tricks and discussions of peripheral arguments. We then have Jim D, who says:

    This material is largely recycled from previous talks, so we don’t have anything new to address in it. Lindzen stays clear of the last 30 years for good reason. Had he calculated how much warming his 1 C sensitivity would have given, it would have been less than half of what was observed. He then would have had to say where he thought the rest came from, which he has no idea of, at least that he has spoken about. For 1900-2000, his expected warming would have been near 0.35 C, only half of what actually occurred, even with the negative effect of aerosols that he doesn’t believe in (somewhat in a minority there).

    There is an implicit accusation of dishonesty here, but it is nowhere near as prominent as in the previous two posters. Unfortunately, Jim D’s comment seems to make no sense. Curry highlighted Lindzen saying we’ve seen almost one degree of warming, we’ve had almost a doubling of effective CO2 concentrations, and the planet’s sensitivity to such a doubling is about one degree. This is all perfectly consistent, yet Jim D comes up with radically different numbers, and he does so without providing any calculation or source. We then have Fred Moolten who offers the only reasonable disagreement on the page:

    The problem as I see it is that, to coin a cliche, the devil is in the details. If you give a short presentation in general terms to a non-scientific audience, you can prove just about anything you want, with no-one to say you’re wrong. The reason that Lindzen’s perspective is not widely accepted within climate science resides in details that are not in the talk, and which an audience unfamiliar with climate data would be unable to judge in any case.

    He doesn’t actually say why any of Lindzen’s points are wrong, but he explains the way in which they are (supposedly) wrong. This isn’t much, but it is something, and he offers it without any derogatory remarks. That makes it the best response offered on this page.

    Ultimately, Lindzen’s presentation makes a number of very simple points which Judith Curry highlighted. Despite a number of people disputing them, nobody responded to them. I find that fascinating. To any uninformed viewer, there would be absolutely nothing on this page to indicate Lindzen’s position was wrong.

    • Denigrating the opposition is what Chris Colose and most AGW supporters do. It is their trademark. If someone starts out with an attack on the person and not the science, I know without even reading any of their points that they are an AGW supporter.

      • Brandon Shollenberger

        I don’t think your description is accurate. I’ve seen the same sort of behavior from people on both sides. In fact, I’ve probably been guilty of it myself. I understand why people do it, and I don’t think it is inherently wrong.

        The problem comes when people attack a person without actually addressing any substantive points. I don’t care if Chris Colose or others make fun of Lindzen (or anyone else). I care that they do so while not contributing to the discussion at hand.

        Quite frankly, I find it mind-boggling such simple points aren’t getting any substantive responses by people who disagree with them. If you can’t actually discuss simple points, why should anyone listen to you?

      • Brandon, though I have not yet offered anything substantive to this discussion, I often get the feeling that people don’t read Judith’s comments or certainly don’t take them up as a point of debate, except in cases of extreme agreement or disagreement.

      • Rob Starkey

        Fred Moolten

        I wrote: “The modelers would still be “allowed” to tailor the relative levels of each aerosol within the margin of error of the specific item without that statement being untruthful. There is a large margin of error in the estimated aerosol levels. In addition, the relative impact of each aerosol on the others and on the system as a whole can (and I expect were) adjusted so that the models would meet the observed criteria that were available.”

        Now I acknowledge that I do not know much about programming a GCM, but I wrote what I did because it seemed like a very reasonable way for the modelers to develop their GCMs. The criteria the GCMs are trying to accurately forecast are not these forcings, so adjusting them in the past seemed a reasonable way to potentially increase accuracy in the hindcast.

        You wrote I was wrong and wanted me to admit such.

        Please look at what the IPCC said about model development: Looks like the IPCC is writing the same thing that I wrote, Fred. The modelers allowed the aerosol forcings to vary within the range of uncertainty.

        http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter8.pdf

        “Models have been extensively used to simulate observed climate change during the 20th century. Since forcing changes are not perfectly known over that period (see Chapter 2), such tests do not fully constrain future response to forcing changes. Knutti et al. (2002) showed that in a perturbed physics ensemble of Earth System Models of Intermediate Complexity (EMICs), simulations from models with a range of climate sensitivities are consistent with the observed surface air temperature and ocean heat content records, if aerosol forcing is allowed to vary within its range of uncertainty.”

        Fred- could it be more plain that you made a mistake?

      • Rob, what you quote is irrelevant to the point I made, which is that the modelers don’t tune aerosols to match observed trends. You should contact Gavin Schmidt as I suggested.

        What you quote is what I have mentioned in several places above. Inverse modeling is often used to estimate the value of a parameter. It can be used to test different aerosol forcings to see which allows a model to best match observations, but as I mentioned, inverse modeling results are not used to make projections, which require forward modeling. The latter is done, as Gavin mentions, without trying out different forcings to see which one performs best, but based on the criteria he describes in the passage I quoted from him, rather than on the results of an inverse model exercise. I described some inverse modeling results in an earlier comment, including the observation that utilizing a somewhat smaller aerosol forcing changed temperature projections to only a minor extent (this is from one of the Hansen et al references).

        You have acknowledged your lack of understanding of this issue. If you contact Gavin as I suggest, I’m confident he will confirm the points I attribute to him, and he can also explain why what modelers do for trend simulations is not the same as inverse modeling for parameter estimation, since he has experience with each.

      • Rob Starkey

        OMG- Fred Moolten

        You are completely incapable of admitting you are wrong.

        Fred- why do you think the modelers allowed the aerosol forcings to vary within the range of uncertainty if it wasn’t to help the model perform better in meeting observed conditions?

        Do you think they did it for fun?

      • Rob – I’m not sure what combination of stubbornness and ideological fervor prevents you from reading what other people write and trying to learn from it. I just finished describing the different uses of inverse modeling (where different parameter values are tested) and forward modeling, which is used to make projections without knowing in advance which values will perform best but must derive them from data and physical principles. The latter doesn’t involve trying out different values to see which works, and there is no tuning. You then ignored what I wrote and repeated your previous misconception.

        Whether you want to understand or not is not a problem for me, Rob, because I can live with your remaining misinformed. It should be a problem for you, if you care to improve your understanding.

        Incidentally, here is a nice paper on the use of inverse modeling for Cloud-Aerosol Interactions. It illustrates the principle, and I hope you’ll understand why it can’t be used for future projections when there is not yet an observational trend the different values can be compared with.

    • If Lindzen thinks the other GHGs apart from water vapor have more than doubled the effect of CO2, and that aerosols have not had any effect, he is running counter to the IPCC estimate that the other GHGs have had 50% of CO2’s effect and aerosols have about canceled this. Also, if the other GHGs are more than doubling the effect of CO2, the future warming is worse than we thought, but he has no support for this statement (and Judith questions it because she hasn’t seen these numbers before either). So I think he is just being alarmist here.

      • Brandon Shollenberger

        Jim D, I want to thank you for actually making a substantive response this time. You did not address much of what I said, but you did at least give a real point to discuss. First, you say:

        If Lindzen thinks the other GHGs apart from water vapor have more than doubled the effect of CO2, and that aerosols have not had any effect, he is running counter to the IPCC estimate that the other GHGs have had 50% of CO2′s effect and aerosols have about canceled this.

        According to the IPCC AR4, the forcing from CO2 (as of 2005) was 1.66. The total forcing from greenhouse gases was 2.63, meaning non-CO2 GHGs had ~58% of the impact of CO2. If you include solar irradiance, that goes up to 2.75 and ~66%. It then goes up to 2.82 if you include the stratospheric water vapor directly caused by methane, giving ~70%. Finally, if you include tropospheric ozone changes, you get a total forcing of 3.1, or a non-CO2 forcing of ~87%. If you use that proportion and update the CO2 forcing for 2012 levels, you get ~3.35 as your total forcing. That’s 90% of the generally accepted 3.7 so Lindzen’s comment is reasonably accurate.

        The aerosol forcings from the IPCC AR4 are only -0.5. This is nowhere near the positive non-CO2 forcings, which have a total of 1.44. Moreover, the error margins on aerosol forcings are 80%, meaning the IPCC says they could be as small as -0.1.

        For the shortened version, there are positive forcings other than greenhouse gases. When you include those, the best IPCC estimate for aerosol forcing is less than half that of positive non-CO2 forcings. Moreover, that estimate has such wide error margins that it almost includes 0. These facts largely invalidate your response.

        In actuality, the total forcings seen so far could be reasonably close to the forcing expected from a doubling of CO2. Lindzen’s comment was not precise, and it does rely upon some assumptions, but it is not unreasonable.
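The ratios in this comment are easy to check. Here is a sketch in Python: the AR4 (2005) forcing values are those quoted above, while the ~391 ppm CO2 concentration and the simplified expression dF = 5.35 ln(C/C0) used to update the CO2 term are assumptions introduced here to reproduce the ~3.35 figure.

```python
import math

# AR4 (2005) forcing values in W/m^2, as quoted in the comment above
co2 = 1.66
ghg_total = 2.63        # all long-lived greenhouse gases
with_solar = 2.75       # + solar irradiance
with_strat_h2o = 2.82   # + stratospheric water vapor from methane
with_trop_o3 = 3.10     # + tropospheric ozone

for total in (ghg_total, with_solar, with_strat_h2o, with_trop_o3):
    ratio = (total - co2) / co2
    print(f"non-CO2 forcing = {ratio:.0%} of CO2's")

# Updating the CO2 term to roughly 2012 concentrations (~391 ppm assumed)
# with the standard simplified expression dF = 5.35 ln(C/C0):
co2_2012 = 5.35 * math.log(391 / 280)
total_2012 = co2_2012 * (with_trop_o3 / co2)
print(f"updated total ~ {total_2012:.2f} W/m^2 "
      f"({total_2012 / 3.7:.0%} of the 3.7 W/m^2 doubling forcing)")
```

The four printed ratios come out near 58%, 66%, 70%, and 87%, and the updated total near 3.35 W/m^2 (about 90% of a doubling), matching the figures in the comment.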

      • Brandon Shollenberger

        Yikes. I just realized the table I referred to has two lines for aerosol forcing. This increases the central value to -1.2 W/m^2 (not -1.5 as Jim D claims upthread). This is somewhat bad for Lindzen’s position, but it is compensated by the fact the uncertainty increases a great deal.

        The total uncertainty of aerosol forcings not only includes 0, but it even ranges as high as +0.3 W/m^2. That’s right. The aerosol forcings which are said to cancel out the non-CO2 forcings may actually contribute to them instead.

        I apologize for missing that line in the table, and I apologize for the mistakes which crept into my comment because of it. However, the effect of including the line I missed only serves to strengthen Lindzen’s position.

      • Brandon, AR4 has versions of that forcing diagram that sum the bars up, and you see that the total is very similar to CO2 alone. This implies other GHGs tend to cancel aerosols. For Lindzen to make his case, he has to say what he thinks aerosols are doing. It is not just a model argument. There is a lot of physics that explains why aerosols make not only clear sky but also clouds have higher albedo. People study this, measure it with satellites, write papers on it. It should be considered and not just dismissed as a model invention if that is what Lindzen is doing. Maybe nobody at MIT is in that area of science, or he doesn’t talk to them, but he seems a bit isolated on this matter.

      • Brandon Shollenberger

        Eek. I need to learn to read charts better. I thought the range given in error estimates in that chart gave the total forcing, not the error margin. Silly mistake, I know. In my defense, I am the only person who has actually referred to the real numbers, so it’s not like I’m doing worse than anyone else.

        Anyway, with that change, Lindzen’s position is not as simple to support. Even if you take the least damning estimations (for Lindzen) from the AR4, the total forcing from aerosols is -0.4 W/m^2. This means it isn’t consistent with 0, and it cannot simply be ignored. However, I believe Lindzen explained in his speech why he disagrees with those estimations. If so, he didn’t just ignore them. Beyond that, at the far end of the error margins given by the AR4, we’re still seeing 80% of the forcing expected from a doubling of CO2. This means his comment is still relatively reasonable even if we accept the AR4 estimates (though it would have more certainty than it should).

        With all my mistakes corrected (I hope!), the central thrust of my response doesn’t change. Lindzen’s comment may not be a great answer, but it’s also not anywhere near as unreasonable as portrayed by Jim D.
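The 80% figure above can be reproduced in two lines. This is a sketch using the numbers quoted in the thread (the ~3.35 W/m^2 total positive forcing and the -0.4 W/m^2 weak-end aerosol value), not an AR4 calculation:

```python
positive_forcing = 3.35  # rough updated total of positive forcings (W/m^2), as used upthread
aerosol_weak_end = -0.4  # weak end of the AR4 aerosol uncertainty range (W/m^2)
doubling = 3.7           # canonical forcing for a doubling of CO2 (W/m^2)

net = positive_forcing + aerosol_weak_end
print(f"net forcing ~ {net:.2f} W/m^2, i.e. {net / doubling:.0%} of a doubling")
```

The net comes out near 2.95 W/m^2, about 80% of the doubling forcing, which is the comparison Brandon is making.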

        Speaking of Jim D, may I ask why you’d refer to a visual diagram when I provided a direct link to the actual numbers? Why would you rely on estimates derived from reading a picture when you can see the actual values? Would you also explain how you can say this:

        Brandon, AR4 has versions of that forcing diagram that sum the bars up, and you see that the total is very similar to CO2 alone. This implies other GHGs tend to cancel aerosols.

        You say the aerosols are about as strong as CO2 and they tend to cancel out the “other GHGs.” For that to be true, the forcing from “other GHGs” would have to be approximately as strong as the forcing from CO2, something you directly disputed when you said:

        the IPCC estimate that the other GHGs have had 50% of CO2′s

        You’ve changed your position from saying “other GHGs” have 50% the forcing of CO2 to 100% without any explanation. Is there one?

      • Jim D, what about the reference to the literature where modelers discuss how they make use of this adjustment factor? It’s in Lindzen’s FermiLab talk if you are interested. So, according to Jim D, just how do modelers choose their aerosol forcing, given the range in AR4 between -0.4 and -2.7 W/m^2? Surely, it makes a HUGE difference. Taking the upper value, there should be no warming at all!

      • Brandon Shollenberger

        David Young:

        Taking the upper value, there should be no warming at all!

        This is nonsense. Sure, aerosols cancel out all of the anthropogenic influences, but everyone knows the observed warming is due to natural fluctuations!

        Sorry. I couldn’t resist.

      • Brandon, you can see the IPCC bar charts by just doing an image search of IPCC Forcing. These usually have numbers too. CO2 is near 1.6, the total is near 1.6, and the GHGs and aerosols are near 1 and -1. (OK, more than 50% of CO2). This is consistent with what I said. Lindzen doesn’t explain his aerosol view except as a way to get at modelers, not mentioning the people doing the aerosol observations.

      • Jim D –

        Whether Lindzen is right or wrong about the aerosols, don’t you think someone should be challenging the models, to make them back up what they show? Shouldn’t the two sides then go at each other with the best conclusions winning? Or some third conclusions come out of it?

        And if you do think that, don’t you think it would be in the spirit of open inquiry that the modelers let the challenger at least see what it IS that they are doing, so the other side can have an informed basis for getting to the root of the situation?

        Are the modelers more interested in keeping the status of their models, or in getting at the truth of the matter? (I don’t mean truth here as final truth, but as a next step with a solid basis.)

        Steve Garcia

      • David, yes, I am familiar with Kiehl’s words on this as a way to get the sensitivity down to values more consistent with observations. Without this, the water vapor feedback is too strong to explain the relatively weak warming of the later 20th century that you would get without aerosols. They had no way to change the water vapor feedback, because that is basic water saturation physics, but aerosols were uncertain and generally reflective. This is the part of the model where there is least certainty, because of the detailed chemistry involved, and there are only general observations to support model parameterizations. This science is in a better state than it was only a decade ago, and improving with more research and observations.

      • Brandon Shollenberger

        Jim D, you have an annoying habit of not responding to what I say. For example, you say:

        Brandon, you can see the IPCC bar charts by just doing an image search of IPCC Forcing. These usually also have numbers too. CO2 is near 1.6, the total is near 1.6, and the GHGs and aerosols are near 1 and -1.

        There is no doubt I can see those charts. In fact, I had them open in one tab when I typed my response. However, what I asked you was:

        Speaking of Jim D, may I ask why you’d refer to a visual diagram when I provided a direct link to the actual numbers?

        I didn’t ask why you were using a chart. I asked why you were using a chart when I gave you a link to the actual numbers. I asked why you would pick a chart over the numbers the chart is made from, and you respond by not answering anything I said. Instead, you just say the chart is readily available and continue to use it to estimate the actual values I gave a link to.

        You do it again when you say:

        (OK, more than 50% of CO2).

        Here you admit your earlier comment was wrong, but you don’t actually respond to my comment about it. Neither of these cases cause any real problems, but it is annoying to have you respond to me while mostly ignoring what I say. Anyway, apparently the main point you want to make is:

        Lindzen doesn’t explain his aerosol view except as a way to get at modelers, not mentioning the people doing the aerosol observations.

        I haven’t looked at the entire presentation from Lindzen, so I don’t know whether or not he did explain his view on the aerosol issue. If not, he certainly should have. On the other hand, you didn’t even raise this point in your initial comment, and you basically didn’t respond to anything I said about that comment.

        Perhaps Lindzen does need to do better, but apparently, so do his critics.

      • Although Lindzen is entitled to his interpretation of evidence, I don’t believe he’s entitled to misrepresent aerosol forcing as a fudge factor used to make model predictions conform to observations, and it’s unfortunate that myth has become a staple in some blogosphere discussions. The question is not whether different models use different aerosol forcings – they do – but whether aerosol forcings are adjusted to “tune” the models to the observed temperature trends – they aren’t.

        Some of the problem is a misrepresentation of the Kiehl GRL paper. From among multiple models, Kiehl selected a subset that agreed fairly well with temperature observations, but with different climate sensitivities (climate sensitivity is an emergent property of models and not an input). He found an inverse relationship between sensitivity and aerosol forcing. This is unsurprising given that the subset was selected for good predictive skill. However, the inference that each model had been tuned is false, based on the descriptions of how the models were constructed and parameterized. If all models, rather than just those with the selected attributes (good match to observations but differing climate sensitivities) had been evaluated, there is no reason to expect the same result.

        There are remaining uncertainties about aerosols, but non-negligible aerosol negative forcing is not one of them, and it seems to me that Lindzen’s perpetuation of the “fudge factor” myth is an impediment to attempts to focus discussion on how best to resolve the uncertainties.

      • Fred –

        You say that climate sensitivity is an emergent property of models and not an input. That surprises me. Isn’t it the case that Jim Hansen’s predictions of 1988 were ‘based’ on a model with a climate sensitivity of 4.2C/2xCO2?

        Similarly, didn’t the IPCC FAR specify that its prediction of 0.3deg per decade of warming was based on a model with a sensitivity of 2.5C/2xCO2 – and that the “limits of uncertainty” were two other models that used sensitivities of 1.5 and 4.5C/2xCO2?

      • Anteros – Climate sensitivity is an output that arises from model inputs including basic physics and known properties of CO2, water, hydrostatics, etc., plus parameterizations designed to match the properties of starting climates before a simulation of trends is attempted. The modelers don’t actually know what their model’s climate sensitivity will be when they input the relevant variables. Furthermore, the models are so complex that they can’t really tweak parameters with the expectation of changing it in a predictable way. That’s one of the reasons the sensitivity range is as broad as it is. There are many good sources describing this, and RC is one place to look (search for models), because Gavin Schmidt is an expert in this area. I don’t think the above is a matter of controversy within the science itself on the part of individuals who are intimately familiar with model construction.

        When you quote model sensitivities, as you have done, you are referring to the outputs. In other words, Hansen’s early model emerged with a sensitivity of 4.2 C/CO2 doubling, but that figure wasn’t something he knew in advance.

      • The point I would like to emphasize, as I mentioned above, is that aerosols were deemed necessary in models because without them, the water vapor feedback was too strong to account for the recent temperature trend. If they could have tuned the water vapor feedback they may have tried, but they didn’t because it is defined by somewhat fundamental physics, like Clausius-Clapeyron, which you can’t change. Aerosols were considered to be generally reflective, especially sulphates (as seen from the measurable effects of Pinatubo, for example), so, no surprise, aerosols had the right properties to avoid the overwarming. However, they are complicated: emissions aren’t known accurately, chemistry has a way of converting aerosols, and their effect on clouds also leads to higher albedos but depends on details of cloud microphysics, so many factors confound the issue. Hence, since it can’t be derived from first principles the way radiation and thermodynamics can, a certain amount of ground-truth in observations is needed to constrain the chemistry. This could be called tuning, but really it is constraining a complex system. The aerosol forcing uncertainty bars shown by the IPCC reflect this.

      • MattStat/MatthewRMarler

        Fred Moolten: From among multiple models, Kiehl selected a subset that agreed fairly well with temperature observations, but with different climate sensitivities (climate sensitivity is an emergent property of models and not an input). He found an inverse relationship between sensitivity and aerosol forcing.

        thanks for the clarification.

  33. 1) Statement 2 of Slide # 3 can be shown to be completely false with a simple “back of the envelope” calculation.

    2) The “work” used to compile this presentation was not peer-reviewed, would not pass a peer review, and is based on several false premises. No wonder Lindzen is usually dismissed in the AGW proponent crowd.

    3) When, just below Statement 2 of Slide 3, Lindzen states “Given the above, the notion that alarming warming is ‘settled science’ should be offensive to any sentient individual, though to be sure, the above is hardly emphasized by the IPCC,” he is clearly attempting to bully an audience with little scientific knowledge. This is a typical AGW denialist strategy…a very unprofessional strategy at that.

    • Brandon Shollenberger

      Pierre, you make the fifth commenter to fit my description. Similarly to Jim D, you say:

      1) Statement 2 of Slide # 3 can be shown to be completely false with a simple “back of the envelope” calculation.

      Unfortunately, you do not explain this. This is particularly problematic as Lindzen clearly justifies that statement when he says:

      There has been a doubling of equivalent CO2 over the past 150 years

      You could argue his justification is wrong (perhaps by saying that supposed doubling didn’t happen). You could also argue a different reason for him being wrong (such as by saying more warming is “in the pipeline”). Instead, you simply dismiss that statement out-of-hand even though you don’t respond to any of the justification for his statement.

      That you do this and then denigrate him means you clearly demonstrate what I discussed.

      • For Pierre’s sake ;-), if you can’t do this very simple calculation, you have no business commenting on GW, pro or con. Sorry! I’m accustomed to “debating” with deniers, rather than someone who is apparently open-minded.

        Regarding Lindzen’s comment (that you provided) that “there has been a doubling of equivalent CO2 over the past 150 years,” this is simply not true. The baseline pre-industrial CO2 concentration is very widely accepted to be 280 ppm and we are now slightly above 390 ppm. A doubling of CO2 would be 560 ppm, so we are *only* about 40% above the pre-industrial CO2 concentration. So Lindzen is wrong on the comment you provided. The CO2 concentration a century ago is not far from the pre-industrial value. The temperature increase from a century ago is also just about 0.8˚C, and we know that there is still more temperature increase to come from the CO2 NOW in the atmosphere, even with only a 40% increase in CO2. Yes, this is more of a qualitative argument, but it is valid. Based on this empirical evidence, Lindzen must be wrong. A more recently calculated climate sensitivity (Schmittner et al., 2011) is 2.3˚C for a doubling of CO2, not too far from the IPCC’s best estimate of 3˚C, and within the IPCC’s likely range of 2-4.5˚C. The Schmittner paper cautions: “Our uncertainty analysis is not complete and does not explicitly consider uncertainties in radiative forcing due to ice sheet extent or different vegetation distributions. Our limited model ensemble does not scan the full parameter range, neglecting, for example, possible variations in shortwave radiation due to clouds.” It does become a problem in interpreting results from different research because of what the researchers have included as the cause of climate sensitivity.
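Pierre’s CO2-only numbers can be run through the standard simplified forcing expression dF = 5.35 ln(C/C0). This is a sketch for illustration only; note that Lindzen’s claim, as the replies discuss, concerns equivalent CO2 (all greenhouse gases), not CO2 alone:

```python
import math

c0, c_now = 280.0, 390.0  # pre-industrial and present CO2 (ppm), as quoted above

print(f"CO2 is {c_now / c0 - 1:.0%} above pre-industrial")

# Simplified CO2-only forcing expression dF = 5.35 ln(C/C0), in W/m^2
df_now = 5.35 * math.log(c_now / c0)
df_2x = 5.35 * math.log(2)
print(f"CO2-only forcing so far ~ {df_now:.2f} W/m^2, vs {df_2x:.2f} for a doubling")
```

This confirms the roughly 40% concentration increase, and shows why the two sides talk past each other: CO2 alone gives slightly under half the forcing of a doubling, so the disagreement turns on how much the other greenhouse gases add.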

      • Brandon Shollenberger

        Pierre, it’s disturbing you say people “have no business commenting on GW” if they can’t do a calculation which is nonsensical. Specifically, you say:

        The baseline pre-industrial CO2 concentration is very widely accepted to be 280 ppm and we are now slightly above 390 ppm. A doubling of CO2 would be 560 ppm, so we are *only* about 40% about above the pre-industrial CO2 concentrations. So Lindzen is wrong on the comment you provided.

        This comment shows a severe lack of understanding of what Lindzen said. This is extremely confusing as you even quoted what he actually said:

        Regarding Lindzen’s comment (that you provided) that “there has been a doubling of equivalent CO2 over the past 150 years,” this is simply not true.

        Lindzen did not merely discuss the change in CO2 concentrations. His comment covers all increases in greenhouse gases. If you’d like to argue he is wrong about that total increase, you can, but the simple fact is you’ve grossly misrepresented what he said. A mistake like that is understandable, but that you so grossly misrepresented such a simple point while making a claim as to who ought to be discussing matters is extremely disturbing.

        Whether or not Lindzen’s comment was correct, it is clear you did not understand it before dismissing it. That is a bad sign.

      • Lindzen said what I quoted; it’s REAL simple. He also said what you quoted…real simple as well. If you don’t understand climate change issues, you should not comment. I apologized for the comment about you not being allowed to post. I retract that apology. I’m thrilled you’re posting at the judithcurry.com for idiots only site. Curry did not participate in the analysis of the BEST data and was apparently unable to contribute in any meaningful way. She’s not a significant player in the field of climate change. But she is qualified to start a Web site/blog to mislead the hopelessly naive wrt climate change.

      • Brandon – You might be interested in my comment at Brandon 11:58pm.

        In it I point to two papers, one which casts aspersions on the one which set the 19th century CO2 level at 290 ppm, when with the same data it should have been 335 ppm.

        Steve Garcia

      • For Pierre’s sake , if you can’t do this very simple calculation, you have no business commenting on GW, pro or con.

        Ignorance? Check!

        Arrogance? Check!

        Carry on.

      • Brandon Shollenberger

        Pierre, you say:

        Lindzen said what I quoted; it’s REAL simple. He also said what you quoted…real simple as well. If you don’t understand climate change issues, you should not comment. I apologized for the comment about you not being allowed to post. I retract that apology. I’m thrilled you’re posting at the judithcurry.com for idiots only site.

        The distinction between CO2 forcings and effective CO2 forcings is quite simple. It is one even made by the IPCC. The fact you refuse to acknowledge it indicates you have a poorer understanding of climate change issues than I do. Given that, by your own standards, you should stop posting. Since you seem to think this site is a waste of time, perhaps that will actually happen.

        Until then, we people who like to discuss things rather than simply insult anyone who intelligently disagrees with us will continue discussing things, and if you’d like to participate, you can. You’ll have to actually read what is said, but I’m confident you can do that.

        Or maybe that’s something only us “idiots” can do. Maybe we’re just not smart enough to dismiss what people say without reading it.

      • Brandon Shollenberger

        D’oh. I really am terrible with blockquotes. Oh well, my comment should still be easy enough to read.

        feet2thefire, I’m not actually familiar with either of the papers you mentioned, but I also don’t think they’re particularly relevant. That issue has been examined by many papers since then, and I think time is better spent looking at them. Early results could have been obtained incorrectly, yet still have been accurate due to luck. If the early paper you mention was wrong, I can ignore its conclusions, but I can’t simply ignore conclusions of other papers because they happen to be similar.

        Mind you, it’d still be an interesting thing to learn about, and because of that, it’s worth reading them. I just don’t think flaws in papers from 60+ years ago are going to alter my understanding of things very much.

      • Brandon Shollenberger –

        I’m not actually familiar with either of the papers you mentioned, but I also don’t think they’re particularly relevant. That issue has been examined by many papers since then, and I think time is better spent looking at them. Early results could have been gotten incorrectly, yet still have been accurate due to luck. If the early paper you mention was wrong, I can ignore it’s conclusions, but I can’t simply ignore conclusions of other papers because they happen to be similar.

        I’ve seen the data myself. And the data can’t change. It was taken back then and that’s it. They can’t have new data. There WAS no more taken. It’s not like there were lots of CO2 detectors back in 1880 and that Callendar missed them.

        I invite you to look up the papers on Google Scholar and then read them. They aren’t that long, nor are they incomprehensible. You will see that Callendar left out all the data that didn’t fit the conclusion he came to. There were a LOT of them that he left out. Slocum’s work looks at the same data, and Slocum concludes that Callendar had no justification for excluding the data he left out. In any discipline that is called cherry picking: what remains is a biased data set.

        Also at http://www.warwickhughes.com/icecore/, in Figure 2, Dr Zbigniew Jaworowski graphically shows the cherry picking of Callendar.

        Steve Garcia

      • Brandon –

        For blockquotes just put <blockquote> before and </blockquote> after. What is between them will be blockquoted. The after version simply has the “/” before the “b”. Just make sure of your spelling of “blockquote”.

        Steve Garcia

      • Brandon – Hahaha – I screwed THAT up!…LOL

        Crap! I inadvertently USED <blockquote> and its ending counterpart. Dumb, dumb, dumb…

        The bracketed one AFTER your passage is the same as <blockquote>, but inside the brackets is “/blockquote”, not “blockquote”. Check your spelling!

        Steve Garcia

      • Brandon Shollenberger

        feet2thefire, you’re wrong when you say:

        I’ve seen the data myself. And the data can’t change. It was taken back then and that’s it. They can’t have new data. There WAS no more taken. It’s not like there were lots of CO2 detectors back in 1880 and that Callendar missed them.

        Ice cores provide records of atmospheric gases. Many have been drilled since the mid 1900s. This gives the new data you say can’t exist.

        Also, I haven’t read those papers so I don’t know what periods their measurements cover, but it’s worth remembering CO2 levels were rising well before 1880. It’s possible the “correct” value given the data set used in them was higher than 290 because the atmospheric levels had risen above 290 by that time.

      • Brandon – Point taken about the ice cores. Jaworowski takes those to task, too, and he has dealt with many, MANY of those. He argues that the assumptions that the gases in any layer are pristine are simply wrong, and he says why. But, yes, that data can be added. But don’t forget that Antarctica and Greenland are not very good representations of the rest of the world, especially Antarctica.

        CO2 levels rising prior to 1880 is true, but probably in lock step with the massive aerosols, so any CO2-vs-temp correlations have that BIG complicating factor mixed in, driving temps down, if I am not mistaken.

        Most of Callendar’s data was in Germany, which is significant. In the early 20th century they had 10,000 data points (several times what other data Callendar had for the 19th century) – and the average of them was 438! In the 19th century Germany’s still substantial data showed a level of 400. I have NO info on what environmental effect those levels produced.

        Also, I’d be interested if more recent papers have at all used Callendar’s data, and if so if they used his cherry-picked set or all of it. It can’t be ignored. Can you point me to any papers, to save me time searching for them?

        Steve Garcia

      • Brandon Shollenberger

        feet2thefire:

        Jaworowski takes those to task, too, and he has dealt with many, MANY of those. He argues that the assumptions that the gases in any layer are pristine are simply wrong, and he says why. But, yes, that data can be added. But don’t forget that Antarctica and Greenland are not very good representations of the rest of the world, especially Antarctica.

        I’ve seen similar arguments before, but they are things the people drilling the cores take into consideration. I can’t say with certainty those arguments are wrong, but I don’t have any confidence in them as is. As for how representative cores may be, CO2 is a well mixed gas in the atmosphere. As long as there aren’t any sinks/sources influencing the area a sample is taken in, it should be fine. I believe that is the case for the ice cores used.

        CO2 levels rising prior to 1880 is true, but probably in lock step with the massive aerosols, so any CO2-vs-temp correlations have that BIG complicating factor mixed in, driving temps down, if I am not mistaken.

        I wasn’t looking at any relationship between temperature and CO2 there. I was just pointing out CO2 levels in the 1880s would not be expected to be as low as in preindustrial times. I have no idea if you’re right about aerosols in that period, but it doesn’t impact my point.

        Most of Callendar’s data was in Germany, which is significant. In the early 20th century they had 10,000 data points (several times what other data Callendar had for the 19th century) – and the average of them was 438! In the 19th century Germany’s still substantial data showed a level of 400. I have NO info on what environmental effect those levels produced.

        I don’t know what factors would be involved in samples taken from Germany, but I’m positive it wouldn’t be representative of the globe as a whole. There is far too much vegetation and urbanization there (with no ocean winds to remove the impact) to get pristine samples.

        Also, I’d be interested if more recent papers have at all used Callendar’s data, and if so if they used his cherry-picked set or all of it. It can’t be ignored. Can you point me to any papers, to save me time searching for them?

        I know the work underlying the major CO2 records doesn’t use that data. I’m not sure what other papers might do, but I don’t think it matters for the point we’re discussing. You can find the major CO2 measurements used here. I believe the most important of those for historical CO2 records is the Etheridge data set, primarily relying on the 1988 paper.

      • “Lindzen said what I quoted; it’s REAL simple. He also said what you quoted…real simple as well. If you don’t understand climate change issues, you should not comment. I apologized for the comment about you not being allowed to post. I retract that apology.”

        How arrogant. Just admit that you misinterpreted Lindzen and that you claimed he (or the commenters here) did not understand GW because of your gross misinterpretation of what he said.

        Also, you do not decide whom free speech applies to.

        “I’m thrilled you’re posting at the judithcurry.com for idiots only site. Curry did not participate in the analysis of the BEST data and was apparently unable to contribute in any meaningful way. She’s not a significant player in the field of climate change. But she is qualified to start a Web site/blog to mislead the hopelessly naive wrt climate change.”

        This part of your comment just reinforces the fact that too many of the pro-AGW scientists are arrogant intellectual thugs who will never admit they were wrong on something (because you are afraid that you will lose your ‘authority’, or what’s left of it).

        You are not helping your cause with that attitude.

      • Pierre is right that there has not been a doubling of CO2 equivalent since 1750, only about 76% of one. The way it is worded, though, Lindzen may have included water vapor, which would put the equivalent forcing of all greenhouse gases at a doubling. Kind of sneaky, but possible.
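        Pierre’s figure is easy to sanity-check with the usual logarithmic forcing approximation. A back-of-envelope sketch (the 5.35·ln(C/C0) formula and the roughly 280 and 390 ppm concentrations are standard values assumed here, not numbers taken from the slides):

```python
import math

# Radiative forcing from CO2 is approximately logarithmic in concentration:
# F = 5.35 * ln(C / C0) W/m^2, so one full doubling is 5.35 * ln(2).
F_2x = 5.35 * math.log(2.0)              # ~3.7 W/m^2 per CO2 doubling

# Assumed concentrations: ~280 ppm preindustrial, ~390 ppm circa 2012.
F_co2 = 5.35 * math.log(390.0 / 280.0)   # forcing from CO2 alone since 1750

print(round(F_co2 / F_2x, 2))            # ~0.48 of a doubling from CO2 alone
```

        CO2 by itself comes to roughly half a doubling; adding the other well-mixed greenhouse gases (very roughly another 1 W/m^2) is what pushes the CO2-equivalent total toward the ~76% Pierre cites.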

  34. Shaminism’s 1st Law, broadly translated by Kim somewhere, (guilt and maidens.)
    Hey, man(n), catastrophe’s imminent, fire,famine flood, flux, not to mention pestilence, and you’re to blame!
    But we can save you.

    • Beth, I overlooked this bit of wisdom.
      It is fantastic.
      +10 to kim for writing it, and +10 to you for catching it.

  35. Lindzen makes the point that recent human activity has actually changed the average temperature of the planet by close to 1 degree C.

    Every time I read that fact it really takes me aback that we can actually change the climate of a planet this size just by tossing molecules into the air. Wonders never cease to amaze.

    • Brandon Shollenberger

      Actually, he doesn’t. He says there has been close to one degree of warming, but he doesn’t say it was caused by human activity. This is made clear by comments like (emphasis mine):

      If one assumes all warming over the past century is due to anthropogenic greenhouse forcing…

      Just saying.

      • Sorry, Lindzen said this in the first paragraph:

        “It is not about whether the increase in CO2, by itself, will lead to some warming: it should.”

        Then his last line in the presentation is:

        “I avoid making forecasts for tenths of a degree change in globally averaged temperature anomaly”

        indicating a lower bound below which he doesn’t want to go, thus protecting his intellectual honesty. So he must have some value he believes in, with some error bars attached to that number.

        I haven’t made temperature projections myself because I am still pulling together the pieces of the puzzle, but quite obviously people that have thought about this a long time, like Lindzen, think that humans are capable of changing the climate.

  36. Fred writes: “The use of news articles and talks generally provokes a great deal of arguing, but i believe more actual understanding would emerge if we started with published articles or other legitimate sources of data such as material presented at meetings, and occasionally, Internet content from individuals not involved in partisan controversy. Dozens of potential starting points are published every week, so there’s no dearth of material for serious discussion, if serious discussion is a goal here in preference to argumentation.”

    Fred, I’m trying hard, honest, but what does this even mean in a real-world sense? You argue that debates aren’t worthwhile because the non-scientists in the audience aren’t equipped to judge who has the more persuasive arguments. And talks like the one Lindzen gave are no good because there’s no one there who can point out the speaker’s errors. We could fix that problem, it seems to me, with debates, but then you’ve already ruled those out.

    So now you’re suggesting some sort of meetings in which “serious discussion” could take place. Presumably this serious discussion would be between the scientists. But that brings us back to the same problem: warmists will not even get into the same room with skeptics. (You still won’t tell me why this is so, by the way.) So who would be at these meetings of yours besides wall-to-wall warmists? And would these meetings be open to the public? I’m guessing no, because as you’ve stated several times, the public is unable to comprehend what’s being talked about…

    • I meant serious discussion on this blog. My recommendation was to start with some actual data source rather than a news article or a talk to a political entity. It could be a journal article, a meeting report, a Web article outside the partisan wrangling (e.g., from Isaac Held’s blog), and then what follows would be what we ordinarily do here, except it would start on a sounder basis.

      • Steve Fitzpatrick

        Fred,
        “a Web article outside the partisan wrangling (e.g., from Isaac Held’s blog),”
        I suspect that you will find considerable disagreement on what constitutes “outside the partisan wrangling”; some would argue that a fair amount of what is published in the field is nothing more than a continuation of partisan wrangling. I do agree that Lindzen’s talk covers too many subjects in too little detail to be discussed in a technical blog thread. Still, I agree with Judith that Lindzen makes a couple of fair points, specifically, that the real disagreement is over feedbacks and net climate sensitivity, not about the basic physics. The repeated “98% of scientists agree” argument grows tiresome, even while I am one of the 98%.
        You noted above that there is some knowledge of aerosols. Well, perhaps, but limited. It is also true that different climate models do use substantially different levels of assumed aerosol effects, and that those assumed aerosol effects are inversely related to each model’s diagnosed sensitivity. So I think Lindzen is correct that climate models use aerosols as a fudge to more or less fit historical data.
        But Lindzen’s most important point related to models is that he sees models as having taken on an inappropriate role in climate science, with the focus being on ‘validation’ rather than ‘testing against data’. I would be a bit more specific than Lindzen: any real validation of a model involves making accurate predictions about the future, and for a significant period of time. By this measure, they appear to not be doing so well, and indeed, to be significantly over-predicting the temperature trajectory.

      • Markus Fitzhenry.

        “By this measure, they appear to not be doing so well, and indeed, to be significantly over-predicting the temperature trajectory.”

        What a quaint way of saying that the anthropogenic forcing of climate is over-predicting the temperature trajectory.

        Or did they just include too much Sun, or not enough clouds?

      • Honestly not trying to play up to our host, but, something like this?

        http://www.sciencedaily.com/releases/2012/02/120227111052.htm

      • That takes away one of the big skeptic canards about recent snowy winters being a sign of no climate change. Judith should do a post on this.

      • Markus Fitzhenry

        Jim D | February 28, 2012 at 1:54 am |
        ‘That takes away one of the big skeptic canards about recent snowy winters being a sign of no climate change. Judith should do a post on this.’

        Except that Antarctic sea ice has been increasing whilst Australia has had its coldest summer in decades.

      • I dunno Fred, I like picking out the technicalities from the bigger picture stuff and then getting into the details downthread.

      • Jim D, only in Orwellian language, spoken by warmists. Snowy winters are sign of climate change (cooling).

      • Edim,

        On the surface, I don’t think the hypothesis that melting Arctic ice pack due to warming could be causing an increase in NH snowfall is that far-fetched. I recall seeing comments discussing how this is one of the mechanisms by which the climate readjusts: more water vapor in the NH leading to increased snowfall, ultimately leading to increasing ice pack and cooling temperatures.

        timg56, I agree – it’s not that far-fetched. There’s something to it. Earth is very old and there have been many global warmings and coolings. Every warming so far was followed by cooling and vice versa. No exception.

    • Pokerguy, yet mosh and McIntyre gladly walk into the den of denialists for the Heartland Institute conflags. And Scott Denning also if memory serves me right. So there are some who do not fear open debate/discussion.

      • Nor does Lindzen. In 2007 he and Michael Crichton (yes, that Michael Crichton) were on the skeptical side of a debate at MIT. View it at http://tiny.cc/3ncsn – Part 1 of 10 (about 90+ minutes altogether).

        The audience was polled before and after, so as to score the debate. The skeptics picked up 35%, if I recall. The skeptics kept talking about the specifics of the science, while the pro-AGW side kept referencing authority. The latter was not a winning strategy.

        Warmists might want to watch it – to see what not to do in a real live, fair debate, when the other side gets equal time.

        Steve Garcia

  37. I hope Lindzen got a guffaw or two when he showed the NASA/GISS data manipulation that yielded an additional 0.14 Kelvin/century:

    “We may not be able to predict the future, but in climate ‘science,’ we also can’t predict the past.”

  38. Here’s the video of the speech in two parts:

  39. Stephen Pruett

    Unhelpful comments, Pierre. Show us the back of the envelope, please. List a few of the false premises and tell us why you think they are false.

    As Fred noted, I am sure the issue of models is complicated, but two observations are worth noting. The mean models of the IPCC are diverging from observations, not converging.

    However, the most effective slide, in my opinion, was the one showing the actual values used to calculate the average global anomaly. The average anomaly, plotted on a scale not designed to magnify differences, is revealed for what it actually is: noise around the baseline that, in any other field of research, would be ignored in the face of variability in the data many times greater than the anomaly values themselves.

  40. Observations:

    Lindzen bludgeons his audience with 58 slides, more than a non-expert audience can reasonably be expected to take in, which could be considered an attempt to project an image of authority while telling the audience not to listen to appeals to authority.

    At #2, he acknowledges that 2x CO2, in isolation, will cause around a 1 K temp increase. At #17, he claims that there is no causal link between temperature anomalies and anthropogenic forcings. At #18, he goes back to saying that it is trivially true that man’s activities are contributing to warming. (Brian H’s explanation is silly. For instance, Hadley circulation is a global pattern that influences regional changes; and Hadley cells expand in warmer climates and shrink in cooler ones.)

    At #3, he makes the mistake of assuming that oceans warm, ice melts, and plant albedo changes, all instantaneously in coming to the estimate of less than 1 K per doubling.

    At #4, “..subject to great uncertainty.”
    Well, great uncertainty within confidence intervals. Never mind that Lindzen’s estimates are outside of those intervals. For instance, for Lindzen’s low climate sensitivity to be correct, negative feedbacks would have to be nearly as large as the positive feedbacks. The large climate swings in the majority of paleoclimate studies indicate this is not the case.

    At #6, “Science is never incontrovertible.”
    True, but it would help if there was some reference to what was claimed to be incontrovertible, and who claimed it. In the meantime, I’ll assume that Planck, Tyndall, et al, and the host of paleoclimate studies haven’t been proven completely wrong yet.

    #20, an effective argument?
    It trots out an analogy which doesn’t even apply and finds flaw with it. That’s only effective if you are predisposed to believe the conclusion.

    #28 “Our present approach of dealing with climate as completely specified by a single number, globally averaged surface temperature anomaly, that is forced by another single number, atmospheric CO2 levels, for example, clearly limits real understanding; ”

    Uh, the conditions of the premise would limit understanding if they were true, but they are not. This asserts that others are making claims which they are not actually making. I mean, how hard is it to open up an IPCC report and look at the table of positive and negative forcings/feedbacks? Globally averaged surface air temperature anomaly gets a lot of attention, but everyone with knowledge of heat capacity knows there is more going on. If anything, the typical climate scientist makes the mistake of assuming the audience knows more than they actually do; they tend to assume knowledge is common that isn’t.

    “so does the replacement of theory by model simulation.”
    Except the models are based on theory, and in fact are the only way of testing whether the attribution of effects is (approximately) correct in theory. Any model set of parameters (strengths of effect attributions) which does not hindcast well can be rejected. To my knowledge, most of the ones with a low climate sensitivity have been.

    Ugh, Lindzen has been successful in wearing me out.
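    The feedback arithmetic behind the sensitivity point at #4 can be made concrete. A minimal sketch, assuming the standard linear feedback relation dT = dT0 / (1 - f) (my formulation, not anything from the slides), with dT0 the ~1 K no-feedback response and f the net feedback factor:

```python
def sensitivity(dT0, f):
    """Equilibrium warming per CO2 doubling given net feedback factor f."""
    return dT0 / (1.0 - f)

dT0 = 1.0                                 # K, the no-feedback response
print(round(sensitivity(dT0, 0.67), 2))   # strong net positive feedback: ~3 K
print(round(sensitivity(dT0, -0.5), 2))   # net negative feedback: ~0.67 K
```

    A sensitivity below 1 K requires f < 0, i.e. negative feedbacks outweighing the positive ones, which is the crux of the paleoclimate objection above.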

      • Yeah, yeah, modus ponens, modus tollens, and all that.
        As I said, the argument does not apply in the current context.

      • Assertion: “the argument does not apply in the current context.” I don’t know what you think the “argument” or “current context” is, so I will not comment further here.

    • “For instance, for Lindzen’s low climate sensitivity to be correct, negative feedbacks would have to be nearly as large as positive feedbacks. The large climate swings in the majority of paleoclimate studies indicate this is not the case.”

      That is only true if we know all the parameters. How sure are we about that? I mean, God knows what we’ve been missing. And another thing: is climate sensitivity in an ice age the same as in between ice ages?

      • The climate swings in the past 400,000 years show a relationship between temperature and CO2 levels where CO2 lags. The theory is that Milankovitch factors are not strong enough to account for the swings. They initiate the swings, but it is the positive feedback from CO2 that carries them through. However, it seems that we have been comparing the Milankovitch factors to the wrong thing. We should not be comparing them to temperature, but rather the rate of change of temperature:

        http://earthweb.ess.washington.edu/roe/Publications/MilanDefense_GRL.pdf

        That is not to say that CO2 has no effect, but it isn’t necessary to explain those swings and probably plays a weak role compared to the original Milankovitch orbital factors that initiated the swing: eccentricity, obliquity, and climatic precession.
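        The rate-of-change point is easy to illustrate with synthetic data. A minimal sketch (purely illustrative, not the paper’s actual analysis): if ice volume responds to a forcing through its rate of change, the forcing lines up with dV/dt at zero lag but shows almost no correlation with the level of V itself.

```python
import numpy as np

# If dV/dt = -F(t), then F is perfectly antiphased with dV/dt but is
# ~90 degrees out of phase with V, hiding the link from a level-vs-level
# comparison.
t = np.linspace(0, 4 * np.pi, 4000)
F = np.sin(t)                # stand-in for summertime insolation
V = np.cos(t)                # then dV/dt = -sin(t) = -F exactly
dVdt = np.gradient(V, t)

print(round(np.corrcoef(F, dVdt)[0, 1], 3))   # ~ -1.0: zero-lag, antiphased
print(round(np.corrcoef(F, V)[0, 1], 3))      # ~ 0.0: no apparent relation
```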

      • John, I thought Roe’s paper argued that the Milankovitch forcings should be compared with the rate of change in ice thickness, which in itself is a proxy for the specific temperature.

      • John Kosowski

        Yes, rather than focusing on the absolute global ice volume, consider the time rate of change of global ice volume.

      • John Kosowski

        “Basic physical arguments are used to show that, rather than focusing on the absolute global ice volume, it is much more informative to consider the time rate of change of global ice volume. This simple, and dynamically-logical change in perspective is used to show that the available records support a direct, zero-lag, antiphased relationship between the rate of change of global ice volume and summertime insolation in the northern high latitudes. Furthermore, variations in atmospheric CO2 appear to lag the rate of change of global ice volume. This implies only a secondary role for CO2 — a weaker radiative forcing on the ice sheets than summertime insolation — in driving changes in global ice volume.”

        http://earthweb.ess.washington.edu/roe/Publications/MilanDefense_GRL.pdf

      • AJ,

        Here is Lindzen speaking on this:

      • @ John Kosowski | March 1, 2012 at 6:35 pm |

        John, the amount of ice on the polar caps depends on the amount of raw material in the atmosphere for renewal / replenishment every season. Plus, on the Arctic ocean it depends on the speed of the currents, because that ice is sitting on top of salty water. Doesn’t depend on any PHONY GLOBAL warming. If you intend to distribute Warmist lies – you shouldn’t use the name of Milutin Milankovich!!!

        Average temp on the polar caps is double than in your deep freezer. Put in one freezer 15 bottles of water – in the other 3 bottles of water. Same coldness in both freezers, but tomorrow you will have different amounts of ice in each freezer. 2] They were lying about the glaciers on Himalayas will disappear in 35y…. Truth: as long as in S/E Asia, India they have lots of rice paddies – the glaciers will last forever – if they replace the rice paddies with eucalyptus forest, to produce dry heat + bushfires – most of the glaciers will disappear in 10 years. John, ice can evaporate without turning into liquid water first. 3] As Sahara builds strengths – permanent ice on the European Alps will keep disappearing. Not because of any phony GLOBAL warming – but Because THE WARMIST con artist ARE PRESENTING WATER VAPOR AS A BAD GAS FOR THE CLIMATE. All details, on my website. Milankovich was for the truth – contemporary Swindlers are for mountain of lies; don’t put them in the same sentence.

      • Stefan,
        Which warmist lies am I distributing? I thought I was supporting a case for CO2 not being that important.

      • @ John Kosowski | March 2, 2012 at 5:59 am

        John, ice on the polar caps has nothing to do with the GLOBAL temperature. Siberian permafrost is COLDER than Greenland. Permafrost no ice / Greenland 1km thick ice. 2] Milankovich points clearly that with the tilting of the planet – polar caps change location. If the polar cap is on water > different amount of ice, or no ice. If on land, set-up is different = lots of ice. NOT because of any phony GLOBAL warming! He proved that there are real reasons for getting warmer SOME PLACES, or colder some places. John, warmings are NEVER global!!! Milankovich theory points out that an indication of warming in some area wasn’t GLOBAL, but it was warmer there – because it got COLDER some other place / places.

        John, horizontal winds are cooling your french fries / VERTICAL winds are cooling the planet and regulating the overall temp to be ALWAYS the same. Those vertical winds can increase by 1000% in a split second, if necessary, and cool 10 times more. You know that stronger horizontal wind cools your pizza much faster. Well, vertical winds are created same as horizontal winds – when one area gets much warmer than other – those winds increase. Same with the vertical winds – when on the ground gets warmer than normal – they increase in size and speed. It’s the oxygen + nitrogen shrinking / expanding – NOTHING TO DO WITH CO2!!! As long as you are talking about warmer / colder PLANET; you are dignifying the misleading propaganda / rubbishing Milankovich.

        P.s. I have pointed to Vukcevic that his GLOBAL warmings were NOT global – he hates me for that; but I can see now in his comment he states: ”it was warmer in Europe” – that is cool / correct. Before that, Europe was the GLOBE for him. That’s why his GLOBAL temperature charts were as if the planet is getting electrocuted – but he is presenting himself as Skeptic…?!. The truth: because those vertical winds increase as soon as it gets warmer than normal – overall global temperature doesn’t get warmer than normal for more than few minutes; as I have given example in my book: if 100 atom bombs of 50 megatons explode simultaneously; the troposphere expands for 3 minutes with tremendous speed – after 3 minutes shrinks just as fast. The speed of oxygen + nitrogen expanding / shrinking demolishes concrete buildings, not the 30kg of plutonium, which can fit in a 2L bottle. Air wouldn’t have been shrinking after 3 minutes, if it wasn’t cooled. Million degrees warming cooled in 3 minutes; Swindlers talk of warmer planet by 1C…??? Because they are carbon /CH4 molesters.

        The ”nuclear mushroom” is visible, because expanding O+N take some dust up; but when it gets warmer by 2C, same thing happens – only no need troposphere to get that high up into the stratosphere. Another shocking truth for both camps: O+N don’t wait to warm up by 2C, then to start expanding. When localized warming happens by 0,0001C, O+N start expanding, instantly! Therefore, anything different is deviating from the truth and dignifying the evil cult. My formulas are correct. The laws of physics were same 15000y ago as today, rely on those laws, not on shonky science. They use the word ”sensitivity” BUT NEVER TAKE INTO ACCOUNT the sensitivity of O+N expanding / shrinking in change of temperature; because would have proven my formulas correct – all the rest is harmful drivel. John, stop being part of that drivel. The truth will win, sooner than you think. Cheers, have a nice day!!!

        John, I think the words “that the available records support a direct, zero-lag, antiphased relationship” are important for constraining the slow response. However, given the time resolution in the records, maybe not that important. The *apparent* e-folding time could be approaching zero; or, if the lag was 100 years, then the *apparent* e-folding time could be, well, 100 years.

  41. Markus Fitzhenry.

    ”If anything, the typical climate scientist makes the mistake of assuming the audience knows more than they actually do; they tend to assume knowledge is common that isn’t.”

    If anything, the typical climate scientist makes the mistake of assuming THEY know more than they actually do; they tend to assume knowledge is common that isn’t.

    • Next time, try giving an example to back up your point.

      Here is an example: harrywr2 seems to think that celestial mechanics has been made simple enough for the layman to understand. I’d bet he hasn’t even heard of the three-body problem. We could give him a test and see if he can calculate the next location of Mercury’s perihelion, if it is so simple that a layman could do it. Hint: it will not be in the same place the next time as it was the last. That’s about the extent of my knowledge of celestial mechanics. I would not pass the test myself, but at least I am aware that it is not a simple problem.

      Take calculating the base 1 K delta-T per doubling of CO2. How was that calculated? It involves integration across spectral absorption bands, and across the changes in atmospheric composition with altitude (because the changes in composition and density with altitude change the absorption bands), and a number of other things. That is the kind of knowledge that climate scientists take for granted. Knowing how it is calculated is one level; actually performing the calculation is a matter left to software, that is, it involves so many steps that doing it by hand would be too time consuming and error prone. Is your average person even aware that MODTRAN exists?

      When I say calculate, I mean in the most literal sense, by using calculus. I would hazard a guess that the average person doesn’t know calculus, but you can not do anything interesting in physics without it. Hence, I’m pretty sure that a lot of climate scientists take the ability to do calculus for granted.
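      For what it’s worth, the zero-feedback figure has a well-known back-of-envelope version that needs no MODTRAN, only the Stefan-Boltzmann law. A sketch (the ~3.7 W/m^2 forcing per doubling and the 255 K effective emission temperature are standard textbook values, not anything computed in this thread):

```python
# Linearize outgoing radiation OLR = sigma * T^4 around the effective
# emission temperature: dF = 4 * sigma * T^3 * dT, so dT = dF / (4*sigma*T^3).
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0      # Earth's effective emission temperature, K
F_2X = 3.7         # canonical forcing per CO2 doubling, W/m^2

planck_response = F_2X / (4.0 * SIGMA * T_EFF ** 3)
print(round(planck_response, 2))   # ~0.98 K: the "about 1 C" everyone agrees on
```

      The real line-by-line calculation replaces that single linearization with integrals over the absorption bands and the atmospheric profile, which is exactly the part left to software.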

      Let’s try a more concrete example, good old “hide the decline”. Jones, Mann, everyone in the field knew there was a decline in correlation between temperatures and tree-ring proxies in recent decades. So, to them, producing a graph by overlaying observed temperatures was not an act of deception (particularly since Mann explained what he had done in the paper). It was not common knowledge amongst the general population, and we see the results.

      • Brandon Shollenberger

        Chris G, you inadvertently offered an amusing example:

        Let’s try a more concrete example, good old “hide the decline”. Jones, Mann, everyone in the field knew there was a decline in correlation between temperatures and tree-ring proxies in recent decades. So, to them, producing a graph by overlaying observed temperatures was not an act of deception (particularly since Mann explained what he had done in the paper). It was not common knowledge amongst the general population, and we see the results.

        The “hide the decline” comment was not discussing a case of “overlaying observed temperatures.” What Phil Jones actually did was truncate a record and append a new record to the end of it. That isn’t overlaying anything. It’s straight-up replacing data.

        What Mann did was different (and done in an entirely different publication), and he did not explain it. It’s true he overlaid temperature, but he did more than just that. In addition to graphing observed temperatures, he also used them to smooth his reconstructed temperatures. Put simply, he appended the observed record to the end of his reconstructed record, smoothed the combined record, then truncated the resulting record at the point the original reconstructed record ended. In other words, he used the observed record to modify the reconstructed record without any explanation or justification.

        You may not be a climate scientist, but you do seem to have made the mistake of assuming you know more than you actually do.
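        The mechanics Brandon describes (append, smooth, truncate) are easy to demonstrate with toy numbers. A sketch, with entirely made-up series just to show how padding with a rising record pulls a smoothed endpoint upward:

```python
import numpy as np

def smooth(x, w=5):
    """Centered moving average (zero-padded at the edges)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

# Made-up stand-ins for a reconstruction that ends declining and a
# rising instrumental record; the values mean nothing in themselves.
recon = np.array([0.0, 0.1, 0.0, -0.1, -0.2, -0.3])
obs = np.array([0.2, 0.4, 0.6])

alone = smooth(recon)                                        # smooth as-is
padded = smooth(np.concatenate([recon, obs]))[: len(recon)]  # append, smooth, truncate

print(alone[-1] < padded[-1])   # True: the appended record pulled the endpoint up
```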

      • Brandon –
        The Divergence Problem (DP) behind and underlying “hide the decline” has existed since 1940. Schweingruber and Briffa did a paper about it in about 1990, and they were terribly worried about it then. No one has been able to explain it, from before that time till now. The tree-rings simply are not correlated to temps anymore. But they weren’t prior to 1880, either. So they have TWO DPs. Only from 1880-1940 do they correlate.

        As long as the DP exists, can anyone say what the temps of the past were? If tree-rings don’t work, one needs to ask whether that might affect other proxies, too – those which were keyed to tree-ring records. The DP now is so out of whack, of course they didn’t want to show it. If the tree-rings don’t correlate to NOW, when the globe is covered with thermometers, of what use are tree-rings? Complicating it is the fact that biologists use tree-rings as proxies for precipitation. It doesn’t take an Einstein to see the problem: if two factors affect tree-rings, how do they figure out which factor caused a given ring’s increased or decreased growth?

        They have a major problem, and they are no closer to understanding it now than 22 years ago. The question really is this: Can they trust tree-rings as a proxy for temps?

        Right now, they can’t – which is why they are pulling stunts like “hide the decline” from the public and policymakers.

        Steve Garcia

      • Chris G

        Your quote:

        “So, to them, producing a graph by overlaying observed temperatures was not an act of deception (particularly since Mann explained what he had done in the paper).”

        That is NOT what occurred. The increasing (and alarming) lack of correlation was simply HIDDEN, and then graphed instrumental temperatures were SUBSTITUTED from the point of divergence onward.

        This was done to prevent alert laymen (and even savvy scientists) from asking: “If the tree ring interpretations are now so wrong, why are they not considered to be wrong for the earlier periods without instrumental measurements ?”

        That precise question has not been answered to date, and as your mendacious post shows, it is continually avoided at every possible turn. This “hide the decline” trick even annoyed Richard Muller to the point where he demonstrated it very clearly in one of his lectures, commenting that he much resented being fooled that way, because until then he had trusted them (sorry, I don’t have the YouTube link anymore).

        Ho hum

      • IANL888 –
        “This was done to prevent alert laymen (and even savvy scientists) from asking: “If the tree ring interpretations are now so wrong, why are they not considered to be wrong for the earlier periods without instrumental measurements ?”

        Steve McIntyre found that hidden portion, though it took him months and months to discover it. His reaction was basically, “WTF?”

        Funny thing: when the rest of the world found out about THEM talking about it, the world’s reaction was the same: “WTF?”

        There is this BIG hole, right in the middle of the central scientists’ work on global warming, and the big hole is the Divergence Problem. They were between a rock and a hard place. Keith Briffa, the one of them who knew the most about it, wanted to keep it in. Michael Mann browbeat him into submission, and in the end Briffa buckled and Mann got his way. They used Mike’s “trick.” It is all in the emails.

        And no, folks, in this case “trick” did not mean just some innocuous method or shortcut. Mike’s trick was to hide the part of the curve that was a very inconvenient truth for the message they were committed to sending to the policymakers.

        Steve Garcia

      • HIDE THE DECLINE

        Video presentation by Prof Richard Muller
        Director of the Berkeley Earth Project

        http://bit.ly/eGzSuJ

        What about Climategate?

        The scientists have now been exonerated, acquitted, not guilty.

        They did get a wrist slap.

        They deceived the public, and they deceived other scientists, but they did nothing that was immoral, illegal, or anything like that.
        What did they do to deceive the public?

        This is in the report. This is in the review, not the charts.
        But these are the data as they published it on the cover of the World Meteorological Organization magazine:

        Plot 1. http://bit.ly/fmHLX3

        These are the data that many of my fellow scientists at Berkeley used.
        They say, hello, you know the public may not understand graphs, but I do.

        Look at this. Here is the temperature for the last thousand years, going all over the place. It is not actually temperature – they actually measured tree rings and corals, which are a proxy for temperature – and it goes all over the place.
        Look what happened recently: Zoom! That is clear and incontrovertible. The public may not understand this so I have to now lend my prestige to this. I am a professor of Physics and I will now go and tell people global warming is clear and incontrovertible because I have seen the actual data [Plot 1] and it is. Unfortunately, a lot of my colleagues have behaved in this way.

        In their paper, if you dig into it, they said they did something with the data from 1961 onwards. They removed it and replaced it with temperature data. So some of the people who read these papers asked to see the original raw data; they refused to send it to them. They used the Freedom of Information Act. The Freedom of Information Act officer, on the advice of the scientists, would not release the data.
        Then the data came out. They weren’t hacked, as a lot of people say. Most people who know this business believe they were leaked by one of the members of the team who was really upset with them.
        So I now can show you what the data that they refuse to release, the original data before they did anything. What they did was, and there is a quote. A quote came out on the emails, these leaked emails that said, let’s use Mike’s trick “Hide The Decline.” That is the word. Let us use Mike’s trick “Hide The Decline.” Mike is Michael Mann, he said, “trick” just means mathematical trick. That is all. Now, my response is, I am not worried about the word trick. I am worried about the decline. What do you mean hide the decline?

        Let me show you this. Now we have the data. Now it has been released. This is what it is.

        Plot 2. http://bit.ly/hmBIcs

        That is the raw data, as any Berkeley scientist would have published it. It would have said, okay, we have had the medieval warming, ice age, and now we have global warming. And there is some disagreement, but this disagreement is all over the place and that just shows the technique is not completely reliable.

        What they did is, they took the data from 1961 onwards, this peak, and erased it. What is the justification for erasing it? The fact that it went down. And we know the temperature is going up. Therefore, it was unreliable. Is this [pre-instrument data] unreliable? No. How do we know? We don’t know, but [hand waving]. This [post-1961 unreliability] is probably some human effect. The justification would not have survived peer review in any journal that I am willing to publish in. But they had it well hidden, and they erased that and replaced it with temperature going up.

        Let me show you how cleverly this was done. Get back to this plot [Plot 1]. There it is. They added the same temperature data to three different plots giving the illusion that there are three different sets going up. And they smoothed it, because temperature changes smoothly. If they had not smoothed it, you might have noticed, wait a minute, what is the change going right there? Why is it abruptly different? You don’t notice that because it is smooth. Smoothing is legitimate in their mind, because temperature change is not discontinuous.

        So that is what they did, and what is the result in my mind? Quite frankly, as a scientist, I now have a list of people whose paper I wouldn’t read any more. You are not allowed to do this in science. This is not up to our standards.

        I get infuriated with colleagues of mine who say, “well you know it is a human field, you make mistakes.” When I showed them this, they say, “no, that is not acceptable.”

        Now, here is part of the problem. The temperature I showed you before, this one

        Plot 3: http://bit.ly/ewYmxR

        Of the three groups I picked the one I trusted the most. Which group was this? Yeah, the group that hid the decline.

        Jim Hansen predicts things ahead of time. We have a group here that feels it is legitimate to hide things. This is why I am leading a study to redo all this in a wholly transparent way.
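Muller’s point about smoothing masking the splice can be checked with a toy example. Everything below is synthetic – two made-up series with a unit jump at the join – and only illustrates that a moving average shrinks the visible step where two records are spliced:

```python
import numpy as np

# Two synthetic series spliced together with an abrupt unit jump at the join.
a = np.zeros(50)          # stand-in for the proxy series
b = np.ones(50)           # stand-in for the appended instrumental series
spliced = np.concatenate([a, b])

raw_jump = abs(spliced[50] - spliced[49])  # step at the join before smoothing

# Smooth with a 21-point moving average; the step at the join shrinks from
# 1.0 to 1/21, so the splice no longer stands out visually.
window = 21
kernel = np.ones(window) / window
smooth = np.convolve(spliced, kernel, mode="same")
smooth_jump = abs(smooth[50] - smooth[49])

print(raw_jump, round(smooth_jump, 3))  # 1.0 0.048
```

The smoothed curve rises gradually across the join, which is exactly why an abrupt substitution is hard to spot by eye.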

      • Girma –

        Helluva comment on hide the decline. If you are going to go into all that data, be aware that pre-1880 there was a divergence problem, too.

        Also be aware that every few years the folks in the middle (Mann, Jones, etc.) go around and change PAST data downward – AGAIN. They don’t leave past data alone. How can past data change? How can it need adjustment after the first adjustment to homogenize it?

        Also do look at papers that identify a larger urban heat island effect than is currently accepted. The Chinese study (Wang??) from about 1990 that had Phil Jones as a co-author was found to rest on assumptions about there being no TOB changes that were simply made up. In that study UHI comes out to something like 0.013C. That study is the basis for many subsequent studies that accepted that there was no UHI to speak of. Then look at the paper on the abysmal siting of Met Stations, the one with Anthony Watts as a co-author. Many rudimentary studies (and some papers I’ve seen) of various locales show UHI exceeding 1.0C, and some specific locales show much more.

        I personally think it is bogus to globally apply one UHI adjustment value. To me, that is totally unscientific. Each station has its own level of UHI, and no universal adjustment can be valid. To me it is lazy science.

        Also, for background, look into the divergence problem. Briffa is well aware of it. Dendroclimatologists (as opposed to dendrochronologists) almost universally do not even know about the biological processes involved in tree-ring growth. That is a scandal, IMHO. With the pre-1880 DP and the post-1940 DP, there isn’t much basis left for tree-rings to be used as proxies for temps. Without correlation, there can be no proxy value in a variable.

        Look into tree-rings being used as proxies for precipitation (as biologists do) – and I challenge you or anyone to distinguish which part of any tree-ring growth comes from precipitation vs temperature. No one can do that. And without being able to do that, what tie-in can there possibly be, between temps and tree-rings?

        You are entering a world where you will not be able to trust the data as presented. What is needed is a study that FIRST openly presents the raw data, showing what it is like before adjustments – including gaps in the data, timewise and geographically. Then all adjustments need to be presented and vetted properly. Only then can the properly adjusted data be compiled and presented. This has never been done in this era of global warming claims.

        Those are my suggestions.

        I do not believe that it is possible to determine whether warming is happening or not, not with the corrupted datasets now existing. It might be happening and it might not be. I have no idea. It seems possible, but with crap for data, and people caught with their hands in the cookie jar heading it all and adjusting data before NASA and NOAA get it, why would I accept it as a valid claim, like I used to? **

        Steve Garcia

        ** Michael Mann’s obviously screwed-up Hockey Stick, with its missing LIA and MWP, changed my mind. Something was crooked in (England, across the North Sea from) Denmark.

      • The above link is broken

        Plot 1 => http://bit.ly/wQpl9k

      • Markus Fitzhenry

        Chris G | February 28, 2012 at 1:02 am:
        ‘Next time, try giving an example to back up your point.’

        I’m sorry, Chris. Here is an example:

        The “hide the decline” comment was not discussing a case of “overlaying observed temperatures.” What Phil Jones actually did was truncate a record and append a new record to the end of it. That isn’t overlaying anything. It’s straight-up replacing data.

        What Mann did was different (and done in an entirely different publication), and he did not explain it. It’s true he overlaid temperature, but he did more than just that. In addition to graphing observed temperatures, he also used them to smooth his reconstructed temperatures. Put simply, he appended the observed record to the end of his reconstructed record, smoothed the combined record, then truncated the resulting record at the point the original reconstructed record ended. In other words, he used the observed record to modify the reconstructed record without any explanation or justification.

        You may not be a climate scientist, but you do seem to have made the mistake of assuming you know more than you actually do.

        Haaaaaaaaa………………

      • Brandon Shollenberger

        feet2thefire:

        The Divergence Problem (DP) behind and underlying “hide the decline” has existed since 1940. Schweingruber and Briffa did a paper about it in about 1990, and they were terribly worried about it then. No one has been able to explain it, from before that time till now. The tree-rings simply are not correlated to temps anymore. But they weren’t prior to 1880, either. So they have TWO DPs. Only from 1880-1940 do they correlate.

        I’ve followed the topic for quite some time, so I know what you mean. However, I do need to add two things. First, it’s important to remember the “divergence problem” only affects some measurements. You should especially pay attention to the distinction between tree ring width and density. Second, remember the surface temperature record prior to 1880 is extremely unreliable. Failing to correlate to that would mean little. The main reason that divergence in the past matters isn’t that it fails to correlate to the temperature record, but rather that it looked weird compared to Mann’s hockey stick (everyone roll eyes).

        They have a major problem, and they are no closer to understanding it now than 22 years ago. The question really is this: Can they trust tree-rings as a proxy for temps?

        There’s an addition to what you discussed. While the tree rings used by Mann weren’t affected by the divergence problem, it’s extremely likely the “temperature signal” they show is actually the result of physical damage to the tree. When the tree recovers from the damage, there’s a spurt in tree ring growth in that spot.

        The problem isn’t that tree rings can’t track temperatures. The problem is the tree rings that supposedly do are always ones of questionable validity (presumably emphasized due to subconscious bias). If you don’t rely on those, you can get a general idea of temperature. It just happens to be a fairly imprecise one which doesn’t show much of value.

        What people should do is accept tree rings don’t give much useful information on global temperatures of the past ~1,000 years. Instead, scientists just keep finding new ways of emphasizing small amounts of data to give a signal that’s “right.”

      • Brandon –
        All good points. Nothing I can disagree with.

        Yes, I am aware of the difference between tree-ring width and density. Very few of the records used by BEST were density, BTW.

        Agreed that pre-1880, instruments were pretty incomplete.

        (BTW, just to make sure I say this: I totally respect the effort Michael Mann put into tackling such a huge study. He has reason to be proud. But he got some things wrong, and he needs to own up to those, without rancor. There is no Get Mike Mann club out there. But there is an increasingly large Disrespect Mike Mann Club. I think he is a big problem in the middle of all this, and that he is dishonest. The same thing that has happened to Peter Gleick should have happened to Mike Mann. He should have gone down in flames. And as much as I admire his effort, I condemn his fudging of data. His reviewers back in 1998 must have been in over their heads, or distracted, or just didn’t want to take the time to vet his paper properly.)

        If the DP only affects some measurements, why does Briffa – and Schweingruber and others – still have a problem with it? Last I heard they still throw their hands up in despair.

        I am also aware that they try to find tree stands that have a good and clear signal. I assume that means good clear rings, not all muddled and “hazy.” It is not unlike looking under the street lamp for the keys one dropped 100 feet away – one looks there because that is where the light is. I’ve read that is why they go to the edge of the Arctic, because, for example, the ones at the Equator don’t have winter. That makes sense, but it also makes sense that they need to understand the limits to what they can read into Yamal tree-rings and the climate there vs the rest of the world. I simply think it is a non-representative population of trees.

        Similarly, I think Antarctic and Greenland ice cores are the only places with a “lamp post,” so that is where they look – regardless of how representative the evidence being found is. For one thing, Antarctica and the Arctic ice see-saw in their growth of ice. So the Antarctic has this ‘wow’ in its data, to begin with.

        In essence, you are right about the imprecision of tree-rings – and I would be a tough sell for ice cores being any better. The precision claimed is a figment of someone’s imagination.

        Also, are you aware that some tree-ring folks don’t trust the tree-rings before about 1500? I can’t recall the reason, but it struck me as significant.

        Nuff fer now…

        Steve Garcia

      • Brandon Shollenberger

        feet2thefire:

        Yes, I am aware of the difference between tree-ring width and density. Very few of the records used by BEST were density, BTW.

        I have no idea what you mean here. BEST doesn’t use any tree ring records. All it uses is actual, measured temperatures.

        If the DP only affects some measurements, why does Briffa – and Schweingruber and others – still have a problem with it? Last I heard they still throw their hands up in despair.

        That’s simple. There are only a handful of tree ring series which show the “right” answer. People want these series to be useful. They don’t like the idea of giving up on any of them.

        That’s especially true since if they give up on each of these series with major validity issues, they’ll wind up with none that give the “right” answer. Who do you think wants to publish a study which says, “Tree ring data gives us no useful information on this issue”?

      • Brandon:
        “I have no idea what you mean here. BEST doesn’t use any tree ring records. All it uses is actual, measured temperatures.”

        About the tree-ring density vs width data sets, someone had a link to BEST’s data set listing, and about 3 were density and the rest were width. It was on BEST’s web site. It’s been a while since I went there, but that is what I saw. I was in a great discussion about tree-rings then, and wanted to see which were used how much. And they were there, listed as dendro such and such, all in one section.

        Ah! Here it is! http://tiny.cc/ia634

        Does that answer the question?

        Steve Garcia

      • Brandon –
        I should have included this in the last comment…

        [Steve Garcia] If the DP only affects some measurements, why does Briffa – and Schweingruber and others – still have a problem with it? Last I heard they still throw their hands up in despair.

        That’s simple. There are only a handful of tree ring series which show the “right” answer. People want these series to be useful. They don’t like the idea of giving up on any of them. That’s especially true since if they give up on each of these series with major validity issues, they’ll wind up with none that give the “right” answer. Who do you think wants to publish a study which says, “Tree ring data gives us no useful information on this issue”?

        Well, that is what I mean by having a problem with it.

        I simply cannot understand why the dendroclimatologists aren’t all over this, trying to understand the underlying biology. Since they’ve not found it, your assessment here is as likely to apply to them as it does to Briffa and Schweingruber and the other climatologists.

        BTW there was a good exchange in the CG2 emails between one dendroclimatologist and two others. He was critical because they didn’t understand the biology and were too lazy to look into it. He evidently would do this at conferences and really piss everyone off. The rebuttals in the emails all addressed dendroCHRONOLOGY, the dating of tree-rings, saying, WTF are you challenging that for? They never did address his actual criticism, about tying rings to climate – they just threw up a straw man and lambasted him for something he wasn’t even challenging. It was pretty pathetic. The main rebuttal guy was the one I saw elsewhere referred to as the “Father of Dendroclimatology.” He didn’t seem to know the difference between his own discipline and dendrochronology. If you are curious, I can find that for you.

        Steve Garcia

      • Brandon Shollenberger

        feet2thefire, I don’t know where you got that link from, but it’s not a link to the “BEST’s data set listing.” It’s not even on BEST’s website. It’s on Nature’s.

        Wait a second… I recognize that list. That’s the list of proxies used in Mann’s original 1998 paper, the one which created the hockey stick!

        I don’t know who gave you that link, but they really misled you.

      • Brandon –

        Wow, you’re right! I have NO idea who sent me there, but it was at the time of the BEST media blitz, and the page has this label on it:

        data-best

        And I accepted it as BEST’s data list.

        My bad, but I think you can see what fooled me! All this time I’ve been thinking that was BEST’s data. Scratch that index entry in my brain!

        Mea culpa mea culpa mea culpa…LOL

        Steve Garcia

      • Brandon Shollenberger

        It’s not a problem, feet2thefire. It was just very, very confusing to hear you talk about tree ring data with BEST. I think the weirdest part was that Mann’s original hockey stick was being discussed elsewhere in the topic, and suddenly you provided a list of data sets from it. My mind was really confused.

        But no worries. Everyone makes big mistakes at times. Some people are just fortunate enough to do it in less public arenas.

      • Brandon –
        Hahaha – I don’t think there was a big audience at that time of night. But thanks for pointing out to me that that was not BEST. I appreciate it.

        SG


  42. Slide 16:
    Compares global temperature time series for the periods 1895-1946 and 1957-2008. The trend and variability for the two periods are very similar (which is a strong argument against an unprecedented rate of change), and there is no clear indication that the second period is overall warmer than the first.

    Instead of the above periods, I prefer 1910-1940 vs 1970-2000 => http://bit.ly/eUXTX2

    • Girma –

      I am a skeptic, and I, too, noticed that Lindzen didn’t mention the difference in the two overall temperatures. That may sound like dissembling to you. To me it is not.

      There has been work done showing that there is a more or less straight-line increase starting about 1800, with the end of the LIA. Superimposed on that straight line is a curve quite similar to a sine curve, with a period of about 60 years – a 30-year ascending phase and a 30-year declining phase in each cycle. It is quite a simple thing, and it is quite eye-opening.

      I am sorry that I can’t provide links or papers on this. I failed to file them away and wouldn’t even know what search terms to use. Perhaps some other skeptic here can point to that work. It is not just one scientist, either.

      Pro-warming folks don’t give any credence to the “coming out of the LIA” meme, and I didn’t give it much stock, either, for a long time. YES, we are certainly coming out of an extended cold period, and what ELSE is the temperature going to do but go up? But I thought it was too simplistic. In recent months, though, I have come to change my mind. And what changed my mind was those studies with the smooth curves that, by damned, really look like each other, over and over, since about 1800.

      And those two curves Lindzen displayed are two of those 60-year periods. What does an inclined sine curve look like? Pretty dang close to what he showed.

      Is it correct? If someone doesn’t take it into consideration, I think they are not willing to look at all the evidence. And most of the skeptical side is mainly making just that argument: what about all this OTHER evidence, folks? Such evidence is out there. And when they have to acknowledge it, things are going to get ugly, I fear.

      Peter Gleick’s self-immolation may be the first of the pro-warming scientists to go postal. But he may not be the last.

      Steve Garcia
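The “straight line plus ~60-year cycle” description above can be sketched as an ordinary least-squares fit. The data below are synthetic and the period is fixed by assumption; this is not a reconstruction of any of the papers mentioned, only the shape of the model being described:

```python
import numpy as np

years = np.arange(1880, 2001)
period = 60.0  # assumed cycle length in years

# Synthetic "temperature": 0.05 C/decade trend plus a 0.1 C oscillation.
temps = 0.005 * (years - 1880) + 0.1 * np.sin(2 * np.pi * years / period)

# Design matrix: intercept, linear trend, and sine/cosine at the fixed period.
X = np.column_stack([
    np.ones(len(years)),
    years - 1880,
    np.sin(2 * np.pi * years / period),
    np.cos(2 * np.pi * years / period),
])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)

trend_per_decade = coef[1] * 10          # recovers the 0.05 C/decade trend
amplitude = np.hypot(coef[2], coef[3])   # recovers the 0.1 C amplitude
print(round(trend_per_decade, 3), round(amplitude, 3))  # 0.05 0.1
```

Fitting sine and cosine terms together lets the phase of the oscillation fall out of the regression, so only the period has to be assumed in advance.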

      • Steve

        Is the following graph that you are talking about?

        http://bit.ly/cO94in

      • That is one I’ve seen, yes. Thanks for finding that!

        But I am pretty sure one that I’ve seen has THREE full curves. And all of them are 60-year full cycles.

        Are they exact? No. No two or three compared cycles will be. But the + phase and – phases are pretty suggestive, IMHO.

        But what do I know?…LOL

        (FYI: I try to never use the words “proof”, “prove” or “proves”. I don’t even use them for Relativity or Newton’s work. I can only say “suggestive”. Some new evidence may come along tomorrow, throwing conclusions right out the door.)

        Steve Garcia

    • Girma,
      As a scientist, what do you think is driving the increasing energy in the system as shown in your graph?

      http://bit.ly/cO94in

      You do believe in the conservation of matter and energy, don’t you? Where is the additional energy coming from, what is the mechanism?

      Can we see your model hindcast to, say, 1750? With actual temperature on the same graph please.

      • Chris G


        As a scientist [I am an engineer], what do you think is driving the increasing energy in the system as shown in your graph?

        It could be the same reason for the warming during the Holocene maximum. It could be the same reason as for the warming during the Medieval Climatic Optimum.

        Or it could be due to human emission of CO2. However, that warming is only 0.06 deg C per decade, not the roughly 0.2 deg C per decade claimed by the IPCC. That is a difference of about a factor of 3. Not coincidentally, 3 is the IPCC’s climate sensitivity.


        Can we see your model hindcast to, say, 1750?

        There was a climate shift from cooling to warming at the end of the little ice age (1800s). Since then the globe has been on a long thaw => http://bit.ly/wzkYvi

        My model applies between the end of the little ice age and the next climate turning point (to cooling).

      • “…what do you think is driving the increasing energy in the system as shown in your graph?”

        My take is that most of the increase is driven by the increase in solar cycle frequency (i.e., the decrease in solar cycle length). This explanation will be put to the test in the next decade. There was a very significant increase in solar cycle length from SC22/23 to SC24, which predicts a significant temperature decrease.

        http://arxiv.org/pdf/1202.1954v1.pdf

  43. Some laudatory comments on Lindzen’s talk from unexpected quarters such as Simon Carr of the Independent.

    Simon Carr is the Independent’s parliamentary correspondent. He’s an entertaining writer but, as is clear from his piece on the subject, not particularly knowledgeable about science. He has clearly taken Lindzen’s arguments purely at face value. To an extent I guess you can’t blame him – he’s just a layman and Lindzen a noted physicist – but it would be nice if he had applied the same level of skepticism/cynicism he applies to the politicians he usually writes about, and done a little digging to see whether Lindzen’s claims actually check out.

    • @andrew

      You clearly feel that some of Lindzen’s claims don’t check out.

      Care to expand? Where does he go wrong?

        Latimer, what are you thinking? True believers don’t need facts, and they don’t need to explain their words or actions. That’s not their modus operandi.

      • Latimer,

        Chris Colose, Fed M, Jim D and Chris G have all provided examples above.

      • @andrew

        I’m not interested in their opinion of what is wrong with Lindzen’s slides. *You* raised the point, so I’m interested in *your* opinion.

        And to be perfectly honest, if Colose told me the time, I’d still want to check the speaking clock rather than trust him. An appeal to his ‘authority’ gets you minus brownie points.

  44. Seem to have sparked something.

    Jones was talking about Mann 1998; so, let’s not diverge yet.

    Well, I am looking at Mann et al. 1998, figure 5, and there are two distinct lines on the graph, “ACTUAL DATA” and “RECONSTRUCTED”. Granted, they are a little hard to make out – it would have been better in color – but there they are, two distinct lines. Plus 2-sigma ranges and a 50-year low-pass filter.

    Brandon,
    I’d be curious if you have evidence of “..he also used them to smooth his reconstructed temperatures.”

    Girma,
    I thought we covered that ground already, where someone quotes Muller when he argues that the hockey stick is wrong but ignores him when he publishes yet another hockey stick. You know, to go along with the proxy studies using pollen, coral, isotopes, ….

    So, anyway, given that the divergence problem was known prior to 1998, the graph in question is labeled as it is, and calculations are described in the Methods section, and supplementary information, what is it that Michael Mann hid?

    P.S. Heck no, I am not a scientist. I used to be pretty good at stats, I can handle basic physics, and I’m still pretty good at logic.

    • What is the Muller hockey stick? Do you have a reference? My gut tells me that you are referring to BEST, which is no hockey stick. A hockey stick consists of a blade and a handle. The problem is that the two cannot be combined unless you want to mislead, since they do not represent the same accuracy or quantity.

      • That is still not a hockey stick, if you had just read what I just said. Secondly, why do you compare land-only data (BEST) to GISS LOTI?

      • juakola and Chris G –

        juakola, you don’t see a hockey stick there?

        It is every bit as much of one as GISTEMP or HadCRUT3. Do you need tape on the blade and Wayne Gretzky’s autograph on it? BEST’s blade slope is steeper than most of the other curves, and the handle has just about the same slope as the others. Not seeing it, huh? Wow. You DO know that the blade on a hockey stick is not perpendicular to the handle, right?

        It sounds like you are denying the existence of the hockey stick altogether. Does that make you a denier, since you won’t accept the facts in front of you?

        And all Chris has to work with on BEST is the land only, so what are you asking him to do – beat Muller over the head until he makes one for you?

        Steve Garcia

      • Well, because it is the BEST data that is readily available. Do you think BEST land-ocean will be of a different shape, other than the hockey stick I just showed you?

        A hockey stick just refers to a shape with a “handle” of one slope and a “blade” with something steeper. I don’t know where you get your other requirements.

        It is every bit as much of one as GISTEMP or HadCRUT3. Do you need tape on the blade and Wayne Gretzky’s autograph on it? BEST’s blade slope is steeper than most of the other curves, and the handle has just about the same slope as the others. Wow. You DO know that the blade on a hockey stick is not perpendicular to the handle, right?

        It sounds like you are denying the existence of the hockey stick altogether. Does that make you a denier, since you won’t accept the facts in front of you?

        *sigh* here it goes again (the name calling etc..)

        Firstly, the BEST curve is steeper because it is land only. Chris G was comparing apples to oranges when he could have picked another land-only dataset from WFT.

        Secondly, what facts are you talking about? The fact that surface temperatures have increased during the last 150 years? No, I hardly deny that (nor does the majority of skeptics).

        What I meant is that this curve (BEST, CRUTEM, GISS, whatever you want to use) alone doesn’t make a hockey stick. To make it look like a hockey stick you need a historical context with a relatively flat handle to make these latest 150 years seem somewhat abrupt. But again – comparing proxies to modern measurements in the same graph is total BS since they do NOT represent the same accuracy or quantity. It is an apples-to-oranges comparison. GOT IT?

      • What I meant is that this curve (BEST, CRUTEM, GISS, whatever you want to use) alone doesn’t make a hockey stick. To make it look like a hockey stick you need a historical context with a relatively flat handle to make these latest 150 years seem somewhat abrupt.

        Who died and told you you made the rules up for hockey sticks?

        Why does the handle have to be “relatively flat”? Is that in the NHL rulebook or your own?

        It was skeptics who named it the Hockey Stick, and you get to tell us what is one and isn’t? Oy vey, aren’t we full of ourselves?

        And what is this 150-year rule? The Mann blade is since 1990. I can’t imagine what goes on in your head, to tell us that when we see a hockey stick shape, we aren’t correct because it doesn’t fit your rules. Hubris, thy name is warmist.

        Only 50 years before your 150 years, the world was coming out of the LIA, when nothing was flat – no matter what Mann tells you. Flat is in your imagination. Of course, your flatness calls the MWP flat, too. BEST only went back to 1800, but look at what BEST does right after 1800 – the end of the LIA. Oh, of course, you’ve already spoken from on high that BEST is land only, so that doesn’t mean squat – according to you, the Lord High Climate Science God. Tell that to everyone on your side who all jumped around and whooped and hollered when BEST ‘confirmed’ global warming.

        Having your cake and eating it, too – doesn’t that kind of stick in your craw? It does mine, when it is your Royal Highness’ cake. So, you get to claim BEST works for you when you want it to, but when it doesn’t agree with the point you’re making this hour, you switch sides. Ni-i–i-ce…

        I wish I was an all-knowing god like you. NOT.

        Steve Garcia

      • Brandon Shollenberger

        juakola, for what it’s worth, when I read that comment, I took it as feet2thefire being sarcastic. I thought he was being over-the-top in order to mock what he was saying, and in reality, he agrees with your position.

        Of course, my interpretation could be wrong.

      • Actually, Brandon, I was not agreeing with juakola. He said there was no hockey stick, and there is. And he replied and told me the parameters he accepted for hockey sticks, and I disagreed with him again. He threw out parameters that I’ve never heard of, that he seemed to make up himself.

        There are hockey stick shapes there, and his rules don’t change that.

        I was being sarcastic, yes, but his second response was basically unacceptable, and invited more of the same.

        If I offended him, I was also offended by him telling me a hockey stick is not there when it clearly is. Oh, well. I probably made an enemy. Not the first time. I come here hoping for intelligent exchanges and to learn. Sometimes that happens, and sometimes it goes awry.

        Steve Garcia

      • Brandon,
        Ok, if that is the case, then my apologies. English is not my native language (as you have likely noticed), so sarcasm sometimes goes over my head.

      • Brandon Shollenberger

        juakola, don’t worry about it. feet2thefire didn’t do anything to clearly indicate it was sarcasm, so I’m not even sure it was, and English is my native language. It’s perfectly understandable that you, or anyone else, would take his comment seriously.

      • Who died and told you you made the rules up for hockey sticks?

        Why does the handle have to be “relatively flat”? Is that in the NHL rulebook or your own?

        It was skeptics who named it the Hockey Stick, and you get to tell us what is one and isn’t? Oy vey, aren’t we full of ourselves?

        And what is this 150-year rule? The Mann blade is since 1990. I can’t imagine what goes on in your head, to tell us that when we see a hockey stick shape, we aren’t correct because it doesn’t fit your rules. Hubris, thy name is warmist.

        Now you are going way overboard. I didn’t call you a warmist. I didn’t say anything about a 150-year rule. You’re fighting strawmen. All I stated is that you need a historical context for whatever rise you see in the temperature series – is it just noise or is it something unprecedented?

        Only 50 years before your 150 years, the world was coming out of the LIA, when nothing was flat – no matter what Mann tells you. Flat is in your imagination. Of course, your flatness calls the MWP flat, too. BEST only went back to 1800, but look at what BEST does right after 1800 – the end of the LIA. Oh, of course, you’ve already spoken from on high that BEST is land only, so that doesn’t mean squat – according to you, the Lord High Climate Science God. Tell that to everyone on your side who all jumped around and whooped and hollered when BEST ‘confirmed’ global warming.

        Again, watch your tone. And please check what I stated: I never said “BEST doesn’t mean squat”. I only criticized Chris G’s selection of the datasets and the apples-to-oranges comparison.

        Having your cake and eating it, too – doesn’t that kind of stick in your craw? It does mine, when it is your Royal Highness’ cake. So, you get to claim BEST works for you when you want it to, but when it doesn’t agree with the point you’re making this hour, you switch sides. Ni-i–i-ce…

      • juakola –
        You now claim that you didn’t set up a 150-year rule

        I didn’t say anything about a 150-year rule. You’re fighting strawmen.

        Actually, at 7:27 am you had written:

        …What I meant is that this curve (BEST, CRUTEM, GISS, whatever you want to use) alone doesn’t make a hockey stick. To make it look like a hockey stick you need a historical context with a relatively flat handle to make these latest 150 years seem somewhat abrupt.

        I was addressing your claim about 150 years, and now you say you didn’t write it. But you did. And I don’t think that English not being your first language can be an excuse. You claimed that the last 150 years were “somewhat abrupt.” But the Hockey Stick is based on the period after 1990 being the blade and 1990 being the “abrupt” moment of change. 150 years ago nothing abrupt happened to the temps.

        The reason I was insulting this last time is that those two parameters – a relatively flat handle and the 150 years you came up with out of thin air – have no basis anywhere in fact. You have your facts wrong.

        If you meant 15 years, then I apologize for being so snarky. Your claims just sounded like you making stuff up and telling us we didn’t follow your rules.

        Apologies for being an ass.

        Steve Garcia

      • Doh I messed up the quotes again…

      • Brandon Shollenberger

        feet2thefire, it would seem I misinterpreted you. What you said seemed so ludicrous to me, sarcasm was the only answer that made sense. Instead, it seems your comments on this matter just make no sense to me. As juakola said:

        A hockey stick consists of a blade and a handle.

        You claim “he seemed to make [this] up himself,” but in reality, it is the common definition used for “hockey stick” for more than a decade. The reason the hockey stick garnered as much attention as it did is that it claimed modern temperatures were unprecedented for a thousand years. This claim, as well as the basic definition of a real-life hockey stick, requires both a shaft and a blade.

        I have no idea where you are getting your ideas, or your attitude from, but it’s silly. You’re ranting against juakola for stating a simple and obvious truth. It’s been accepted in global warming discussions for over a decade, and it’s been accepted in hockey discussions for far, far longer.

      • Brandon –

        Looking at BEST, there was a “relatively flat” handle, followed by a somewhat abrupt incline at one point. The “relatively flat” BEST handle was not horizontal, and he reiterated his claim of flatness, so I could only think that juakola was insisting on a horizontal flat handle. That requirement sounded ludicrous and made up by him, and I said so.

        The 150 year thing – I have NO idea what he was talking about there, but he twice stated it. Again it appeared he was making stuff up. With two made up parameters for the Hockey Stick, I thought that was too much. His statements struck me as arrogant.

        I just got done apologizing to him for being such an ass. And I meant it. I thought I’d gone overboard a bit, too.

        Steve Garcia

      • Having read feet2thefire’s comment, I think he totally misinterpreted my position and claims. He calls me a “warmist” and talks about “your side” (I am a skeptic and highly skeptical of Mannian statistics). Or perhaps that is some kind of sarcasm I do not understand…

      • Geez… that guy seems to get offended very easily. Just because of semantics I get flamed and ranted at. For me the concept of a hockey stick consists of (a) a proxy reconstruction and (b) a thermometer reconstruction, which are glued together. And what I was criticizing is the combination of (a) and (b). If you think the HS is just a shape, then so be it. No need to take it so personally; it is only semantics…

      • OK, juakola, it does come down initially to you misunderstanding some things. From the way you phrased them, it sounded like you were a warmist.

        Sorry about that.

        But the HS is NOT tied directly to the piecing of the instrument data onto the proxy data. THAT is the “hide the decline” issue, and is different.

        The HS issue has to do with the processing of the data. Steve McIntyre goes into that at http://tiny.cc/5fbjw – and most of it is over my head.

        So, you put two things together that don’t directly connect, although both were by the same CRU/Mann insider group, called the Hockey Team by skeptics.

        I could not help but misread what you wrote, because you were connecting those two different things in a way that didn’t make sense.

        Then at the end, you even said you didn’t say the 150-year thing, but you did. So I didn’t have a clue what you thought you were saying until right here. Again my apologies. I am usually not that snarky. Maybe dealing with some of the warmists here for too long into the night made me testy. No maybe about it – it did. Sorry.

        Truce?

        SteveG

      • The 150 years I was talking about referred to the reconstructed GMTA from thermometers, which is usually spliced on top of the proxy data to show the ‘inconvenient truth’. I definitely agree that it is indeed the last few decades (+smoothing) which matter the most when creating this visual ‘narrative’. At first I thought you were being snarky, but now I am quite certain that you didn’t quite understand what I was trying to say (and this might also be due to my bad English; apologies for that).

        Truce? While I don’t think I have really been fighting anyone, I accept.

    • Brandon Shollenberger

      Chris G:

      Brandon,
      I’d be curious if you have evidence of “..he also used them to smooth his reconstructed temperatures.”

      I’m always happy to provide evidence for what I say. If you’d like to learn more about just what that trick was, the best spot to start is this blog post. It discusses everything you might need to know. In the meantime, I need to correct you on a couple points. First:

      So, anyway, given that the divergence problem was known prior to 1998, the graph in question is labeled as it is, and calculations are described in the Methods section, and supplementary information, what is it that Michael Mann hid?

      The graph labels one line “RECONSTRUCTED,” yet that line uses “ACTUAL DATA” as padding in order to smooth the end of it. This fact, despite what you claim, was not “described in the Methods section, and supplementary information.” In fact, it wasn’t discussed in either (if you want to claim otherwise, please provide evidence to support your claim). Given that, what Michael Mann hid is exactly what I described to you before.

      But even more importantly, and I cannot stress this enough, the divergence problem wasn’t related to Mann’s graph! You’re conflating Mann’s paper with a totally different paper! Phil Jones was referring to what he did to a temperature reconstruction made by Keith Briffa when he commented about hiding the decline. He said he used the trick Mann used in his paper on Keith Briffa’s data (he actually did even more than Mann did, but that’s another issue altogether).

      So when you say, “Jones was talking about Mann 1998; so, let’s not diverge yet,” you’re completely missing the point of his e-mail.

      P.S. Heck no, I am not a scientist. I used to be pretty good at stats, I can handle basic physics, and I’m still pretty good at logic.

      That’s good to hear, though it shouldn’t take much skill at logic, stats or physics to understand this material.

      • “The graph labels one line “RECONSTRUCTED,” yet that line uses “ACTUAL DATA” as padding in order smooth the end of it. ”

        Nope, I’m still looking at two distinct lines, clearly labeled. Actual runs from 1905 to 1995, and reconstructed runs from 1400 to ~1980.

        “But even more importantly, and I cannot stress this enough, the divergence problem wasn’t related to Mann’s graph!”

        Really? If he had continued his reconstructed line there would have been no divergence? That would be an odd thing since it shows up in most other tree-ring proxies. The divergence problem is the decline in correlation between proxy and the temperature data. I guessed he had truncated the reconstructed data in order not to show the decline. If he had padded it, why does it stop before the actual data?

        In any case, most of the public associates an attempt to hide the decline with Mann, but if you are saying there would have been no decline, well, ok. I think you are wrong, but there is no point in arguing the matter.

        BTW, your “evidence” blog by McIntyre. I’d be skeptical of what I read there. It appears he has done things like run 10,000 “random” simulations, picked the top 100 that looked like hockey sticks, and then claimed to have re-created Mann’s hockey stick with “random” data. Funny thing that, you “randomly” pick data that looks like a hockey stick, run it through Mann’s algorithm, and it comes out looking a little like a hockey stick.

        http://deepclimate.org/2010/11/16/replication-and-due-diligence-wegman-style/

      • Chris:

        BTW, your “evidence” blog by McIntyre. I’d be skeptical of what I read there. It appears he has done things like run 10,000 “random” simulations, picked the top 100 that looked like hockey sticks, and then claimed to have re-created Mann’s hockey stick with “random” data. Funny thing that, you “randomly” pick data that looks like a hockey stick, run it through Mann’s algorithm, and it comes out looking a little like a hockey stick.

        Chris you are delusional about that.

        If you have the courage to read something not spoon-fed to you, go to this link: http://tiny.cc/510rf – it might be the one you refused to get cooties from. The title is “What other data series could be plugged in?”

        I warn you, it is actually Steve McIntyre telling the world what he did. I can’t follow it all, but you might be able to. Basically, no matter what data he fed in – even stock ticker data – it produced a hockey stick shape, and most of the results were indistinguishable from each other.

        To help you out, here is part of what he said:

        …The figure below shows 6 “reconstructions” using different combinations of a Tech Stock PC1 or the MBH98 North American PC1 in combination with the other proxies in the MBH98 AD1400 network or white noise. For the purposes of “getting” a high RE statistic – the sole arbiter of Mannian success, it didn’t “matter” what combination you used. Other than the North American PC1 – essentially the bristlecones, it didn’t matter whether you used the other proxies or white noise. And it didn’t matter whether you used Tech Stocks or bristlecones.

        The image of the six graphs is at http://tiny.cc/510rf

        He goes into more detail – WAY over my head.

        Before convicting Steve M of cheating, don’t you think you should be a good and honorable journalist and go see what the accused has to say for himself?

        Steve Garcia
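For readers who want to see the flavor of the argument rather than take either side's word for it, the "short-centering" effect McIntyre described can be sketched with a toy simulation. This is only an illustration under assumed settings (AR(1) persistence 0.9, 50 series of 600 steps, a 100-step "calibration" window, and a made-up "blade index"); it is not a replication of McIntyre's actual procedure or of Mann's algorithm:

```python
import numpy as np

def ar1(n, phi, rng):
    """Red-noise (AR(1)) series with persistence phi."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def pc1(X):
    """First principal-component time series of the column matrix X."""
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, 0] * S[0]

def blade_index(series, m):
    """Gap between the last-m mean and the earlier mean, in overall std units."""
    return abs(series[-m:].mean() - series[:-m].mean()) / series.std()

def trial(rng, n=600, k=50, m=100, phi=0.9):
    X = np.column_stack([ar1(n, phi, rng) for _ in range(k)])
    centered = X - X.mean(axis=0)        # conventional full-period centering
    short = X - X[-m:].mean(axis=0)      # centering on the last m steps only
    return blade_index(pc1(centered), m), blade_index(pc1(short), m)

rng = np.random.default_rng(0)
full_scores, short_scores = zip(*(trial(rng) for _ in range(30)))

# On average, short centering yields a more blade-like PC1 from pure noise.
print(f"full centering:  {np.mean(full_scores):.2f}")
print(f"short centering: {np.mean(short_scores):.2f}")
```

The point of the sketch: centering each noise series on only the final window leaves each series with an offset over the rest of its length, and the first principal component tends to align those offsets into a single step-like, blade-bearing shape.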

      • Brandon and Chris G –
        Brandon, I’d also mention to Chris G the amount and slope of the cut-off tree-ring curve. I mean, it wasn’t just a little bit of divergence they hid. It was steep down vs steep up. It is a real ‘shame on you, Mike, you cheating piece of crap’ thing. No wonder Briffa didn’t want to do it.

        Steve Garcia

      • Brandon Shollenberger

        Chris G, if you’re going to respond to me, I ask you read what I post. You say:

        Nope, I’m still looking at two distinct lines, clearly labeled. Actual runs from 1905 to 1995, and reconstructed runs from 1400 to ~1980.

        This makes no sense. I have never said anything which suggested there would not be two lines. Using data from one line to modify data in another line does not preclude two lines from being shown. You’re disagreeing with me by pointing out a completely irrelevant fact.

        Really? If he had continued his reconstructed line there would have been no divergence? That would be an odd thing since it shows up in most other tree-ring proxies.

        This makes no sense. Mann’s reconstructed record was not a “tree-ring proxy.” Do you actually know what his paper says?

        Regardless, Mann’s reconstructed temperatures did diverge from observed temperatures toward the end. That’s the entire reason behind his “trick.” It just didn’t have Keith Briffa’s tree ring series, the one where the “divergence problem” got its name. You cannot take me saying something doesn’t have “the divergence problem” as meaning it doesn’t diverge from another series.

        I guessed he had truncated the reconstructed data in order not to show the decline.

        You guessed wrong. Mann did not truncate the reconstructed data. I have no idea what your guess was based on, but I find it peculiar you had to guess even though you claim the “calculations are described in the Methods section, and supplementary information.” If what Mann did was described in his work, why are you having to guess at what he did?

        If he had padded it, why does it stop before the actual data?

        Because, as I told you originally:

        Put simply, he appended the observed record to the end of his reconstructed record, smoothed the combined record, then truncated the resulting record at the point the original reconstructed record ended. In other words, he used the observed record to modify the reconstructed record without any explanation or justification.

        You’re saying a lot of things you wouldn’t say if you took the time to read what is being discussed. I don’t understand that.

        As for who you trust, I’ll tell you what. Once you can accurately describe anything Mann did, I’ll discuss whatever you want from DeepClimate. Until then, how about we focus on resolving the issues at hand?
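The append-smooth-truncate sequence Brandon describes is mechanical enough to demonstrate on toy data. The series below and the simple moving-average filter are illustrative stand-ins only, not Mann's actual data or his actual smoothing filter:

```python
import numpy as np

def smooth(x, w):
    """Centered moving average of odd width w (zero-padded at the ends)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

# Toy stand-ins for the two records (illustrative only).
recon = np.linspace(0.0, 0.2, 60) + 0.05 * np.sin(np.arange(60))  # "reconstructed"
observed = np.linspace(0.2, 0.8, 20)                              # "actual data"

# The trick as described: append the observed record, smooth the combined
# record, then truncate the result back where the reconstruction ends.
padded = smooth(np.concatenate([recon, observed]), 11)[: recon.size]

# For contrast: smoothing the reconstruction on its own.
plain = smooth(recon, 11)

diff = np.abs(padded - plain)
print(diff[:50].max())   # interior: the two versions are identical
print(diff[-5:].max())   # endpoint: the observed record has leaked in
```

Away from the endpoint the two smoothed curves agree exactly; only the final few points, where the smoothing window reaches into the appended observed data, are pulled toward the observed values.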

    • Chris G,

      The graph you link to looks more like a race day profile from the Tour de France than a hockey stick.

  45. Judith Curry

    You asked for our thoughts: I enjoyed both Richard Lindzen’s presentation as well as your comments.

    I would agree with you that the presentation is persuasive and that the strength of part 2 was “that it relies on data and theory (rather than models)”. You also felt that part 2 was “overly simplistic”. (Well, after all, it was a presentation for the House of Commons – rather than a group of scientists or engineers – so it had to be kept simple.)

    He points out that “Models cannot be tested by comparing models with models. Attribution cannot be based on the ability or lack thereof of faulty models to simulate a small portion of the record. Models are simply not basic physics.” This is a very compelling argument against the validity of any model-derived projections into the distant future.

    Your four reasons why “Lindzen’s presentation is so persuasive to a public audience” are spot on IMO. I do not think that Fred, Pekka or anyone else could disagree with these reasons, even if they might not agree with Lindzen’s conclusions.

    I would add a fifth reason: in this presentation Lindzen does not come across as someone who is using his position of authority to “speak down” to a less qualified audience.

    While he specifically rejects the concept of “incontrovertibility” in science, he does not emphasize “uncertainty” at all: either of his view or of the opposing CAGW view – but, again, I believe this may have more to do with his audience than anything else (I’m told that politicians hate uncertainty – especially if it is coming from “experts”).

    Thanks for another interesting post.

    Max

  46. “Perhaps we should stop accepting the term, ‘skeptic.’ Skepticism implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the case over 20 years makes the case even less plausible as does the evidence from climategate and other instances of overt cheating.”

    I find Lindzen’s semantics a bit funny. I’ve seen an interview where the interviewer asked him if he finds the term ‘denier’ derogatory or inappropriate. His answer was that the term ‘denier’ works just fine for him and is quite accurate, because you cannot be skeptical about something that is totally implausible.

    In some ways, I admire his confidence (and sense of humour). On the other hand, I find him at least a tiny bit too overconfident.

  47. Slide 43
    Thus, the troposphere, which is a dynamically mixed layer, must warm as a whole (including the surface) while preserving its lapse rate.

    Alas, poor Richard appears to have succumbed to the conventional wisdom which confuses isoadiabaticity and equilibrium.

  48. video link – 2 parts

    http://climaterealists.com/index.php?id=9188

    The intro music (and Monckton) are really annoying.

  49. Chris G


    …what is it that Michael Mann hid?

    He found the proxies underestimate the recent warming. He hid this and replaced it with the instrumental data that shows the recent warming.

    If the proxies underestimate the recent warming, is it not possible they could underestimate the previous medieval warming? Don’t you think any scientist who sees a contradiction between the proxies and the instrumental data in the recent period should throw away the whole proxy data?

    • I already posted a comment which answers this. Yeah, we know, the divergence problem; no, the actual and reconstructed lines are distinct along their entire lengths.

      So, what is the mechanism driving the warming in your graph?
      And, how is that hindcast graph coming?

    • Girma and Chris G –

      Chris G …what is it that Michael Mann hid? He found the proxies underestimate the recent warming. He hid this and replaced it with the instrumental data that shows the recent warming.

      Girma, you aren’t correct on that. It wasn’t just an underestimate – the actual SIGN of the curve was wrong, and in a big way.

      If the proxies underestimate the recent warming, is it not possible they could underestimate the previous medieval warming? Don’t you think any scientist who sees a contradiction between the proxies and the instrumental data in the recent period should throw away the whole proxy data?

      Of course he should. Any correlation goes out the window. How they could deny this non-correlation is beyond belief. And then hiding it – what other term is there than scientific fraud? They KNEW it was not correlating, and they hid it.

      That was exactly why Climategate 1 was such a sensation and took the legs right out from under them. They lost their monopoly on the use of the public podium. Why? Even most journalists could see the cheating. It doesn’t take a rocket scientist to understand “use Mike’s trick” to “hide the decline”.

      NOTHING helped the skeptical cause more than Mike Mann himself. He convicted himself, and Jones and Briffa, and I think Osborne was there in that, too.

      Steve Garcia

  50. (My apologies for reposting an entire comment from an earlier thread here on a different topic, but the italic snafu has made it a lot harder to read.)

    Any “serious climate scientist” would be a fool to challenge the claims in the first few slides of Lindzen’s talk on quantitative grounds, however tempted they might be by his seemingly outrageous claims. “What?” they would cry, “The IPCC agreed to nothing remotely like those numbers.”

    Even if we suppose Lindzen has a clearer understanding of climate science than his colleagues, there is a more important reason not to challenge him on what he claims is the consensus of climate science. Lindzen’s slides are a minefield of gotchas (“by itself”, “equivalent CO2”, “increasing CO2 alone”), and you would look very stupid challenging him during the talk (if protocol permitted it) without having first surveyed the slides in advance to locate every gotcha that he’d get you on.

    But that wouldn’t do you any good anyway because you’d then find yourself mired in interminable arguments, to which climate science is more susceptible than other sciences. Lindzen could exploit that weakness of the field to the hilt if the need arose. You would also need to match his pitch-perfect written-on-stone-tablets delivery.

    The only reliable way to judge these slides therefore is to remove Lindzen from the picture, accept every statement protected with gotchas without attempting to disarm them, and focus on the logic of his arguments, which gotchas can’t protect. (I’m not a climate scientist, my career before I retired was in logic.)

    1. When Slide 4 is taken to the logical conclusion Lindzen seemingly wants the honourable members to draw, namely that if greenhouse gases continue on their current rise we can expect a further rise of only 0.8 C over the next 150 years, he’s simply using the same linear-trend argument that Girma and Arfur Bryant love trotting out, obfuscated to make it less obvious.

    The warming trend is in fact far from linear. Greenhouse gases are being added at an exponentially increasing rate, with emitted CO2 doubling every 30 years or faster (I estimate 28.6, YMMV). The warming trend is curving upwards.
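A doubling time follows from a constant fractional growth rate via ln(2)/ln(1+r). The 2.45 %/yr rate below is my assumed input chosen to reproduce the roughly 28.6-year doubling mentioned above, not necessarily the rate Pratt used:

```python
import math

# Doubling time for a quantity growing at a constant fractional rate r per year.
# r = 2.45%/yr is an assumed illustrative figure.
r = 0.0245
doubling_years = math.log(2) / math.log(1 + r)
print(f"{doubling_years:.1f} years")   # ~28.6 years
```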

    [Pekka made the same point above, without however making the connection with the tired old Girma-Bryant argument, which Lindzen is effectively merely repeating in obfuscated form.]

    2. Following a lot of political slides (appropriate in a House of Commons committee room), the next quantifiable statement is on Slide 11, “no warming since 1997.” Sound familiar? To imply as he does that this is not in dispute, based on one temperature plot, is to have been out of touch with the climate debate! The Berkeley Earth Surface Temperature data for example shows the land temperature since 1997 to have been rising in the same way it has been for decades!

    3. In slides 13-14, Figure 2 averages the deviations of Figure 1, while Figure 3 is Figure 2 “stretched to fill the graph.” Nothing wrong with that; I sometimes do it myself to make the deviations clearer. But then he says “Note that the range is now from about -0.6C to +0.3C.”

    “Now?” But that’s what the range was in Figure 2! Scaling hasn’t made the deviations any larger, contrary to what Lindzen seems to want you to believe.

    Nor should it, because it’s a theorem of statistics, applicable to many distributions and in particular normal distributions, that the average of n independent random variables each of standard deviation d has standard deviation d/sqrt(n). It’s hard to tell what point Lindzen wants to make here, if not to persuade his audience to ignore the factor of sqrt(n).
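The d/sqrt(n) theorem is easy to check numerically. A minimal sketch with assumed values n = 16 and d = 0.5:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, trials = 16, 0.5, 200_000

# Each row: n independent normal variables, each with standard deviation d.
samples = rng.normal(0.0, d, size=(trials, n))
means = samples.mean(axis=1)

# Standard deviation of the averages: close to d / sqrt(n) = 0.5 / 4 = 0.125.
print(means.std())
```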

    4. Slide 16 repeats another argument Girma and Arfur Bryant are fond of: that there is no essential difference in shape between the period 1895-1946 and 1957-2008. But that’s a completely bogus argument because it depends on detrending the second period (the graph on the left) by the man-made contribution of 0.4 C, after which you would expect the two shapes to be same given the shape of the 62-year-period AMO. (Note that he’s picked the two time periods to be separated by exactly 62 years! You’d think he’d have tried to be a little less blatant about it, but then who in a campaign to repeal a climate change act would notice such a thing?). In the case of the graphs on Slide 16 the temperature scale on the left has been decreased by 0.4 C. Yet the caption reads “Global average temperature and time scales are identical” (my italics).

    These are the methods used by magicians and certain reverse mortgage salesmen. One does not expect them from an MIT professor of climate science.

    As masters of deception, magicians fall into two categories, those who admit it’s all mirrors and sleight-of-hand, and those who insist the magic is real so as not to undermine the illusion. Reverse mortgage salesmen also fall into two categories, those who practice deception and those who don’t. Lindzen practices deception without admitting it, qualifying him for either profession.

    In business it’s not what you know but who you know. In the climate blogosphere it’s who you ask. It would be very interesting to ask McIntyre whether Lindzen’s statistics were sounder than Mann’s, and as a baseline calibration also the Campaign to Repeal the Climate Change Act to which Lindzen addressed his views.

    • Vaughan Pratt


      The warming trend is in fact far from linear.

      It is linear => http://bit.ly/wzkYvi (a LINEAR warming trend of 0.06 deg C per decade with an oscillation of 0.5 deg C every 30 years)

      To top it off, the globe is now cooling => http://bit.ly/nz6PFx

    • MattStat/MatthewRMarler

      Vaughan Pratt: The warming trend is in fact far from linear. Greenhouse gases are being added at an exponentially increasing rate, with emitted CO2 doubling every 30 years or faster (I estimate 28.6, YMMV). The warming trend is curving upwards.

      There isn’t any evidence that the warming trend is curving upwards. The increase since the LIA has been approximately a linear-plus-sinusoidal trend, and the three periods of near-linear increase have indistinguishable slopes. The epoch of highest CO2 concentration has a near-zero slope over 10-15 years; maybe the start of this non-increasing epoch has been cherry-picked, but certainly there is no evidence of “the warming trend curving upwards.”

      Or perhaps, as Fred Moolten might aver, I have merely missed it. Is there some evidence that the rate of warming has increased? Such analysis of rates of change of temperature as I have seen show rate of temperature change to be nearly independent of CO2 concentration.

      • There isn’t any evidence that the warming trend is curving upwards.

        Matt, let me offer my evidence that it is curving upwards, and you as a more competent statistician than me can tell me what you think of my evidence.

        Girma’s evidence that the warming trend is linear is to fit a trend line. The trend line is linear, therefore the trend is linear.

        My only objection to this argument is that every curve is linear when judged for linearity in this way. This doesn’t sound like a very useful test to me.

        One trend line will always give a straight line. However if you split the data into two halves and fit two separate trend lines, one to each half, now there are three possibilities instead of one. The slope of the earlier one is either less than, greater than, or equal to the later one. Now we have the information we need to judge curvature, which is respectively curving up, curving down, or linear as Girma claims. This is essentially Girma’s argument modified to yield curvature information.

        When we do this fit of two trend lines instead of one in order to see whether there is curvature, using 1931 as the midpoint between 1850 and 2012, we get this graph.
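        The two-trend-line test just described takes only a few lines to carry out. The series below is a synthetic, upward-curving stand-in, since the HADCRUT3VGL data itself is not bundled here:

        ```python
        import numpy as np

        def curvature_sign(years, temps):
            """Fit one trend line to each half of the series and compare slopes:
            a steeper later half suggests upward curvature, a shallower one downward."""
            mid = len(years) // 2
            early = np.polyfit(years[:mid], temps[:mid], 1)[0]  # earlier slope
            late = np.polyfit(years[mid:], temps[mid:], 1)[0]   # later slope
            return np.sign(late - early)  # +1 up, -1 down

        # Synthetic upward-curving series standing in for the real data
        yrs = np.arange(1850, 2013)
        temps = 1e-5 * (yrs - 1850) ** 2
        print(curvature_sign(yrs, temps))  # 1.0: curving upwards
        ```

        The single-trend-line approach collapses to the degenerate case of this test, which is why it can never report curvature.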

        I wouldn’t call this “no evidence”; it looks to me like it’s curving upwards when judged by this method. And I certainly don’t understand why anyone would prefer Girma’s one-trend-line way of judging whether the curvature is up or down to this two-trend-line way, unless they have prejudged it as linear and simply want to prove it is linear by using a linear trend line.

      • Vaughan Pratt

        I did not arbitrarily select the linear trend.
        The linear trend is the property of the global mean temperature (GMT).

        http://bit.ly/ApMD3d

        Why?

        Because:

        A straight line passes through almost all the GMT valleys.
        A straight line passes through almost all the GMT peaks.
        These two lines are almost parallel.
        These two lines happen to have the same slope as the trend for the data from 1880 to 2000.
        96% of the GMT data of the last 162 years lie between these two parallel lines.

        That is why the property of the global mean temperature is a long term warming trend of 0.05 deg C per decade with an oscillation of 0.5 deg C every 30 years. This is the single property of the GMT data.

        Your two trend lines include the changes in trends due to the multi decadal oscillation that must be excluded from trend calculations.

      • MattStat/MatthewRMarler

        Vaughan Pratt: When we do this fit of two trend lines instead of one in order to see whether there is curvature, using 1931 as the midpoint between 1850 and 2012, we get this graph.

        That’s cute. You’re joking, right?

        You should show that to Tamino — he likes all kinds of stuff.

      • That’s cute. You’re joking, right?

        Sometimes I am, Matt, but not in this case.

        Just to be clear, this is not how I determined the shape of the contribution of greenhouse gases in my AGU presentation in December, which first detrended by removing the AMO contribution, exactly as specified by Girma, “Your two trend lines include the changes in trends due to the multi decadal oscillation that must be excluded from trend calculations.” Had Girma done this with his method of proving linearity I would have done the same.

        However Girma merely fitted a linear trend line, and I was making the point that a tiny increase in sophistication (break one trend line into two) yielded more curvature information than Girma was providing. What is your objection to that?

      • @Girma Your two trend lines include the changes in trends due to the multi decadal oscillation that must be excluded from trend calculations.

        This is a very good point, Girma. The only problem I see with it is that in your own demonstration of linearity you didn’t exclude the multidecadal oscillations yourself.

        When they are not excluded your one-trend-line method gives exactly zero information about curvature, whereas mine proves curvature upwards.
        Your point seems to be that nothing can be inferred from my argument because I didn’t exclude the oscillations. It follows that nothing can be inferred from your argument either, since you didn’t exclude them.

        This then raises the very interesting question, what would happen to our respective arguments if we both exclude the multidecadal oscillations?

        Since the monthly data for HADCRUT3VGL drowns out the relevant information, I’ll use the annual data from column 14 for the sake of a clearer picture. Hadley Climatic Research Unit computes this as the average of the monthly data in columns 2-13. (Column 1 is the year.)

        When the multidecadal oscillation as estimated by least-squares fitting is subtracted from HADCRUT3VGL, this graph shows what is left.
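        The subtraction can be sketched as an ordinary least-squares fit of a line plus a fixed-period sinusoid. The ~60-year period and the synthetic series below are assumptions for illustration only; the real input would be the HADCRUT3VGL annual column:

        ```python
        import numpy as np

        # Synthetic stand-in: linear trend + 60-year oscillation + noise
        rng = np.random.default_rng(0)
        yrs = np.arange(1850, 2012, dtype=float)
        temps = (0.005 * (yrs - 1850)
                 + 0.25 * np.sin(2 * np.pi * (yrs - 1850) / 60)
                 + 0.05 * rng.standard_normal(yrs.size))

        P = 60.0  # assumed multidecadal period in years (an assumption, not from the thread)
        w = 2 * np.pi * (yrs - yrs[0]) / P
        A = np.column_stack([np.ones_like(yrs), yrs - yrs[0], np.sin(w), np.cos(w)])
        coef, *_ = np.linalg.lstsq(A, temps, rcond=None)

        oscillation = A[:, 2:] @ coef[2:]   # least-squares multidecadal component
        residual = temps - oscillation      # what is left after subtracting it
        ```

        Whatever trend remains in `residual` can then be judged for curvature without the oscillation confusing the picture.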

        Your ingenious tricks with lines everywhere will of course prove that this graph is following a linear trend, congratulations on that excellent illusion.

        But suppose we ask whether the graph is curving upwards or downwards when we don’t include those ingenious lines, but simply look at the graph with the naked eye.

        Since I can’t imagine for a second that Girma would say this graph curves upwards or downwards, I’ll have to ask others here. Do you see this graph as linear, or curving downwards, or curving upwards?

        One could imagine optometrists and psychologists using this as a test of human visual acuity.

      • Krakatau

      • MattStat/MatthewRMarler

        Vaughan Pratt: a tiny increase in sophistication (break one trend line into two) yielded more curvature information than Girma was providing. What is your objection to that?

        There is more than one way to be more sophisticated than Girma. One way is by adding a sinusoid to the linear trend, as others have done; a second is by considering the linear regression of the quantiles, as I have recommended but not carried out; a third is by considering autocorrelated residuals (as Tamino has done on his web page, confirming a single straight line without a sinusoid). By dividing the time series at its middle and fitting two lines (segmented linear regression), you have the “accelerated warming” occurring right before the post-WWII cooling, which is peculiar and is what almost no one intends by “accelerated warming”.

        What Girma has done is fit a straight line to the data and then a segmented regression to the residuals. Intellectually, I’d rank that as approximately as “sophisticated” as your approach, not less “sophisticated”.

        Other people have used more sophisticated algorithms to fit segmented regression lines to the temperature data.

        In fact, the data have been around for a long time, though of course the last year’s data have only been available for a year. All the curve-fitting is now post-hoc, so the only possible resolutions of which methods are “best” will occur in the next decades as we learn which models produced the worst predictions.

      • Intellectually, I’d rank that as approximately as “sophisticated” as your approach, not less “sophisticated”.

        Thanks, Matt, I wasn’t sure what you had in mind there. Yes, measured by sophistication, Girma’s and my methods of measuring curvature are essentially equally sophisticated. Measured by the number of bits of curvature information obtained, however, mine is infinitely better (1/0 = infinity). I only mentioned it in case someone thought lower sophistication was better for some reason. This approach of fitting trend lines, whether one or two, is about as unsophisticated as you can get.

        Furthermore as Girma points out it can’t tell whether the curvature is of human or natural origin, only the sign of the curvature (sign of the second derivative). Girma was claiming there is no curvature but his claim is based on no information!

        My AGU presentation used a much more sophisticated approach, yielding the curvature predicted by the Arrhenius law to within an unexplained variance that, since the presentation, I’ve been able to reduce to less than 0.1%, assuming 2.83 actual degrees 15 years after actual doubling (as distinct from both equilibrium sensitivity and transient response, both of which have problems).

        All the curve-fitting is now post-hoc, so the only possible resolutions of which methods are “best” will occur in the next decades as we learn which models produced the worst predictions.

        Surely this depends on how you evaluate the models. If evaluated by predictive power then sure. But there are other metrics:

        1. r2. How much of the variance does the model fail to explain? Which models of long-term global land-sea surface temperature to date have been able to get their r2 above 0.999 using only 9 parameters? That’s my competition.

        2. Future skill from past performance. How well does the method predict the last n years of data when applied to data that is missing the last n years?

    • Vaughan Pratt

      You have critiqued Richard Lindzen’s presentation to the House of Commons.

      Let’s go through your points one by one:

      1. To Lindzen’s Slide 4 you write:

      The warming trend is in fact far from linear. Greenhouse gases are being added at an exponentially increasing rate, with emitted CO2 doubling every 30 years or faster (I estimate 28.6, YMMV). The warming trend is curving upwards.

      It is true (as you write) that atmospheric CO2 levels have been increasing at a fairly constant exponential rate of between 0.4 and 0.5% per year.

      But since the CO2 temperature relationship is logarithmic, this means we should be seeing temperature increase at a linear rate, NOT exponentially.
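      The arithmetic behind this claim is easy to check numerically: if concentration grows as C(t) = C0·(1+r)^t and the response is proportional to log2(C/C0), the rise is exactly linear in t. In the sketch below, the 1 °C-per-doubling sensitivity is an arbitrary illustrative value, not a figure from the thread:

      ```python
      import numpy as np

      S = 1.0       # illustrative sensitivity, deg C per CO2 doubling (assumption)
      r = 0.005     # 0.5 %/yr compound growth of total CO2
      t = np.arange(0, 101)

      C = 280.0 * (1 + r) ** t        # exponentially growing concentration
      dT = S * np.log2(C / 280.0)     # logarithmic response

      # log2((1+r)**t) = t * log2(1+r), i.e. a straight line in t
      assert np.allclose(dT, S * t * np.log2(1 + r))
      ```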

      2. To Lindzen’s Slide 11 you write:

      “no warming since 1997.” Sound familiar? To imply as he does that this is not in dispute, based on one temperature plot, is to have been out of touch with the climate debate! The Berkeley Earth Surface Temperature data for example shows the land temperature since 1997 to have been rising in the same way it has been for decades!

      Vaughan, you are comparing “apples” (BEST land ONLY temperature) with “oranges” (HadCRUT3 land and sea average temperature). Lindzen is absolutely correct in saying that the “globally and annually averaged land and sea surface temperature anomaly” has not risen since 1997.

      3. To Lindzen’s Slides 13 and 14 you bring up a nit-pick regarding scales. Lindzen has simply pointed out that “relative to the variability in the data, the changes in the globally averaged temperature anomaly look negligible”, a point that is apparent from looking at the graphs he shows. His point that stretching the scale makes (what he calls) the “negligible” warming trend look larger than it would otherwise look is absolutely true.

      4. Your comment on Lindzen’s Slide 16 is a bit confused. He has the HadCRUT3 temperature record covering two separate time periods (1895-1946 and 1957-2008) without identifying which is which. Upon closer scrutiny one can see that the first chart covers the latter period, but the two appear close to identical. His point: the “natural” warming over the earlier period is practically identical to the supposedly “man-made” warming in the later period – a point which he makes very effectively (and which can be confirmed by plotting the two periods on woodfortrees).

      http://www.woodfortrees.org/plot/hadcrut3vgl/from:1895/to:1946/plot/hadcrut3vgl/from:1895/to:1946/trend/plot/hadcrut3vgl/from:1957/to:2008/plot/hadcrut3vgl/from:1957/to:2008/trend

      You accuse Lindzen of a “completely bogus argument” by bringing up mumbo-jumbo about “detrending the second period by the man-made contribution of 0.4°C”, but there is no such “detrending” in the curves he shows (and I have plotted on woodfortrees). So it is YOUR argument that is “bogus”.

      After running out of specific arguments, you opine:

      As masters of deception, magicians fall into two categories, those who admit it’s all mirrors and sleight-of-hand, and those who insist the magic is real so as not to undermine the illusion.

      IPCC is clearly in your second category, with its “sleight-of-hand” chart, FAQ3.1, Figure 1, which purports to show how warming is accelerating by comparing temperature trends over ever-smaller time periods.

      Vaughan, if you can’t do a better critique than that, you’d be better off doing no critique at all.

      Just my opinion.

      Max

      • Slide 16 is again a real joke – a very strong case of cherry picking. That can be seen by extending the later period to 2011 and the earlier period to 1949, or perhaps 1951.

        Suddenly the picture is very different. Suddenly it shows how the recent leveling off of the rise remains evidence for the significance of the recent increase, while the peak of the earlier period turns out to be short-lived.

        How many examples of misleading cherry picking does that presentation actually have? How dishonest is it, actually?

      • I guess what Pekka means is this:

        http://tinyurl.com/6q4yxuv

        vs

        http://tinyurl.com/7yyhfk5

        Yes, when extending the periods they no longer look as much alike. No extrapolation done, just an illustration of the similarity of those two warming periods. The latter is claimed to have a strong AGW fingerprint by the AGW proponents.

        Of course, if we want to find different kinds of warming periods we can even try this:

        http://tinyurl.com/8499ayp

        or, for similar ones:
        http://tinyurl.com/6q4yxuv (same as 1st link)
        But of course the selection depends on what we want to illustrate.

        My point being: why isn’t he allowed to find similar periods of temperature anomalies in the dataset, if he wants to illustrate the similarity between the early 1900s and the late 1900s? Your argument is similar to saying that one wouldn’t be allowed to claim “it hasn’t warmed/cooled since xxxx” by plotting a trend from xxxx.

        Or do you disagree that the early 1900 warming period was similar (let’s not mix the causes here) to the late 1900s? Making this a “strong case for cherry picking” and “dishonesty” sounds like a rather outrageous claim to me.

      • Juho,

        I don’t disagree on the rather strong similarity. What I did protest on is using careful selection of periods to make the similarity look stronger than it really is.

        That’s called cherry picking. Using cherry picking and other similar methods in a presentation systematically to distort the impression that one gets from the data is a method used for misleading the audience. It appears clear that Lindzen is not trying to present a balanced view of the evidence. Some people may say that he is right in doing that because there are other people who distort the evidence in the opposite direction. Even if that’s accepted, the conclusion is that Lindzen presents biased information and what he tells should not be taken as true.

      • Pekka

        You write to juakola regarding Lindzen’s slide 16:

        I don’t disagree on the rather strong similarity. What I did protest on is using careful selection of periods to make the similarity look stronger than it really is.

        What specific time periods would you have selected to avoid this problem?

        Max

      • Max,

        In this case the essential issue is the ending year. Lindzen’s choice was so early that the rapid fall after the peak appeared to be random fluctuation. A few more years make a huge difference to the impression the curves produce.

        This is related to a point I have made previously:

        The most recent period of little change in temperature provides evidence in both directions:

        1) It tells us that the rapidly rising trend of the previous years has not continued over this period.

        2) It tells us that the rapid rise of the previous years did not end in a short-lived peak but led, instead, to a higher, rather persistent temperature level.

        The second point is essentially equivalent to the observation that the latest decade is significantly warmer than the earlier ones. This is arguably more essential than the first point. (But it’s true that some people expected an even warmer decade that did not materialize.)

      • Max, I’ll respond to your four responses to my points 1-4 in separate comments. This one is for your response to my first point.

        It is true (as you write) that atmospheric CO2 levels have been increasing at a fairly constant exponential rate of between 0.4 and 0.5% per year.

        This is a strawman argument. I did not write the words you put in my mouth just now, neither on this occasion nor any other.

        Natural CO2 is around 285 ppmv (my best estimate based on data from several sources is 287 ppmv). The excess above that is due to humans, and it is the excess that is increasing exponentially, at 2.5% per year, not your 0.5% per year of the total including nature. This is easily confirmed by looking at the data, which so far you’ve flatly refused to do!

        I have explained this to you multiple times, and each time you put your fingers in your ears, cry LALALALALA, and stick to your extraordinary theory that by an amazing coincidence nature decided to increase her contribution exponentially at exactly the same time as humans did, after millions of years of not doing so.

        Max, there is no evidence whatsoever that the various natural sources of CO2 are growing exponentially. It is human population and their technology that has been growing exponentially. If you seriously believe nature decided to crank up CO2 production at the same time as humans, then you are seriously confused about where CO2 comes from.
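        The distinction being drawn here can be illustrated numerically: an excess growing at a constant 2.5 %/yr over a fixed natural baseline makes the *total* grow far more slowly. The 287 ppmv baseline is Pratt’s figure from this comment; the 35 ppmv starting excess and 60-year span are illustrative assumptions:

        ```python
        import numpy as np

        BASE = 287.0                    # assumed natural background, ppmv
        t = np.arange(0, 60)
        excess = 35.0 * 1.025 ** t      # excess CO2 growing at 2.5 %/yr (illustrative start)
        total = BASE + excess

        g_excess = excess[1:] / excess[:-1] - 1   # constant 2.5 %/yr
        g_total = total[1:] / total[:-1] - 1      # much smaller, and slowly rising
        print(f"{g_total[0]:.2%} -> {g_total[-1]:.2%}")  # -> 0.27% -> 0.85%
        ```

        Quoting the small growth rate of the total thus understates how fast the anthropogenic component itself is compounding.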

        But since the CO2 temperature relationship is logarithmic, this means we should be seeing temperature increase at a linear rate, NOT exponentially.

        Wow, my respect for your math abilities just jumped up two notches. You got that one exactly right! If it’s any consolation, the IPCC is as confused on this point as you are. Their definition of “transient climate response” is the same as yours but with 1% in place of your 0.5%. Either one of these percentages shows that temperature will increase linearly. Examination of the HADCRUT3VGL data shows it curves upwards, proving that both you and the IPCC are equally wrong. The “equally” notwithstanding, the latter contains more bits when estimated information-theoretically.

      • Vaughan, you are comparing “apples” (BEST land ONLY temperature) with “oranges” (HadCRUT3 land and sea average temperature). Lindzen is absolutely correct in saying that the “globally and annually averaged land and sea surface temperature anomaly” has not risen since 1997.

        Arrgh, a gotcha got me. ;)

        You are absolutely correct there, Max, give yourself a debating point. I thought I had mapped out every gotcha in Lindzen’s talk but I missed that one. Damn.

        But this now raises the very interesting question of why we land dwellers should give a capybara’s buttocks about what happens at sea. If the sea keeps the atmosphere cool (as owners of beach cottages will attest) while those of us inland are tormented by increasing heat as shown by the BEST land temperature (which is climbing much faster than the sea temperature), then shouldn’t land temperature count for something when assessing the impact on us land dwellers of global warming?

      • Captain Kangaroo

        Vaughan old buddy – you know I am there for you. Don’t worry about the heat none – no matter which version of GISTEMP you look at, it’s the cold that’s gonna get you in North America.

        The Arctic is pretty much the same – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=chylek09.gif

        There is some wacky idea it is sea ice related – for what it’s worth – http://s1114.photobucket.com/albums/k538/Chief_Hydrologist/?action=view&current=arcticice.gif

        Although I still like Mike Lockwood’s solar UV idea – http://iopscience.iop.org/1748-9326/5/2/024001

        If you start to feel like a climate refugee – I have a nice place on a Queensland beach you can rent.

        Your friend
        Robert I Ellison
        Chief Hydrologist

      • Chief Hydrologist

        Who was that masked man?

      • Lindzen has simply pointed out that “relative to the variability in the data, the changes in the globally averaged temperature anomaly look negligible”, a point that is apparent from looking at the graphs he shows.

        Excellent, Max, we’re in agreement on Figures 1 and 2, and Lindzen’s comments on them. What you’re saying is exactly what I said.

        But how does any of that support Lindzen’s argument that CO2 is nothing to worry about?

        And what’s the point of Figure 3? It doesn’t seem to add anything nonobvious to either Figure 2 or to Lindzen’s main arguments.

        In short, slides 13-14 seem not to bear out Lindzen’s arguments that CO2 is harmless.

      • 4. … His point: the “natural” warming over the earlier period is practically identical to the supposedly “man-made” warming in the later period

        That would be fine if it were true, Max, but it isn’t. The two graphs are not identical because Lindzen has subtracted 0.4 C from the left in order to hide the 0.4 C increase attributable to AGW.

        Had Lindzen labeled the graphs to indicate that one had been lowered by this much, or said anything to that effect, it would be fine, but he didn’t. He simply slid one graph down without saying he’d done so, so as to create the impression that nothing had changed during the intervening 62 years. That was deception pure and simple: what had changed was that the temperature had increased 0.4 C. He said nothing to disabuse his audience of the impression he created with that misleading juxtaposition that implied nothing had changed.

        4. You accuse Lindzen of a “completely bogus argument” by bringing up mumbo-jumbo about “detrending the second period by the man-made contribution of 0.4°C”, but there is no such “detrending” in the curves he shows (and I have plotted on woodfortrees). So it is YOUR argument that is “bogus”.

        Lindzen’s claim that the temperature scales are identical is as bogus as the numerical identity 0 = 0.4. What you plotted on woodfortrees does correctly show the 0.4 C temperature increase. Your claim that Lindzen showed this difference is obviously false. If he had shown it I’d have had no complaint.

        If you think objecting to the identity 0 = 0.4 is “mumbo-jumbo” then you enjoy a different system of mathematics from the one I was taught.

      • Vaughan Pratt

        Thanks for taking the time to respond in detail. I’ll return the favor.

        Point 1: Atmospheric CO2 is rising at a rate of somewhere between 0.4% and 0.5% per year, compounded. This is an observed fact which is (to put it into IPCC wording) “incontrovertible”.

        One can also say, as you point out, that the increment since year 1750 (?) – when there was no accurate measurement BTW – has increased by a higher compounded annual rate.

        But in actual fact, the difference between the two is zilch, nada.

        [And your comment of me sticking my fingers in my ears, etc. is both silly and insolent.]

        Further down the line you added:

        You say things that anyone can see are false, for example your claim that CO2 increases at .5%/year, which would entail the impossible result that 200 years ago CO2 would have been at 145 ppm. Your response is to ignore the objections and continue to repeat evident nonsense.

        What a bunch of baloney, Vaughan! Atmospheric CO2 levels have risen at between 0.4% and 0.5% per year compounded since measurements started, as well as most recently. This rate was slower prior to WWII, based on ice core estimates. This is NOT false. It does NOT “entail any result for 200 years ago” (that’s your own personal meaningless extrapolation). It is YOU who are repeating nonsense if you state that this obvious fact is “false”.

        But, if you prefer your wording and scope of calculation, so be it (i.e. since 1958 the atmospheric CO2 increment, which is generally assumed to have been caused by human emissions, has grown from a measured 315 ppmv minus an estimated 280 ppmv = 35 ppmv to a measured 390 ppmv minus the same estimated 280 ppmv = 110 ppmv = a CAGR of that increment of the total CO2 of 2.2% per year). Yikes!
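        The bracketed arithmetic above is easy to verify (the 280 ppmv baseline and the 1958–2011 span, i.e. 53 years, are as stated in the comment):

        ```python
        # CAGR of the CO2 increment above an assumed 280 ppmv baseline, 1958 -> 2011
        start = 315.0 - 280.0   # 35 ppmv increment in 1958
        end = 390.0 - 280.0     # 110 ppmv increment in 2011
        years = 53

        cagr = (end / start) ** (1 / years) - 1
        print(f"{cagr:.1%}")  # -> 2.2%
        ```

        So the two parties agree on the number for the increment’s growth rate; the dispute is over which series (total or increment) is the meaningful one to quote.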

        Point 2: BEST (land only) versus HadCRUT3 (land and sea) – you agree that Lindzen is talking about global (not land only) temperatures and that you were comparing “apples” with “oranges”.

        Point 3: There is no disagreement here, apparently. Lindzen simply made the rather obvious points a) that the variability (±2 °C) was much greater than the warming trend (measured in tenths of a °C) and b) that the trend looks more impressive when the scale is expanded.

        I have no problem with Lindzen’s (obviously true) statements, but you seemed to think he was trying to deceive the listener, without having anything specific to criticize. Sounded to me like a nitpick, Vaughan.

        You added:

        But how does any of that support Lindzen’s argument that CO2 is nothing to worry about?

        Duh! It doesn’t. Nor does it have anything to do with the price of eggs in China.

        Point 4: Lindzen shows two statistically indistinguishable warming trends over two 50-year time periods: one before there was any significant increase in CO2 and one during which there was a large increase.

        You object to Linden’s comparison of the two warming periods with:

        But that’s a completely bogus argument because it depends on detrending the second period (the graph on the left) by the man-made contribution of 0.4 C

        This is wrong, Vaughan, as I pointed out by plotting the HadCRUT data on Woodfortrees for the two periods. Both curves contained natural plus man-made warming.

        So you have been basically wrong on all your points of contention, both in your critique of Lindzen’s presentation and in your reading of my wording.

        I know you are a climate scientist and should know better than to get yourself tangled up in lose-lose discussions involving basic logic with someone who is not specialized in your field, but it looks like you fell into the trap.

        Max

      • Max,

        This is really annoying. You make statements that you present as based on factual knowledge. Those claims are shown to be wrong. Then you repeat them as if they would still remain factual statements.

        This time it’s your claim of a 0.4-0.5% growth rate. I pointed out that it has already exceeded 0.5% and will certainly keep on increasing unless the trends change dramatically.

        If you had some doubt about my claim you could have checked the facts. I did one further check, plotting the rate as a 10-year moving average. The result has been growing over the full period of the Mauna Loa data: over the first 10 years it was less than 0.3%/year; over the latest 10 years it has averaged over 0.5%/year. The increase has been essentially linear, with rather strong variability around the trend.
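        A check along these lines is straightforward to reproduce. The function below is a sketch; feeding it the Mauna Loa annual means would replicate the moving-average computation, while the synthetic constant-growth series merely verifies the mechanics:

        ```python
        import numpy as np

        def smoothed_growth(conc, window=10):
            """Year-on-year fractional growth rate, smoothed by a simple moving average."""
            rate = conc[1:] / conc[:-1] - 1
            return np.convolve(rate, np.ones(window) / window, mode="valid")

        # Synthetic check: a series growing at a constant 0.4 %/yr
        conc = 315.0 * 1.004 ** np.arange(54)
        print(np.allclose(smoothed_growth(conc), 0.004))  # True
        ```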

        You have argued on this point so many times that it’s almost unbelievable that you have done it being so explicitly and totally wrong.

      • Pekka Pirilä

        My, my, Pekka! No need to get “annoyed”. It doesn’t sound very “scientific”, you know – best to stick with a rational, factual discussion.

        I have taken a closer look at the Mauna Loa record.

        I will concede to you that you are correct in stating that the average annual increase has exceeded 0.5% per year (I was only looking at the long-term average, which did not do so).

        The most recent 5-year average annual rates of increase are between 0.50% and 0.56% per year.

        This started out slower, increasing from around 0.4% in the 1980s to a peak of 0.56% by 1998. The linear rate of acceleration of the annual rate of increase was around 0.008% per year up until 1998 and has remained essentially flat since then. The most recent 5-year average annual rate of increase is 0.55%

        You state the rate of increase “is certainly going to keep on increasing unless the trends do change dramatically”. This has already happened, Pekka. The trend has flattened out, and there are good reasons to believe that it will not start accelerating again, as you surmise. The average annual rate of increase may even start slowing down again to the 0.4% to 0.5% per year range – who knows?

        In view of the dramatic slowdown in population growth projected by the UN for this century compared to the last, this appears to me to be a logical upper limit of the exponential rate of annual increase for use for future projections, probably a bit on the high side in view of high energy costs and resulting pressures on increasing efficiencies in motor fuel consumption and power generation as well as domestic and commercial heating.

        As a matter of fact, IPCC uses the following CAGR projections for various computer-based “scenarios and storylines”, all based on no “climate initiatives”:
        0.46% B1
        0.49% A1T
        0.52% B2
        0.63% A1B
        0.74% A2
        0.82% A1FI

        The last three cases are probably greatly exaggerated – but, hey, I’m not going to get “annoyed” by them – are you?

        Max

    • Matt & Vaughan

      Here is Latif supporting my argument.

      Linear + cyclic interpretation of global mean temperature => http://bit.ly/wCsZym

      http://eprints.ifm-geomar.de/8744/

  51. It is linear

    Not if you remove all those spurious lines you added in an attempt to make it look linear. The curvature upwards then becomes obvious.

    To top it off, the globe is now cooling

    I think by now Girma that everyone has figured out (thanks to your constantly pointing it out) that if you pick 2002 as the start year the linear trend is negative (namely a slight -0.09 degrees per decade). That’s what’s known as cherry picking.

    What you’re doing is the same as proving that the Mona Lisa is scowling by focusing on a small corner of her mouth.

    If you back off to the bigger picture starting from any time between 1970 and 1990 (so as to be sure there’s no cherry picking going on), and look at land (since very few of us live on the ocean), you will see no sign whatsoever of any sustained decline since 2002. It certainly wobbles since 2002, but the wobbling is the same kind that’s been happening for decades. Thanks to the continual wobbling, you can find places all along the period from 1970 to now where the temperature trends downwards, so you could prove using them all that it has been trending down for many decades now.

    • Markus Fitzhenry

      Girma, Vaughan would like you to appreciate that the -0.09 cooling since 2002 was predicted by the IPCC in 1998.

      If you back off to the bigger picture starting from any time between 1970 and 1990 you would realise that this is not cherrypicking because the MWP & LIA gave us the right starting point.

      Of course, we don’t have to include the 80% of surface of the Oceans, that wouldn’t make sense. Vaughan is a climate scientist you know, so he is totally objective.

      • Markus,

        The current consensus of scientists does NOT even need an orb. Any parameter differences do NOT affect averaging. Yet putting those “averaged” numbers back onto the planet changes every area that the temperature data was taken from.
        But that is what computer models are for…NOT an orb!

        Question?
        Are we measuring atmospheric pressure with any accuracy?
        If you travel up or down a long hill your ears “pop” – does that not imply pressure differences? And yet the whole area is measured as one pressure.
        Motion and velocity were NEVER in consideration with scientists as they are “unobserved” parameters.

        http://jonova.s3.amazonaws.com/guest/lalonde-joe/world-calculations.pdf

        http://jonova.s3.amazonaws.com/guest/lalonde-joe/world-calculations-2.pdf

      • Markus Fitzhenry

        Let’s be clear about this fudge. The dry adiabatic lapse rate is approx ~10C per km.

        It is claimed the radiative vertical transport of heat is 5Cdeg to 6Cdeg per km. The value of the calculated mean flux altitude is 5 km at mid latitudes and 6 km global mean.

        How effing convenient that there is no need to consider the force of pressure or gas laws when determining the point of mean atmospheric radiative flux – 5 km ties in beautifully with the S-B equation.

        Mind you, not 5.1 km, not 4.9 km – it is exactly 5 km. NASA’s measurement of the saturated adiabatic lapse rate has been pulled out of thin air.

        Arrhenius got IR warming [a mistake by Tyndall] wrong and mistakenly assumed the S-B equation predicts that a solid surface in contact with the atmosphere emits radiation according to S-B for its temperature in parallel with convection. In reality, the sum of the two has to equal the incoming SW energy.

        The only way they could match Earth up with S-B was to tinker at the edges of gas laws, and forget about pressure. Well done Hansen.


      • Vaughan is a climate scientist you know

        If everything else you say is this accurate, MF, we can stop paying attention to you. :)

    • A perspective on decadal climate variability and predictability
      by Mojib Latif, Noel S Keenlyside

      Abstract

      The global surface air temperature record of the last 150 years is characterized by a long-term warming trend, with strong multidecadal variability superimposed.

      [ http://bit.ly/wzkYvi ]

      Similar multidecadal variability is also seen in other (societally important) parameters such as Sahel rainfall or Atlantic hurricane activity. The existence of the multidecadal variability makes climate change detection a challenge, since global warming evolves on a similar timescale. The ongoing discussion about a potential anthropogenic signal in the Atlantic hurricane activity is an example. A lot of work was devoted during the last years to understand the dynamics of the multidecadal variability, and external and internal mechanisms were proposed. This review paper focuses on two aspects. First, it describes the mechanisms for internal variability using a stochastic framework. Specific attention is given to variability of the Atlantic Meridional Overturning Circulation (AMOC), which is likely the origin of a considerable part of decadal variability and predictability in the Atlantic Sector. Second, the paper discusses decadal predictability and the factors limiting its realization. These include a poor understanding of the mechanisms involved and large biases in state-of-the-art climate models. Enhanced model resolution, improved subgrid scale parameterisations, and the inclusion of additional climate subsystems, such as a resolved stratosphere, may help overcome these limitations.

      http://bit.ly/xANYW0

      • Girma,

        Scientists are so focused on temperature data alone that they fail to understand what it is that creates these temperatures and circulation.
        Circulation is generated by motion…this was NEVER in consideration.
        So we have created all these laws of science that cannot be broken, even though other parameters say they are incorrect because those parameters were never considered. Pressure is another red herring of science.

    • Vaughan Pratt


      …if you pick 2002 as the start year the linear trend is negative (namely a slight -0.09 degrees per decade).

      Did not the IPCC claim:

      Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. http://bit.ly/caEC9b

      By the way, wisely, the world has not heeded the useless recommendation of the IPCC to keep greenhouse gases at the 2000 level and callously increase the cost of living.

        Vaughan Pratt: …if you pick 2002 as the start year the linear trend is negative (namely a slight -0.09 degrees per decade).

        Did not the IPCC claim: Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected. http://bit.ly/caEC9b

        By the way, wisely, the world has not heeded the useless recommendation of the IPCC to keep greenhouse gases at the 2000 level and callously increase the cost of living.

        Don’t tell me, tell them. I’m not involved with the IPCC.

    • Hi boys
      You can see the big picture here:

      http://www.vukcevic.talktalk.net/CET-NVa.htm

      the global temperature correlates well with the CET

      http://www.vukcevic.talktalk.net/CETGNH.htm

      anyone can reproduce it – no fiddling, no cherry picking.

      • vukcevic

        How about including a short description of your graphs as well as definition of the acronyms?

      • Girma
        Only ‘strange’ one in

        http://www.vukcevic.talktalk.net/CET-NVa.htm

        is the NAP data set, which links some aspects of solar activity to the north atlantic current circulation; it is closely linked to both the NAO and the AMO.

        http://www.vukcevic.talktalk.net/GNAP.htm

        There is no good understanding of climate change without understanding natural events in the north atlantic.
        Soon the metaAGU science will be trying to find out what really causes climate change; only then will I explain a bit more about the north atlantic precursor. At the moment the CO2 hypothesis, once respected in academia, is starting to stagger along, a bit like an inebriated gambler who has lost huge sums of taxpayers’ money on an old worn-out nag (horse).

    • Vaughan –

      Picking 1970 as a starting point was for a long time a common warmist cherry-pick, because that was essentially at the bottom of the 1940-1975 cooling-off period. So you can’t blithely offer that up as if it has no baggage of its own. Of course it got warmer after the bottom of the cooling period. They used to say that “since 1970” all kinds of warming happened. Duh.

      Same thing about 1800, when they use that (BEST did, and they were wrong to do that), because that was the end of the LIA when the world was going to warm up. And it is a good thing it did!

      We are “only” 210 or so years after the end of the LIA. To put that into paleoclimate perspective, the Younger-Dryas stadial began about 12,900 years ago. Its onset was VERY abrupt. It began over a period of about 0-200 years. No one knows yet how short a time it took. Greenland’s temperatures dropped by about 12 FULL degrees C. The Earth entered a new ice age, which it was not to come out of for 1200 years. What we all think of as a stable temperature for X many thousands or millions of years simply didn’t happen. Since the end of the Younger-Dryas the Earth has been in the Holocene. Much of that time was in the Holocene Climatic Optimum (HCO), from 9,000 BP to 5,000 BP. See http://tiny.cc/p8z6g for a graph of the Holocene period temps. Clearly the temps during the HCO were warmer than at present. So any claims about this being the warmest time in the last 600,000 or whatever are, at the very least, on thin ice as far as claims go.

      But look also at that black curve on the left, how it comes from some abyss of cold. THAT was the Younger-Dryas. I won’t even go into how humans did not do either the HCO or the Y-D.

      We can’t look at the insanely brief period of thermometers and think we are having record highs, or even record lows. They are just the high and low moments among a lot of high and low moments. And by moments I mean since 1990 or since 1970 or since 1800 – they are all just moments. We know so little and we are telling ourselves we need to cure something that we don’t even know is messed up for sure, or that humans did it. SOME people think so, but if the HCO was so much warmer, then us claiming that humans caused this one – because we are innately selfish or wasteful or have industry – what are we to blame the HCO on? The lack of mammoths?

      It doesn’t matter what we use as a starting point, because we aren’t in an all-time high period like is claimed. So, if all other warmer periods were non-human-caused, then why must this one be blamed on us, especially when it is not even the warmest ever? What the slopes are from time A to time B are all silliness, people making mountains out of molehills. You want warm? The world certainly survived the HCO – in the time of man, no less! The plants are still around, the animals that didn’t go extinct at the BEGINNING of the Y-D – mammoths, saber-toothed tigers, and more, which died in what may have been the first decade of the Y-D – survived the super warm HCO, and are still here.

      Climatology needs to slow down and stop thinking it is so all-fired important a field. It is only just now starting out, and any conclusions are premature. And we don’t have to fall for every Boy Who Cried Wolf or Chicken Little.

      Steve Garcia

      • So any claims about this being the warmest time in the last 600,000 or whatever are, at the very least, on thin ice as far as claims go.

        I was talking about the last half-century, Steve, which I attended. By all means harangue the world about whatever might have happened half a million years ago, but please leave me out of it, my alibi is that I was elsewhere at the time. ;)

        (If you’re laboring under the same delusion as Markus Fitzhenry that I’m some kind of climate scientist then maybe some identity thief has switched my identity for that of a climate scientist. If so I’d like my identity back please.)

      • some identity thief has switched my identity for that of a climate scientist

        A most painful insult…?

      • most painful insult?

        Not at all. It is no insult to point B to say you’d rather be back home at point A.

    • Vaughan Pratt

      Should expected future GH warming increase at a linear or exponential rate?

      Past increase of atmospheric CO2 level has been exponential, at a constant CAGR of between 0.4% and 0.5% per year.

      There is no reason to believe that future increase will be at any greater exponential rate than past increase, especially since population growth has already begun to slow down and is expected to continue to do so (from the past 1.7% to around 0.3% CAGR).

      So we assume that atmospheric CO2 will increase exponentially.

      But the CO2/temperature relationship is logarithmic (each doubling of CO2 has the same temperature impact as the next doubling).

      So, as a result, it is logical to assume that warming from added CO2 should increase linearly (not exponentially)

      Just simple math, Vaughan.

      Max
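      Max’s “simple math” can be written out directly: if CO2 grows at a constant CAGR r and warming is proportional to the base-2 log of the concentration ratio, the warming is exactly linear in time. The sensitivity, CAGR, and starting concentration below are illustrative placeholders, not claims about the real climate.

```python
import math

# Exponential CO2 growth + logarithmic CO2-temperature relation
# => linear warming. All three constants are illustrative only.
S = 1.0      # assumed warming per CO2 doubling, deg C
r = 0.005    # assumed constant CAGR of CO2 (0.5%/yr)
C0 = 390.0   # assumed starting concentration, ppmv

def warming(t_years):
    C = C0 * (1.0 + r) ** t_years
    return S * math.log2(C / C0)   # algebraically = S * t * log2(1 + r)

# Equal time steps give equal warming increments, i.e. a straight line.
steps = [warming(t) for t in (0, 50, 100, 150)]
increments = [b - a for a, b in zip(steps, steps[1:])]
print(increments)  # three (numerically) identical increments
```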

      • Past increase of atmospheric CO2 level has been exponential, at a constant CAGR of between 0.4% and 0.5% per year.

        If that were true, Max, it would have been 145 ppmv 200 years ago. It’s never been that low in the last billion years or more. Check your math.

        So, as a result, it is logical to assume that warming from added CO2 should increase linearly (not exponentially)

        Max, you evidently meet a wider range of people than me if you know someone that thinks warming from added CO2 is increasing exponentially. Do them a favor and suggest they check their math.
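        Vaughan’s back-extrapolation is a one-liner to verify (taking ~392 ppmv as the present-day level purely for illustration):

```python
# Extrapolating a constant 0.5%/yr CAGR 200 years backwards from an
# assumed present-day level of 392 ppmv.
current = 392.0   # assumed present-day CO2, ppmv
r = 0.005         # the disputed constant CAGR
past = current / (1.0 + r) ** 200
print(f"{past:.0f} ppmv")  # about 145 ppmv
```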

      • vaughan,

        You’ve misunderstood an important point.

        Max, quite rightly, says, it’s “simple math”.

        The rest of us might prefer simply math.

      • Brandon Shollenberger

        Vaughan Pratt, did you just tell manacker he was wrong about the rate of increase in CO2 levels because if you take his rate back in time far enough, it gives too low a value? That’s how I’m reading your comment, but…

        That’s stupid. Almost any rate of change given is an estimation. If you extrapolate out far enough, they’ll usually give you a wrong answer. It’s a meaningless point, and it certainly doesn’t invalidate anything. The most it can do is allow you to tell manacker he’s wrong to say “the past” is the past because he’s only looking at one part of the past, not “the last 200 years.” Not only is it a stupid point based purely on semantics, it begs the question of why limit ourselves to only the last 200 years? Maybe the next time manacker talks about the “[p]ast increase of atmospheric CO2 level,” you should tell him he’s wrong because of what things were like millions of years ago!

        I hope I’m just misreading you. If not, you’re contradicting manacker based entirely upon a stupid and meaningless point rather than simply saying something like, “When you say ‘the past,’ you’re only talking about the last XX years, right?”

      • Max,

        There’s clearly nothing fundamental in the exponential growth of the concentration. Thus estimating how it will change must be based on some model, not just simplistic extrapolation. Furthermore, the rate is already significantly more than 0.5% when estimated in a way that removes the effects of short-term variability.

        I made some calculations based on two models for future emissions, which are plausible for some 50 years into the future, the lower one even longer. For the persistence I used the Maier-Reimer and Hasselmann model, for concentrations not much higher than the present. Others might wish to use differing models, but my calculations are certainly much more justified than your guess.

        The results indicate that the rate of relative increase is likely to grow to a maximum in the range 0.7-1.1%/year before starting to decline again. The uppermost values of that range are perhaps less likely, as they would require using extensive amounts of low-quality coal or oil shales, but in any case my judgment is that the rate of 0.5%/year is likely to be exceeded significantly (unless policies of reducing CO2 emissions turn out to be much more effective than I expect them to be). I won’t go into the details of the models now, as I’m not using their results for more than getting a plausible range. (They are the same models I have discussed earlier when commenting on concentration growth.)
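        The shape of the calculation Pekka describes can be sketched generically: convolve an assumed emissions path with an impulse-response (persistence) function and watch the relative growth rate of the resulting concentration. The response coefficients, emissions path, and baseline below are invented placeholders, not the actual Maier-Reimer and Hasselmann fit or Pekka’s scenarios; the point is only that a rise-then-decline in the growth rate falls out naturally.

```python
import math

# Illustrative impulse response: fraction of an emitted pulse still
# airborne after t years (a permanent part plus two decaying parts).
# These coefficients are placeholders, NOT Maier-Reimer & Hasselmann's.
def airborne_fraction(t):
    return 0.2 + 0.4 * math.exp(-t / 300.0) + 0.4 * math.exp(-t / 30.0)

# Assumed emissions path, ppmv/yr: grows 2%/yr for 60 years, then flat.
def emissions(year):
    return 2.0 * 1.02 ** min(year, 60)

BASELINE = 390.0  # assumed starting concentration, ppmv

def concentration(year):
    pulses = sum(emissions(s) * airborne_fraction(year - s)
                 for s in range(year))
    return BASELINE + pulses

# Relative growth rate of concentration, %/yr, over 140 years.
rates = [100.0 * (concentration(y + 1) / concentration(y) - 1.0)
         for y in range(1, 140)]
peak = max(rates)
print(f"peak rate {peak:.2f}%/yr at year {rates.index(peak) + 1}")
```

With these placeholder numbers the peak happens to land in the same general range Pekka quotes, but that is a property of the chosen constants, not a confirmation of his result.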

      • Vaughan Pratt, did you just tell manacker he was wrong about the rate of increase in CO2 levels because if you take his rate back in time far enough, it gives too low a value?

        Brandon, I believe you’re only considering half of my response to Max about .5% vs. 2.5%. You ignored my point about the 0.5% being a non-physical theory.

        Since discussions of CO2 seem to lead to language of the form “it is a stupid and meaningless point,” let me make my point in terms of the Ptolemaic vs. Copernican theory, which I’ll call the non-physical and physical theories respectively, and which hopefully won’t require either of us to estimate the stupidity quotient of the other.

        The following numbers are made up for the sake of argument.

        The physical theory gives an accurate result 40 million years back but not 4 billion years. The non-physical theory gives an accurate result 4 thousand years back but not 400 thousand.

        There are two reasons to prefer the Copernican theory.

        1. It is physical.

        2. It is accurate over a longer time period.

        Any complaints there?

        Now let’s drop back into a subject where unlike planetary motions it’s very obvious to both sides that the opposition is stupid, namely CO2. (If it was CO2 and temperature we could escalate from “stupid” to “moron.” ;) )

        There are two reasons to prefer the 2.5% theory over the 0.5% theory.

        1. It is physical. (CO2 emissions from fossil fuel are growing exponentially, the contributors to the 30x larger natural carbon cycle have been holding relatively steady, at least over the past few millennia.)

        2. It is accurate over a considerably longer time period, namely at least 2000 years (where the Vostok ice cores show 284.7 ppmv at 342 BC, Petit et al, Nature v.399 (6735), pp 429-436, 1999), though not 20,000 years (where CO2 is down to 190 ppmv, ibid.).

        If you want to argue that Max’s 0.5% theory is just as physical as the 2.5% theory, Brandon, I’m all ears. (The 2.5% theory is due to David Hofmann, late of NCAR Boulder, not me; I’m not a climate scientist, just a logician.)

        The length of time over which the 0.5% theory is just as accurate as the 2.5% theory can be inferred from the following five annual rates of growth of total CO2 as estimated by fitting a smooth curve through the Keeling curve.

        1958: 0.23%
        1995: 0.49%
        1996: 0.5%
        1997: 0.51%
        2011: 0.65%

        This blog has a bad habit of shooting its AGW messengers, but I’ll take my chances anyway and deliver the bad news that Max’s non-physical theory is nowhere near as competitive as Ptolemy’s non-physical theory. Fire at will.
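        The structure of the “physical” theory – a fixed natural baseline plus an exponentially growing anthropogenic excess – can be sketched as follows. The 280 ppmv baseline, the 32.5-year doubling time of the excess, and the 1958 anchor at 315 ppmv are all assumed for illustration (they are not taken from Hofmann’s paper), but they reproduce the rising pattern of the listed rates reasonably well.

```python
import math

# Total CO2 = pre-industrial baseline + exponentially growing excess.
# All three parameters are assumptions for illustration only.
BASELINE = 280.0        # assumed pre-industrial CO2, ppmv
DOUBLING_YEARS = 32.5   # assumed doubling time of the excess
EXCESS_1958 = 315.0 - BASELINE

def co2(year):
    excess = EXCESS_1958 * 2.0 ** ((year - 1958) / DOUBLING_YEARS)
    return BASELINE + excess

def cagr_percent(year):
    """Instantaneous relative growth rate of total CO2, in %/yr."""
    excess = co2(year) - BASELINE
    return 100.0 * excess * math.log(2.0) / DOUBLING_YEARS / co2(year)

for y in (1958, 1995, 2011):
    print(y, round(cagr_percent(y), 2))
```

Under this model the CAGR of total CO2 is not constant at all: it keeps rising toward the ~2.1%/yr growth rate of the excess itself, which is why a constant-0.5% extrapolation and a model of this shape diverge going forward.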

      • Brandon Shollenberger

        Vaughan Pratt:

        Brandon, I believe you’re only considering half of my response to Max about .5% vs. 2.5%. You ignored my point about the 0.5% being a non-physical theory.

        For me to have ignored a point of yours, you’d have had to have made that point. You didn’t. The comment I responded to had five sentences from you. The first three were what I referred to, and the other two said nothing of any relevance.

        Your entire response to me is predicated upon this simple and obvious fabrication. I don’t know what made you say it, but it means your entire response to me is irrelevant. You can say manacker is wrong for as many other reasons as you want, but that won’t change the apparent stupidity of the one reason I commented on.

      • Vaughan Pratt

        @Brandon For me to have ignored a point of yours, you’d have had to have made that point. You didn’t.

        Brandon, I made that point when I wrote “Max, there is no evidence whatsoever that the various natural sources of CO2 are growing exponentially. It is human population and their technology that has been growing exponentially. If you seriously believe nature decided to crank up CO2 production at the same time as humans, then you are seriously confused about where CO2 comes from.”

        What would you call this point if not an objection to a non-physical argument?

        When following discussions, Google might be your friend but grep is not.

      • Brandon Shollenberger

        Vaughan Pratt

        What would you call this point if not an objection to a non-physical argument?

        I would call it a point made in a different fork and thus not a point made in the response you claimed I ignored a part of. You’ve just claimed I ignored a part of a response because I didn’t consider something you said in a totally different response.

      • Vaughan Pratt

        I would call it a point made in a different fork and thus not a point made in the response you claimed I ignored a part of.

        Brandon, no one (modulo my remark at the bottom) is being “stupid” here as you claim. You’re running into WordPress’s inability to keep two consecutive comments together, which can be moved far apart as comments between them pile up. This can easily result in the appearance of a “different fork” when none exists, as happened here.

        In this case Max responded to my critique of Lindzen’s logic here on 2/28 at 3:58 pm, where among other things he raised his perennial “CO2’s CAGR = 0.5%” claim. He broke off for less than 70 minutes (to attend to something?) and then continued the same claim in a second half, eventually posting it here at 5:08 pm that same day.

        During the next several days a great many comments piled up in between, including Pekka’s comment here on March 3, 4 days later. Pekka expressed the same frustration I’ve been feeling with Max’s illogical insistence on a steady 0.5% CAGR from here to 2100 when one can easily see from the Keeling curve that the CAGR of CO2 has itself been rising since its inception. Pekka concluded with “You have argued on this point so many times that it’s almost unbelievable that you have done it being so explicitly and totally wrong.” My feeling precisely, Max flatly ignores the evidence and sticks to his nonsense.

        This pile-up in between Max’s two consecutive posts has naturally created the impression of a “different fork.” However it’s easy to tell when two WordPress comments that are separated by dozens of other comments actually belong together: just check their dates and times. Had you done this here you’d have noticed the mere 70-minute gap.

        Regarding my abbreviated rebuttal to Max of his perennial CAGR claim, I’m more than happy to spell it out in full when challenged by anyone who hasn’t yet seen the full argument. Which, now that I think of it, might be worth putting somewhere it can be linked to even more easily than giving the abbreviated argument, which as you correctly point out is not 100% rigorous by itself.

        Regarding your repeated claims that I’m stupid, rather than repeat the famous line from A Fish Called Wanda whose churlish repetition won Kevin Kline his only Oscar, let me offer a sample dialogue between A and B.

        A: I claim P.
        B: P is false because of Q.
        A: Not so because of R.
        B: Oh, but you’ve neglected S.
        A: The problem with S is T.

        And a slight variant:

        A: I claim P.
        B: P is stupid because of Q.
        A: Not so because of R.
        B: Oh, but you’ve stupidly neglected S.
        A: The problem with S is T.

        At some mental age (5? 6? I don’t know) the listener to such a dialogue switches from thinking A must be stupid to thinking blogger would be a better career choice for B than diplomat.

        Would you agree?

    • @vaughan

      ‘That’s what’s known as cherry picking’

      But cannot this rather irrelevant charge be levelled at any line whatsoever when one or more of the end-points are arbitrarily chosen?

      Why is a trend from 2002 any more ‘cherry picked’ than one from 1793 or 400BC or three weeks ago last Michaelmas Whitsun? Is there some ‘ideal’ length of time over which a trend should be picked? If so, what is it, and what is the justification for choosing it?

      AFAICT the statement

      ‘if you pick 2002 as the start year the linear trend is negative (namely a slight -0.09 degrees per decade).’ is true. You need to deal with that bit of the observations just as much as the 10 years before that and before that and before that to show that your theories and models have a real grasp of climate change. Better still, you need to go back to a model run from 15 years ago that actually predicted the cooling trend.

      Dismissing a set of observations as ‘cherry picking’ is pretty meaningless IMO.

      • Brandon Shollenberger

        Latimer Alder, the expression “cherry pick” has a distinct requirement of something being hidden. In other words, a decision is only cherry picking if it changes the results. It doesn’t matter what years you start and end at as long as your conclusions don’t depend on your decision.

        In other words, you can start a graph at 1793 or 400BC. It doesn’t matter. Just don’t make conclusions based on when you started it.

      • @brandon

        ‘It doesn’t matter what years you start and end at as long as your conclusions don’t depend on your decision.’

        ???

        Very puzzled by this. How could you draw conclusions that aren’t influenced in some way by the data that you choose to include?

        A worked example would help my understanding.

        Hypothetical example from me. I choose 988BC and have some evidence that the temperature anomaly then was -0.4C Now it is +0.6c. Over 3000 years the rate of increase is 0.003C per decade. Ergo AGW is trivial.

        Somebody else picks 1980 as a starting point and comes up with a much greater trend and concludes that CAGW is just around the corner.

        Then Girma et al. show that since 2002 it is actually cooling.

        Which is ‘right’? In each case the conclusions depend on the starting point

      • Brandon Shollenberger

        Latimer Alder, you say:

        Which is ‘right’? In each case the conclusions depend on the starting point

        The only example you provided which was clearly cherry picking was Girma’s. I believe the first example shows what is causing your confusion. It doesn’t hide any data. Instead, it’s wrong because of the logic it uses. It assumes there was a single linear trend since 988BC, and that’s wrong. It’s bad logic; it’s not hiding data.

        If you want to try to figure out if something is cherry picking, here’s a simple process. First, you must be looking at a subset of what’s being analyzed, not the entirety (if everything is there, nothing is being hidden). Second, if you pick a different subset, you must get a notably different answer. That’s all there is to it.
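        Brandon’s two-step test can be run mechanically. The “temperature” series below is invented (a linear trend plus a 10-year wobble), purely to show the mechanics: long windows starting anywhere in a 20-year range give nearly the same slope, while one short window riding a downswing of the wobble flips the sign of the trend.

```python
import math

# Synthetic demonstration of subset-sensitivity as a cherry-pick test.
# The data are invented; no real temperature record is used.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1970, 2012))
temps = [0.015 * (y - 1970) + 0.1 * math.sin(2 * math.pi * (y - 1970) / 10)
         for y in years]

# Long windows: start anywhere 1970-1990, end 2011. Slopes barely move,
# so by this criterion no start year in that range is a cherry.
long_slopes = [ols_slope(years[i:], temps[i:]) for i in range(21)]

# Short window over one downswing of the wobble: the "trend" flips sign.
i = years.index(2002)
short = ols_slope(years[i:i + 6], temps[i:i + 6])

print(round(min(long_slopes), 4), round(max(long_slopes), 4), round(short, 4))
```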

      • Brandon Shollenberger

        By the way, my initial comment was unclear, and I imagine that’s what caused your confusion Latimer Alder. I apologize for that, and I hope my followup comment was more clear. In case it isn’t, let me try being more clear:

        Cherry-picking is when results are gotten by hiding adverse information/data.

      • Is there some ‘ideal’ length of time over which a trend should be picked? If so, what is it, and what is the justification for choosing it?

        This is an excellent question, Latimer. I would say that if you are using a 1 foot ruler to measure things, there is an ideal range of lengths for those things, namely between say 1/16 of an inch and 10 feet. Within that range there is no length that is ideal for measurement by a ruler; the ruler is suitable for all lengths in that range.

        By the same token, if you are trying to measure the slope of a slightly bumpy incline, there is an ideal range of intervals along that slope over which to measure it. The interval should be longer than a couple of bumps, or the measured slope will be meaningless, but no longer than the entire incline, or errors will enter.

        Does that seem reasonable?

        Recall that my statement was “If you back off to the bigger picture starting from any time between 1970 and 1990 (so as to be sure there’s no cherry picking going on).” Some people seem to be reading it as “If you back off to the bigger picture starting from 1970.”

        My point there was that no matter what starting point you pick, as long as the starting point is in a reasonable range, in this case 1970 to 1990, you see the same bumpy slope going up from that point, with no apparent indication of any slowdown in the most recent decade. Starting earlier than 1970 is not reasonable because it’s beyond the length of the incline in question. Later than 1990 is too short because there are not enough bumps to get an idea of the slope.

        Once you have the slope, you can ask how well the curve in question follows the general pattern of the slope. This seems like a somewhat subjective judgment. For example, I would say that the BEST curve follows a general pattern along the whole length of any interval starting between 1970 and 1990 and ending in 2012. But being a subjective judgment, others may well disagree.
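        The claim above, that the fitted slope barely depends on which start year you pick within a reasonable range, can be checked mechanically. A minimal sketch in Python, using synthetic data in place of the actual BEST series (the 0.017 C/yr trend and the bump amplitude are made-up illustration values):

```python
# Sketch: fit an ordinary least-squares trend from several candidate start
# years and compare the slopes. Synthetic data stands in for a real
# temperature series: an assumed 0.017 C/yr trend plus sinusoidal "bumps".
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1970, 2012))
temps = [0.017 * (y - 1970) + 0.1 * math.sin(y / 3.0) for y in years]

for start in (1970, 1980, 1990):
    i = years.index(start)
    print(start, round(ols_slope(years[i:], temps[i:]), 4))
```

        With any start year in that range the fitted slope stays close to the underlying trend; start the fit inside a single “bump” (a very short interval) and it would not. That is the ruler argument in numerical form.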

      • Nope, still don’t understand.

        Example 2 (from 1980) also assumes that there is a single linear trend. But you say that this is wrong to assume for 988BC –> 2011

        So why is example 2 (linear trend from 1980) any ‘better’ than assuming two such trends: one up to about 1998 (warming a bit), and the other from 1999 onwards (cooling a bit)?

        And how would I know – just from first principles – which would be the ‘right’ data subset to look at, for any as-yet-to-be-seen set of data?

      • @vaughan

        Since 1999, say, we have had over 10 ‘bumps’. Why is that not enough to draw the conclusion that there has been no perceptible change over that period?

        You seem to have arbitrarily decided that the ‘correct’ range is somewhere between 20 and 40 years. But your justification for this is no better than Girma’s. You accuse her of picking the endpoints to show what she wants…but you do the same.

        And I’m always reminded that some climatologists first claimed that you’d need 10 years of no warming to show a trend, then when that occurred it was 15, now 17. The ‘ideal’ number seems to disappear like the pot of gold at the end of the rainbow: the nearer you approach, the further away it gets. If there was no warming for 20 years, somebody would claim that a minimum of 25 years was needed. And so it would go. So far you’ve not given me any reason to believe that there is any justification for it at all.

        And though I sort of cotton on to your point about a ruler (You don’t use a micrometer to measure the distance to Venus, nor a telescope to look at individual atoms), I don’t quite see its relevance to this case.

      • @brandon

        ‘Cherry-picking is when results are gotten by hiding adverse information/data’

        OK. So am I to understand that the famous hokey stick ‘hide the decline’ was just cherry picking, not attempted fraud then?

      • @Latimer Example 2 (from 1980) also assumes that there is a single linear trend. But you say that this is wrong to assume for 988BC –> 2011

        Step away from that portrait of Mona Lisa, Latimer.

        Notice how her smile becomes more obvious?

        You keep focusing on one tiny detail of the big picture. The big picture is that there was an industrial revolution starting around the beginning of the 19th century, which is part of the argument that the rise in CO2 to 394 ppmv only goes back a couple of hundred years, prior to which CO2 was in the neighborhood of 280-290 ppmv for the past several thousand years, as can be seen from the ice cores.

        Your reference to 988BC therefore indicates one of two things:

        1. You lack the big picture.

        2. You enjoy scoring debating points by focusing on minutiae even though you know they’re irrelevant to the big picture.

        This is typical of you, Latimer. All along you’ve been wasting people’s time by playing your pointless little games. After a while it gets old. I’ve noted the URL of this comment. In future I’ll respond to your more tendentious responses with a link back to this comment.

      • Brandon Shollenberger

        Latimer Alder, I think you must have misread what I said:

        Example 2 (from 1980) also assumes that there is a single linear trend. But you say that this is wrong to assume for 988BC –> 2011

        So why is example 2 (linear trend from 1980) any ‘better’ than assuming two such trends: one up to about 1998 (warming a bit), and the other from 1999 onwards (cooling a bit)?

        I never said your second example was “right.” It was wrong for the exact same reason as your first example. I simply didn’t see a point in saying an example that was exactly the same was exactly the same. Applying a linear trend to data that doesn’t have a (single) linear trend is using faulty logic. Sometimes it’s worth doing as a matter of convenience because it’s simple and may give a reasonable approximation, but it can never give you a “right” answer.

        And how would I know – just from first principles – which would be the ‘right’ data subset to look at, for any as-yet-to-be-seen set of data?

        Generally speaking, if you don’t know what data ought to be used, you should use all of it. You’ll want to exclude data known to be corrupted (like when a temperature reading says the surface is 448 degrees), and you’ll need to keep in mind the differences in data sets, but otherwise, you won’t go wrong by using all the data available.

        OK. So am I to understand that the famous hokey stick ‘hide the decline’ was just cherry picking, not attempted fraud then?

        This question doesn’t make sense. Something can be both “attempted fraud” and “cherry picking.” Labeling something one does not preclude it being the other as well.

        In fact, fraud commonly involves intentionally cherry picking things.

      • Latimer Alder

        @vaughan

        I rather resent the remark that these are ‘pointless little games’. Especially since I am still asking the same question that you described as ‘excellent’ a few hours ago.

        Many people instinctively and unthinkingly seem to use the phrase ‘cherry picking’ to deride any results or illustrations they find inconvenient. I’m trying to find out whether there is any actual theoretical substance to such claims. And so far I’m inclined to the conclusion that there isn’t. That ‘cherry picking’ is just a substitute for ‘I don’t want to know’.

        Because while you deride my 3000 year example and dislike Girma’s 10 years, you are happy with 20-40 years as a good interval. But why? What makes this a better interval than 60 years? Or 17? Or any other arbitrary number you care to mention?

        Let me ask the question another way. If I were given a time series of data (immaterial what the data represents) which had 1000 data points, what interval would you recommend to analyse as being the ideal for showing trends? And how do you decide on that interval? Would your number change if there were 10,000 points? Or 100? Is there a general formula?

        BTW, not a good idea to argue about CO2 levels when analysing temperatures and intervals to show trends. Starts to smack of you having predetermined that because there is CO2 around there must still be warming, so the answer must be long enough to show it. Which would almost exactly confirm my point about cherry picking: that it is the shout of those who find the results inconvenient.

      • Vaughan Pratt

        I rather resent the remark that these are ‘pointless little games’. Especially since I am still asking the same question that you described as ‘excellent’ a few hours ago.

        Nice exercise for a philosophy class: what is wrong with this reasoning? ;)

  52. Vaughan Pratt


    …tired old Girma-Bryant argument …

    Please tell me how to make the data [ http://bit.ly/wzkYvi ] NEW?

    I thought the observed temperature in the past does not change.

    • Girma –
      “Vaughan Pratt …tired old Girma-Bryant argument … Please tell me how to make the data [ http://bit.ly/wzkYvi ] NEW? I thought the observed temperature in the past does not change.”

      I’ve seen at least two posts at WUWT or CA pointing out that the Hockey Team has adjusted the temps again – and every time the recent past is adjusted UP, while the longer ago (read: more or less pre-1950) past is adjusted DOWN, conveniently increasing the slope of their straight-line regression. It might have been as many as four times, but I am not sure.

      Steve Garcia

  53. JC: His scientific argument in the second half of the talk is appealing in that it relies on data and theory (rather than models).

    But is it possible to have a model without thereby having an implicit theory ?

    • But is it possible to have a model without thereby having an implicit theory ?

      Excellent point. If you take what’s true of that model as your theory, then no because that’s the theory entailed by the model.

      This remains true even when contemplating multiple models. In that case the implicit theory is that which is true of all those models (equivalently the intersection over all those models of the theory of each).

  54. AGW => “I can do well on exams if you could give me the answers.”

  55. Dear Judith,

    With respect to you, and all your readers, I was there. In an attempt to address one of the many misrepresentations or omissions of relevant facts, I was prevented from actually asking a question. However, Professor Lindzen graciously invited me to email my questions. He got more than he expected and he did not like it. However, despite being warned by him not to publish my email, I have done so – and would hereby invite him to sue me. However, he will not sue me – nor will he answer my questions, I suspect – because to do either would expose to the world the extent to which his scientific objectivity has been clouded by ideological prejudice.

    If you and your readers cannot be bothered to work your way through the 1800-word email (linked to above), tomorrow I will publish a list of 17 simple statements, to which I have also invited Professor Lindzen to respond to as well but – in all probability – he will not do this either; for the same reason as above.

    If Lindzen wants to sue somebody, he should sue Naomi Oreskes and Erik Conway for publishing Merchants of Doubt.

    In the interests of the integrity of science, I hope you will not delete this comment.

    Kind regards,

    Martin Lack

    • MattStat/MatthewRMarler

      Martin Lack, in your email you wrote this: Are you not worried at all by the fact that, due to the massive inertia in the climate system, more warming is already “in the pipeline”?

      Surely you understand that warming already “in the pipeline” is an assertion, or hypothesis, not a “fact”? And even if it’s true, the amount of warming isn’t reliably known? And that the duration of the hypothesized warming is also not known but may be 2,000 to 4,000 years if it occurs at all?

      Could you quote Dr Lindzen’s “warning” not to publish?

      • Why, are you going to sue me for not saying “extremely high probability” instead? With ice caps and glaciers melting, permafrost thawing, sea ice disappearing – you have to be a complete fool to deny that more warming is “in the pipeline”. This will happen even if we all stop burning fossil fuels tomorrow.

      • Of course, Martin believes that James Hansen’s climate sensitivity claim of 6 C for a doubling of atmospheric CO2 is “settled” when really it is more of a fringe view.

      • MattStat/MatthewRMarler

        Martin Lack: Why, are you going to sue me for not saying “extremely high probability” instead?

        Is that the way you always respond when someone points out that what you wrote is technically incorrect? “Extremely high probability” is also not a “fact”, though it may be your best judgment.

        So far, all you have established is that Prof Lindzen has declined to reply to a poorly worded email.

      • I am sorry, MattStat, but I was getting very irritated by John Kosowski. He has been plaguing my blog for almost a month now – going off topic and accusing me of lying all the time. Therefore, his repetitive attempts to try and get me to incriminate myself here were just driving me mad. However, that was no excuse to take it out on you.

      • This chap seems awfully keen that somebody – anybody – should sue him. Can anyone oblige? First that funny little Craven guy, then Gleick, and now this cove, hurling themselves up Heartbreak Hill, against a hail of withering fire from solid, well-defended denialist positions, yelling Banzai for the naked Emperor!

        My,these people are getting desperate…

    • Glancing over your letter, it occurs to me that we should talk fundamentals. Remember the scientific method (Popper): scientific theories can never be proven, but they can be falsified. If you fail to falsify a theory, it may be right; so testing a hypothesis or theory is all about attempting to falsify it. The hypothesis/theory of a greenhouse effect with positive feedback culminates in the prediction of the tropical hotspot around 200 hPa, and it’s not there. Therefore that assumption is all but falsified, and if you think this is Lindzen’s idea, just look at what the ‘team’ has to say about it:

      http://foia2011.org/index.php?id=1889

      Quoting: “You’ll be unsurprised to hear that I think this paints too rosy a picture of our understanding the vertical structure of temperature changes. Observations do not show rising temperatures throughout the tropical troposphere unless you accept one single study and approach and discount a wealth of others. This is just downright dangerous. We need to communicate the uncertainty and be honest.”

      So if you got the data right and the prediction turns out to be false, you just lost your theory. Sure, it takes a while to part with it (as explained in Thomas Kuhn’s The Structure of Scientific Revolutions), but that doesn’t change a thing.

      So then you can ponder about Snowball earth, the PETM, the Pleistocene, Venus, all you want, but you can’t repair a failed theory with mere hypotheses based on that same idea.

      Things are just a lot more different than we think. Don’t blame Lindzen for that.

    • Martin – why misrepresent what happened?
      You initially asked a very long question… publicly…

      which was discussed/answered.

      And then later on you attempted to ask another long question. The chair cut you off because others wanted to ask questions, and there was another meeting in the room straight afterwards. Limited time for questions.
      You asked more questions than I did.
      And the guy next to me did not really understand what your 1st question was..

      • Hi Barry, There are a number of errors in your recollection of events:
        1. Frustrated by Lindzen’s misleading discussion of palaeoclimatology, I attempted to make the point that CO2/temp time proves nothing. We now have a problem because we have changed CO2 – temp must now change to restore the radiative energy balance.
        2. As I have indeed conceded to Lord Monckton on Simon Carr’s blog, it was understandable that the Chair would cut me off (especially as I was off-message).
        3. Neither you nor anyone else knows what my second question would have been as I was not allowed to ask it (but for the record it was less than 15 words).

        Do you think the guy next to you would understand now?

    • Martin,
      Your inertia question sort of reduced you to a crank.
      Think of Lindzen’s warning as more of a helpful suggestion.

  56. It still looks like what you choose to believe, not what is scientifically verified.

    http://redneckphysics.blogspot.com/2012/02/what-do-you-choose-to-believe.html

    Was there a global Medieval Warm Period? Was there a global Little Ice Age? Is there significant land use impact? Is CO2 the primary suspect?

    Other than that the warming directly attributable to CO2 is less than “projected”, that it has been warming for at least 200 years, and that no one should have any confidence in any one cause, there is not much left but “belief” in your pet theory.

    Mine is still agriculture and albedo BTW :)

  57. Martin,
    What are Lindzen’s misrepresentations? I read your letter, and it is a simple battle of ideology, which, of course, you are entitled to. But where is Lindzen not telling the truth? Let’s list them out one by one so that we can really find the truth here.

    • As I am sure you actually realise, John, the misrepresentations of relevant facts are itemised in my questions. As I have said, for those with ADHD, I will publish a list of 17 statements that Lindzen would appear to want to dispute. However, if he does, he is not only at odds with just about every climate scientist on the planet (apart from Pat Michaels and Roy Spencer – oh and the good Dr Curry herself – it would seem), he is picking a fight with history and science. Unfortunately, it is not just Lindzen that will lose, we will all lose. That is why this corporately-sponsored campaign to deny the reality of ACD must be stopped now.

      • No need for name calling, Martin. You state there are numerous misrepresentations. List them right here and now. Issues that Lindzen disputes have nothing to do with misrepresentations. You stated “many misrepresentations.” If there are many, certainly you ought to be able to list a few of them.

      • In my humble opinion, the most blatantly hypocritical aspect of the presentation Lindzen gave was his use of graphs. He claimed that “warmists” stretch the axes of graphs to make things look more “alarming” than they actually are. However, he then used exactly the same technique to make the Keeling ‘curve’ appear not to correlate with temperature records over the short term.

        As I said, this was just one of my many questions. Shall I now expect a Court Summons in the post? Somehow, I don’t think so…

      • Martin,
        With which graph do you take issue? Which page of the pdf?
        Thanks.

      • This conversation is over, John… I stand by every word in my email (i.e. as posted on my blog today). I was very careful not to accuse Prof. Lindzen of lying. You can tell the truth (or at least believe you are telling the truth) and still, irrespective of your intent, mislead people and/or leave out relevant information that does not support your argument. The questions I posed to Professor Lindzen all arise from what he said (or did not say); and most people would have gone away from his talk more certain than ever that there is no cause for alarm. Indeed they did; and they continue to do so. It was all very clever, but that does not make it right.

        The seventeen statements that will appear on my blog tomorrow, each one derived from a question posed today, all reflect the genuine scientific consensus position; and therefore demonstrate how completely at odds with that consensus Lindzen is.

        Given that modern-day “sceptics” are not like Galileo, Occam’s Razor dictates that Lindzen is almost certainly wrong and, if he is, our collective failure to take mitigating action to minimise ACD will almost certainly prove to be very unpleasant.

      • Martin Lack | February 28, 2012 at 10:22 am |
        In my humble opinion,…..

        Martin, there’s nothing humble about your opinion

      • No, Martin, the conversation is not over. It is just getting started. You accuse Lindzen of “many misrepresentations.” Those are your words above. Certainly if you are accusing him of misrepresenting facts, you ought to be able to list them. Since you aren’t, I am going to assume that you can’t. And it is pretty strange that when questioned on it, you refuse to elaborate.

      • Who the hell do you think you are, John? Have you now morphed into Lindzen’s lawyer? I think it would be evident to the entire world that Lindzen can defend himself. For your information, however, to put your sound-bite back into context, I said “one of the many misrepresentations or omissions of relevant facts”.

        I’m sorry, mate, but, as has been repeatedly shown to be the case on my own blog, you’ve picked a fight with the wrong guy.

      • “As I am sure you actually realise, John, the misrepresentations of relevant facts are itemised in my questions.”

        Who am I? I am the guy that holds you accountable when you don’t tell the truth. What are Lindzen’s “misrepresentations”?
        Unlike your blog where you can delete my posts, and even your posts when I prove you wrong, you can’t delete my posts here.
        To be honest, I don’t even know whether Lindzen was misrepresenting facts. I just know that you accused him of such, so I wanted to find out the truth. So Martin, put up or shut up. What are Lindzen’s misrepresentations?

      • This is not a game, John. My email explains where I feel Lindzen is misleading people. Whether or not he intends it is immaterial (and I have very pointedly not accused him of doing it deliberately); it remains highly likely that that is the effect he is having on an awful lot of people.

        As I said, for those who cannot be bothered to read my long email, my post on my blog tomorrow will detail 17 different ways in which Lindzen is at odds with the consensus view of ACD.

      • Martin,
        There is a big difference between Lindzen being at odds with a “consensus” view and misrepresenting material facts. Accusing someone of misrepresentation is no game indeed. Perhaps you should retract your accusation, and just proceed on the grounds that you think Lindzen is wrong.

      • John, this is absolutely the last response I am going to make to you today on this site, so read it very carefully. I have said 2 or 3 times (maybe more) that I am not accusing Lindzen of willfully misleading people. Therefore, if it will stop you from wetting your trousers, I will gladly apologise for the one statement that you have quoted, which, on its own, might have been capable of being construed to indicate otherwise.

      • Martin,
        You come across as a typical AGW extremist- iow a pedantic twit.

      • Martin,

        How about those of us afflicted with MLAD?

        (Martin Lack Attention Deficit ) – as in paying attention to you is difficult.

    • There is a Lack of substance to Martin’s yammering.

      • Brilliant :)
        Mr Lack did seem somewhat blinded by his own genius. Still I suppose the dimmest bulb will dazzle if you are close enough.

      • Thank you, Mr. Eddy. And you are correct. It does seem that the great (in his mind), Lord Martin Lack, is suffering from a scorching self-illumination.

    • John Kosowski and Martin Lack

      There are no “misrepresentations” in Lindzen’s presentation. It is all pretty straightforward.

      Vaughan Pratt has attempted to find such “misrepresentation” in his critique above [February 28, 2012 at 5:29 am].

      This critique is very weak and ill-founded. I have rebutted it item-by-item [February 28, 2012 at 3:58 pm], refuting all his claims of “misrepresentation”.

      Dr. Curry has stated above that Lindzen’s slides were compelling and that this “may be the most effective seminar he has given on Global Warming”

      Dr Curry’s right on this one; Dr. Pratt is wrong.

      Max.

      • Agreed. So far I have not seen anyone cite a “misrepresentation” contained in Linzden’s presentation. That is why I asked Lack for specifics which, of course, he could not provide.

      • Sorry for the delay, John. I ran into an unexpected obstacle.

        If you accept my earlier apology, and you accept that I have not accused Lindzen of saying anything he knows to be untrue, the fact remains that a great deal of what he said would have – indeed clearly has – been very readily misunderstood by a generally non-scientific audience.

        Now then, to the unexpected obstacle, you asked me for an example of something that was potentially – not necessarily deliberately – misleading. In response, I cited the Keeling curve versus Temperature data; and you asked me which slide that was in the PDF. OK, well, I went to the PDF, and the offending slide is “missing”. I checked the May 2010 version of the talk; and it’s not in there either. So I watched the video embedded in this blog earlier; and there it is – at about 28 minutes and 30 seconds.

        Both the video and a screenshot of the “missing” slide are now appended to the relevant post on my blog. Therefore, even if Lindzen is not deliberately misleading anyone, he is clearly guilty of hypocrisy – which is what I very politely suggested in my original email to him last Thursday.

        How does that grab you?

      • Martin,
        So you are saying that you don’t like the graph that he chose because the units of CO2 concentration can be stretched to make the graph line up however the presenter chooses?
        And that makes Lindzen a hypocrite because he criticized doing that to make .2 C look significant?
        Do you have a more appropriate graph of CO2 and temperature that we should be viewing?
        Do you know why the graph was dropped?

      • This critique is very weak and ill-founded. I have rebutted it item-by-item [February 28, 2012 at 3:58 pm], refuting all his claims of “misrepresentation”.

        Max, I conceded that your rebuttal of my second item was correct in view of Lindzen’s specific wording of “global,” which I’d overlooked. However I pointed out the errors in each of your other three rebuttals, which you have yet to show are not errors.

        This has been a pattern with you. You say things that anyone can see are false, for example your claim that CO2 increases at 0.5%/year, which would entail the impossible result that 200 years ago CO2 would have been at 145 ppm. Your response is to ignore the objections and continue to repeat evident nonsense.
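        The arithmetic behind that rebuttal is easy to verify: compounding 0.5% per year backwards for 200 years from roughly the present-day concentration (394 ppmv, the figure used earlier in the thread rather than an official dataset value) gives about 145 ppmv, far below the ~280 ppmv pre-industrial level seen in the ice cores:

```python
# Back-extrapolate CO2 under the (rebutted) assumption of a constant
# 0.5%/year growth rate, starting from ~394 ppmv today.
current_ppmv = 394.0   # approximate concentration cited in the thread
rate = 0.005           # 0.5% per year
years_back = 200

implied_ppmv = current_ppmv / (1 + rate) ** years_back
print(round(implied_ppmv))  # ~145 ppmv, versus ~280 ppmv in the ice cores
```

        Hence the “impossible result”: a constant 0.5%/year growth rate is incompatible with CO2 having sat near 280 ppmv until a couple of centuries ago.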

    • “You come across as… a pedantic twit”
      Nope, not pedantic, hunter: I was just trying to be careful what I said.

      • How about just trying to be concise??

        Drop all the lengthy stuff about how clever you are and how you are going to smite the deniers and all that BS, and just focus on clearly stating what you think Lindzen did wrong.

        Example

        1. Lindzen stated x on slide y. This is wrong because of

        fact1 (with reference)
        fact2 (with reference)
        fact3 (with reference)

        I have criticised Lindzen before for over-ponderous delivery but at least he eventually gets to the point.

        Because if you can’t manage this simple stuff, I fear I will be obliged to agree with Hunter.

        See if you can do this

      • What is the matter with you people? I have not actually accused Lindzen of doing anything “wrong”. I just want him to explain to me why he says what he says (and omits to say so much). I realise now that my email was too long (but it is not my fault his talk was so readily capable of being misunderstood by non-scientists). That is why I have now reduced my questions to 17 simple statements. Lindzen would do the world a great service if he would just tell us all which, if any, he agrees with:
        1. The IPCC is too optimistic.
        2. Holocene climatic stability is now endangered.
        3. The ‘marketplace of ideas’ is a fallacy.
        4. The notion of a scientific conspiracy is an illusion.
        5. Some of your (Lindzen’s) graphs were potentially misleading.
        6. Given (2), post-Industrial temperature rise is significant.
        7. Given the inertia in the system, more warming is ‘in the pipeline’.
        8. Sceptics are always ‘going down the up escalator’.
        9. Therefore ‘global warming’ did not stop in 1998 (or at any other time).
        10. Neither the Sun nor volcanoes are now the dominant climate forcing.
        11. As CO2 is the only thing to have changed significantly, this is a ‘fair test’.
        12. ACD is inevitable because the Earth’s energy balance must be restored.
        13. Soon we will have to re-name the Glacier National Park in Montana.
        14. It would be sensible to move to a low/zero carbon economy ASAP.
        15. Environmental concern is based on palaeoclimatology not models.
        16. Climate “sceptics” are not like Galileo.
        17. Environmentalism is not the enemy of humanity.

        However, if, as I suspect, he agrees with none of the above, then he must stop invoking conspiracy theory and give us some evidence.

      • Brandon Shollenberger

        Martin Lack, I have a bit of advice for you. One of the worst ways you can try to get someone to listen to you is to start off by saying, “What’s the matter with you?” All it adds is mockery. As for your actual material, I find it peculiar you say you were “just trying to be careful” with what you said. Much of what you say seems to indicate the exact opposite. For example, you claim:

        I have not actually accused Lindzen of doing anything “wrong”.

        Yet just above you say:

        Therefore, even if Lindzen is not deliberately misleading anyone, he is clearly guilty of hypocrisy

        Misleading people and hypocrisy are both “wrong.” The only way to have these two comments not contradict is to claim being a hypocrite doesn’t count as doing something. Even if one accepts that claim is true (it isn’t), your wording does nothing to indicate it. The same problem is found in some of the 17 points you listed:

        8. Sceptics are always ‘going down the up escalator’.
        9. Therefore ‘global warming’ did not stop in 1998 (or at any other time).

        Your 8 is asking Lindzen whether or not he agrees with a gross and derogatory generalization, but that isn’t what I want to focus on. Your point 9 actually says that since skeptics are always blah, blah, blah, global warming did not stop in 1998. That makes no sense at all. What temperatures do is not connected to what skeptics may or may not do. You also say:

        11. As CO2 is the only thing to have changed significantly, this is a ‘fair test’.

        This point has a false premise as CO2 is not “the only thing to have changed significantly.” Unless you were just trying to catch Lindzen in some stupid, deceptive rhetorical trick, this line is horrible.

        16. Climate “sceptics” are not like Galileo.

        I’m sure skeptics are like Galileo in many ways, just like everyone shares many similarities with others. You’re asking him to agree or disagree with a comment which is too vague to be meaningful. And that ignores the fact you don’t know the truth about Galileo, but instead rely on some pretty picture myth of him.

        For the shortened version, you really need to work on clear and simple communication.

      • No-one is perfect – hypocrites included.

        40% change in CO2 is significant; nothing else has changed. I think my statement was entirely legitimate.

        Why won’t you deal with the implications of Lindzen being mistaken; after all – he is in an extreme minority.

      • Brandon Shollenberger

        Martin Lack, you say:

        40% change in CO2 is significant; nothing else has changed. I think my statement was entirely legitimate.

        You said CO2 is the only thing which changed. I contradicted you. You now repeat that CO2 is the only thing which has changed. Repeating yourself is obviously not going to convince me, so I’m not sure why you’d do it. I’ve discussed a number of other things which have changed on this very page, so you simply telling me they don’t exist won’t work.

        I recommend you try a little research on this subject. One easy option for you is to look at the discussions of IPCC estimated forcings which exist on this very page.

        If you do, you’ll find plenty of other things have changed.

        Why won’t you deal with the implications of Lindzen being mistaken; after all – he is in an extreme minority.

        I was responding to you. My comments dealt solely with things you had said independent of anything Lindzen had said. Given that, the reason I “won’t… deal with the implications of Lindzen being mistaken” is it isn’t relevant to anything I said. More specifically, I don’t care to derail a discussion I started.

      • Brandon, I’m sorry but, I can’t see where you contradicted me: What else apart from atmospheric CO2 concentrations has changed significantly (i.e. steadily) – increasing by 40% – since the Industrial Revolution?

        For “scepticism” to be worthy of any merit, you must have an alternative hypothesis capable of explaining all the change that has occurred since then; and is now accelerating ahead of IPCC predictions.

        When in a hole, we should stop digging – All the evidence indicates that CO2 is the dominant cause of ongoing warming (which has not stopped – see the “still going down the up escalator” animated graph on the SkepticalScience website – I’m fed up posting links to it so you will just have to find it). Therefore, now that we know that burning fossil fuels is causing the problem, we should stop doing it ASAP.

        Your “wait and see” attitude is just simply irrational.

      • Even more ‘Wow’

        Since when, and by whose authority, were you appointed as the Grand Inquisitor of AGW? Sent to cleanse us heretics from any non-conforming thoughts, and perhaps to purge our Souls as well.

        Suggest that you condense your 17 points down to 5 or fewer, and then it might be possible to have a rational debate.

        Otherwise you are beginning to sound like some religious doomsday prophet with fixed ideas (Because It Is Written by the IPCC!) and a vision of an imminent Climate Armageddon rather than anybody with sensible points to make.

        And your serial concentration on yourself as the topic of conversation is not a good sign for your persuasive skills.

      • Brandon Shollenberger

        Martin Lack, when you stop making things up about me, we can maybe have a discussion. Until then, this is my last response to you:

        Brandon, I’m sorry, but I can’t see where you contradicted me: What else apart from atmospheric CO2 concentrations has changed significantly (i.e. steadily) – increasing by 40% – since the Industrial Revolution?

        First, “significantly” and “steadily” are not interchangeable. Your parenthetical here is nonsensical. Second, it’s extremely easy to find sources discussing such, but if you need help, try this link I provided elsewhere on this page. You’ll quickly see many different things have changed, not just CO2, and it’s only by considering the combined effects of all these things we can hope to reach any sensible conclusion.

        For “scepticism” to be worthy of any merit, you must have an alternative hypothesis capable of explaining all the change that has occurred since then; and is now accelerating ahead of IPCC predictions.

        The part I made bold is dumbfounding. I don’t know what makes you think it’s true, but to me, the most likely source seems to be someone’s delusions.

        Therefore, now that we know that burning fossil fuels is causing the problem, we should stop doing it ASAP.

        This is either a horribly phrased comment, or a completely idiotic one. I’ll leave it to you, and other readers, to figure out which.

        Your “wait and see” attitude is just simply irrational.

        You have no way of possibly knowing that is my attitude toward global warming, yet you not only attribute it to me, you deride me over it. I have no idea why you did such, but it makes talking to you completely unappealing. At the point you flagrantly make things up about me, I have no reason to trust anything you say about anything else. If you can’t get the obvious right, how could I expect you to get anything else right?

      • Martin,
        If you are wrong about the coming “mass extinction event” and are successful in banning the use of fossil fuels, you will be the enemy of humanity just like Rachel Carson was an enemy of humanity for her “work” on getting DDT banned.
        One very successful part of Lindzen’s talk was his characterization of alarmists like you. You appeal to “consensus,” but you don’t know what the “consensus” is. Or perhaps you know, but are misleading people. Hansen’s 6C is not consensus. He is a contrarian.
        His 6C climate sensitivity is purely circular. In his 1988 paper, which I have already cited to you, he puts a sensitivity into his model that would explain the previous warming. He comes right out and admits it. And there is nothing wrong with doing that. But then he runs the model, it is not accurate over 20 years, and he changes the model to make it accurate. And the big “proof” for CO2 being the driver is “nothing else can explain the warming.”
        Then, of course, reality still isn’t matching the models, so we have to come up with other explanations besides, of course, the sensitivity being too high. Aerosols, that is it.
        And, Martin, how do Hansen’s grandchildren feel about him jetting all over the globe on $26,000 vacations collecting $500,000 “prizes?”

      • The notion of a scientific conspiracy is an illusion.

        Perhaps. The notion of scientific corruption, however, is anything but an illusion. Scientists, like anyone else, serve the interests of their paymasters – climate science being science in the pay of politics, and therefore in the service of politics. Exactly like tobacco company scientists worked in the interests of their employer.

    • Martin,
      Disagreeing with you is not misrepresentation.
      Who should we listen to, an internet expert or a professor?

      • Experts perform worse over the long term than the man in the street. The reason is simple. The expert assumes he knows the answer, even when he doesn’t, simply because he is an expert. The man in the street has the good sense to recognize that what we don’t know is inherently unpredictable.

        Experts are misled because they assume they know all there is to know on a subject, and that anything they don’t know must have a relatively minor effect. History tells us the opposite. Today’s scientists will be considered no different than superstitious, dark age alchemists 1000 years from now.

      • Vaughan Pratt

        Today’s scientists will be considered no different than superstitious, dark age alchemists 1000 years from now.

        You appear to be laboring under the delusion that prior to the last few centuries all scientists were alchemists.

        As undergraduates we were obliged to take a course in the history of science. We didn’t see the point at the time. In retrospect it’s clear they should have shocked us into reality by holding up fred berple as an example of what we would turn into without that course.

      • ferd berple

        The “man in the street” is more likely to get a prediction right than the “expert” for exactly the reason you have stated.

        Nassim Taleb covers precisely this point in his book The Black Swan.

        [Vaughan Pratt has very likely not read this book yet.]

        Max

  58. Rogelio escobar

    I think that CO2, like any other minor gas (NO2, O2, etc.), has absolutely NO effect on global temperatures in the atmosphere. All excessive heat potentially produced by a theoretical “excess” in the greenhouse is lost to space through equatorial regions, if I recall correctly, per Spencer et al. CO2 has been far higher during glaciation periods.

  59. Taking just one assertion from Professor Lindzen’s presentation (Page 11 of the PDF): “As Phil Jones acknowledged, there has been no statistically significant warming in 15 years.”

    To which 15 years does he refer? Could it be the 15 years from 1995 to 2010? If so, Peter Sinclair explains (to those who would listen) why reference to the ‘no statistically significant warming’ statement is disingenuous.

    • I’m sorry, I should have said ‘the 15 years from 1995 to 2009’ (in which global warming was statistically significant at the 90% confidence level). Warming in the 16 years from 1995 to 2010 was statistically significant at the 95% level. Silly me.

      • B – Do you agree that from 1995 to the present there has been no statistically-significant global warming?

        Jones: Yes, but only just.

        You seem a little confused about what Jones did say in that interview.

      • No, P. Solar, I am not confused. I suggest you read what I wrote, and then follow the link I offer you before trying to continue the ‘discussion’.
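
The 90%-vs-95% point in this exchange can be illustrated with a small sketch. The data below are synthetic (an assumed trend of about 0.012 °C/yr plus assumed interannual noise), not the actual HadCRUT record Jones was discussing; the sketch only shows the general mechanism that a short noisy series can fail a 5% significance test even when a real trend is present, and that extending the record tends to lower the p-value.

```python
# Sketch: significance of a linear trend on a short, noisy record.
# The slope and noise level are illustrative assumptions, not the
# HadCRUT data behind the Jones interview.
import numpy as np
from scipy import stats

true_slope = 0.012  # assumed warming, degrees C per year
noise_sd = 0.1      # assumed interannual variability, degrees C

def trend_p_value(n_years, seed=0):
    """OLS trend p-value for one synthetic n_years record."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    temps = true_slope * years + rng.normal(0.0, noise_sd, n_years)
    return stats.linregress(years, temps).pvalue

def fraction_significant(n_years, trials=2000, alpha=0.05):
    """Fraction of synthetic records whose trend passes the alpha test."""
    hits = sum(trend_p_value(n_years, seed=s) < alpha
               for s in range(trials))
    return hits / trials

# Longer records detect the same underlying trend more often.
f15 = fraction_significant(15)
f16 = fraction_significant(16)
print(f"significant at 5% over 15 years: {f15:.2f}")
print(f"significant at 5% over 16 years: {f16:.2f}")
```

With these assumed numbers, a meaningful share of 15-year records fails the 5% test purely because of noise, which is why “no statistically significant warming in 15 years” and “a real underlying trend” are not contradictory statements.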