Mid 20th Century Global(?) Warming

by Peter Webster

The mid-20th century temperature “bump” (peaking circa 1940) is an interesting feature of the temperature record.  This “bump” was discussed in an email from Tom Wigley to Phil Jones referring to a WUWT post that  discusses a paper by Thompson et al.

The issue of the mid century temperature bump was raised on a previous thread by Girma in the context of an email he sent to Kevin Trenberth (some excerpts provided below):

G: The following is my interpretation of the [global temperature] data.

G: In the last 100 years, the globe had TWO warming phases. The first was from 1910 to 1940 and the second was from 1970 to 2000, and their global warming rate was about 0.15 deg C per decade giving a warming of 0.45 deg C. In the intermediate 30-years period from 1940 to 1970, there was slight global cooling.

G: Based on these observed data, as the global warming rate of the two global warming phases were identical, the effect of human emission of CO2 for 60-years has not increased the global warming rate. Though CO2 is a greenhouse gas, the observed data says it has no effect on the global temperature trend. Observation should win theory all the time.

KT: The warming in the first phase was not global but focused in the North Atlantic. It was related to changes in the ocean. The warming in the southern hemisphere is more steadily upwards: no steadying off or down phase. So the patterns of change also matter.

Looking at the raw surface temperature (unhomogenized), it can be shown that the warming through the thirties and early forties was indeed global and not restricted to the North Atlantic.

The first figure (from Polyakov et al 2003) shows a time series of surface land temperature in the Arctic from 1880-2000. The numbers above the abscissa show the number of stations used in the compilation. The graph shows a gradually rising surface temperature trend over the 120 years of the temperature record, but the dominant feature is the nearly 2C (trough to ridge) warming from about 1920 to 1940, followed by a decline bottoming out in the mid-1960s. Thereafter the temperature trends upward to almost the same level as the previous peak.

The Polyakov et al paper received a brief mention in IPCC WG1 Chapter 3:

A slightly longer warm period [compared to the present], almost as warm as the present, was observed from the late 1920s to the early 1950s. Although data coverage was limited in the first half of the 20th century, the spatial pattern of the earlier warm period appears to have been different from that of the current warmth. In particular, the current warmth is partly linked to the Northern Annular Mode (NAM; see Section 3.6.4) and affects a broader region (Polyakov et al., 2003).

We can test Trenberth’s contention that the earlier warming “…was focused in the North Atlantic” and that “…The warming in the southern hemisphere is more steadily upwards: no steadying off or down phase…”

The second figure is a preliminary analysis using land station data (unhomogenized) kindly provided by Phil Jones of CRU. The figure shows the surface land station data clustered in areas (numbered: these are the World Meteorological Organization regions) for the period 1930-1950, relative to the mean of that period, between 30N and 45S. 5-year running averages are shown. A clear warming occurs up to the early and mid-1940s, followed by cooling through the 1950s and 1960s. Contrary to Trenberth’s statement, the warming that peaks around 1940 is not restricted to the North Atlantic, with the southern hemisphere showing the same signature. Thus, it is safe to say that the “bump” in the land surface temperature pattern was global.
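The processing described above can be sketched roughly as follows. This is a minimal illustration only, using a hypothetical single-region annual series; the actual CRU station data and WMO region definitions are not reproduced here.

```python
import numpy as np

def regional_anomaly(temps, years, base_start=1930, base_end=1950, window=5):
    """Anomalies relative to a base-period mean, then a centred running mean.

    temps: 1-D array of annual mean temperatures for one region
    years: matching 1-D array of years
    """
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    base = (years >= base_start) & (years <= base_end)
    anom = temps - temps[base].mean()           # anomaly w.r.t. the 1930-1950 mean
    kernel = np.ones(window) / window
    return np.convolve(anom, kernel, mode="valid")  # 5-year running average

# toy example: 31 years of synthetic data, 1925-1955
years = np.arange(1925, 1956)
temps = 14.0 + 0.01 * (years - 1925)
print(regional_anomaly(temps, years)[:3])
```

In the real analysis the same operation would be applied to each WMO-region cluster before plotting the curves together.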

So, a number of questions arise:

  • Even though the IPCC noted the mid-20th century warming, why was it not highlighted as something worth investigating? Various explanations, unsatisfying to me at least, are given in Ch 9 on attribution.
  • Why is the southern hemisphere drop in temperature from 1945 onward much the same as the northern hemisphere’s? The standard argument for the drop is the increase in aerosols with enhanced industrial activity after WW2. But this seems strange, as even now the ratio of aerosols between hemispheres is 2:1 (NH:SH).

It would seem to me that there were large-scale natural oscillations occurring during the earlier part of the 20th century. Clearly, examination of these oscillations, during a time of relatively large data coverage, might help determine how much of the warming in the latter half of the 20th century is due to anthropogenic influences and how much to natural coupled ocean-atmosphere modulations.

While this preliminary analysis focuses on land temperatures, there are many  issues surrounding the sea surface temperature data during this period and we will return to this particular issue at another time.  For example, Thompson et al (2008) attempted to explain the sea-surface  temperature drop after 1945 as an uncorrected measurement bias caused by the change from bucket to engine-intake temperature measurement:

Data sets used to monitor the Earth’s climate indicate that the surface of the Earth warmed from 1910 to 1940, cooled slightly from 1940 to 1970, and then warmed markedly from 1970 onward. The weak cooling apparent in the middle part of the century has been interpreted in the context of a variety of physical factors, such as atmosphere–ocean interactions and anthropogenic emissions of sulphate aerosols. Here we call attention to a previously overlooked discontinuity in the record at 1945, which is a prominent feature of the cooling trend in the mid-twentieth century. The discontinuity is evident in published versions of the global-mean temperature time series, but stands out more clearly after the data are filtered for the effects of internal climate variability. We argue that the abrupt temperature drop of 0.3 °C in 1945 is the apparent result of uncorrected instrumental biases in the sea surface temperature record. Corrections for the discontinuity are expected to alter the character of mid-twentieth century temperature variability but not estimates of the century-long trend in global-mean temperatures.

Yet, the surface land data shows a similar drop in phase with the drop found over the ocean areas.   I understand that CRU is revising its ocean surface temperature analysis; it will be interesting to see how the mid-20th century ocean temperatures change.

205 responses to “Mid 20th Century Global(?) Warming”

  1. Peter,

    No one looked at the physical changes occurring to the planet years ago and fluffed it off to AGW.
    These physical changes have changed the weather systems and heat of the oceans.
    Now the warm waters are in the Arctic and the cold air is an evaporation machine in full operation.

  2. Peter,
    I missed adding the record-breaking cold temperatures worldwide. These are not a short-term occurrence.

  3. Climategate emails? Really? Why does it even matter if an email written at some point in the past is correct or not?

    Surely the crux of the issue is:

    “Various unsatisfying explanations (to me at least) for this are given in Ch 9 on attribution..”

    Why are these explanations unconvincing? The rest of the post is devoted to picking apart an email yet the actual published science only merits a single hand waving sentence.

    • “Modelling studies are also in moderately good agreement with observations during the first half of the 20th century when both anthropogenic and natural forcings are considered…” This isn’t even an explanation. It just establishes a relatively low standard (“moderately good”) for the first half of the century.

      • read the threads on detection and attribution, where this issue was discussed extensively

      • Since you seem to be asking me personally to read them, let me just mention that I read them with great interest. I don’t remember everything, though.

    • steven mosher

      One of the interesting things about this paper is that it collates hitherto unused sources to characterize the warming in the arctic.

      Since I’ve spent far too much of my spare time looking at the data used by GISS and CRU, I can tell you a couple things.

      1. The differences between CRU and GISS are largely driven by differences in how they handle the arctic.

      2. The methodologies of CRU and GISS are “brittle” WRT how the arctic is handled, primarily because of the high variability of stations that are located near the ice/land boundary, which can also be a land/water boundary.

      3. The data currently used by both agencies is relatively sparse (in stations per sq km).

      4. The combination of sparse stations and high variability is a problem

      5. The combination of under sampling and polar amplification is a problem.

      So it’s interesting that a paper that samples this region more completely should show comparable or higher warming in the mid century than it does today.

      The handwaving away of this finding does not engender trust. Of course, AGW is still a sound theory, the question remains what are the amplitudes and frequencies of unforced variation. That seems like a real science question and one that some people would rather not fund.

  4. Interesting post- qualification of the natural cycles (especially the mid-century oscillation) is critical before any attribution of an anthropogenic fingerprint can be established.

    On the face of it, there would seem to be a number of naturally occurring cycles ‘sitting’ on a baseline of slight warming (exit of the LIA).

    Taking this assumption as read, would it not follow then that the recent warming and the ‘records’ were in fact nothing but natural?

    Could you link or send the raw temp data you used? It would be interesting to attempt to ‘remove’ the baseline warming and then compare the mid and latter century temperature oscillations- as if they’re statistically similar it could suggest that the whole cAGW theory is a load of hot air (bad joke- sorry!)

  5. This fits in with supporting the reasonable null hypothesis, not the phony one Trenberth and pals are pushing.

    • hunter I trust you are following the Oz – lots of dam stuff for you. The thot plickens…

      • Only in plot thickened minds:

        But a manual for Wivenhoe Dam appears to vindicate decisions made by the utility not to drastically increase spills on the weekend before the floods.

        The size of releases made from January 8 to January 10 show the operator followed a flood mitigation strategy based on a reasonable expectation that drastic releases would not be necessary because dam levels would not threaten an emergency spillway.

  6. Re-reading the IPCC Ch 9, I find this statement: “The rate of warming over the last 50 years is almost double that over the last 100 years (0.13°C ± 0.03°C vs 0.07°C ± 0.02°C per decade; Chapter 3)”

    This is rather amazingly misleading. Of course, an average rate over a period of ups and downs will be lower than a period that is mostly up. The effect of this is to obscure the fact that early 20th century warming was approximately as fast as the later warming period.

    • Yep, the average of half of a sine wave is definitely different from the average of the entire wave!
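The arithmetic point in this exchange is easy to check numerically: a century that is flat for its first half and warms steadily in its second will show a 50-year trend roughly double the 100-year trend, even though nothing accelerated. A toy illustration, with entirely synthetic numbers rather than the actual record:

```python
import numpy as np

# Synthetic century: flat for 50 years, then warming at 0.13 C per decade.
years = np.arange(1906, 2006)
temp = np.where(years < 1956, 0.0, 0.013 * (years - 1956))

full_trend = np.polyfit(years, temp, 1)[0] * 10          # deg C/decade, all 100 yr
late_trend = np.polyfit(years[-50:], temp[-50:], 1)[0] * 10  # last 50 yr only

# the 50-year trend comes out roughly double the 100-year trend
print(round(full_trend, 3), round(late_trend, 3))
```

So the IPCC comparison quoted above is arithmetically guaranteed whenever the later sub-period is mostly "up", which is the commenter's point.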

  7. Thank you, Peter, for your excellent presentation.

    And thank you, Judith, for allowing the information to be presented here.

  8. Part of the issue is that if you are going to talk about aerosols, NH and SH are not as useful as you might think because two major sources, the Sahara and the Amazon are right on the equator, so which side are you going to put them on??

    Better would be to use three bands, tropical, northern temperate and southern temperate zones

  9. Labmunkey

    It would be interesting to attempt to ‘remove’ the baseline warming and then compare the mid and latter century temperature oscillations- as if they’re statistically similar it could suggest that the whole cAGW theory is a load of hot air (bad joke- sorry!)

    It is not required to remove the “baseline warming”. By plotting the 30-year trends for the early and late 20th century warming, you find that they are almost identical at about 0.15 deg C per decade, as shown in the following plot:


    As the previous warming rate has not been exceeded, there is nothing unusual with the current warming rate.

    As the earlier warming was natural, so is the current one.
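The two-trend comparison described in this comment amounts to an ordinary least-squares fit over each 30-year window. A sketch of that calculation follows; the series below is synthetic, standing in for the actual global anomaly data, and `decadal_trend` is a hypothetical helper name.

```python
import numpy as np

def decadal_trend(years, temps, start, end):
    """Least-squares trend in deg C per decade over [start, end]."""
    m = (years >= start) & (years <= end)
    return np.polyfit(years[m], temps[m], 1)[0] * 10

# synthetic anomaly series with two warming phases and a mid-century dip
years = np.arange(1900, 2001)
temps = np.piecewise(years.astype(float),
                     [years <= 1910,
                      (years > 1910) & (years <= 1940),
                      (years > 1940) & (years <= 1970),
                      years > 1970],
                     [0.0,
                      lambda y: 0.015 * (y - 1910),
                      lambda y: 0.45 - 0.005 * (y - 1940),
                      lambda y: 0.30 + 0.015 * (y - 1970)])

print(decadal_trend(years, temps, 1910, 1940))  # ~0.15
print(decadal_trend(years, temps, 1970, 2000))  # ~0.15
```

Equal trends in a construction like this show only that the rates match; by themselves they say nothing about the causes of either phase.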

    • randomengineer

      Wouldn’t plotting 2nd derivatives for temp and CO2 be definitive in showing that they aren’t related?

      Temp would look like:
      /\ /
      / \ /
      —- ————-\ / ————
      1910 1940 1970

      There would be no negative excursion for CO2.

      • randomengineer

        Graph came out crappy. Forgot HTML can’t deal with spaces. Drat.

        Idea would be a positive excursion (temp rise) followed by negative (cooling) followed by positive (warming.)

      • Johannes Rexx

        Try non-breaking spaces next time.
        ampersand nbsp

      • … followed by a semi-colon.

        aaaaa aaaaaa

  10. Which is why the 2nd derivative is never discussed in AGW theory. The *acceleration* of warming is critical to proving that AGW is a fact. The natural variations in the Earth’s climate generally show relatively smooth ramps in either the up or down direction. Only their periodicity/frequency changes have any 2nd derivative component. Thus, amplitude accelerations generally occur naturally only during transitions in or out of snowball earth conditions. Therefore, AGW must show acceleration (2nd deriv.) in order to convince me of the existence of a problem. The problem that Trenberth faces is that “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.” IMHO, he here alludes not just to the lack of warming but also to the lack of acceleration of warming.
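The second-derivative test proposed in the comments above amounts to smoothing an annual series and then taking discrete first and second differences. A minimal sketch, on synthetic data; `smoothed_derivatives` is a hypothetical helper, not an established climate routine:

```python
import numpy as np

def smoothed_derivatives(series, window=11):
    """Running-mean smooth, then discrete 1st and 2nd differences."""
    kernel = np.ones(window) / window
    smooth = np.convolve(series, kernel, mode="valid")
    d1 = np.diff(smooth)         # rate of change (warming rate)
    d2 = np.diff(smooth, n=2)    # acceleration of warming
    return smooth, d1, d2

# toy check: a steady linear warming has a constant 1st difference
# and a (numerically) zero 2nd difference
t = np.arange(100, dtype=float)
linear = 0.01 * t
_, d1, d2 = smoothed_derivatives(linear)
print(d1[0], np.abs(d2).max())
```

On real, noisy temperature data the choice of smoothing window dominates what the second difference looks like, which is why any such test needs to be stated carefully.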

  11. Presumably that 30 year base line has just shifted, as convention demands?

    If we have a few years of negative anomalies or declining anomalies, what then…

    Phil Jones was hoping to retire before baselines got shifted ;)

    • Steven Mosher

      Jones was concerned about changing the baseline because of the way it impacts the station COUNT in his method (the common anomaly period). His selection of 1961-1990 is driven by the following objective facts.

      If you decide to use a common anomaly method, you should select your period where the station count is highest. This happens to be the period from 1950 to 1990. Further, if you look at all possible 30-year periods in the whole record and select the period where the average annual global station count is the highest, you will select 1953-1982. HOWEVER, if you look at the SH only, you will pick the 1961-1990 period. So using 1961-1990 gives you the best sampling for the SH and good sampling for the entire globe.

      Now, if you compare the results for various selections of anomaly period (1951-1980, 52-81, 53-82, etc.), you will see that the trend estimations are invariant WRT the period selected. What changes is basically the noise, and it doesn’t change that much. If you select base periods with fewer stations, then you get noisier data. Bottom line: 1961-1990 is a great period to use to get the most information out of the SH (which is generally less well sampled than the NH).
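The invariance described here follows from simple algebra: changing the base period subtracts a different constant from the series, which shifts the anomalies up or down but leaves any fitted slope unchanged. A single-station sketch with synthetic data (the base periods are the ones mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
temps = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)

def anomalies(years, temps, base_start, base_end):
    """Anomaly series relative to the mean over a chosen base period."""
    base = (years >= base_start) & (years <= base_end)
    return temps - temps[base].mean()

for base in [(1951, 1980), (1961, 1990), (1953, 1982)]:
    anom = anomalies(years, temps, *base)
    trend = np.polyfit(years, anom, 1)[0] * 10
    print(base, round(trend, 4))   # same trend each time; only the offset moves
```

What the base period does affect in a multi-station average, as Mosher notes, is how many stations contribute to the common period, and hence the noise level, not the trend itself.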

      • And in 1990, the station count was cut by 60%, resulting in an immediate ~1.5°C bump in the trend line.

        Amazing, that!

    • David L. Hagen

      Barry re 30 year base line -See Roy Spencer
      Dec. 2010 UAH Global Temperature Update: +0.18 deg. C

      we have just switched from a 20 year base period (1979 – 1998) to a more traditional 30 year base period (1981-2010) like that NOAA uses for climate “normals”. . . .
      1) because the most recent decade averaged somewhat warmer than the previous two decades, the anomaly values will be about 0.1 deg. C lower than they used to be. . . .
      2) the 30-year average annual cycle shape will be somewhat different, and more representative of “normal” of the satellite record than with 20 years; as a result, the month-to-month changes in the anomalies might be slightly less “erratic” in appearance. . . .

  12. Dear Dr Curry

    Just come to say Thank You, and Congratulations for being so true, for loving Science so much, and for being so brave:

    I’ve just read that Makarieva has had difficulties in publishing her theories about the formation of the wind and the role of the water vapor, and the effect of it all on global climate. I have read the “learned paper” needed two accredited scientist referees, to study her work and vouch for its publication. And that you accepted the moment you were approached, but that all other scientists asked have declined. So the Makarieva paper is unlikely to be published in the near future.
    Thank you again for being the way you are.
    As we say in Spain
    “Los tienes bien puestos”
    (It seems a bit crude, but believe me, this is one of the best things that can be said of anyone. At least in Spain)

  13. This is a long overdue area that is ripe for exploration. The rate of the temperature rise from about 1910 to 1940 in the HADCRUT data is at least as great as the warming from 1978-1998. If the former rate of warming was “natural” what makes the same rate 68 years later “unprecedented”?

    • yes this is the elephant in the room! there is much discussion of the cooling after 1940, but the warming is of equal interest.

      • Moreover there was a relatively small CO2 increase during this period. The first and second IPCC reports attributed this warming, and indeed all the warming in the last 200 years, to human factors. The later switch to the warming in “the last 50 years” (during which it only warmed for about 20 years) was a major retreat for AGW that no one in the press and policy realms seemed to notice.

      • this is interesting, i need to check this out in some detail, the retreat (in terms of period of anthropogenic attribution) has not been previously spotted or publicized, to my knowledge?

      • Been picked up at several blogs, IIRC, already. Just don’t ask me for references!!

      • In a mood of proselytising zeal I once lectured Tamino about the need to engage one’s critics or questioners with generosity: he ended up calculating the relative CO2 forcings for the two 20th C warming periods, 0.25 W/m^2 and 2 W/m^2. However, for reasons I could not understand, my attempt to compare the two was flawed by my use of wrong temperature datasets. Apparently.


      • They have very sensitive and subtle editors on staff there.

      • Monckton made much of the identical slopes of the late 19th C warming, the warming to 1940, and that to 2000. Obviously only the latter could have been CO2-related.
        He went to the wall to try to make Pachauri admit the comparison of a 120-yr trend to the latest 30-yr rise was deliberately misleading, also. No luck.

  14. Peter Webster: Regarding the Thompson et al (2009) findings, the 1945 discontinuity also appears in cloud cover and nighttime marine air temperature data, and inverted in global wind speed data. I discussed and illustrated them in the following posts:

    It’s tough to explain the shifts in the other datasets as changes from bucket to engine-intake temperature measurements.

    One of these days I’ll get around to finishing a post to show why I believe there’s a missing La Nina in the ENSO data around that time. The SPCZ SST anomalies and other western Pacific SST subsets appear to be responding to a significant La Nina, but the NINO3.4 and CTI data don’t indicate one.

  15. Has any scientist done research on the possible cooling effects of WW2? Has this been ruled out?

    Oil spilled onto the oceans: one American oil company had 67 oil tankers sent to the bottom during WW2 – conservatively the equivalent of 5 Exxon Valdez spills.

    US Navy and Coast Guard vessels lost:


    They did not call it Iron Bottom Sound for no reason. My father’s ship sent 150′ of bow to the bottom there, along with almost all of the fuel oil.

    I’m still trying to get a number for TNT. The Russians claim to have used 660,000 tons of TNT. The 1st atomic bomb was equivalent to 12 to 16 kilotons of TNT. That’s just getting started. It was an intense global combustion event.

    • Under your theory, wouldn’t the period of the Vietnam war have had a similar effect? There was a greater amount of ordnance dropped during that war/period than during WW2

      • It’s not a theory. I just want to know if somebody has looked at it.

        I’ve read the ordnance claim before. Is that versus what the United States dropped, or versus what all participants dropped? We obviously dropped the most, but the other allies and the axis powers also dropped a fair amount. And then there is artillery, both floating and land-based. And there is scorched earth. A very large amount of cellulose was burned. And small munitions. And all of this was global versus concentrated in SE Asia.

        The Russian front, as a for instance, was in a good position to spread gunk on snow and ice. Oil on the oceans, there was a lot of it. As a for instance, would an oil sheen affect ENSO? I do not have a theory about it; I’m just curious.

      • What was unique with WW2 was that an enormous amount of the ordnance was exploded in the oceans, and all over the world. The total amount of nitrogenous compounds, metals, organics, hydrocarbons etc that went into the oceans in 1939-45 was certainly measured in many tens of millions of tons. Much of this was in the very nutrient-poor tropical Pacific. The Japanese shipping losses in the Pacific alone were more than 10,000,000 tons. Add in allied losses, depth charges, torpedoes, mines, bombs, shell etc and it does not seem unlikely that marine productivity, plankton, aerosols etc would have been significantly affected.

      • Just look at the number of cities that were significantly reduced by bombing campaigns, and the scorched earth of the Russian retreat. The amount of cellulose that was burned during WW2 is staggering.

        When I first asked about this, the question was directed to Vaughan Pratt. My father’s 75mm rifle fired more than 12,000 rounds on Iwo Jima. There were four 75mms in the platoon, and they all fired a similar number of rounds. Here is what one of the ~48,000 explosions looked like: his platoon in combat against a fortified coastal-gun position:


        This was the least common, by a wide margin, USMC artillery piece operating on the island.

    • JCH I may have missed it, but I can’t remember seeing anything that deals with WW2. It seems to me it must have produced all sorts of perturbations of the environment, worthy of serious study, if you’re worried about the climate.

      • sorry that should have been “deals in depth with WW2”

        I see references, but they seem to gloss.

  16. It is evident that there is a huge AGW signal to noise problem with natural cycles furnishing the noise. Comparing the two rates of warming and concluding there has been no AGW is a non sequitur. Claiming that one warming period was largely natural while the second largely or significantly anthropogenic is speculative wishful thinking. Claiming that the earlier warming trend was confined to the northern hemisphere is obviously another example of Trenberth’s dismissive nonsense.

    Contending that a change in climate is within the range of Holocene natural variability doesn’t explain the origins, magnitude, direction or duration of that change. Comparing past with present offers little in the way of explanatory value. It does, however, offer some comfortable reassurance. At the current rate of warming GMST may once again reach the level of the Roman Warm Period. That shouldn’t take more than 50 years. If we are lucky, it will happen sooner. After a few more decades we might even match the Minoan Warm Period optimum. In a little less than a century we will be right back where we were 3,000 years ago and life on Earth as we know it will end as it did then.

    Quite troublesome is the current state of the sun. If past is prologue, we just may be slam-dunked into a Maunder Minimum. If so, it may take a century or more to get back to where we were a century ago. So much uncertainty! But just to be on the safe side I’ll see what I can do to enlarge my carbon footprint. Every little bit counts, don’t you know?

  17. The 30 year satellite record also shows global warming most pronounced in the NH and least pronounced in the SH by a margin of almost 3:1.

    So I’m at a loss as to how Dr Trenberth could conclude the late 20th century warming trend was different from the early 20th century warming trend using the argument that the early 20th century warming trend was confined predominantly to the NH.

    It seems to me the temperature data for the antarctic prior to WWII is fairly sparse.

    • This probably results from most of the N. hemis being land, while most of the S is ocean. The ocean will damp out any warming trend. It has nothing to do with bucket adjustments or Trenberth’s N. Atlantic nonsense. Mann et al have tried to pin the MWP on N. Atlantic oscillations, as if such could last for hundreds of years (and conveniently ignoring data from China and North America and Japan and NZ and…etc.)

  18. Wrt the Wigley/Jones email exchange, I’ve really tried to keep an open mind while re-reading it, but I really can’t see how it can be interpreted as being anything other than a discussion of finding ways to ‘hide the blip’.

  19. Roger Andrews

    Peter Webster asks two questions. First: “Even though the IPCC noted the mid-20th century warming, why was it not highlighted as something worth investigating?”

    Well, the answer to this one is simple. It doesn’t jibe with AGW theory. AGW theory can’t explain the mid-20th century warming, and certainly climate models can’t hindcast it, so it was convenient not to investigate it too closely.

    But what do we get when we do investigate it? We find the following:

    1. There was a period of “global” warming between about 1910 and 1940 followed by a period of “global” cooling between about 1940 and 1970. However, these warming and cooling periods were confined almost entirely to high northern latitudes. They are barely detectable at the Equator.

    2. The warming/cooling periods were strongest in the Arctic, began in the Arctic and then moved south. They took several years to travel down to the Tropics, and their amplitudes were much diminished by the time they got there.

    3. The strong 1910-40 Arctic warming coincided with an increase in total radiative forcing from all sources of only about 0.3 watts/sq m, according to GISS. So did the strong 1940-70 cooling.

    The obvious conclusion here is that the pre/post- 1940 Arctic warming and cooling periods were natural events that began in, and spread south from, the Arctic. (What caused them? Maybe variations in sea ice extent, maybe not, but they certainly weren’t caused by man-made GHG emissions.)

    The second question was: “Why is the southern hemisphere drop in temperature from 1945+ much the same as the northern hemisphere?” The answer to this one is simple too. The data have been massaged. If we look at the unmassaged data we find that surface air temperatures actually decreased by about 0.4C in the Northern Hemisphere but only by about 0.1C in the Southern between 1940 and 1970.

    • The cosmic ray theory says high to high-mid latitudes should respond most to cosmic rays, and cooler temps there would spread south over time by mixing. Note that the cosmic ray effect is diminished at the S. Pole because it is all covered with ice (the albedo effect changes the result). Increased cloud cover in the Arctic will cool the ocean during summer, whereas clouds at the S. Pole reduce shortwave reflection.

      • Most climatologists misinterpret the ‘Arctic effect’. The Arctic’s albedo is a factor, but not the critical one. Greenland’s summer temperatures during the last 100 years (a continuous record is available for both east and west coasts) have been within a temperature band of 2 degrees C (+5 to +7 C), while the winter temperature band for the same period is twice as wide (-9 to -5 C). This makes winter the far more critical factor; the albedo doesn’t matter because of low insolation, while the cloud cover makes a positive contribution.
        Second graph in: http://www.vukcevic.talktalk.net/LFC10.htm
        There is no correlation between either summer or winter Greenland’s temperatures and CR impact recovered from the Dye3 and NGRIP data.
        There is more about Arctic in: http://www.vukcevic.talktalk.net/NFC1.htm

      • Roger Andrews

        Maybe I’m missing something, but your graphs seem to show a strong correlation.

      • Yes with the CET (Central England Temperature) not Greenland, however that is not due to CRs but another factor:

      • Roger Andrews


        It would be nice if we could attribute Arctic temperature changes to cosmic rays, and my understanding is that possible links between the two have already been established. However, I can’t find any cosmic ray reconstructions covering the critical 1910-70 warming and cooling periods. Do you know of any?

      • Arctic records are no good
        (see: http://www.vukcevic.talktalk.net/CET&10Be.htm)
        The Antarctic data are not contaminated and hence far superior

      • Here are the Arctic’s annual data starting 1910 (read horizontally), normalised values:
        5.908 5.759 6.144 6.406 5.684 4.59 3.956 4.155 5.137 6.045 6.144 5.846 6.107 6.679 6.517 5.572 4.429 3.372 3.111 4.143 5.038 5.15 5.1 5.175 5.535 5.958 4.366 4.665 4.926 5.896 5.672 6.02 5.946 6.107 4.292 3.011 3.745 4.13 4.491 4.13 4.379 4.752 4.54 4.441 3.222 1.283 1.121 1.295 1.693 2.414 3.235 3.658 4.553 4.727 4.429 3.446 2.8 1.842 2.812 4.217 4.404

      • Roger Andrews

        Thank you. Are these the good CR data or the bad data?

      • The data numbers are not absolute values, just normalised to show the change. They are the best available that I know of, but still contaminated. The only good CR data are those collected by NASA via satellite (1970+). Even so, cloud coverage varies only about ±1-2% from year to year (well within the uncertainty band), while the neutron count varies about 10-15% between the max and min of a solar cycle.

    • Sea ice certainly decreased during the 1915-1940 period. Norsk Polarinstitutt has collected the rather scrappy data available, and from this it seems that, at least in the Atlantic sector, sea ice in the 1930s was about as reduced as it has been in the last decade:


      • Craig Goodrich

        Well, we know a couple of things about the Arctic —
        — the greatest ice/temperature variation year-to-year is in the winter.
        — winter ice formation (and deformation) strongly affects summer ice extent.
        — ocean currents and temperatures have a much greater effect on ice formation and distribution than atmospheric temperatures.

        Anybody have data on the North Atlantic and Arctic Oscillations for that period?

    • Tropical variation is always minimal. The tropics are very stable. Only similar latitudes are valid comparisons.

  20. Dr. Curry
    It may be of some interest to note that sudden climatic changes are often associated with sudden and extensive changes in the Earth’s secular magnetic field (Bz); this is also the case for the Equatorial Pacific ENSO region in the mid 20th century:
    Up to 1940 (period of temperature rise) the region’s secular variation was benign; then in 1940 ‘all Hell breaks loose’ (sudden unexpected temperature fall), only to stabilise by the mid-1980s (temperature on the rise again). The current secular Bz is showing only a small increase in activity, so one wonders whether a new cooling period is due.
    Here I show relevant changes from 1930 to 1985
    For more details you can also see:
    Your comment is welcome!

  21. It strikes me that a significant omission from this post and thread is a discussion of the greater role of solar forcing during the first warming phase than the second. For detailed discussion of this and other aspects of solar/climate relationships, the comprehensive review by Gray et al is informative –

    Solar Influences On Climate

    To cite a relevant passage from the paper, “Model simulations of twentieth century climate that include all the major, known forcings (solar, volcanoes, GHGs, aerosols, and ozone), together with the detection/attribution techniques based on observed patterns, have shown that most of the global warming in the first half of the twentieth century was natural in origin, and much of this can be attributed to an increase in solar forcing [Tett et al., 2002; Stott et al., 2000, 2003; Shiogama et al., 2006; Meehl et al., 2004; Knutson et al., 2006; Hegerl et al., 2003; IPCC, 2007]. These same studies and others [e.g., North and Stevens, 1998] also concluded that most of the warming in the latter twentieth and early 21st centuries was due to increasing GHGs that have overwhelmed any natural changes in solar forcing. Results for the past 20 years continue to indicate that solar forcing is playing at most a weak role in current global temperature trends [Lockwood and Fröhlich, 2007].”

    The review also discusses uncertainties surrounding attempts at precise estimates of the solar contribution.

    In addition, it reproduces the graphs of twentieth century warming on both continental and regional levels (Figure 28), which tend to support Trenberth’s contention that regions in proximity to North Atlantic climate variation (North America, Europe) contributed disproportionately to early twentieth century warming.

    It is my sense that part of the problem of comparing the “two phases” is the spike in the middle (around 1945). If this is included in the “early phase”, the warming rate is accentuated, but if it is omitted, the warming trend does appear to be somewhat lower than during the post-1970 era. The variations in ENSO, PDO, and AMO appear capable of explaining some of that upward jump (and by inference, part of the downward slide subsequently), and aerosol cooling (with some documentation in terms of aerosol optical depth) may explain much of the remaining flat interval, but as Peter Webster has noted, greater hemispheric differences might have been expected if aerosols were the complete story.

    • Fred, you raise important points regarding attribution of this feature, which were discussed on the earlier attribution threads. The main concern I have is the status of the global surface temperature data sets: progressive analyses continue to work towards minimizing this feature, whereas it is very evident in the raw data.

    • Steven Mosher

      Fred, just to further Judith’s point. What one sees in the raw data (let’s take SSTs) is generally more variability than you see in the processed data. When you actually look at the “raw” source data, what you see is a sparse and (seasonally) confounded data collection process. This source data is then processed (by EOFs) and one ends up with global maps, fields which of mathematical necessity have had their variability reduced.

      Not sure how one solves that problem

      • Steven,
        I still question ARGO having to change ALL their programs due to the data ALL showed in the lower range.
        Now vast areas even with the program change are dropping.

        Seem to remember that AGW was good for just about anyone for a grant.

      • Steven Mosher

        I’ll suggest you go get the data and look at it for yourself. Spend more time looking at the actual work done and more time trying to understand it. Then your “questions” will have more weight.

        That’s kinda what I did with the land data. To date Bob Tisdale is the only private citizen I know who has taken the time to start the process of wading through the actual data. We need more Bobs.

        Since you seem interested in things like salinity, there is also an issue there. In some areas GCMs get the salinity grossly wrong. I don’t have a ready reference but it was covered at AGU.

      • Roger Andrews

        As a private citizen who has waded through the data arguably as much as Bob Tisdale, allow me to make the following points:

        First, Dr. Curry is only partially correct when she observes in the post above that “progressive analyses continue to work towards minimizing (the mid-20th century warming) whereas it is very evident in the raw data”. It isn’t “analyses” that do this, it’s “corrections”. “Corrections” to major data sets in fact always result in a closer match between observations and climate models or between observations and what AGW theory says should have happened. Other examples include SSTs, tropospheric temperatures and even the tide gauge records.

        Second, suppression of the mid-20th century warming is only a comparatively minor problem. A bigger one is the large surface warming differential between the hemispheres, which is also very evident in the raw data, and also something else that AGW theory can’t explain and climate models can’t replicate. However HadCRUT3, the IPCC’s official surface air temperature time series, doesn’t show this differential. Why not? Because “corrections” have succeeded in doing away with it.

        But the biggest problem is HadCRUT3 itself. HadCRUT3 isn’t actually an air temperature series at all, but a mixture of surface air temperature and SST data. These data are mixed because it’s assumed in the total absence of proof that SSTs are valid long-term air temperature proxies (which in fact they aren’t). The raw data have also been heavily “corrected” using adjustments that in the case of air temperatures are suspect and in the case of the SSTs are known to be wrong (Thompson et al. 2008). In other words, the IPCC has based its conclusions on a surface air temperature time series that is a heavily and incorrectly-adjusted mixture of incompatible data sets.

        None of this of course proves that AGW isn’t happening, but we certainly aren’t going to improve our understanding of it while we continue to “verify” climate models against a series like HadCRUT3, which can only charitably be described as inadequate.

      • The North/South hemisphere difference ‘could be’ attributed, in my view, to these factors:
        – North mainly land, a poor retainer of absorbed heat; South mainly water, with high thermal capacity.
        – Arctic: mainly ocean, the Transpolar current, a number of gyres and significant heat exchange with the North Atlantic. Antarctica: an isolated land mass, with the Circumpolar current acting as the continent’s temperature flywheel.
        – Magnetic configuration: in the North, the push-pull of the Hudson Bay and Siberia poles, with undulating change over the last 200 years; in the South, a smooth fall-off over the same period.
        Differences can’t properly be, and SHOULD NOT be, reconciled in a single global temperature number!

      • Vukcevic,

        I have found that the thermal energy of the oceans has moved to the colder extremes. This completely changes the weather patterns, generating MASSIVE evaporation and colder land temperatures.

        Currently a report is coming out that will state that ocean levels are dropping. This is down to two factors: the compression of colder water and the massive evaporation are lowering the ocean levels.

      • Heat accumulated in the equatorial area during the last 20-30 years eventually finds its way to the polar regions (the world ocean conveyor belt), but it does take time: about 1500 years for one round trip.
        Climate (AGW) science may be in a hurry; the oceans are not.

      • Vukcevic,
        Due to the salinity changes, less heat is recharging the equatorial ocean areas, which has shifted ocean heat to the polar regions. This started 40 years ago.

      • Steven Mosher

        “The raw data have also been heavily “corrected” using adjustments that in the case of air temperatures are suspect .”

        I don’t know how you come to that conclusion, since the description of the adjustment procedure was not released to those of us who FOIAed it, since the only study of the adjustments showed that the adjustments were minor, and since if one uses the same raw data as CRU and the same method the basic answer is slightly lower than CRU’s.


      • You don’t need to know how the corrections were done. All you have to do is compare the corrected records with the raw records. When the corrected records fairly consistently show more warming than the raw records (e.g. the USA, Australia, New Zealand, the Southern Hemisphere as a whole), or suppress inconvenient wiggles (e.g. the mid-20th century warming), and where these corrections always result in better agreement between observations and climate model hindcasts, then it’s appropriate to describe the corrections as “suspect”.

      • Steven Mosher

        “You don’t need to know how the corrections were done. All you have to do is compare the corrected records with the raw records. ”

        1. The raw records of CRU have never been released, so you cannot have done this comparison. The data in question is still covered by confidentiality agreements.

        2. One independent investigation was made of these corrections, and that investigation found no significant change for changes made to the land record.

        3. Steig and others did a random selection of the 60 longest records in CRU. They then proceeded to source the data for these stations from UCAR (as opposed to GHCN). No difference was found. Also, this 60-station subset showed the same warming.

        4. As a critic of the temperature record I did a similar exercise randomly selecting data from the same source as CRU. No difference.

        5. Ron Broberg included more sources using GSOD. Again, no difference.

        Simply, there is no factual basis to suspect the adjustments. HOWEVER, any time records are adjusted there should be an attendant increase in the uncertainty of the final answer, as every adjustment comes with error (as opposed to bias). That is the issue. This issue has been recognized by those in the field, but has not been addressed.

        “When the corrected records fairly consistently show more warming than the raw records (e.g. the USA, Australia, New Zealand , the Southern Hemisphere as a whole), or suppress inconvenient wiggles (e.g. the mid-20th century warming), and where these corrections always result in better agreement between observations and climate model hindcasts, then it’s appropriate to describe the corrections as “suspect”

        Well, I can speak for the USA records. Having studied the adjustments since 2007, I shared your suspicion. However, after walking through the data, reading all the papers, and actually comparing raw with adjusted, I have come to a different conclusion. The adjustments are all rational, and they are for the most part well documented. The issue is the uncertainty associated with the adjustments. If you want to discuss a particular adjustment we can start with the largest ones in the US record: TOBS and MMTS. Everything else is mousenuts.

        Or you could use the rawest of data, data that has not been adjusted or been through any significant QA: GSOD.
        The answer? No difference.

        In summary, I think, there is reason to investigate the adjustments, but there is no reason to assume they are suspect. I say that because my initial suspicion of US adjustments was smashed by the facts. Best not to repeat subjective errors.
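        The uncertainty point above can be sketched numerically: if each adjustment step contributes an independent random error rather than a bias, the uncertainties combine in quadrature, so the adjusted record should carry a wider error bar than the raw one. The sigma values below are made-up illustrative numbers, not estimates for any real record.

```python
import math

# Illustrative only: if each adjustment contributes an independent random
# error (not a bias), the errors add in quadrature, so the adjusted series
# carries a larger uncertainty than the raw measurements alone.
sigma_raw = 0.10                    # raw measurement uncertainty, deg C (assumed)
sigma_adjustments = [0.05, 0.03]    # e.g. a TOBS-like and an MMTS-like step (assumed)

sigma_total = math.sqrt(sigma_raw**2 + sum(s**2 for s in sigma_adjustments))
print(round(sigma_total, 3))  # 0.116 -- larger than sigma_raw
```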

      • “the data in question is still covered by confidentiality agreements”

        Steve, we keep seeing this, but I have only seen evidence of it being tested in respect of Sweden. After some confusion attributable to English language difficulties, it became clear that Sweden was perfectly happy for its raw data to be released, but objected to Phil Jones attributing his doctored version to them.

        I must have missed something. Can you give examples of other countries who have refused permission for their raw data to be released?

      • It is claimed that Canada refused permission. Again, though, Canada did not refuse to allow their data to be released. They only refused to allow the CRU-value-adjusted version held by the CRU to be released, falsely labelled as Canada raw data. Canada asked CRU to direct requests to their own repository for raw data, where it was freely available. CRU falsely described this as a refusal to allow CRU to release Canada’s raw data.

        Jones’ and the Met Office’s blanket request to release CRU-adjusted data, explicitly mislabelled as raw country data, can only be regarded as a blatant attempt to coax a negative response to their purported request to release raw data.

      • Roger Andrews

        First on the US records. I too have looked at the USHCN corrections and my conclusions are exactly the opposite of yours, so I guess we’re just going to have to agree to disagree on that one.

        Second on the question of CRU. CRU produces “land” series that are reportedly based on land station records. So does NCDC and GISS (which has two, one a “land” and the other the “met station only” series) and I have one of my own. In the Northern Hemisphere these series show the following net warming between 1900 and 2000.

        CRU 1.0C

      • Roger Andrews

        Whoops! Hit the wrong button on the incomplete posting above. Here’s the complete version.


        First on the US records. I too have looked at the USHCN corrections in some detail and my conclusions are exactly the opposite of yours, so I guess we’re just going to have to agree to disagree on that one.

        Second on the question of CRU. CRU produces “land” series that are based on land station records. So do NCDC and GISS (which has two, one a “land” series and the other a “met station only” series), and I have one of my own, constructed from 900 unadjusted GISTEMP records. In the Northern Hemisphere these series all show about 1C of warming between 1900 and 2000:

        CRU 1.0C
        NCDC 1.0C
        GISS land 0.9C
        GISS met 0.9C
        Me 0.9C

        But in the Southern they show this:

        CRU 0.7C
        NCDC 0.6C
        GISS land 0.6C
        GISS met 0.4C
        Me 0.3C

        The problem here is that there is no way the land station records can support the 0.7C of “land” warming shown by CRU (or for that matter the 0.6C shown by NCDC and GISS). Clearly the CRU land station records received some large warming corrections, whatever they might have been.

      • Might it not be useful for Roger Andrews and Steven Mosher to find why their analyses of the USHCN corrections are so at odds? To find their ‘uncommon ground’ so to speak?

      • Steven,
        I received my salinity data changes before ARGO was in the water.
        This was from an oceanographer doing actual physical testing of ocean samples.

      • Steve,

        You bring up an interesting point. The method you speak of takes the predominant EOFs for the 1960-90 period. These are then applied to the highly variable ship data, which follows the trading routes. If this ship data “tickles” one of the 1960-90 EOFs, then it is inserted across the basin. This is why you get a beautiful anomaly SST chart for 1893, for example. I will post a picture of the SST distribution tomorrow. This technique makes for beautiful diagrams, but there are differences between the raw ship data and the reconstructions. There is also the problem of principal patterns not being represented in the 1960-1990 period. E.g., during the 1960-1990 period (as best as we can tell) there were few if any mid-Pacific warmings (Modoki El Niños), so the predominant EOFs did not include this form of El Niño and none appear in the reconstructed data. Does this mean that there were no central Pacific El Niños before 1960? We will never know!
        You ask how the problem can be solved. If you want global patterns of SST back into the 19th century then you are stuck with some form of method to fill in the gaps. But perhaps an assessment of these reconstructions in the form of error maps (reconstruction minus ship data) should be part of the metadata accompanying the SST fields.
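        The EOF infilling described above can be sketched in a few lines: compute EOFs from a well-sampled base period, fit their amplitudes to sparse "ship track" observations by least squares, and reconstruct the full field from those amplitudes. Everything here is synthetic and purely illustrative of the mechanism, not any production SST analysis; note that, exactly as described, a pattern absent from the base period could never be recovered this way.

```python
import numpy as np

# Sketch of EOF-based infilling: EOFs come from a well-sampled "base period",
# then sparse observations along a "route" are projected onto the leading
# EOFs to reconstruct a full field. All data are synthetic.
rng = np.random.default_rng(0)

# Base period: 100 "months" of a 20-point field built from two fixed patterns.
p1 = np.sin(np.linspace(0, np.pi, 20))
p2 = np.cos(np.linspace(0, 2 * np.pi, 20))
base = np.outer(rng.standard_normal(100), p1) + np.outer(rng.standard_normal(100), p2)

# Leading EOFs of the base period (rows of Vt from the SVD of anomalies).
_, _, vt = np.linalg.svd(base - base.mean(axis=0), full_matrices=False)
eofs = vt[:2]  # keep two modes

# A "new month" whose true field lies in the span of the known patterns,
# sampled only at a few points along a hypothetical shipping route.
truth = 1.5 * p1 - 0.5 * p2
obs_idx = np.array([0, 3, 7, 11, 15, 19])

# Fit EOF amplitudes to the sparse observations, then reconstruct everywhere.
amps, *_ = np.linalg.lstsq(eofs[:, obs_idx].T, truth[obs_idx], rcond=None)
recon = amps @ eofs

print(np.allclose(recon, truth, atol=1e-6))  # True
```

        Because the synthetic field lies exactly in the span of the base-period EOFs, the reconstruction is perfect here; real ship data are noisy and contain patterns outside that span, which is where the error maps suggested above would earn their keep.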


  22. Judith,

    Not considered are the physical changes of this planet that started to occur in the 1970’s and have never been addressed.
    Those physical changes are simply included in the temperature records as if all things were normal, full steam ahead, for the last 40 years.

  23. Does anyone have any idea where I could find an accurate comparison of the AMO & PDO combined with Global Temps over the past century or so? I know that Dr. Pielke did it with SST’s (or at least I think he did) but I would be interested to see it done with atmospheric temps and see how high the correlation was, especially when looking at the 20th Century warming periods.

  24. Harold H Doiron

    I applaud the self-reflection in this forum regarding “What is the scientific truth vs. IPCC reports?” Who among you believes that the EPA Administrator’s determination that CO2 is a pollutant, and therefore needs to be more tightly controlled, is a bit premature, and is willing to tell her so? Tighter CO2 control is not without cost to all Americans, and to some who can afford it least. It seems to me that climate change science has been hijacked by politicians and special interest groups who don’t really care about science. I suggest that the climate change research community needs to fight back against political influences affecting your free and unfettered search for the truth regarding AGW, even if you take a hit on getting your research proposals funded.

  25. Acceleration and deceleration of global temperature trends

    “Global warming has stopped” means the global mean temperature maximum record of 0.548 deg C for 1998 has not been exceeded for 13 years.

    “Global warming has stopped” also means deceleration of decadal trends. For example, for 1990 to 2000 the trend is 0.25 deg C per decade of warming, while for 2000 to 2010 the trend is only 0.03 deg C per decade, a deceleration by a factor of 0.25/0.03 = 8.3!

    “Global warming is happening” means acceleration of decadal trends. For example, for 1980 to 1990 the trend is 0.07 deg C per decade of warming, while for 1990 to 2000 the trend is 0.25 deg C per decade, an acceleration by a factor of 0.25/0.07 = 3.6!
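    The decadal trends compared above are just ordinary-least-squares slopes over the chosen windows, scaled to deg C per decade. A minimal sketch (the anomaly series here is a synthetic, noise-free 0.25 deg C/decade ramp, purely to show the arithmetic):

```python
# Minimal sketch of a decadal warming trend as an OLS slope, as used in the
# period comparisons above. The anomaly data are synthetic, not a real record.

def decadal_trend(years, anomalies):
    """OLS slope of anomaly vs. year, expressed in deg C per decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anomalies) / n
    num = sum((y - my) * (a - ma) for y, a in zip(years, anomalies))
    den = sum((y - my) ** 2 for y in years)
    return 10 * num / den  # per-year slope scaled to per-decade

years = list(range(1990, 2001))                # 1990..2000 inclusive
anoms = [0.025 * (y - 1990) for y in years]    # exact 0.25 deg C/decade ramp
print(round(decadal_trend(years, anoms), 2))   # 0.25
```

    On real, noisy data the fitted slope is of course sensitive to the choice of start and end years, which is exactly why the period selections above matter.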

  26. Just spotted this at Real Climate:

    “There is one upcoming development that is worth flagging. Long in development, the new Hadley Centre analysis of sea surface temperatures (HadISST3) will soon become available. This will contain additional newly-digitised data, better corrections for artifacts in the record (such as highlighted by Thompson et al. 2007), and corrections to more recent parts of the record because of better calibrations of some SST measuring devices. Once it is published, the historical HadCRUT global temperature anomalies will also be updated. GISTEMP uses HadISST for the pre-satellite era, and so long-term trends may be affected there too (though not the more recent changes shown above).”

    Well, applying the Thompson temperature correction should get rid of much of the “bump” over the ocean. The “airbrushing” of the bump seems to be ongoing . . .

    • At the time of the “correction”, I posted a RealClimate comment noting that according to the IPCC data, the correction would not eliminate the ocean temperature disparity because most of it appeared in the marine air surface temperature record as well. RealClimate (probably Gavin) also noted this, and therefore appears to be aware that the disparity is not simply an “engine intake” problem but a true climate phenomenon.

      • will be interesting to see what the new ocean data set looks like, i’ve heard a rumor that the bump is diminished

      • Steve McIntyre at Climate Audit did an interesting follow-up on this paper here.

        There were also several other posts on that paper (which is Thompson 2008, not 2007).

        The situation is much more complicated than what the paper originally stated. Correcting only the data between the 1940s and 1960s should have only displaced the problem in time. The fact that it took three years to correct suggests that the revision was much larger than first anticipated.

        Climate Audit had pointed out the problem with SST a year before the Thompson paper.

      • Roger Andrews

        Unless I am greatly mistaken, it will look very much like the HadSST2 series it replaces. The bias corrections will simply be re-massaged to get rid of the “Thompson discontinuity” while preserving the good agreement between the SSTs and the NMATs after 1946. The unfortunate fact, however, is that the new bias corrections will be no more valid than the old ones (see my 7.26 pm post).

      • Nebuchadnezzar

        It’s interesting that you (and others, including our host) pooh-pooh a study that, for all we know, hasn’t yet been completed, let alone published. Isn’t it better not to prejudge this kind of thing?

        There are enough problems with published analyses, without worrying about rumors about unpublished analyses that might appear somewhere at some point.

      • Roger Andrews

        Well, I did say that I might be mistaken, but “provisional” corrections have already been published by Kent et al. (http://onlinelibrary.wiley.com/doi/10.1002/wcc.55/abstract – the paper used to be freely available but now it’s behind a paywall) and all these corrections do is massage the bias adjustments to preserve the agreement between the corrected SSTs and the NMATs. As a result the re-corrected SST record looks pretty much the same as HadSST2. HadSST3 will be significantly different only if the ad-hoc approach of choosing bias corrections that match the SSTs to the NMATs is abandoned. It’s highly unlikely that this will happen, but if it does I will regard it as a major step in the right direction.

      • Nebuchadnezzar

        Kent et al. 2010 shows corrections from HadSST2 (Rayner et al. 2006) and Smith and Reynolds (2002). The corrections are zero after 1941. Is that what you meant by provisional?

      • I should have made clear I was referring to the 2008 Thompson paper addressing the 1945 dip in SST, and not the 2009 paper that focuses on the bump. To me, internal climate dynamics remain a reasonably good explanation for the latter, but maybe more detailed scrutiny would contradict this.

      • Roger Andrews

        The disparity you refer to is the back edge of a clearly anomalous wartime “spike” which, as Bob Tisdale pointed out earlier, is visible not only in the SST series but also in the night marine air temperature, cloud cover and wind shear series (inverted) – and to these we can add the daytime marine air temperature series and the Kaplan sea level pressure series, which Bob didn’t mention. The chances that coincident distortions in so many different marine series during a period of wholesale disruption of global shipping are indicative of a “true climate phenomenon” are zero. They were a result of an across-the-board observational bias that affected all marine data sets and which lasted only for the duration of World War II plus a few months. Your disparity in fact coincides with the demobilization of the US Navy.

    • Well, applying the Thompson temperature correction should get rid of much of the “bump” over the ocean. The “airbrushing” of the bump seems to be ongoing . . .

      So you’re accusing scientists of wilfully manipulating the records to get rid of the “bump” because it is inconvenient for them?

      • andrew adams | January 22, 2011 at 2:56 am
        So you’re accusing scientists of wilfully manipulating the records to get rid of the “bump” because it is inconvenient for them?
        Not the ideal place to discuss this, but if you Google ‘Tom Wigley’ ‘the blip’ and ‘Air Vent’ you will see how one prominent climate scientist viewed the inconvenient data.

        McIntyre’s crusade for the data is not just a sideshow: adjusting away inconvenient data is a major failing of climate science.

        And, in spite of the poor logic behind the ‘bucket correction’ of Folland and Parker, in spite of the enthusiastic corrections of Wigley, we are still left with ‘why the blip?’ (Google on that as well. It’s an eye-opener.)

    • The more I see models being ‘adjusted’ and ‘corrected’, the more I’m reminded of Pathological Science.



  27. Peter,

    Per Meehl et al (2004), I recall that the pre-1950s rise in temperatures is reproduced in GCMs due primarily to changes in solar forcing combined with a volcanic lull, but I could be wrong.

    I’ve got a fairly detailed examination of land and SSTs in response to your points over at the Blackboard; it’s a tad too long to post here: http://rankexploits.com/musings/2011/examining-mid-20th-century-warming/

    This paper answers the question about why that “bump” occurred, and what caused the changes in direction of temps we have seen from 1900 to today:

    Has the climate recently shifted?

  29. Judith,
    When you put the salinity changes map.
    Over the Sea surface temperature map.
    Over the satellite cloudcover map.

    They are all related to huge changes happening with evaporation and dropping temperatures.

    How many years will Climate Science take to look at this relationship to say if it is correct or not?

    Lives that could at least have had a chance if warned will be lost as time ticks on.

  30. I don’t know why the Lean et al. (2005) reconstruction of irradiance from sunspot activity is not considered (e.g. see it at Climate4you site Sun section). This clearly shows a rise of 1.5 W/m2 in the solar constant coinciding nicely with 1910-1940. This translates to about 0.2 W/m2 of forcing, almost half the total rise since the LIA. With feedback it can account for up to 0.2 degrees, which is more than the expected effect from CO2. There were only about 60 sunspots in the maximum around 1910 increasing to more recent typical values over 100 by 1950. I saw that Wigley was discounting solar effects, but I don’t understand why.
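    For what it’s worth, the standard conversion from a change in the solar constant to a radiative forcing is dF = dTSI × (1 − albedo)/4, where the factor of 4 accounts for sphere/disk geometry and the planetary albedo is usually taken as about 0.3 (an assumed value here). Applied to the cited 1.5 W/m2 rise, it lands in the same rough 0.2-0.3 W/m2 ballpark as the forcing quoted above:

```python
# Standard TSI-to-forcing conversion: dF = dTSI * (1 - albedo) / 4.
# The albedo value (~0.3) is an assumed planetary-mean figure.
d_tsi = 1.5       # W/m^2 rise in solar constant (Lean et al. 2005, as cited above)
albedo = 0.3      # assumed planetary albedo
d_forcing = d_tsi * (1 - albedo) / 4
print(round(d_forcing, 3))  # 0.262
```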

    • Solar reconstructions are still evolving. Lean 2005 shows higher TSI variations than currently agreed upon.

      • Thanks. I think the LIA would be hard to explain without the kind of solar irradiance changes they suggest, however, so I am inclined to believe their magnitudes.

  31. Judith Curry, you actually give credibility to Girma, an absolute crank?

    Sorry, I’ve now lost all respect I’ve had for you.

    He’s the equivalent of the anti-vaxxers, anti-evolutionists, 6,000 year old earthers.

    I’m astonished. You’re smarter than this.

    • dhogaza,
      “Sorry, I’ve now lost all respect I’ve had for you.”

      I remember you losing all respect for Judith back in early Aug. 2010 over at RC. If you lost all respect then, how could there possibly be any left to lose now?

      Or were you just pretending back then?

    • Sorry, I’ve now lost all respect I’ve had for you.

      And Dr. Curry should worry about that why exactly?

    • Some people’s disrespect is golden.

  32. It’s clear, though, why climate science professionals are increasingly avoiding your blog. Girma? It’s like a geologist promoting a young-earth creationist’s view of geology and wondering why professional geologists run away.

    It’s not because the earth is 6,000 years old, Judith.

    Moderation note: Attack Girma’s arguments, not the person. Girma has been unfailingly polite, citing information about the temperature record and occasionally quoting Feynman; I don’t find any grounds to object to that. Spending time here is hard work for a climate professional, and a gratifying number of them are spending time here.

    • Whatever any of us feels about Girma’s views, he/she has the right to ask questions. Girma asked a question of KT and received a reply. It is that reply that is the subject of this thread, not Girma’s views. The headpost says nothing whatsoever about Girma or his/her opinions. As seems to be a habit of yours, you come in firing bullets at the wrong target.

    • Could you give some examples of how Girma’s posts compare to the young earth creationist view? Thanks in advance.

    • Your ad hom attacks may be acceptable by the few who still post at Real Climate & Open Mind, but not here.
      Improve your tone or kindly retire.

    • dhogaza‘s comments fit the pattern expected from the reputational mechanism described by Kuran and Sunstein. Specifically (at journal page 761): “and make anyone who questions their objectives appear ignorant, duped, or depraved.” Dr. Curry’s reply is perfectly appropriate.

      Timur Kuran and Cass Sunstein. 2007. Availability Cascades and Risk Regulation. Research. Social Science Research Network. October 7. http://www.law.uchicago.edu/files/files/364.pdf

      An availability cascade is a self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse. The driving mechanism involves a combination of informational and reputational motives: Individuals endorse the perception partly by learning from the apparent beliefs of others and partly by distorting their public responses in the interest of maintaining social acceptance. Availability entrepreneurs – activists who manipulate the content of public discourse – strive to trigger availability cascades likely to advance their agendas. Their availability campaigns may yield social benefits, but sometimes they bring harm, which suggests a need for safeguards. Focusing on the role of mass pressures in the regulation of risks associated with production, consumption, and the environment, Professor Timur Kuran and Cass R. Sunstein analyze availability cascades and suggest reforms to alleviate their potential hazards. Their proposals include new governmental structures designed to give civil servants better insulation against mass demands for regulatory change and an easily accessible scientific database to reduce people’s dependence on popular (mis)perceptions.

      I disagree with the proposal by Sunstein and Kuran for “new governmental structures”. We have seen how that works under political guidance.

  33. dhogaza

    It’s clear, though, why climate science professionals are increasingly avoiding your blog. Girma? It’s like a geologist promoting a young-earth creationist’s view of geology and wondering why professional geologists run away.

    To counter the character assassination by dhogaza, I will restate what I promote.

    Let us look at the data.

    Here is the accelerated warming of the IPCC:

    Here is how the IPCC interprets the above data:

    1) Global warming rate for the 150 years period (RED) from 1856 to 2005 was 0.045 deg C per decade.

    2) Global warming rate for the 100 years period (PURPLE) from 1906 to 2005 was 0.074 deg C per decade.

    3) Global warming rate for the 50 years period from (ORANGE) 1956 to 2005 was 0.128 deg C per decade.

    4) Global warming rate for the 25 years period from (YELLOW) 1981 to 2005 was 0.177 deg C per decade.

    IPCC then states:
    “Note that for shorter recent periods, the slope is greater, indicating accelerated warming.”

    Okay, let us apply this “IPCC interpretation of data” procedure to compare the global warming rates in the last 25 years to that in the last 13 years going backward from 2010 as shown in the following plot.

    This result gives:
    1) Global warming rate for the 25 years period (RED) from 1986 to 2010 was 0.17 deg C per decade.

    2) Global warming rate for the 13 years period (GREEN) from 1998 to 2010 was 0.00 deg C per decade. (No warming!)

    Like the IPCC, I can then state:
    “Note that for shorter recent periods, the slope is smaller, indicating decelerated warming.”
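For what it's worth, the decadal rates quoted here and by the IPCC are just ordinary least-squares slopes over the chosen windows. A minimal sketch of the calculation, using a synthetic noise-free 0.17 deg C/decade series in place of the actual anomaly record (so the numbers are illustrative only):

```python
# Decadal trend as an ordinary least-squares slope (pure stdlib, no numpy).
def trend_per_decade(years, anomalies):
    """OLS slope of anomalies vs. years, converted from deg C/year to deg C/decade."""
    n = len(years)
    ybar = sum(years) / n
    abar = sum(anomalies) / n
    num = sum((y - ybar) * (a - abar) for y, a in zip(years, anomalies))
    den = sum((y - ybar) ** 2 for y in years)
    return 10.0 * num / den

# Synthetic, noise-free series rising at 0.17 deg C/decade over 1986-2010
# (an assumption standing in for the real anomaly record).
years = list(range(1986, 2011))
anoms = [0.017 * (y - 1986) for y in years]
print(round(trend_per_decade(years, anoms), 3))  # -> 0.17
```

Run on real annual anomalies for each window (1856–2005, 1906–2005, 1956–2005, 1981–2005, 1986–2010, 1998–2010), the same function produces the kind of slope-per-window comparison being argued over here; either way, the slope depends entirely on the window chosen.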

    Instead of reporting this GOOD NEWS to the public, some have shifted the goal post by talking about individual year’s temperature by saying “we have had one of the hottest years on record.”

    Couldn’t someone living at the end of the 1930s also have said, “we have had one of the hottest years on record”?

    Yes, the globe was also warming at 0.15 deg C per decade 80 years ago!

    There is no warming now (http://bit.ly/fMwWl1).

    AGW supporters, is it possible that instead of the accelerating warming of the IPCC shown above, the global mean temperature can be interpreted as a cyclic pattern with an overall warming of only 0.06 deg C per decade as shown in the following plot?

    Did not Feynman state the following on doing science:
    “Details that could throw doubt on your interpretation must be given, if you know them.”

    As the current DECADAL warming rate is zero (http://bit.ly/fMwWl1), not 0.2 deg C per decade as claimed by the IPCC, what is wrong with further validating the theory of man-made global warming before implementing policy based on the theory?

    What if we have cooling until 2030?

    The global mean temperature data for 2010 is out.

    It is 0.475 deg C.

    The previous maximum of 0.548 deg C for 1998, 13 years ago, has not been exceeded.

    Global warming has stopped for 13 years, and we continue to count the number of years that the previous maximum has not been exceeded.

    How many more years are required to declare that global warming has stopped?

    The above is what I promote.

    By the way I am a MAN!

    • > By the way I am a MAN!

      Tell that to Peter Webster:

      > The issue of the mid century temperature bump was raised on a previous thread by Girma in the context of an email she sent to Kevin Trenberth (some excerpts provided below) […]

      Who’s yanking whose chain?

    • Whoops, sorry re gender! Let’s put it down to the new PC speak that randomly rearranges pronouns! I visit blogs quite rarely and I apologize for being careless.


  34. Judith,

    I find it funny that climate science has not followed the movement of ocean thermal heat.
    This has a direct effect on evaporation patterns and on worldwide precipitation movement.
    Just saying “oscillations” does not convey an understanding of the dynamics this generates in the weather patterns.

  35. LC

    “Whatever any of us feels about Girma’s views, he/she has the right to ask questions. ”

    I am a HE!

  36. KT: The warming in the first phase was not global but focused in the North Atlantic.

    From this data: http://bit.ly/fBZscX

    Southern Hemisphere warming rate for the period from 1910 to 1940 was 1.0 deg C per century.

    Northern Hemisphere warming rate for the period from 1910 to 1940 was 2.0 deg C per century.

    From this data: http://bit.ly/gSi9q3

    Southern Hemisphere warming rate for the period from 1970 to 2000 was 1.3 deg C per century.

    Northern Hemisphere warming rate for the period from 1970 to 2000 was 1.9 deg C per century.

    From the above result, in both 20th century warming periods the rate of warming in the NH was greater than that in the SH.

    • As I posted above, both sides in this argument seem to have ruled out solar effects between 1910 and 1940 (see my other comment in this section). It is clear that sunspot activity was unusually low around 1910 (60 sunspots at the max), and reached some of its highest levels of the century by 1950. Why can’t solar effects with feedbacks explain this warming period? Quantification was given in my other post.

  37. Judith,

    The credibility of climate science is falling due to the failure to follow precipitation and evaporation patterns.
    Sticking strictly with temperatures has missed the shifting of ocean heat, which is a big influence on evaporation and on changing precipitation and wind patterns.
    Having failed to look at these other factors, it is now much harder to bring credibility back.
    If climate science sticks with the current system, it will fail and be ridiculed as a massive waste of funding for bad science practices.

    • Joe,

      No question that the oceans’ water / vapour cycles play a leading role in Earth’s climate.
      No question also that this role remains poorly known and understood, and that it has been steadily underestimated by models.

      But more generally, I would rather say the credibility of climate science is actually falling due to its inability to face and explain observational facts and data (the main answer is always “the data / measurements are not valid…!”).

      Best regards.

  38. Peter, I wonder if there is good data available on rainfall and/or river flow in northern South America (mainly the upper Amazon basin) in the twentieth century. And, I wonder whether that data would show changes corresponding to the global temperature changes shown in the graphs.

    One hypothesis is that global temperature is affected by the tropical rainfall (and consequent large latent heat release) in that region. The moisture transport and rainfall there may vary with the speed and path of the easterly wind from the Atlantic. The solar heating and the orographic lift are likely to be quite efficient in removing whatever excess moisture is transported into the region, so small changes in Amazon moisture transport may have small but noticeable effects on global temperature.

    A positive-feedback causation chain could be one where a small increase in trade wind speed, or a change in direction (toward areas with better orographic effects), releases additional heat into the tropical upper troposphere, which affects the location and strength of downstream weather systems, which increases the strength/direction of the surface wind, which increases the moisture transport, which increases the heat release.

    Atlantic surface temperature and salinity might also be affected, which would have knock-on effects in the Northern Hemisphere.

    Northern South America appears to me to be special in that orographic lift and the strong solar heating of the surface probably remove much of the incoming moisture, rather than the moisture passing through into the eastern Pacific.

    And, a shift in the mean ITCZ location across the region might affect millennial global temperature.

    Or this may be totally wrong, but it’s fun to think about. It seems like the Pacific and ENSO, due to their size, get much of the attention, while South America, with its efficient setup for removing moisture, remains obscure.


    • Also of note are possible effects on upper-atmosphere moisture and thus regional IR loss, and the possibility that the region might act as a negative-feedback mechanism on global temperature. Many moving parts in this conjecture. Again, it’s simply fun to consider the situation.

    • David,

      If you have time, this link has an animation of rotation for cloud cover which may help.

    • It’s also no coincidence that the central US is the world champeen tornadic and extreme weather locale. The air from over the mountains comes in dry and easily heated on the plains, all ready to crash into the northbound sopping wet air from the Gulf. Geography matters.

  39. As an aside, one might check WUWT’s recent entry on the error bars on these temperature measurements: 0.475 C vs 0.548 C (+/-0.5 C). We are still way “down in the weeds” (noise).

  40. I remain skeptical, statistically.

    I look at the odds that the curve after the bump returned to the same or lower temperature as before the bump, and they are low, growing lower as time passes, falling so far as to make the bump itself statistically insignificant to the longer term trend.

    I look at the odds that the curve after the bump returned to a lower slope of temperature increase, or a different curve of temperature increase, or to require new or other mechanisms than operated before the bump, and again they fall to levels making the bump statistically insignificant to the longer term trend.

    I look at the banded comparison of northern, middle and southern latitudes for the bump, and the bump falls to statistical insignificance outside of northern latitudes, to the point I cannot take seriously outside of northern latitudes that there even was a bump.

    So the bump exists. It’s a bump of limited geographic and chronological scope.

    The bump represents local climate, for a period of so short a time as to hardly count compared to the usual 30-year time metric accorded to climate periods.

    While it’s interesting, one supposes, to climatologists studying the North Atlantic (the region with which it is most closely statistically associated), and so would be particularly interesting to students of hurricanes that make landfall in the USA, like our host, it doesn’t appear to be significant in arguing against the evidence for any global effect, or any effect with its roots in a global change dating to the time CO2 started to rise.

    If anything, the bump merely means if we see another multidecade temperature drop, arguments against AGW based on that fall are shown to be less persuasive, rather than more.

    Likewise, arguments that the temperature seems flat for a decade or more are also shown to be less persuasive by looking at the mid-century bump.

    We’d need a clear fall to pre-1880 temperature levels, or the better part of a century of temperature drop, to statistically cast AGW into doubt, if we reject computer models.

    If we accept computer models and their current expert interpretation, all we need to dismiss AGW statistically is ten or more (possibly overlapping) 15 year periods of flat or dropping global temperatures with local ocean phenomena removed.

    And hey, we have one of those ten already.

    In as little as 9 years, if the temperature globally can be demonstrated to be flat or dropping, if you believe in computer models, you could have your falsification of AGW.

    Did I get that right?

    • Craig Goodrich

      “Did I get that right?”

      I don’t think so. What you seem to be doing is the same thing Dr. Trenberth tried in his recent presentation — reversing the null hypothesis.

      In the case of AGW theory, the null hypothesis is that any warming effect from CO2 is so minor as to be insignificant in comparison to natural factors such as ocean oscillations and other (mostly poorly-understood) phenomena.

      This is the “detection and attribution” problem discussed in Dr. C’s earlier posts; indeed it appears that every time a new measurement capability is introduced (ARGO, new satellites and the like), the resulting data supports this null hypothesis — and so must immediately be discounted, discredited, and waved away by The Team et al.

      • Craig Goodrich

        Let us both agree that we can accept a null hypothesis stated as, “any warming effect from CO2 is so minor as to be insignificant in comparison to natural factors such as ocean oscillations and other (mostly poorly-understood) phenomena.”

        One postulate must be that we have elected a valid candidate CO2 baseline, which I propose must be CO2 as at 1885 and earlier, although I recognize this introduces some problems.

        Why 1885?

        Some widely-held data suggest that CO2 at 1885 and earlier was generally globally stable for pretty much the entire history of mammalian life on Earth, and more.

        It’s the regime under which we and pretty nearly every living thing with more than one cell we know evolved and found its current place in the world.

        I’m comfortable with calling this baseline of long enough duration and currency to be precedent-setting in a way no other CO2 measure can claim.

        A second postulate is that the temperature we use to represent that CO2 baseline be drawn from the same dataset as we rely on for the rest of our curve.

        I don’t claim that temperatures were stable at any particular level for this entire duration, or that 1885 is any more or less representative of temperatures for that duration than any other, but given the problems with prior reconstruction, I’d need a lot of convincing to use a temperature other than the measure available in 1885 to represent 1885 and prior. I need little convincing to use a point from the dataset for statistics on the dataset, particularly if it appears otherwise valid and as early as tenable.

        There are issues of how consistent and widespread the temperature measurements as early as 1885 are; however, there are reasonable statistical methods for validation of these records, and certainly no definitive rejection of this data has been presented to me to date.

        Certainly, temperatures from 1885 to 1927 seem as plausible to belong to the 1927-2011 record as any other span of 42 years (like 1928-1970).

        Another postulate must be that all, “natural factors such as ocean oscillations and other (mostly poorly-understood) phenomena,” must be a) of much more limited duration than the period we are examining, and b) if they are not global, then also that there is evidence of them in the data, when compared regionally.

        The Mid-20th Century North Atlantic Blip appears to meet both conditions a) and b), and also appears to be the largest such occurrence within the dataset by far.

        The third postulate must allow for correct expression of precision (ie “error bars”), which also argues for as long a dataset as possible and certainly on this dataset no period shorter than thirty years (or I’d suggest seventy years).

        So.. What’s the proper test of the agreed on null hypothesis on that range for these postulates, and the standard interpretation?

    • I think the following exchange has merit.

      Bart R: “The bump represents local climate, for a period of so short a time as to hardly count compared to the usual 30-year time metric accorded to climate periods.”
      Bart R: “If we accept computer models and their current expert interpretation, all we need to dismiss AGW statistically is ten or more (possibly overlapping) 15 year periods of flat or dropping global temperatures with local ocean phenomena removed.”
      Craig Goodrich: “so minor as to be insignificant in comparison to natural factors such as ocean oscillations and other (mostly poorly-understood) phenomena.”
      Bart R: “The third postulate must allow for correct expression of precision (i.e. ‘error bars’), which also argues for as long a dataset as possible and certainly on this dataset no period shorter than thirty years (or I’d suggest seventy years).”

      However, consider:
      Ocean oscillations in the Pacific have cycles of approximately 60 years.
      Tisdale’s animations show Pacific oscillations propagating to other ocean basins.
      NASA’s animation shows a mechanism for the Pacific oscillation. (NASA. 1998. SVS Animation 287 – Visualizing El Niño. Scientific. Scientific Visualization Studio. April 1. http://svs.gsfc.nasa.gov/vis/a000000/a000200/a000287/index.html )
      Akasofu calls attention to the possible influence of trend due to recovery from the Little Ice Age (since 1850). Akasofu, Syun-Ichi. 2008. The Recovery from the Little Ice Age and The Recent Halting of the Warming. September 25. http://people.iarc.uaf.edu/~sakasofu/pdf/recovery_little_ice_age.pdf

      Then there is the problem of data integrity. :-(

      • P, D

        I’m going to take the bold measure of suggesting we separate out anything that clearly falls into, “Mostly Uncertainty,” from anything that falls into “Mostly Probability,” so we can have a productive discussion of the statistics.

        So, yes, we have problems of data integrity, appropriateness of data to the problem, understanding of the system, questions of whether there are better metrics and better questions to ask about AGW, and so forth and so on. (I personally hate working with temperature anomaly records from arbitrary sites, but it’s data, and data overrules personal feelings. I also think we ought to examine what data would be better and start measuring it now if we aren’t already doing so.)

        Some of these ‘Most Uncertain’ questions themselves can be examined by statistical means, and some of these examinations can be productive; others, we have to apply logic and experiment to reduce contributors to Uncertainty as much as possible.

        It’s doubtful we can fully remove all Uncertainty, but wouldn’t it be nice to make an orderly attempt to minimize it on this serious question?

        You bring up LIA and the Pacific oscillation, and the question of multibasin propagation.

        Excellent points; may I take a stab at a reasoned argument for the handling of these troubling considerations?

        The LIA has, sadly, no good direct global temperature record (please correct me if I’m wrong). Proxy measures have a much larger error bar, the start and end dates are ill-defined, and we have good reason to call the information we do have about LIA substandard compared to the global temperature anomaly record, and distinct from it.

        In short, we sadly must rule out the LIA from consideration for statistical purposes; we can use the LIA to inform interesting questions later in the process, and to help us peg our Uncertainty level. The Uncertainty envelope in which we bracket our Probability results can be evaluated after we have our statistics worked out; by then we know what we’re bracketing, and can therefore better estimate what we need to examine for Uncertainty.

        When the data we have is presented in “regional” segments, North, Low and South (noted elsewhere http://data.giss.nasa.gov/gistemp/graphs/Fig.B.lrg.gif), we see (judging by eye) the prominent Mid-20th-Century Bump (1927-1970ish) is absent from two of the three graphs (or at least greatly reduced).

        So, don’t we have to reasonably conclude (applying Occam’s Razor to rule out some strange intersection of multiple unknown natural effects all cancelling one another out for just and only the time we are examining) that the Pacific effect, though real, is relatively much smaller in terms of impact on temperature, given that its period is known to be less than half the total length of the record and that it is not easily seen to produce its own bump of comparable size?

        That is, there was a freakishly big bump for 4 decades in the North, and all the other bumps we see look tiny in comparison, by visual inspection. As we’re only interested in global effects, why can’t we deprecate the bump in the North just because it’s only in the North? Of course we’ll want the numbers crunched later, if we agree this approach is tenable.

        Two of three regional graphs of essentially the same phenomenon, if we reject the null hypothesis on both of them at some significant confidence level when we crunch the numbers, means we can then justifiably apply new hypotheses, AGW among them, to all three, plus their composite result, and determine which fits best, doesn’t it?

  41. Girma actually raised THE key issue (and I thank you for this).

    How can AGW theory explain that the warming rates observed during the [1910 – 1940] and [1970 – 2000] periods are fully comparable (0.15 °C per decade), whereas CO2 concentration steadily increased by about 20% between the two periods? And how can it cope with the cooling trends observed during the [1880 – 1910] and [1940 – 1970] periods (and even now, since roughly 1998), while CO2 concentration was also steadily growing?

    This very inconvenient truth, based on observational data, actually provides an indisputable rebuttal of AGW theory.

    Then the next question is not whether the mid-20th century’s warming was global or local (Trenberth’s answer is totally mistaken, as the warming was obviously global, even if SH warming rates are lower than NH ones), but rather to find out where the shortfall in the nice theory lies.

    Coming back to Joe Lalonde’s remark – “The credibility of climate science is falling due to the failure to follow the precipitation and evaporation patterns.” – I would rather answer that the credibility of climate science is falling due to its inability to ask the right questions, and to put its own theories into question.

    When the thermometer provides an unpredicted measurement, the first reaction is of course to check the validity (calibration, data treatment etc.) of this measurement. But once validity is confirmed, it’s no use “breaking the thermometer”, as some climate “scientists” proposed when temperature measurements (both satellite and balloon data) “desperately” failed to show the hot-spot predicted by models. The right answer is to check what’s wrong with the prediction, and with the nice theory(ies) on which the prediction is based!

  42. Bart R (22-Jan-11 1:27 pm)

    If anything, the bump merely means if we see another multidecade temperature drop, arguments against AGW based on that fall are shown to be less persuasive, rather than more.

    Oh NO!

    The IPCC had made its projections:

    For the next two decades, a warming of about 0.2 deg C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1 deg C per decade would be expected.

    AGW will be disproved if in the coming decade the global temperature trend has no warming or slight cooling, instead of the 0.2 deg C per decade warming of the IPCC.

    It is in the following chart that we will plot the observed temperatures to verify IPCC’s projections.


    • Girma

      AGW will be disproved if in the coming decade the global temperature trend has no warming or slight cooling, instead of the 0.2 deg C per decade warming of the IPCC.

      I respectfully disagree.

      All that the absence of the IPCC’s predicted rise would prove would be the failure of the prediction.

      Disproof of AGW is a stronger condition, which would require a stricter test.

      I mean, I predict 5 heads and 5 tails from ten coin tosses; ten heads in a row prove my prediction wrong, but do not disprove the fairness of the coin. Knowing it to be a fair coin also doesn’t allow me to say that after ten heads I’ll throw ten tails in a row, either.

      There’s no logic in your claim.
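A rough numerical gloss on the coin analogy (standard binomial arithmetic, nothing specific to climate): even for a fair coin, the single likeliest outcome of ten tosses, 5 heads, occurs only about a quarter of the time, while ten heads in a row, though rare, still occurs about once per thousand trials.

```python
from math import comb

def prob_k_heads(n, k, p=0.5):
    """Binomial probability of exactly k heads in n tosses with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# The prediction "5 heads and 5 tails in 10 tosses" is the likeliest single
# outcome for a fair coin, yet it happens only about a quarter of the time.
print(round(prob_k_heads(10, 5), 4))   # -> 0.2461
# Ten heads in a row is rare under fairness, but far from impossible.
print(round(prob_k_heads(10, 10), 6))  # -> 0.000977
```

So a failed point prediction, or one rare run, is weak evidence about the underlying process; disproof needs a framework that says how improbable the full set of observations is under the hypothesis.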

      • Latimer Alder

        Your coin analogy only works if each toss is an entirely independent event. Uninfluenced by any other ‘forcing’.

        But AGW theory is exactly the opposite of independent. It suggests that as one variable (CO2 concentration) increases, so must another variable (global average temperature), and that the first causes the second. Without this relationship, there is no AGW theory left.

        It is unlikely that the CO2 will be found to decrease over the next few years.

        But if the temperature is not shown to rise (or even falls), then it will be clear that whatever the relationship (if any), it is more complex than “if A goes up, so B goes up too”. If we observe that A goes up and B stays the same or goes down, we have a different problem to explain.

      • Latimer Alder

        Again, respectfully must disagree, and for exactly the reason you point out: the (inter)relationship of CO2 emissions and global temperatures.

        Man-made CO2 emissions don’t have a simple linear relationship with overall CO2 emissions, or CO2 capture overall, nor is there a simple correlation of local with global levels, nor with conversion of CO2 into other components of the system (for example by photosynthesis) locally nor globally, nor with potential trends in factors of any of the above metrics.

        Nor is there at the next level of indirectness of relations a direct, simple and linear correlation of CO2 levels and radiative forcings, nor of radiative forcings and all measures of temperature.

        Likewise, feedbacks positive and negative are not direct, simple or linear, nor are they the same globally; polar and high altitude ice fields and glaciers are not the same as tropical cyclone belts on the oceans, which in turn are not the same as mountains, deserts, plains and woodlands, in how they might respond.

        Low and high altitude systems, likewise, not direct, not simple, not linear.

        The cumulative outcome of all of these layers and layers of possible determinants of temperature observed comes down to very nearly a probabilistic mechanic. That is, the coin toss is the apt analogy.

        In the coin toss analogy, the metaphor is that as CO2 emissions continue above the threshold of the CO2 budget of the biosphere to take CO2 out of the air and water, the coin becomes marginally less and less fair — marginally more likely to come up heads, cumulatively with time.

        Did something else dominate the coin toss in the mid-20th century? It seems likely. However, that factor (likeliest a North Atlantic current, apparently) appears to be geographically limited and chronologically to have spent itself, as well as to have been at most of the same scale as AGW (and likely less), so it does not appear to weigh seriously against AGW, statistically.

        Certainly other factors might have some impact; indeed, they must.

        But what AGW proponents using models have told us is that there is no circumstance where AGW is correct and there is not a significant global 15-year warming trend nineteen years out of twenty.

        We’ve had one such lack of trend among running 15-year periods out of the last 20, which is exactly as predicted. A second in 20 years gives us a very small window of doubt, a third could take us into serious questioning of the statistical support for AGW; looking at the Uncertainty and sigma confidence levels, it will take seven more in the next 20 years to get Probability to tell us AGW is not viable.
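The “one in twenty” bookkeeping can be made concrete with the binomial distribution, if (and this is a big if, since overlapping 15-year windows are strongly correlated) each year’s running 15-year trend were treated as an independent draw with a 1-in-20 chance of being flat under AGW; independence here is a simplifying assumption, not a fact:

```python
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Treat each of 20 years as an independent 1-in-20 chance of a "flat"
# running 15-year trend under AGW (the "nineteen years out of twenty"
# reading). Overlapping windows are NOT independent, so this is a rough
# sketch, not a rigorous test.
for k in (1, 2, 3):
    print(k, round(prob_at_least(20, k, 0.05), 3))
```

Under these over-simplified assumptions, one flat 15-year period in 20 years is unremarkable (it has better-than-even odds of occurring), while three or more would have well under a 10% chance, which is roughly the shape of the argument above.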

        If you believe in models and in statistics.

        If you don’t believe in models, and must rely on statistics alone, you must wait much longer, with all the results against AGW for the majority of decades in the coming century.

        If you believe in neither models nor statistics, then @whee, you can conclude anything you please.

      • Tomas Milanovic


        The cumulative outcome of all of these layers and layers of possible determinants of temperature observed comes down to very nearly a probabilistic mechanic. That is, the coin toss is the apt analogy.

        Allow me to disagree very strongly.
        This is hand waving of the worst species.
        There is absolutely NO reason that a complex non-linear system should have its future states miraculously distributed according to some (time-invariant!) distribution just because it is “complex”.
        Actually there are plenty of reasons why it shouldn’t, one of these being that probability itself in an infinite-dimensional space, which is the case here, is undefined.
        For example, the “probability” that a continuous field obeying chaotic dynamics, like the temperature T(x,y,z,t), takes a state with a given average takes the indeterminate and non-computable form ∞ / ∞.

        So no, considering that the system behaves like a coin toss is deeply flawed and anything but a good analogy.
        There is certainly not the beginning of a proof that an invariant probability distribution of the “states” can be well defined, or even exists.

      • Tomas Milanovic

        While you’re great at rightly describing the impossibility of generating all knowledge for an undescribed system, and demonstrate precision and technical excellence, my ambitions are far more modest than you give me credit for.

        So, please excuse a great deal more hand waving.

        It’s hard to disagree with what you have said within the looming and large context (see my hands wave at ∞ / ∞) you have chosen.

        I don’t consider that the system as a whole behaves like a coin toss, at that (∞ / ∞) level. Who could?

        I’m not considering the whole (∞ / ∞) system in what I’ve said, nor do I need to for the purposes of the test requirements generated by the far more limited question as pertains the falsification of AGW on the given temperature data. (Which effort I consider nearly meaningless, for diverse other reasons, but I’m not shy about venturing where angels fear tread.)

        The minimal (perhaps only) test to consider is the binary result of temperature up or temperature down, if we can ask our question correctly.

        If we can ask our question correctly, we can refer to the coin toss analogy with comfort.

        If we can’t get there by properly limiting our question, then falsification of AGW is beyond us, we’re all post-modern, and we can stop speaking of science altogether (for ∞ / ∞).

        Which is why I said (intending only the limited context of a world where we could speak meaningfully of falsification of AGW on the data, or of considering bumps and blips in the data to have much impact on such a question), “If you believe in neither models nor statistics, then @whee, you can conclude anything you please.”

        An AGW, for example, that is inconsequential is unimportant to us; we don’t need to ask questions about such a thing.

        Likewise, an AGW that is not at least somewhat cumulative over time is also not interesting, once we’ve established that in no short time period is AGW going to be so overpowering as to render the question trivial. Lucky us, we can accept both of these postulates, too, if we escape post-normalism.

        An AGW that has within the data available demonstrated negative feedbacks so strongly as to make us doubt its existence, again, we can’t talk about and have to acknowledge that when it comes to pass; we can’t distinguish an AGW-falsifying result from a result in this data space — which we do not apparently have yet despite the challenge of the mid-20th-century North Atlantic blip.

        Someday, if some combination of events within the chaotic (∞ / ∞) system generate data that renders moot our question, then on that (∞ / ∞) day, we can dispense with statistical questions of falsification of AGW.

        Are you saying it’s happened yet?

        I remain, respectfully, statistically skeptical.

      • Bart R,

        I also have to respectfully disagree with your statement.

        Failure of the IPCC’s predicted rise is more than only a failure of that prediction:
        1) It’s the failure of ALL predictions, since many predictions have been performed, using different models and sets of hypotheses & conditions.
        2) It’s the failure of ALL models used to perform these predictions, since none of them has been able to foresee the observed T° stagnation (and even the slight cooling since 2001): ALL models are formally invalidated by comparison to observations.
        3) It’s the failure of ALL the nice Theories and Physical Laws (especially AGW) on which these models are built, and which are hereby proven erroneous or at least incomplete.

        On top of this already very inconvenient truth, one should also consider that if models have ALL proved unable to provide any reliable forecast, they also fail to provide any correct hindcast of past climate data:
        – For the [1910 – 1940] period, they provide a warming trend that is 2.5 times lower than the observed one.
        – They are unable to provide any cooling trend as observed during the [1880 – 1910] and [1940 – 1970] periods (and now since 2001).

        These failures basically come from the (hereby disproved) assumption that increased CO2 concentration causes T° increase (and vice versa due to considered positive feedback factors), which is the baseline of AGW theory.

        All these elements provide unquestionable rebuttal of AGW theory.

      • Eric Ollivet

        I understand the temptation to call a prediction resulting from a group of predictions ALL (though excuse me if that’s the last time I capitalize the word herein) those predictions.

        When predicting the outcome of football games, handicappers might take into account predictions of all the factors from predictions of the health of each athlete, to weather conditions, to past performances, and yet, for all that, the distribution and statistics of wins and losses do not mean that the losing teams had all their athletes unhealthy, nor did the games won by teams that do better in clear weather undo a single drop of rain.

        Your ‘unquestionable rebuttal’ is the worst sort of backwards logic.

        It makes on its face no sense, aside from the blatant and sweeping other errors of miscategorization, lazy assumptions, and poorly applied judgement.


  43. Many places around the Arctic were warm around 1940

    “The warmest year in the extended Greenland temperature record is 1941, while the 1930s and 1940s are the warmest decades.” (from: Extending Greenland temperature records into the late eighteenth century, B. M. Vinther,1 K. K. Andersen,1 P. D. Jones,2 K. R. Briffa,2 and J. Cappelen3; JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 111, D11105, doi:10.1029/2005JD006810, 2006)

    Karasjok (Norway)

    And I recommend section 162, “Warming of the Arctic”, from:

  44. Regarding 1940s data tampering:

    Make no mistake, this is a criminal act of vandalism; naive ignorance of earth orientation parameters & Stat 101 fundamentals does not establish innocence.

    Regarding “global”:

    Just as global summaries are misleading, SH (southern hemisphere) summaries are also misleading; there is a MAJOR spatial discontinuity at the Southern Ocean.

  45. Harold Pierce Jr

    Here is a comment that I have posted several times on Romm’s and Tamino’s blogs, and which they always delete.

    RE: Cyclic Climate Changes: What the Russians say.

    Hello Tamino

    The English translation of “Cyclic Climate Changes and Fish Productivity” by L.B. Klyashtorin and A.A. Lyubushin can be downloaded for free through this link:


    NB: This monograph is 223 pages. The Russian edition was published in 2005 and the English translation in 2007.

    By analyzing a number of time series of climate-related data, they found that the earth has global climate cycles of 50-70 years, with an average of about 60 years, which have cool and warm phases of about 30 years each. They summarize most of the studies published up to early 2005 that show how this climate cycle influences fish catches in the major fisheries.

    The last warm phase began in ca 1970-75 (aka the Great Shift) and ended in ca 2000. The global warming from ca 1975 is due in part to this warm phase. A cool phase has started and they predict it should last about 30 years. See Fig 2.23.

    In Fig. 2.22 and Table 2 they show that increasing world fuel consumption does not correlate with the cool and warm phases of the 60-year cycle.

    Several other studies have found this 60-year cycle. During the cool phase, La Nina years usually outnumber El Nino years, as was the case from ca 1940-70.

    I haven’t checked to determine whether the book was referenced in AR4.

    Here is another comment I frequently post on blogs.

    There is another factor that may significantly contribute to global warming and in particular to the UHI effect: Fine black dust from rubber and asphalt.

    I ask this simple question: Since 1900, where have the many billions (and billions and billions…!) of pounds of rubber and asphalt dust gone?

    The short answer is anywhere and everywhere, but no one really knows, not even the EPA. Synthetic rubber does not degrade upon exposure to sunlight, oxygen or microbes.

    A passenger car tire with an A treadwear rating will lose about a pound of rubber over its lifetime. Can you imagine how much rubber is deposited along major roads and highways by 18-wheelers?

  46. Harold Pierce Jr 22-Jan-11 4:31 pm

    By analyzing a number of time series of data related to climate, they found that the earth has global climate cycles of 50-70 years with an average of about 60 years which have cool and warm phases of about 30 years each.

    Do you mean the following?


  47. I am just hypothesizing here.

    In order to create a “hockey stick” in the 20th-century global temperature record, is it possible that the observed temperatures for the 1900s were increased by about 0.2 deg C (shaded red in the following chart) and those for the 1950s were reduced by about 0.2 deg C (shaded blue)?


    My hypothesis was prompted by the adjustment of data discussed in Tom Wigley’s email: ‘It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”’.


  48. Harold Pierce Jr


    You should check: ” FAO Smarter than the IPCC” at:


    This report was published in 2001, and L.B. Klyashtorin was a major contributor. I didn’t find any mention of climate cycles in the AR4 SPM.
    How does the IPCC get away with ignoring this and similar articles, and for how long?

    BTW: Alan Cheetham’s “Global Warming Science” website is by far the best organized I have encountered. It is _the_ “one stop and shop until you drop” store for global warming info.

  49. Mikel Mariñelarena

    Up in the thread Eli Rabett proposes an interesting exercise: “Better would be to use three bands, tropical, northern temperate and southern temperate zones”

    Fortunately, HadCRUT provides conclusive evidence that the “bump” was global in nature with its southern extratropics maps and data: http://hadobs.metoffice.com/hadcrut3/diagnostics/regional/south_30s/
    If anything, the bump in that latitude band was more pronounced than elsewhere.

    As for the possibility of this being an artifact of ocean temp measuring techniques, GISS shows that the early 40s warming is as evident in the southern extratropic land stations as most anywhere else, notably in South America:

    This has profound implications for the IPCC attribution and sensitivity conclusions. This and other observational evidence cannot be reconciled with the attribution of the post-40s cooling to anthropogenic aerosols.

    My feeling is that there has been a curious lack of discussion of the observational evidence for the IPCC -1.2 W/m2 aerosol forcing, both in the literature and the blogosphere.

    • Roger Andrews

      “Fortunately, HadCRUT provides conclusive evidence that the “bump” was global in nature …”

      If you want to prove something conclusively, please don’t use HadCRUT3. As I noted in an earlier post, HadCRUT3 can only charitably be described as inadequate.

      The closest thing to a realistic air temperature series that’s readily available on the web is the GISS “meteorological station only” series. This also isn’t free from defects, but it is based purely on surface air temperature data, which is the metric we are supposed to be using. (HadCRUT3 is basically a proxy air temperature reconstruction that uses heavily-adjusted SSTs instead of cherry-picked tree rings.) According to the GISS series the mid 20th-century “bump” never made it down to the Southern Hemisphere.

      • Well, actually, according to Trenberth’s birthplace (NZ data) it did

      • Roger Andrews

        The official NIWA New Zealand temperature record actually shows no sign whatever of a peak around 1940. Besides, Trenberth wasn’t born until 1944.

      • Roger Andrews

        Well yes. There’s no mid-century hump in this record.

        If you want to find a mid-century Southern Hemisphere hump I suggest you look at the records for southern Chile, although the peak here didn’t occur until around 1950.

      • Mikel Mariñelarena

        Roger, in case you missed my GISS link above, which is in broad agreement with HadCRUT, there it goes again: http://i53.tinypic.com/iedr35.jpg

      • Mikel

        First, a correction to my previous post. I was wrong when I said that the mid-century “bump” never made it down to the Southern Hemisphere. It did. However, it’s only detectable in a few areas, most of which are in South America, and its average amplitude in the S. Hemisphere was only about a tenth of what it was in the Arctic.

        Yes I did look at your map, but all I saw was strong warming in the Arctic and NW China and not much change anywhere else. Maybe I am missing something.

        I looked for some GISS met station series hemispheric plots that I could refer you to but couldn’t find any. So to illustrate how temperatures varied with time and latitude on either side of the 1940 “hump” here is a summary of my own estimates, which are based on 900 unadjusted station records selected from the GISTEMP data set, which is the data set GISS uses to construct its met station series:

        Band        1910-1940   1940-1970
        60N-90N       +1.03       -0.98
        30N-60N       +0.28       -0.23
        0-30N         +0.23       -0.11
        0-30S         +0.04       -0.10
        30-60S        +0.01       -0.01
        60-90S       No data     No data

        It’s hard to see how these remarkably symmetrical warming and cooling episodes, both of which were strongest in the Arctic and also appear to have begun in the Arctic, could have been a result of human activities. They are in fact so symmetrical that they almost cancel each other out, and as a result we get only about 0.1C of “global” warming between 1900 and 1970.
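        The near-cancellation can be checked with a quick area-weighted average of the table’s numbers. This is only a sketch: the cosine-latitude band weights below are my own, not part of the original 900-station estimates, so the exact figure depends on the weighting used.

```python
import numpy as np

# Net 1910-1970 change per band = 1910-1940 warming + 1940-1970 cooling.
# Weight = (sin(lat_hi) - sin(lat_lo)) / 2 = band's fraction of the globe.
bands = {
    "60N-90N": (1.03 - 0.98, 0.067),
    "30N-60N": (0.28 - 0.23, 0.183),
    "0-30N":   (0.23 - 0.11, 0.250),
    "0-30S":   (0.04 - 0.10, 0.250),
    "30-60S":  (0.01 - 0.01, 0.183),
}                                    # 60S-90S omitted: no data

num = sum(net * w for net, w in bands.values())
den = sum(w for _, w in bands.values())     # renormalise over available area
print(round(num / den, 2))                  # small net "global" change
```

        The point is the sign and size of the residual, not the precise value: the warming and cooling legs largely offset each other.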

  50. I don’t know why my comment didn’t appear above. My point was that the 1940s bump is not generally invoked to explain the post-1940s cooling, which was essentially an interval of flat temperature lasting more than two decades. A better explanation appears to be a reduction in insolation reaching the surface, as deduced from a reduction in daytime maximum temperatures while nighttime minimum temperatures actually rose (Wild et al., GRL 2007).

    Changes in TSI were probably too small to account for the reduced insolation. A better explanation lies in an increase in aerosols (presumably mainly anthropogenic in origin), with consequent direct and indirect increases in solar light scattering. Whether this is a full explanation remains uncertain.

    • Apologies for the messed-up html

    • Roger Andrews


      In an earlier response to one of your postings, Judith Curry made the following observation: “The main concern I have is the status of the global surface temperature data sets, progressive analyses continue to work towards minimizing (the mid-20th century ‘bump’), whereas it is very evident in the raw data.”

      You now state that this bump was followed by “essentially an interval of flat temperature lasting more than two decades”. But this is what we get after the progressive analyses have flattened the bump out. When we look at the raw data we see a very evident period of about 0.2C of “global” cooling between 1940 and 1970, which so far as I know is inexplicable as an anthropogenic effect.

      Your statement also presupposes that the corrections applied to the raw data to smooth out the mid-20th century bump and suppress the 1940-70 cooling are valid. Do you have any backup for this? I ask because I have studied this question in some detail and have reached exactly the opposite conclusion.

  51. Harold Pierce Jr

    I have read some comments that the bump might be due to the “Kriegsmarine effect” of WW II, or to the fact that more extensive temperature data were collected during WW II, in particular in the SH in the Pacific.

  52. Eric Ollivet | January 22, 2011 at 1:48 pm

    “How can AGW theory explain that warming rates observed during [1910 – 1940] and then [1970 – 2000] periods are fully comparable (0,15°C per decade) whereas CO2 concentration has steadily increased by about 20% between both periods?”

    Eric, the increase in the use of fossil fuels in the period 1970 to 2000 compared to the period 1910 to 1940 is much more than 20%.

    Here is the data from the Carbon Dioxide Information Analysis Centre.

    1) Total carbon emission for the period from 1910 to 1940 was about 30 G ton
    2) Total carbon emission for the period from 1970 to 2000 was about 170 G ton

    As a result, carbon emissions increased about five-fold between the two periods.

    However, the corresponding increase in global temperature was 0.45 deg C in both periods as shown below.
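    As a quick arithmetic check of the figures above (CDIAC totals as quoted):

```python
# Total carbon emissions (Gt C) over each 30-year period, as quoted above
first_period = 30.0    # 1910-1940
second_period = 170.0  # 1970-2000

ratio = second_period / first_period
print(round(ratio, 1))   # 5.7, i.e. "about 5 times"
```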

    • Girma,

      Thanks for the link to the data.

      But take care with the figures you’ve extracted and provided in your post.
      It seems to me they’re not correct:
      – For the [1910 – 1940] period, total carbon emissions were almost stable at roughly 1 GtC per year, which is equivalent to 3.7 Gt of CO2.
      – Between 1940 and the mid-70s (i.e. during the cooling period after the bump), emissions were multiplied by almost a factor of 5, reaching 5 GtC (equivalent to 18 Gt of CO2).
      – For the [1970 – 2000] period, total carbon emissions increased from 4 to 6.7 GtC, with a mean value of 5.5 GtC (equivalent to 20 Gt of CO2).
      – The latest value (2007) is 8.5 GtC (equivalent to 30 Gt of CO2).
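      The GtC-to-Gt-CO2 conversions in these figures follow directly from the molar masses (44 g/mol for CO2 versus 12 g/mol for carbon); a one-line check:

```python
# Convert gigatonnes of carbon to gigatonnes of CO2 via molar masses
def gtc_to_gtco2(gtc):
    return gtc * 44.0 / 12.0   # CO2 is 44 g/mol, C is 12 g/mol

print(round(gtc_to_gtco2(1.0), 1))   # 3.7
print(round(gtc_to_gtco2(5.0), 1))   # 18.3 (quoted as 18)
print(round(gtc_to_gtco2(8.5), 1))   # 31.2 (quoted as 30)
```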

      Anyhow I do basically share your point of view. When I talked about a 20% CO2 increase between [1910 – 1940] and [1970 – 2000] periods, I was of course referring to global CO2 concentration (man-made and natural). Considering AGW theory, what matters is the total figure, not only the man-made one.

      But you’re perfectly right in highlighting that:
      – Emissions during the [1910 – 1940] period were actually peanuts compared to [1970 – 2000] levels: mean levels were multiplied by 5.5 between the two periods, whereas the warming is fully comparable (+0.45°C in 30 years, i.e. +0.15°C per decade).
      – Emissions were multiplied by almost 5 during the [1940 – 1970] cooling period.

      Both observations formally disprove AGW theory.

      • But take care of the figures you’ve extracted and provided into your post. It seems to me they’re not correct:

        Eric, we are both correct. However, I was referring to the total emissions during the whole 30-year period, whereas you were referring to the yearly rate.

        First warming period [1910 to 1940] total emission of about 30 G ton or 1 G ton per year

        Second warming period [1970 to 2000] total emission of about 170 G ton or about 5.5 G ton per year

      • Hi Girma.

        I’m terribly sorry for having misinterpreted your figures in that way; I did not catch that you were talking about total emissions over the whole 30-year periods.

        As you have a pretty good idea of this topic, and often refer to the “woodfortrees” data and analysis tools, I would like to submit for your diagnosis a short analysis of another key issue: is CO2 really driving temperature?

        To answer this question, I apply the same treatment to 3 data sets:
        – Temperatures (HADCRUT3 variance-adjusted global mean)
        – CO2 concentration (ESRL CO2 (Mauna Loa) interpolated mean)
        – PDO index (JISAO PDO index)

        The applied treatment is the following:
        – From time: 1960 (no CO2 data available before then)
        – Mean sample: 12 months (to remove/smooth seasonal variations)
        – Derivative (to get variations)
        – Mean sample: 12 months (to remove/smooth the “noise”)
        – Normalise (to get comparable scales)
        Note: you may reduce the time period to get a better view
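        Woodfortrees applies these operations server-side; a minimal numpy sketch of the same chain, run here on a synthetic stand-in series rather than the actual HadCRUT/ESRL/PDO data, might look like:

```python
import numpy as np

def wft_pipeline(x):
    """12-month running mean, first difference (derivative), another
    12-month running mean, then normalisation to unit peak amplitude."""
    def running_mean(v, n=12):
        return np.convolve(v, np.ones(n) / n, mode="valid")
    s = running_mean(x)           # smooth out the seasonal cycle
    d = np.diff(s)                # month-to-month derivative
    d = running_mean(d)           # smooth the remaining noise
    return d / np.max(np.abs(d))  # normalise to a comparable scale

# Synthetic monthly series: weak trend plus a ~4-year "ENSO-like" cycle
t = np.arange(600) / 12.0                        # 50 years, monthly steps
temp = 0.01 * t + 0.2 * np.sin(2 * np.pi * t / 4.0)
out = wft_pipeline(temp)
print(out.min(), out.max())                      # lies within [-1, 1]
```

        Applied to two series, lead/lag can then be judged by where the normalised peaks fall, which is essentially the visual comparison described above.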

        Here’s a link as an example:
        3 major findings:
        1) All data sets show cycles 3 to 5 years long, probably driven by ENSO
        2) T° variations always precede CO2 variations by roughly 6 to 12 months
        3) T° variations are themselves (generally) preceded by PDO variations (a noticeable exception is 2009, where T° rises first; they are also sometimes almost in phase)

        3 major conclusions:
        1) Temperatures are not driven by CO2 but drive CO2
        2) Temperatures are themselves driven by oceanic oscillations (PDO, ENSO…)
        3) AGW theory is once again disproved

      • Thanks Eric

        That is very interesting. I will look into it.


  53. Tomas Milanovic

    Peter Webster

    E.g., during the 1960-1990 period (as best as we can tell) there were few if any mid-Pacific warmings (or Modoki El Ninos), so the predominant EOF did not include this form of El Nino, and none appear in the reconstructed data. Does this mean that there were no central Pacific El Ninos before 1960? We will never know!

    I agree with enthusiasm.
    It is much too rare (with the exception of Climate Audit) to see reconstruction methods discussed.
    Due to my background (nonlinear spatio-temporal dynamics) I am extremely critical of EOFs in general and of their use in chaotic systems in particular.
    The problem with EOF and its variations is that it is an extremely simple linear theory that can be applied with little cost, time or knowledge.
    For instance, anybody who downloads this EXCELLENT, easy-to-use matrix software with EOF and SVD modules included (http://www.luckhan.com/download/MatrixSetup5.exe) and who has a basic knowledge of linear algebra can do any EOF he wants within a few hours.

    I am convinced that this is why many scientists succumb to the motto “If you have plenty of data and don’t know what to do with it, do EOF.”
    Yet if it is very easy to produce EOF (or SVD) matrices, it is extremely hard to interpret them.
    Many EOF applications are full of artefacts because, e.g., real fields are used on propagating patterns despite being valid only for standing waves, domain-shape dependences are not analysed (this one is very common), subdomain instabilities provoke EOF instabilities, sampling problems (closely spaced eigenvalues) can contaminate the results, etc.

    When I hear EOF mentioned in connection with subtle spatio-temporal dynamics it invariably makes me think of a person analysing a watch with a pneumatic hammer.
    I would have little to no trust in the relevance of “data” produced by some EOF (or worse SVD) method.
    The probability that the relevant dynamical properties have been destroyed by the hammer is simply too big.

    My advice to beginning scientists would be: stay away from EOF at all costs unless you are a mathematician or study linear problems in physics.

  54. Gavin writes: “global warming continues.”

    Let us verify Gavin’s statement.
    Let us look at the last 30 years data and compare the three decadal trends.
    Here is the plot for the data from the Climate Research Unit:

    It shows the following results:
    1) A global warming rate of 0.07 deg C per decade for the period from 1980 to 1990
    2) A global warming rate of 0.25 deg C per decade for the period from 1990 to 2000
    3) A global warming rate of 0.03 deg C per decade for the period from 2000 to 2010
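    The decadal rates quoted are presumably ordinary least-squares slopes over each 10-year window; a minimal sketch of that computation, on a hypothetical anomaly series rather than the actual CRU data:

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares slope converted to deg C per decade."""
    slope = np.polyfit(years, temps, 1)[0]   # deg C per year
    return 10.0 * slope

# Hypothetical 2000-2010 anomaly series with a small built-in trend plus noise
rng = np.random.default_rng(1)
yrs = np.arange(2000, 2011, dtype=float)
anoms = 0.003 * (yrs - 2000) + 0.01 * rng.standard_normal(yrs.size)
print(round(decadal_trend(yrs, anoms), 2))
```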

    According to the New Oxford American Dictionary, Gavin’s “continue” means to “remain in a specified position or state.”

    According to the data above, to say “global warming continues” the global warming rate for the period 2000 to 2010 should have been 0.25 deg C per decade. However, instead, it is only 0.03 deg C per decade, which is nearly no global warming in the last decade.

    As a result, Gavin should have concluded “the global warming rate has significantly reduced.”

    (As Gavin does not let me post comments at his blog, I have posted it here)

  55. Carter, T.R. 2006. General Guidelines On The Use Of Scenario Data For Climate Impact And Adaptation Assessment. IPCC, June. http://www.ipcc-data.org/guidelines/TGICA_guidance_sdciaa_v2_final.pdf

    Aerosols: AOGCM experiments which account for both the negative forcing associated with historically observed concentrations of aerosols and greenhouse gas forcing over the same period have achieved a close correspondence of global mean temperature changes compared to observations (e.g. Mitchell et al., 2001 – Figure 10). These experiments have also been projected into the future on the basis of the assumed concentrations of sulphate aerosols, usually under the assumption of the IS92a or SRES scenario SO2 emissions profiles. The effect on climate when aerosols are included, compared to experiments forced by greenhouse gases only, is to suppress global warming. However, none of the SRES emissions scenarios shows regional SO2 concentrations as high as for the IS92a scenario, and by the end of the 21st century all scenarios show that the effects of greenhouse gas forcing dominate over the aerosol effect.

    Wigley, T. M. L., and S. C. B. Raper. 2002. Reasons for Larger Warming Projections in the IPCC Third Assessment Report. Journal of Climate 15, no. 20 (October 15): 2945-2952. http://journals.ametsoc.org/doi/abs/10.1175/1520-0442%282002%29015%3C2945%3ARFLWPI%3E2.0.CO%3B2

    Abstract Projections of future warming in the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (TAR) are substantially larger than those in the Second Assessment Report (SAR). The reasons for these differences are documented and quantified. Differences are divided into differences in the emissions scenarios and differences in the science (gas cycle, forcing, and climate models). The main source of emissions-related differences in warming is aerosol forcing, primarily due to large differences in SO2 emissions between the SAR and TAR scenarios. For any given emissions scenario, concentration projections based on SAR and TAR science are similar, except for methane at high emissions levels where TAR science leads to substantially lower concentrations. The new (TAR) science leads to slightly lower total forcing and slightly larger warming. At the low end of the warming range the effects of the new science and the new emissions scenarios are roughly equal. At the high end, TAR science has a smaller effect and the main reason for larger TAR warming is the use of a different high-end emissions scenario, primarily changes in SO2 emissions.

    Lindzen, Richard S. 2007. Taking GreenHouse Warming Seriously. Energy & Environment 18, no. 7 (12): 937-950. doi:10.1260/095830507782616823. https://www.cfa.harvard.edu/~wsoon/ArmstrongGreenSoon08-Anatomy-d/Lindzen07-EnE-warm-lindz07.pdf

    Page 11: To be sure, current models can simulate the recent trend in surface temperature, but only by invoking largely unknown properties of aerosols and ocean delay in order to cancel most of the greenhouse warming (Schwartz et al, 2007). Finally, we note substantial corroborating work showing low climate sensitivity.
    Ultimately, however, one must recognize how small the difference is between the estimation that the anthropogenic contribution to recent surface warming is on the order of 1/3, and the iconic claim that it is likely that the human contribution is more that ½. Alarm, we see, actually demands much more that the iconic statement itself. It requires that greenhouse warming actually be larger than what has been observed, that about half of it be cancelled (sic) by essentially unknown aerosols, and that the aerosols soon disappear. Alarm does not stem directly from the iconic claim, but rather from the uncertainty in the claim, which lumps together greenhouse gas additions and the cancelling (sic) aerosol contributions (assuming that they indeed cancel warming), and suggests that the sum is responsible for more than half of the observed surface warming. What this paper attempts to do is point the way to a simple, physically sound approach to reducing uncertainty and establishing estimates of climate sensitivity that are focused and testable. Such an approach would seem to be more comfortable for science than the current emphasis on models testing models, large ranges of persistent uncertainty, and reliance on alleged consensus. Hopefully, this paper has also clarified why significant doubt persists concerning the remarkably politicized issue of global warming alarm.