Week in review – science edition

by Judith Curry

A few things that caught my eye this past week

Scientists say Greenland just opened up a major new floodgate of ice into the ocean [link]

Is the Antarctic losing or gaining ice?  [link]

Ocean acidification means jellyfishification [link]

Soil microbes may be able to help plants cope with #climate stress. [link]

NASA found a way to track ocean currents from space. What they saw is troubling [link]

This company is using bumblebees to deliver organic pesticides to crops. More precise than spraying: [link]

#China: Air Pollution Cutting Surface #Solar Radiation, With Major Consequences For #Climate [link]

Beyond consensus: reflections from a democratic perspective on interaction between climate politics & science [link]

New paper finds “no detectable signal of the Arctic sea ice approaching a local bifurcation [tipping point].” [link]

OCO-2 – Orbiting Carbon Observatory 2 – the mission has released an animation [link]

Scientists research deep-sea hydrothermal vents, find carbon-removing properties | [link]…

.@NatureNews explains a new paper on how Antarctic coast meltdown could trigger ice-sheet collapse [link]

Why the Pacific Ocean is so unusually warm right now (and causing all sorts of havoc): [link] …

Climate Models as Economic Guides: Scientific Challenge or Quixotic Quest? | Issues in Science and Technology [link]

Everything you know about ocean acidification may be wrong [link] …

Deadly CO2 pollution could be captured with artificial trees, then “concentrated to carbonate fizzy drinks”[link]

New paper confirms “phenomenon of global dimming & brightening over India” [link] …

New paper finds Gulf of Tehuantepec, Mexico “is a major source of CO2 into the atmosphere”  [link]

Revkin: On Plankton, Warming and Whiplash. Handling science uncertainty when it really, really matters [link]

How competitive should science be? External reward structure may inhibit creative thinking and innovation. [link]

The Nation: Cold Sun Rising [link] …

Deep ocean: Climate change’s fingerprint on this forgotten realm [link]

CLIVAR-ICTP Workshop on Decadal Variability and Predictability: Join the talks remotely on 16-24 Nov. [link]

World’s largest ocean cleanup operation one step closer:  [link]  @ArthurNeslen reports on Pacific plastic cleanup efforts

Study: Increased deforestation could reduce Amazon basin rainfall [link] …

Study: Declining snowpack may cut many nations’ water [link]

The secrets in Greenland’s ice sheets [link]

248 responses to “Week in review – science edition”

  1. Judith, or somebody,

    Can you please give me the most authoritative reference I can use for the central estimate of ECS from GCMs around the time AR5 was closed for new evidence? From memory it was about 3.0, but I don’t have a good authoritative reference I can quote. I need it quickly.

    I have AR5 WG1, TFE.6, Figure 1, p. 83 here http://www.climatechange2013.org/images/report/WG1AR5_ALL_FINAL.pdf , but I actually need what was generally accepted as the central estimate for GCMs at that time.

    • Peter, I think AR5 is the reference. There need be no specific journal article making such an estimate.

      • David, thank you. However, AR5 does not state a central estimate for ECS from the GCMs, and that’s the figure I need.

    • AR5 explicitly did not give a central estimate, because of disagreement btwn climate models and observational estimates. Avg of CMIP5 climate models is 3.45C. Observational estimates are much lower (e.g. Otto, Nic Lewis).

      • This is a nice way of putting how hot the models are. To get an avg of 3.45 one presumably needs half the models to be even hotter. Hence the scare.
        I think one can also derive a central estimate from the AR5 graphic. Neither approach has any statistical significance, but the term ensemble is falsely accorded great weight.

      • Judith,

        I realise that. But I need to be able to state, from the most authoritative source I can get, a central estimate for ECS produced by the GCMs used for AR5.

        I have submitted a paper and some word changes have been suggested.

        My original version said:
        “Several key parameters assumed for the analyses are on the side of the central estimates that produce higher hypothesised climate damages and lower net benefits than would be the case with more realistic assumptions. The following are some examples:
        • Equilibrium climate sensitivity (ECS) = 3.2 °C (cf. <2 °C from empirical analyses and ~3 °C from models) “

        The suggested rewording says:

        “Though consistent with climate modelling results, the assumed climate sensitivity of 3.2 C is higher than empirical estimates.”

        I’d like to say:

        “The assumed climate sensitivity of 3.2C is higher than the central estimates from climate modelling and much higher than from empirical estimates (e.g. Lewis and Curry, 2014)”

        But I need to cite an authoritative reference to support my statement that 3.2 C is higher than even the central estimate of the modelling results.

      • 3.2 is not higher than CMIP5 models

      • OK. My bad. I’d had that wrong in my head for a long time.

        My motivated reasoning, I guess.

        Thank you.

      • Avg of CMIP5 climate models is 3.45C.

        Is that the answer to my question? If so, I am surprised. I thought it was around 3.0 or 2.9 and that it had not changed much since AR4.

        Please correct me if I am wrong. If my assumed ~3.0 C is wrong, it’s a demonstration of my bias and motivated reasoning I’ve just become aware of on this particular point.

      • 3.0 has been the value for a long time (incl. AR4). However, the divergence between the climate models and the energy balance methods considered by AR5 rightly resulted in their declining to indicate a ‘best’ or median value. Note AR5 WG3 didn’t get the memo until too late, and they used 3.0.

      • Judith,

        Thank you very much for this. You’ve saved me making an argument over something I was wrong on.

        Can you tell me where WG3 says it used ECS=3.0?

      • While the mean ECS value of AR5 WG1 Table 9.5 is 3.22 K/doubling, the median is 2.9. The median in general is a better choice for “central estimate” than the mean, although one should make that choice clear.

      • Harold, where did you get median 2.9? I don’t see this cited in Ch 9.

      • Source of the median is Excel. ;-)

        I also don’t recall the median being cited, by the way. Multi-model mean always seemed to be used.
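The mean-versus-median point above is easy to illustrate. A minimal sketch with hypothetical ECS values (NOT the actual AR5 Table 9.5 numbers), chosen so that a few high-sensitivity models drag the mean well above the median:

```python
import statistics

# Hypothetical ensemble of equilibrium climate sensitivities (°C per CO2
# doubling). Illustrative values only, NOT AR5 Table 9.5 data; the point
# is the right skew: a few hot models pull the mean up.
ecs = [2.1, 2.4, 2.6, 2.8, 2.9, 2.9, 3.3, 3.7, 4.1, 4.6, 4.7]

mean_ecs = statistics.mean(ecs)      # pulled upward by the 4+ °C models
median_ecs = statistics.median(ecs)  # the middle model, insensitive to the tail

print(round(mean_ecs, 2), median_ecs)
```

With a right-skewed ensemble the two “central estimates” differ by several tenths of a degree, which is why it matters whether a quoted figure is the multi-model mean or the median.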

    • Peter Lang
      One reference:
      Forcing, feedbacks and climate sensitivity in CMIP5 coupled atmosphere-ocean climate models
      Timothy Andrews, Jonathan M. Gregory, Mark J. Webb, and Karl E. Taylor
      Table 1: model mean 3.37

    • Here is an authoritative reference:

      The multi-model ensemble mean in ECS is 3.2°C, a value nearly identical to that for CMIP3, while the CMIP5 ensemble range is 2.1°C to 4.7°C, a spread which is also nearly indistinguishable from that for CMIP3.
      pp. 817-818, Chapter 9, “Evaluation of Climate Models”. Model mean, Table 9.5.
      https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter09_FINAL.pdf

    • Thank you Judith, David Wojick, HaroldW and Nobodyknows for your responses. I am much better informed now. And I am grateful to Judith and all for warning me I’d got it wrong on my understanding of the central estimate of ECS from the GCMs. I now understand the best central estimate, from the widely cited and accepted as authoritative source, IPCC AR5, is:

      Model mean is 3.2°C +/- 1.3°C, Table 9.5, https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter09_FINAL.pdf

      I’ve now submitted my revised article, so hopefully it will get published before the Paris UN Climate meeting, and all the delegates will be persuaded :)

  2. Perhaps the most interesting aspect of Revkin’s NYT article about the plankton papers is his growing skepticism about “climate campaigners” (his words), as seen here.

    “The eagerness to find the “Pearl Harbor moment” or line of evidence that jogs people to act on the long-term risk of human-driven climate change, combined with the “tyranny of the front-page thought,” will long cause the kind of reaction that the initial plankton paper engendered — and that past papers on frog extinctions, Atlantic Ocean currents, ice-sheet behavior, hurricane dynamics and other facets of the climate puzzle have done.”

    The climate campaigners are not happy about this evolution of the news media from uncritical supporters into journalists, and even more unhappy about reminders to the public of their past exaggerated publicity about initial research findings.

  3. daveandrews723

    I read that article about the Greenland glaciers study. I don’t think it could have been written in a more alarming way. That study is sure to be a hit in Paris next month. The scientists who performed the study will be “stars.”

    • Dave, there are actually two: the second and last items, both about the same report on a glacier having lost its floating ice shelf. The report’s instigator is UC/JPL’s Rignot, the same Rignot who started the Pine Island Glacier alarm on the Amundsen Embayment in 2014 (essay/guest post Tipping Points), which led to the Potsdam modeling work Judith also links this week. Unfortunately for the sea level alarm, Zwally’s new paper says Antarctica is net gaining ice. And NASA’s Greenland team says it did not lose anything in 2014. The DMI data strongly suggest that if glacier iceberg calving was constant, then in January NASA will have to estimate that Greenland net gained >200 Gt in 2015. Shouting alarm based on specific small coastal glaciers draws attention away from the icecaps overall, and from the fact that SLR has not accelerated.

      • @ristvan. OT but don’t know if you look at Donna Laframboise’s web site regularly. She posted a link to a fascinating paper that may be of interest to you. The title is
        “The battle against global warming: an absurd, costly and pointless crusade”
        The authors are from the Société de Calcul Mathématic, http://www.scmsa.eu
        The paper is at:
        http://www.scmsa.eu/archives/SCM_RC_2015_08_24_EN.pdf

      • Ristvan, good observation on author Rignot. This article appeared in Science Express, just published online. Science Express is the vehicle to get papers published in the shortest time at AAAS/Science. In this case ‘peer review’ took a couple of months, which is extraordinarily short. The front matter of this issue of Science had Marcia McNutt, editor in chief, extolling the huge contribution of Michael Mann to climatology, referring to his iconic “hockey stick” as the bellwether contribution to our present understanding of anthropogenic global warming. The main Science online issue has a bunch of papers on the devastating effects of man-made warming. I mention this because AAAS has become “even more” of a political advocacy vehicle, and the papers appearing just recently sure seem to be driven by the timing in the run-up to Paris COP21.

        Science Express:

        What is Science Express? Science Express provides rapid electronic publication of selected research papers, Perspectives, and other articles that have recently been accepted for publication in Science. Each week we select several papers for online publication in PDF format within two weeks of acceptance. For authors, it’s a chance to get their peer-reviewed results in front of the scientific community much faster than they would otherwise appear in print. For readers, it’s an opportunity to connect with these hot results immediately.

        How are Science Express papers chosen? The editors select the papers appearing in Science Express from among all fields of science. The main criteria are the timeliness and importance of the research. Additional editorial changes in the text and figures may appear in the print version of the paper.

      • RayG, I know of Donna and occasionally check in. Have read the French paper. Found it more a manifesto than a reasoned science argument. Maybe just my mood at the time.
        DW, McNutt’s editorial shows what is wrong with Science (the mag) and science (climate). But exposing all the bad papers is like playing ‘whack-a-mole’. I happened to have a fast start on SLR because I wrote three essays specifically on it in ebook Blowing Smoke: one exposing scientific misconduct, one exposing the inherent uncertainty and the ludicrous papers trying to explain wiggles, and one on the evolution of the Greenland and Antarctic ice sheets. Rignot ignores counter-evidence in his published papers, misinterprets his own, and gets NASA no less to hype his results. Classic climate science, as in classic bad.

    • We have entered a warm period. This is a time of more snowfall. NASA just published a story that says Antarctic ice is now increasing, which shows the Antarctic ice in the plot this alarmist story published is wrong. NASA will publish another story showing the Greenland ice in the plot in this alarmist story is wrong. They should move their Paris meeting sooner; they are running out of time because data is piling up against them.

    • Warhol must have had this current generation of climate scientists in mind when he made his famous quote.

    • Steven, did you see the article at Nova’s concerning the low thermal inertia thermocouples vs the high thermal inertia bulb thermometers? As the transition was made from bulb to bimetal, a higher trend was introduced since the thermocouples respond more quickly, and therefore record higher max temps, than the bulb type.

      • Instrumental error is one of many uncertainties.

      • No didn’t see that.

        There are two ways to do the error analysis.

        Bottom up
        Top Down

        In a bottom-up approach you try to account for all the possible errors:
        instrument change, TOB change, station move, transcription, etc.
        This is Phil Jones’s approach. Done this way you end up with really small errors in monthly data, and you run the risk of missing errors or of doing wonky adjustments.

        We do top-down, which avoids this whole approach. Our error is about 8X what the Jones error is.

        The approach is simple

        T = C + w +e

        The temperature is a combination of a climate field, which is deterministic, a weather field, which varies, and finally an error.

        The error is the sum total of ALL errors: we don’t attempt to break that down into individual components.

        Quoting ourselves:

        “We note that the correlation in the limit of zero distance, R0 = 0.874, has a natural and important physical interpretation. It is an estimate of the correlation that one expects to see between two typical weather stations placed at the same location. By extension, if we assume such stations would report the same temperature except that each is subject to random and uncorrelated error, then it follows that 1 − R0 = 12.6% of the non-seasonal variation in the typical station record is caused by noise processes that are unrelated to the variation in the underlying temperature field.
        Since the average root-mean-square non-seasonal variability is ~2.0°C, it follows that an estimate of the short-term noise for the typical month at a typical station is ~0.49°C at 95% confidence. It must be emphasized that such estimates of noise incorporate all the variations that exist between stations, including those attributable to different instrumentation, different measurement procedures, different instrumental settings or microclimates, etc. Further, this estimate is also influenced by both historical and regional differences in the way temperature has been measured. Prior studies [8] on the reproducibility of temperature observations using consistent instrumentation have generally reported much greater precision, e.g. +/- 0.06°C, so it is likely that most of the noise we report here is due to differences in instrumentation and measurement approaches. For example, a station that reports mean temperature by calculating the simple average of max and min extremes may vary considerably from stations that average data recorded hourly, or via other processes. Our results suggest that estimates based on identical instrumentation and methods fail to capture most of the noise that actually exists in the historical weather observing system. However, other authors [8] generally assign a large uncertainty to the homogenization process (e.g. 0.8°C). We suspect that the large uncertainty they associate with homogenization essentially captures much of the same short-term noise that we observe.”
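The arithmetic in that quoted passage can be sketched as follows. Splitting the nugget variance equally between two co-located stations is my assumption about how the per-station figure was obtained, so treat this as a plausible reconstruction of the numbers, not BEST’s actual code:

```python
import math

R0 = 0.874   # correlation between two typical stations at zero separation
rms = 2.0    # °C, RMS non-seasonal variability of a typical station record

# Fraction of station variance attributable to uncorrelated noise
noise_frac = 1.0 - R0                         # ≈ 12.6%

# Noise implied by the station pair, then split equally between the
# two stations (assumed equal, independent noise at each station)
pair_noise = rms * math.sqrt(noise_frac)      # ≈ 0.71 °C combined
per_station = pair_noise / math.sqrt(2.0)     # ≈ 0.50 °C each

print(round(noise_frac * 100, 1))  # percent of variance that is noise
print(round(per_station, 2))       # °C, close to the quoted ~0.49 °C
```

The per-station result lands near the quoted ~0.49 °C figure, which is why the equal-split assumption seems a reasonable reading of how the estimate was reached.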

      • Jim2, at NoTricksZone Pierre Gosselin posted a careful side-by-side analysis in Germany spanning several years, to give the average calibration offset for the German weather service thermocouples. IIRC, the analysis did not include the thermal inertia issue, which will give higher highs. IMO that is how jet blast bunged up Heathrow’s recent heat record. Paul Homewood at NotaLotofPeopleKnowThat did a series of posts on this, but did not cover the differences in thermal inertia.

      • Mosher’s field stuff is statistical nonsense. There is no temperature field.

      • David is sore that he can’t follow the math.
        Standard methods.
        Easily tested.
        Tested and verified.
        So David is left with… They can’t do what works.

      • Mosh wrote:

        It is an estimate of the correlation that one expects to see between two typical weather stations placed at the same location. By extension, if we assume such stations would report the same temperature except that each is subject to random and uncorrelated error, then it follows that 1 −𝑅 0 = 12.6% of the non-seasonal variation in the typical station record is caused by noise processes that are unrelated to the variation in the underlying temperature field.

        Why talk about two stations? To me that seems a little fast and loose, imposing a picture that is abstract and hence awkward to talk about. The same goes for the usual off-the-shelf interpretation that the nugget is the independent ‘random’ error. [The quotes (‘) are another way of saying, ‘take this interpretation with a grain of salt.’] It seems to me that here things are messy with respect to temperature sample support.* A thorough consideration of the aspect might prove to be an interesting exercise and will likely appear someday. If I were in the game and in academics I would be looking at it, presuming I already had tenure, of course. :O)
        ——————————————
        Note: Support has a very specific meaning in geostatistics and entails more than sample number. [To me it also overlaps with the idea of a representative volume and up/down scaling.]

        Regarding the BEST approach in particular, the time dependence of the correlation function/semi-variogram remains lurking in the background and that includes the nugget. But these are research items, important but still have to wait their turn. … wish I were younger and glad I am not.

        A couple of sources on the nugget and sample support in geostatistics are below. You can also get a free pdf of Clark’s 70’s classic, Practical Geostatistics. There are legitimate and interesting aspects to be probed and that is not only helps things get better it motivates further effort. Some of that has been covered in the back-and-forth at this blog a couple of years back.

        An article by Isobel Clark: Statistics or geostatistics? Sampling error or nugget effect?
        http://www.saimm.co.za/Journal/v110n06p307.pdf

        Jeffery Meyers, in Geostatistical Error Management: Quantifying Uncertainty for Environmental Sampling and Mapping, goes quite a bit into support and Gy sampling theory. Although written from an environmental and mining perspective, there is likely much to be gleaned from the material with respect to surface temperature fields. Here is the Google preview:

        https://books.google.com/books?id=dUQFc141Cb8C&pg=PA98&lpg=PA98&dq=geostatistics+%22sample+support%22&source=bl&ots=H_UJ6xtKc7&sig=eizO4z3ZpjaGgKOsy-yvWnUwUMQ&hl=en&sa=X&ved=0CDEQ6AEwA2oVChMI2vfXyIyQyQIVQ-QmCh3DEgsS#v=onepage&q=geostatistics%20%22sample%20support%22&f=false

        …hmmm, a number of interesting things pop up in Google, e.g., going the other way, working with remote sensor data [ http://info.asprs.org/publications/pers/99journal/january/1999_jan_41-50.pdf ], but Clark and Meyers are more in my comfort zone for now.

    • In this case, since the mechanism of a potential source of error is known, a correction could be formulated for it, sort of like what was done for buoy temps and ship temps. In this case, the correction would cause less of an uptrend, I’m thinking.

      • It’s better to just detect breaks empirically.

      • Naw. Make a thermal model of the various sorts of sensors and bake that into the code.

      • No Jim, then you have to add and propagate the ERROR due to your modelling of the sensor, and you have to assume that every sensor acts like the model of the sensor, so you add more structural uncertainty.

      • Yes, we live in an imperfect world. And … ? So, propagate the error if you can.

        One possible approach.

        1. Determine a weighted average of the thermal mass of the bulb thermometers.
        2. From the thermal mass of a given thermocouple assembly, determine the original signal.
        3. Apply that signal to the thermal mass determined in 1., then assign that modified signal to the thermocouple in 2.
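The idea in steps 1-3 can be sketched with a first-order (exponential) thermal lag as the sensor model. The time constants and the temperature trace below are illustrative assumptions, not measured instrument values:

```python
import math

def lag_filter(signal, dt, tau):
    """Simulate a sensor with thermal time constant tau (seconds):
    each reading relaxes toward the true temperature at rate 1/tau."""
    alpha = 1.0 - math.exp(-dt / tau)
    out = [signal[0]]
    for x in signal[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

# Illustrative 1-minute samples: a brief midday warm spike.
dt = 60.0
true_temp = [25.0] * 10 + [29.0] * 3 + [25.0] * 10

fast = lag_filter(true_temp, dt, tau=40.0)   # small thermal mass (thermocouple-like)
slow = lag_filter(true_temp, dt, tau=600.0)  # large thermal mass (bulb-like)

# The low-inertia sensor tracks the spike and records a higher daily max;
# the high-inertia sensor smooths it out.
print(round(max(fast), 2), round(max(slow), 2))
```

This is the direction of the claimed bias: for short spikes the fast sensor’s recorded maximum comes out materially higher, so changing instrument types without a correction of this kind could shift recorded max temperatures.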

      • On second thought, we don’t really have something like a continuous signal for min/max and read-once-per-day thermometers … so … I guess we’re stuck with the mess we have now.

  4. NASA found pretty “odd” behavior in ocean circulation even though this is the first time such capability was used. Also, note that the “sudden and extreme 4-inch sea-level rise on the East Coast.” was followed by a precipitous drop in the last 5 years.

    Over a 5 to 10 year period can anything really be classified as odd?

    • Indeed, nature is an oscillator, on all scales of interest. Thus it is a constant source of scares for those who raise alarms.

    • Willis Eschenbach

      Chris Mooney wrote the article, he’s the king of weasel words like “might” and “could” and “worrisome”. He needs to be moved immediately to the “Safe Space” at Yale so he has less to worry about.

      And yes, the first time such capacity was used, we find that it is Worse Than We Thought™ in Mooneyville …

      w.

  5. Jellyfication. More misleading science by university press release. What the paper actually did was look at about 100 invasive marine species. Anthropogenic, sure; invasive because spread by shipping, not climate change. They asked the question: what would projected acidification do, hurt or help? Their answers were all over the map, dependent on species and region, according to the abstract. Except for the invasive Mediterranean jellyfish featured in the PR. Hence the PR alarm. And the PR did not mention the recent papers finding (1) that in shallow Pacific reefs, corals do fine at lower pH in the absence of pollution, and (2) that corals have the ability to create their own little pH-stabilized microenvironments, enabling them to build/maintain their calcareous exoskeletons.

  6. Is the Antarctic losing or gaining ice? [link]
    That link did not work for me just now.
    http://www.earthweek.com/2015/ew151106/ew151106a.html
    This Earthweek link works. I like it when they publish something that disagrees. Their publisher will not engage with deniers. Now, he is one.

  7. Jeremy Poynton

    “Scientists say Greenland just opened up a major new floodgate of ice into the ocean”

    I assume that means a glacier calved?

  8. The PR about the deep ocean fingerprint perspective in Science is awful. Had to be done just for Paris. Equal parts projection of euphotic zone observations into the cold dark deep, and rank speculation. For example, science has not identified many of the deep ocean creatures, but Science says they are all going to suffocate from lack of oxygen caused by less mixing with warming surface water. Guess neither the authors, nor Science, nor the reviewers ever heard about the thermocline. Deep oceans are below. There is no thermal mixing, or there would be no thermocline. And if it is not a problem in the deep Caribbean now, it sure is not going to become a problem in the deep North Atlantic then. Just illogical nonsense.

    • Had to be done just for Paris.

      the Paris thermidor ?

      • Sad news of more shootings in Paris. It must be assumed that the security will be extremely tight for the Paris climate event.

        Tonyb

      • Yes, tragic, got back earlier today, just turned on the BBC’s news at ten.
        My comment above, regretfully now out of place.

      • And the news is getting worse. Mark this down. This is an event that will reverberate around the world. Those nations that have lost control of their borders will see a surge of healthy nativism that has already been evidenced in Europe as a result of the invasion of “refugees” from across the Mediterranean. The most recent elections in Europe show which way the wind is blowing. The left-loon open borders crowd is on the way out.

      • Will the Paris climate summit go ahead? There must be considerable doubt about the ability of the French state to protect 40,000 delegates, where an attack would be a prime PR triumph for terrorists.

        Personally I view terrorism as a far greater threat than climate change. Those who have thrown open their borders will regret doing so. Angela Merkel has gone quite mad, but the Swedes are not far behind.

        Tonyb

      • Death toll already in hundreds.

      • Tony, I happened to wake early this morning and commented on another thread. I’ll paraphrase again: when I first saw the immigrant waves I figured there were some potential threats and we need to monitor. There will be terrorists, but other dynamics are at play too. For every terrorist there will potentially be more assets, informational and actionable. Attacks are inevitable, but power dynamics may be less obvious.

    • ristvan,

      Don’t worry. It’s all Climatological Physics, which bears only a passing resemblance to reality.

      In WarmWorld, less dense warmer water floats down, not up. Cold dense water ascends, due to the magical attractive power of CO2. The thermocline is a denialist conspiracy. Deep ocean currents are caused by magical translation of wind energy from the surface to the depths. And so on.

      Either these people are fools, or they assume all of us are.

      “Clowns to the left of me,
      Jokers to the right, here I am,
      Stuck in the middle with you.” – Stealers Wheel.

      Cheers.

  9. Good spot. The good news is the next major climate wander is also going to look pretty odd and the one after that too. These guys can be writing the same press release for another decade.

  10. “Suppose this technology was suddenly implemented on a large scale, with massive air filters lining our busiest highways stripping the air of excess CO2. …Lackner said there are multiple options for captured CO2. It could be used to produce plastics, concentrated to carbonate fizzy drinks…”

    In light of this visionary plan, which might be coupled with the Russell Seitz idea of micro-bubbling the oceans, my own visionary plan to completely obliterate the climatariat must be looking even more attractive today.

    C’mon, men. It’s us or those white elephants. Trample or be trampled.

    • mosomoso,

      “Take it and feed it to plants,” said Lackner. Carbon dioxide is an essential ingredient in photosynthesis . . .

      Gee. Stop CO2 from getting to the plants, by capturing it, then feed the plants with it . . .

      This could only come from an academic looking for bigger grants – I hope.

      Cheers.

      • You can’t be sure the CO2 will be ethically sourced and not used for experiments on hyper-carbonating Fanta or Dr Pepper. Geoengineers are mostly in the pay of Big Silly, though some work for Big Potty.

    • Deadly CO2 pollution could be captured with artificial trees, then “concentrated to carbonate fizzy drinks.

      Lost me with the POLLUTION bit.

  11. NASA (link) ‘finds’ ocean currents ‘troubling.’ like
    that ‘troubling’ warming period of the 1930’s they
    jest had ter adjust?

    https://notalotofpeopleknowthat.wordpress.com/2014/06/26/massive-temperature-adjustments-at-luling-texas/

  12. Re:

    “New paper finds “no detectable signal of the Arctic sea ice approaching a local bifurcation [tipping point].”

    The abstract seems to be composed of pseudo scientific gibberish. An example: “Furthermore, the power spectrum of the system’s time series reveals the generation of harmonics of the forcing period, the size of which are proportional to how nonlinear the system’s response is becoming with nonlinear effects becoming more prominent closer to a bifurcation.”

    If the authors understand the concept of chaos, they are remarkably adept at hiding it. Maybe, like all dedicated Warmist “scientists” they have redefined terms such as attractor and bifurcation – as usually used – to mean something only understood by practitioners of Warmism.

    Doesn’t anybody in authority keep these people under control? It’s fairly obvious the authors believe what they are writing, but so does a student who answers all the exam questions and still fails by getting all the answers wrong.

    Oh well, another attempt to explain away failure of the Arctic ice to melt on demand. Voodoo science, perhaps?

    Cheers.

  13. Before I go and waste some time –

    “I think that I shall never see
    A poem lovely as a tree.

    A tree whose hungry mouth is prest
    Against the earth’s sweet flowing breast;

    A tree that looks at God all day,
    And lifts her leafy arms to pray;

    A tree that may in summer wear
    A nest of robins in her hair;

    Upon whose bosom snow has lain;
    Who intimately lives with rain.

    Poems are made by fools like me,
    But only God can make a tree.

    – Joyce Kilmer”

    Why not just use trees to capture CO2? Self regulating, low maintenance, useful byproducts, and sometimes they are nice to look at, and walk amongst.

    Cheers.

  14. The comment on the Zwally paper at Realclimate is better linked here
    http://realclimate.org/index.php/archives/2015/11/so-what-is-really-happening-in-antarctica/
    This one really looks like an outlier based on shaky data too, and doesn’t even agree with independent sea-level rise rates.

    • It could mean the estimates of other sources are off.

    • Jim D: This one really looks like an outlier based on shaky data too,

      In your judgment, which of the Antarctic data are non-shaky?

      • This is a case where the skeptics have chosen to unquestionably believe a study based on the bottom line, but not on the methods used being superior in any way, which they are far from.

      • Presumably Zwally’s hope was to confirm the GRACE data with an independent data source. This hasn’t happened, so the natural response is to try to resolve this difference. You could just trash the Zwally work, but as MM says, none of the methodologies are without uncertain assumptions. The correct scientific response seems to be to accept greater uncertainty with respect to Antarctic mass change. After all, if Zwally’s data had confirmed the GRACE work, that would have added strength to the conclusions.

      • Jim D: This is a case where the skeptics have chosen to unquestionably believe a study based on the bottom line, but not on the methods used being superior in any way, which they are far from.

        Maybe. Which of the Antarctic data sets do you think are non-shaky?

        Who has chosen to believe it unquestionably? (do you mean “unquestioningly”?) I think what we have is a ream or so of paper that on the whole does not support the idea that the Antarctic is losing mass. Exactly how much it might be gaining is hard to tell.

  15. Deadly CO2 pollution could be captured with artificial trees, then “concentrated to carbonate fizzy drinks”

    deadly… Huh? Messed up headline above. Do not need that kind of slip.

  16. The Cold Sun Rising article is a bit of comic relief designed to cheer up the skeptics in the face of all the facts around them. It concludes that at least the violins will be good. What is this stuff?

    • Curious George

      Please link to three facts.

      • I was referring to the many links in the week in review that will have skeptics dismayed. Greenland, Antarctic, Pacific, …

      • Jim D: I was referring to the many links in the week in review that will have skeptics dismayed. Greenland, Antarctic, Pacific, …

        I read a bunch of minor contributions to climate science, what Thomas Kuhn labelled “Normal Science”. Which of those reports is supposed to have me in dismay? Greenland, the Arctic, the Antarctic, the Pacific become more complicated year by year, but warnings or allegations of approaching disaster remain unfounded.

        Jim D: The Cold Sun Rising article is a bit of comic relief designed to cheer up the skeptics in the face of all the facts around them.

        It’s the point of view of a solar scientist, somewhat simplified. It’s written in the tone and depth of zillions of warnings about CO2-induced warming. The solar theories and the CO2 theories are full of holes, but their contrasting predictions are being made and recorded, and it should not take long to find out which are the least accurate.

        And in other news, not cited here, warming records and cooling records continue to be set all over the Earth, wherever weather records are kept. Pick your favorites to test, or at least examine, for global climate change attribution.

      • Since you are the audience this article was intended for, did the violinmaking prediction help to make you feel better about the future?

      • Jim D | November 13, 2015 at 8:40 pm |
        I was referring to the many links in the week in review that will have skeptics dismayed. Greenland, Antarctic, Pacific, …

        That’s funny. Using Real Climate to refute a scientific study is about as useful as linking to “The Onion”.

      • PA, you can read the Realclimate critique and say what specifically you don’t like about it, or, more likely, not.

      • Oh JimD! It is unbelievable how your bias continues to shine so brightly!

    • All that varnish research flushed down the hole.

    • Jim D: , did the violinmaking prediction help to make you feel better about the future?

      I had already read the scientific claims. The violinmaking was worth a chuckle.

      Back to your comment about dismay: which of the articles was supposed to cause dismay in skeptics?

      • I suspected that the polar glaciers would be worrying, and the fact that warming continues unabated after the “pause” would cause dismay, but perhaps the skeptics don’t actually care about such things, nor that the 1 C rise so far is still consistent with the AGW projections.

      • Jim D, it’s difficult to imagine any possibility which would not be consistent with AGW projections.
        And if you want to get some idea what causes dismay in sceptics, try walking a mile in a sceptic’s shoes.

      • John Carpenter

        “A PDO-assisted heatwave is ongoing. Enjoy it.”

        No…. Get used to it. No amount of mitigation is going to stop the inertia already under way. It will be many generations before the effects of CC would presumably reverse course due to any ‘urgent’ mitigation schemes cobbled together today. Best to work on adaptive measures now and for the immediate future and work on mitigation as a side project.

        The heating is already in the pipeline… The worst is yet to come.

      • John Carpenter:

        The heating is already in the pipeline
        What exactly does that mean?

      • Back when libraries had tons of books about almost anything, I found these books that were speeches given at an annual convention of American piano builders. So I pulled one down and started reading bits and pieces.

        I found a speech where the speaker started with this bold claim: he said that, had Stradivarius known of a certain species of wood from North America, it would have been the only wood he would have used.

        These American trees all grew long after the Maunder Minimum.

        So I sought out the best builder of guitars I could find and had him, reluctantly on his part, build me a guitar with that species incorporated. It was an astoundingly good guitar, and it helped start what has become the 2nd golden age of American guitar building.

        This is what the PDO-assisted warming looked like during the 1st golden age of American guitar building.

        So I don’t think the violins will be better when the sun blinks off.

      • That’s just one of the old lines that alarmist drones are supposed to keep repeating, peter.

      • The heat at the end of the pipeline is not enough to get here.
        ================

      • John Carpenter

        Peter3172,

        “The heating is already in the pipeline
        What exactly does that mean?”

        It means there is enough CO2 in the atmosphere today to guarantee a TCR of greater than 2C if you look at the rate of warming plotted by JCH. There is no amount of mitigation that can change that rate based on the consensus science understanding of how long CO2 resides in the atmosphere. Simply put, JCH’s plots indicate we are gonna fry.

        Adapting to the changing climate is the only real near term solution to that problem. COP21 will be a waste of time.

      • John Carpenter | November 14, 2015 at 11:13 am |

        It means there is enough CO2 in the atmosphere today to guarantee a TCR of greater than 2C if you look at the rate of warming plotted by JCH.

        This statement is simply wrong. The CO2 increase happened post-1940 and in 1959 was still only 2.45 GT/Y. The TCR is referenced to a 20-year period, etc. We can compute an upper limit on the TCR for 1959-1999. The 1959 to 2015 warming is 0.5°C (guesses between 0.4 and 0.6 are accepted). The 1959-1999 CO2 change is 64 PPM.
        ln(2) * 0.5 / ln(370/316) = 2.2 °C
        Most of the CO2 warming effect is supposed to be immediate. If we remove the IPCC direct forcing from 2000 to 2015 (for 370 to 400 PPM), or 0.28 °C, it becomes:
        ln(2) * (0.5 - 0.28) / ln(370/316) = 0.96 °C

        Further, the CO2 rate of increase (PPM/Y) has been pretty constant for about 40 years. The GISS temperature since 1980 follows a roughly logarithmic curve, which is what would be expected from a constant increase in the CO2 level. A log curve goes asymptotically parallel to the X axis. The global warmers show temperature going asymptotic parallel to the Y axis (exponential), which shows either misunderstanding or low math grades.

        The CO2 annual rate of increase only changed from 2.06 PPM to 2.2 PPM (7%) in the face of a 49.2% increase in emissions since 2000. It appears it will stay around 2.0 PPM/Y for the foreseeable future.
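        As a sanity check, the two back-of-envelope formulas above can be run directly. This is a minimal sketch using only the figures quoted in this comment (0.5 °C of warming, 316 → 370 PPM, and the assumed 0.28 °C of post-2000 direct forcing); it is not a claim about the true TCR:

```python
import math

def tcr_estimate(delta_t, co2_start, co2_end):
    """TCR-style estimate: warming per CO2 doubling, scaled from an
    observed warming over an observed CO2 change (logarithmic forcing)."""
    return math.log(2) * delta_t / math.log(co2_end / co2_start)

# Upper limit: attribute the full 0.5 C (1959-2015) to the 316 -> 370 PPM change
tcr_upper = tcr_estimate(0.5, 316, 370)

# Lower figure: first subtract the comment's 0.28 C of post-2000 direct forcing
tcr_lower = tcr_estimate(0.5 - 0.28, 316, 370)

print(round(tcr_upper, 2), round(tcr_lower, 2))
```

        The two printed values reproduce the 2.2 °C upper limit and, up to rounding, the 0.96 °C adjusted figure quoted above.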

      • If the PDO were a real cycle it would have a 2nd half that would look something like THIS

        Unfortunately, the 2nd half of the PDO cycle, the one where the earth used to cool about the same amount that it had warmed, looks like a pathetically weak HIATUS.

      • JCH | November 14, 2015 at 2:56 pm |
        If the PDO were a real cycle it would have a 2nd half that would look something like THIS

        Unfortunately, the 2nd half of the PDO cycle, the one where the earth used to cool about the same amount that it had warmed, looks like a pathetically weak HIATUS.

        Meh.

        The TCR as computed from 1959 to 2000 (see above) is somewhere between 0.96°C and 2.2°C.

        The 2.2 PPM/Y annual increase in CO2 has changed 7% this century in the face of a 49.2% change in emissions. The annual CO2 increase is going to be around 2 PPM/Y unless emissions flatten out, in which case it will decline like it did in the 90s.

        560 PPM by 2100 (assuming we find about 3000 GT more fossil fuel to burn, and less if we don’t) is a worst case. There isn’t any way to project more than 1°C of warming for the rest of the century, and less than 2.5°C of warming simply isn’t harmful.

        Global warmers claim CO2 drives temperature and that the effect of solar variation is insignificant. There is a strong possibility (one the warmers have been warned about) that they are arrogantly and stupidly wrong. If the temperatures decline we should be tarring and feathering global warmers and riding them out of town on a rail. I have the tar and feathers but am waiting for the temperature to decline.
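        For what it’s worth, the bottom line here can be reproduced from the comment’s own numbers. A small sketch taking the 560 PPM worst case, today’s ~400 PPM, and the 0.96–2.2 °C TCR range argued above at face value (the logarithmic-forcing form is standard; nothing here is an independent projection):

```python
import math

def warming_to_2100(tcr, co2_now=400.0, co2_2100=560.0):
    # Logarithmic CO2 forcing: warming scales with the number of
    # doublings, ln(C/C0)/ln(2), times the transient response TCR
    return tcr * math.log(co2_2100 / co2_now) / math.log(2)

for tcr in (0.96, 2.2):  # the TCR range computed upthread
    print(tcr, round(warming_to_2100(tcr), 2))
```

        With the lower TCR this gives roughly 0.5 °C, and with the upper roughly 1.1 °C, bracketing the “about 1 °C the rest of the century” claim.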

  17. NASA found a way to track ocean currents from space. What they saw is troubling

    For me, that is behind a paywall. Is there another link to the report?

    • There is a subset of scientists that are troubled by everything they see.

      Emotionally upset scientists do not produce good science and they should be debarred/RIFed

  18. 17 days to COP21 in Paris, and six terrorist attacks overnight killing 120+ people. Paris in lockdown, 1500 troops called in, state of emergency etc. Might this cause a rethink on the meeting?

  19. re: NASA found a way to track ocean currents from space. What they saw is troubling

    “the current study does not claim to have detected a long-term, downward trend in the strength of the AMOC. The point was more to prove that the technique works. And now that it does, the researchers say they are preparing to do more long-term analyses.”

    there is nothing “troubling” about the results

  20. re: Is the Antarctic losing or gaining ice?

    GRACE mass balance data show a small but steady decline 2002-2014
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2684427

    • Chaam Jamal:

      Thanks for the link. I had a lot of questions about the strength of conclusions from that study.

      For Greenland, Munshi found that:

      At this value of lag, the correlation between mass loss
      and lagged surface temperature is r = 0.243.

      An r^2 of 0.059?

      Since the correlation is positive it indicates that warmer weather causes greater ice loss. Although correlation does not imply causality (Watkins, 2011), the correlation between the detrended series does provide support for the hypothesis that surface temperature participated in the mass loss of the Greenland ice sheet in the period 2002-2014.

      Surface temperature “participated in” mass loss? Even with that weak correlation standard (i.e., barely positive) note that Munshi couldn’t do the same in Antarctica:

      The data for the Antarctica ice sheet are very different from those of the Greenland ice sheet. No temperature data set and no value of lag could produce a statistically significant correlation for the Antarctica ice sheet mass losses. The data for the Antarctica ice sheet do not suggest a causal relationship between surface temperature and the observed mass loss of the ice sheet.

      • If the bulk of Antarctica is cooling the correlation would either be reported positive (indicating the ice mass is growing) or reported as a non-significant correlation if you believed the ice mass couldn’t possibly be growing.

  21. re: Ocean acidification means jellyfishification

    “Ocean acidification may well be helping invasive species of algae, jellyfish, crabs and shellfish to move to new areas of the planet with damaging consequences”

    maybe but it can’t be blamed on fossil fuel emissions
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2669930

    • “re: Ocean acidification means jellyfishification”

      Another piece of Zohnerism.
      Global shipping has created such an invasive species problem that any “climate” caused infestations are insignificant.

      And it isn’t like they didn’t live there in the first part of the interglacial, or during the Eemian when it was warmer.

      Nice link:
      “Environmentalists and climate scientists have for long struggled with the issue of fossil fuel emissions, because this source of carbon appears to be external and unnatural and foreign to the surface biota, the natural carbon cycle, and to a fragile climate and environmental equilibrium that nurtures life on earth.”

      We really have to debar any climate scientist from the “human carbon bad, natural carbon good” school. Anyone this deluded is going to commit scientific malpractice.

      280 PPM was too low. Plants were having to evolve to survive low CO2. People who think 280 PPM is an ideal level and that any human emission is bad suffer from mental illness or defect that causes them to improperly process information.

  22. Obama and Chairman of The Joint Chiefs Of Staff: “The most serious security problem facing the US and the world is man-made global warming”.

    Al Gore: “I’m out of here”.

    Delegates to Paris Climate Summit after today: “Cancel my reservation.”

    I wonder if Al Gore would have cancelled his climate cavalcade of elites planning wealth seizure if the temps were 2 degrees above normal. I mean that’s a real security concern.

  23. “Artificial trees.”

    More drivel.

    What would the perfect artificial tree look like?
    It would build itself, feed itself by EFFICIENTLY drawing nutrients and energy from the air, ground and rainfall without any human intervention, and replace itself by exponential self-reproduction in the same manner as living organisms can, honing its abilities to successfully adapt to changing environmental conditions over many hundreds of millions of years.

    In other words, a goddam tree.

    Where do they get these people from?

  24. This is old but interesting – I have been looking for data on thermalization of IR by CO2. All I can find so far are a couple of Russian papers which describe theoretical and experimental kinetic *cooling* of gases containing CO2 when excited by a CO2 laser. It is hard to get the full papers because they are old, but the previews have the gist of it.

    http://link.springer.com/article/10.1007/BF00892963?no-access=true#page-1
    http://link.springer.com/article/10.1007/BF00906254?no-access=true

  25. Willis Eschenbach

    Steven Mosher | November 13, 2015 at 5:34 pm |

    We note that the correlation in the limit of zero distance, R(0) = 0.874, has a natural and important physical interpretation. It is an estimate of the correlation that one expects to see between two typical weather stations placed at the same location.

    Thanks, Mosh. I’ve heard you make this claim before, but I fear that I simply don’t understand why two stations next to each other would happen to have a most exact correlation of 0.874 with each other.

    Not only that, but your claim is in total opposition to the results of Hansen and Lebedeff. See the graphic at http://wattsupwiththat.files.wordpress.com/2011/02/hansen-distance-vs-correlation.jpg

    As you can see, the correlation both varies widely, and with decreasing distance between stations it also generally approaches 1.0, not 0.874.

    So … a link to the basis for your claims would be good, as well as an explanation of just how Hansen and Lebedeff got such a different answer from yours.

    Regards and thanks,

    w.

    • http://static.berkeleyearth.org/papers/Methods-paper-Appendix-sm.pdf

      The authority for the claim is:
      Error! Reference source not found

      This does not inspire confidence in the BEST claim.


    • “Thanks, Mosh. I’ve heard you make this claim before, but I fear that I simply don’t understand why two stations next to each other would happen to have a most exact correlation of 0.874 with each other.”

      Simple: That is what we observe.

      look up nugget

      • PA and TE

        Regarding the BEST correlation function. Here is a graphic from the current methodology appendix:

        BEST pair-wise correlation: data and model (black line)

        * First, note that there is scatter in the ‘data’, i.e., the pair-wise correlations [the individual blue dots], as evident in the finite vertical spread of the cloud at every distance. This spread, while variable with pair distance, does persist at small distances.

        * Second, note that the spatial structure is evident in the curvature of the blue cloud.

        * Third, recall that the correlation value cannot exceed one (+1). Thus the scatter at small distance is inevitably constrained to values at or below +1. It should be obvious then that the spread centers at each distance will be below +1. Hence the nugget is below +1.

        * The difference between the correlation function at zero (0) and +1 is then the spatially independent random error–or so it is assumed.

        BTW, “expected” is a key word in Mosh’s response.

        Anyway that is sorta how things are off the cuff.

        regards
        mw
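        mwgrant’s third bullet (the nugget sitting below +1) can be demonstrated with a toy simulation: two hypothetical collocated stations share one underlying signal but carry independent measurement/siting noise, so their correlation plateaus below one. The 0.4 noise level here is invented for illustration and is not BEST’s estimate:

```python
import math
import random

random.seed(0)

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Shared "climate" signal plus independent per-station noise
signal = [random.gauss(0, 1.0) for _ in range(5000)]
noise_sd = 0.4  # hypothetical instrument/siting error, sd in signal units
site_a = [s + random.gauss(0, noise_sd) for s in signal]
site_b = [s + random.gauss(0, noise_sd) for s in signal]

r = correlation(site_a, site_b)
# In expectation r -> var(signal) / (var(signal) + noise_sd**2) = 1/1.16,
# i.e. about 0.86 -- below +1 even at zero separation
print(round(r, 2))
```

        The same mechanism explains why the BEST extrapolation lands near 0.87 rather than 1: the gap is the spatially independent random error.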

      • MWG – thank you for the article on kriging and the “nugget” effect. Also, thanks for the clear explanation that had heretofore been missing from this thread.

      • jim2

        My pleasure. I’m glad it was a help.

      • I second jim2’s thanks, Mr Grant.

      • Willis Eschenbach

        mwgrant | November 15, 2015 at 6:22 pm |

        PA and TE

        Regarding the BEST correlation function. Here is a graphic from the current methodology appendix:

        Thanks, MW. There are a couple of problems with your graphic. Here’s an expanded version:

        First, all the colored lines stop at an inter-station distance of 100 km. They go nowhere near the y-axis, where the station separation is zero. This makes sense because the lines are smoothed, and there are only a few stations closer than 50 km apart in your graphic. I find fewer than two dozen station pairs that are less than 50 km apart. (That in itself seems quite strange, since in the US at least there are lots of stations closer together than that.)

        But the black line goes all the way to zero … how you do dat?

        It is identified as a “model fit” … but it is not a fit, it is the extrapolation of a model beyond the data to the zero line. And there is only one single dot at zero distance, which is strange in itself since there are often two stations at the same location or quite close with some overlap, not just one out of 30,000.

        Next, the idea that your value for where your imaginary black line, based on some model or other, hits the y-axis is accurate to three decimal places is a joke.

        Next, regarding your model, it goes straight level starting at 3,000 km, while the data does nothing of the sort. Bad model, no cookies.

        Finally, you say:

        Note to willis: The correlation function is required for kriging and hence is determined before kriging.

        It’s not clear what you mean by the “correlation function”. Is it the “model fit” that you refer to above?

        In short, I see no scientific justification for extending your analysis from the 100 km distance where your colored lines stop, on out past where the data ends, and all the way out to the y-axis. At least in this graphic, you have nowhere near enough data to extend your results to the y-axis.

        My best to you,

        w.

      • Hi Willis

        As noted the graphic is from the current BEST Methodology Appendix, but I throw in the link here as a convenience:

        http://berkeleyearth.org/wp-content/uploads/2015/08/Methods-Appendix-GIGS-13-103a.pdf

        In the cut and paste of the graphic I also included the caption which explains the blue cloud and the various colored lines in the figure.

        First, the black line corresponds to the modeled correlation and is the correlation function. It is a three (3) parameter model fit to the blue dots (pair correlation vs. pair distance). See Eqn 14 on p. 4 in the Appendix. Because it is a model, it is easy to extend it down to distance = 0. The colored lines are move range calculations, which would explain why they do not go all the way down to zero. For the record, they have absolutely no bearing on my reply above.

        A key idea behind kriging is that the correlation vs. distance points based on the observed data are replaced by a model of the relationship. This is a necessity in order to solve the kriging equations for weighting factors ultimately used in the interpolation part. To be very clear the ‘observed’ pair correlations and pair distances are not explicitly used in solution of the kriging equations. This applies to kriging in general and not just the BEST approach.

        IMO extrapolation back to a nugget (distance equals 0) having three decimals both isn’t a joke and is a joke. There is no doubt that fitting the correlation function is subject to errors. But I do note that if anyone wants to assert it is a joke, then it would seem to me that they would need to make that statement in terms of quantitative effects on the downstream interpolated values. This of course also applies to the BEST team with respect to making statements about the quality of their results. They of course did some error analyses which in theory can be examined. I defer commenting on the history of blog discussions, etc. Life is too short and it is a complex subject.

        I disagree with you on the matter of the correlation behavior at large distance–with the caveat that probably, if one goes out to very large distances, things fall apart because the number of paired observations drops. Certainly in the figure the spherical model seems to work quite well in tracking the blue cloud. Also note that the useful part of the correlation function is the part where the correlation is a varying function of pair separation distance, i.e., the regimes prior to the plateau. That distance is roughly indicated by dmax, one of the three parameters of the model. The discussion on pp. 4-5 covers this a little.

        Anyway, that is my story, and I’m sticking to it. I of course do not speak for BEST.

        HTH and best regards, Willis … although I suspect you might be mumbling, ” Gee willikers, another model?!’ about now. :O)

        mwg
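        To make the “fit a model, then read off the nugget” idea concrete, here is a generic sketch using a spherical correlation model with a nugget, a standard geostatistical form. It is only an illustration: the function, the true parameters (0.87, 3000 km), the noise level, and the grid search are all invented here, and BEST’s actual Eqn 14 may differ:

```python
import random

def spherical_corr(d, c0, dmax):
    """Spherical correlation model with nugget: rho(0+) = c0 < 1,
    decaying to 0 at distance dmax (generic geostatistics form)."""
    if d >= dmax:
        return 0.0
    h = d / dmax
    return c0 * (1 - 1.5 * h + 0.5 * h ** 3)

random.seed(1)
# Synthetic "pair correlation vs distance" cloud: true c0=0.87, dmax=3000 km
dists = [random.uniform(0, 4000) for _ in range(800)]
cloud = [spherical_corr(d, 0.87, 3000.0) + random.gauss(0, 0.1) for d in dists]

# Crude least-squares fit by grid search over the two parameters
best = min(
    ((c0 / 100, float(dm)) for c0 in range(50, 100)
     for dm in range(2000, 4001, 100)),
    key=lambda p: sum((spherical_corr(d, *p) - r) ** 2
                      for d, r in zip(dists, cloud)),
)
print(best)  # expect values near (0.87, 3000.0)
```

        Given enough station pairs the fit recovers the generating parameters, which is the same logic that lets the black line in the BEST figure be extended to zero distance even though no pair sits exactly there.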

      • Thanks mw

        the hilarious thing is that it was guys like willis, briggs, romanM, a whole load of skeptics, who suggested the various techniques we used.

        guess they liked the approach before they saw the result.

        anyway, who would have thought skeptics would attack an approach that has 8 times the error of the Phil Jones one they attacked for being too small?

        go figure.

      • Steven Mosher,

        You wrote –

        “the hilarious think is that it was guys like willis, briggs, romanM, a whole load of skeptics who suggested the various techniques we used.”

        The silly thing is that the whole exercise was and is pointless. You might just as well have analysed actual surface temperatures, but of course, this would have achieved just as little.

        What are you trying to achieve? Can you think of a single benefit from this whole “climate change” charade?

        It seems you are convinced that the world is somehow “warming”, in spite of four and a half billion years of evidence to the contrary. The only minor problem with your efforts at peering into the future is that you can’t actually provide anything useful. When, where, how much, quantifiable effects – all these seem to be of supreme indifference to such as yourself.

        Even the most pedestrian psychic or astrologer will give you specifics – the tall dark handsome stranger, the four children, future prosperity, and all the rest. At least there is something positive to look forward to, even if it is all complete guesswork – nonsense, even.

        You and your lot predict only gloom, disaster and despair. What a bleak and joyless world you would have us all live in! Or are we all supposed to go back to pre industrial times? Is this your vision of the Golden Age?

        You claim the debate is over. The masses may care to differ. If you wish to continue the debate, some people claim that mass debating to excess will send you blind, or give you hairy palms.

        I’m not sure whether measurebators suffer the same consequences.

        Cheers.

      • willis the easiest way to understand this is to actually do the work yourself.

        Then consider this: you have hundreds of stations within a kilometer of each other. Over the course of time between those stations you will see instrument changes, site changes, etc. That should give you a clue as to what we are doing.

        but once you realize that the monthly error is on the order of .5C ( and not .05C as Jones thought ) THEN you should understand other debates ( UHI and microsite ) better.

        Still. if you think the error is SMALLER than .5C.. you can publish
        if you think its bigger… publish..

      • TE

        Well, this stuff is jumping around a couple of threads. I assume your figure is from the Watts effort, but you didn’t reference it. Going ahead with that assumption: some time back I played with the USHCN annual temperatures, looking at the effect on correlation [(co)variance] when restricted to the better data per the classification scheme. I used semi-variograms instead of correlation functions–roughly, SVs and CFs have an inverse relationship.

        As I remember, both the plateau (sill) and sometimes the range (think dmax) decreased. However, fitting the SVs, at least by eye, was more difficult because of greater scatter with fewer data. Also, the work was done using lag interval classes and not cloud semi-variograms.

        Other than that I do not know what to say about your plot because you say nothing.

      • correction

        mwgrant wrote:

        As I remember both the plateau (sill) …

        Bad memory…try

        for resid~x+y+z [R formula notation] plateau increases…

    • “Not only that, but your claim is in total opposition to the results of Hansen and Lebedeff. See the graphic at http://wattsupwiththat.files.wordpress.com/2011/02/hansen-distance-vs-correlation.jpg

      Sorry, but

      1. We use more stations than Hansen
      2. I’ve never been able to REPRODUCE that hansen graph..
      ergo its not science.
      3. Do the test yourself.

      The OBSERVED correlation in ALL THE DATA suggests a non-zero difference at distance zero. This is in effect ALL of the error.

      • Huh?

        Two duplicate weather stations, side by each, have only a 0.874 correlation? As a side note the besties used to claim a 0.8802 correlation but the same is more different than it used to be.

      • If stations were really so highly correlated over large distances, one would not observe all the small scale variation that occurs at finer resolutions.

        But we do observe such variation a la UDEL:

      • TE, you have a good point but the wrong graph.

        Plot the actual station data at a pixel level (multiple stations in one pixel would be combined). Actual data only, no estimated data. Use a 2000×4000 background and a Robinson Projection (GISTEMP uses Robinson). Use grey for pixels with no stations.

        This would make a couple of things clear:
        1. How realistic the data manipulation is. The maps should roughly match.
        2. How bad the data coverage is and how much is simply invented.

      • Steven Mosher:

        2. I’ve never been able to REPRODUCE that hansen graph..
        ergo its not science.

        To be clear, do you mean you attempted to reproduce H&L’s result using their original data or using BEST’s larger database?

      • “as well as an explanation of just how Hansen and Lebedeff got such a different answer from yours”

        One reason is that they are plotting sub-regions. Some approached close to 1, some less.

        Another reason is that H&L only allowed pairs with 50 years of common data, vs BEST 10. The nugget includes the random error of the measure. That’s a lot higher with 10 years.

      • The other big difference between BEST and H&L is that BEST is monthly, H&L annual. Again, monthly has more non-spatial noise.

      • The H&L 1987 procedure resulted in a 1200 km range for correlating stations and infilling/influencing temps that is still cited and used by GISS.

        The 1200-km limit is the distance at which the average correlation coefficient of temperature variations falls to 0.5 at middle and high latitudes and 0.33 at low latitudes.

        This seems a remarkably lax standard by which to adjust reported (or fill in absent) temperature data.

        Steven Mosher has suggested that BEST data analysis does not replicate these results. That seems like a big deal to me.

      • mwgrant | November 15, 2015 at 6:25 pm |
        PA, TE, willis see
        mwgrant | November 15, 2015 at 6:22 pm |
        https://judithcurry.com/2015/11/13/week-in-review-science-edition-27/#comment-743769

        What in these “correlation” discussions is actually claimed to be measured?

        It sounds like “positional” correlation is claimed to be measured.

        However, if positional correlation was the objective, siting and instrument calibration would be constrained.

        The chart shows that the correlation for instrument/siting differences varies from 1 to 0.2 at zero distance and the model accuracy is a little optimistic.

        That isn’t good. I suspect if an area was saturated with a couple of hundred precision well sited stations and the result was compared to the “computed temperature field” there would be significant differences.

        Again, an objective engineering team should look at the situation and establish a data processing standard because with data this lousy there is a lot of opportunity for adjustment hanky-panky.

      • PA:
        What in these “correlation” discussions is actually claimed to be measured?

        It sounds like “positional” correlation is claimed to be measured.

        ‘Observed’ is in quotes because here the correlation is calculated from the data. Yes, I assume without apology a certain level of understanding on the part of the reader–particularly because the topic has already been around for a while and the [kriging] concepts are basic to any discussion. While it is not universal nor unambiguous, I often use quotes to call out a term that might require some reflection.

        ############

        PA: However, if positional correlation was the objective, siting and instrument calibration would be constrained.

        Well I guess that you can say it is an interim objective given that

        A key idea behind kriging is that the correlation vs. distance points based on the observed data are replaced by a model of the relationship. This is a necessity in order to solve the kriging equations for weighting factors ultimately used in the interpolation part.

        ############

        PA wrote: … The chart shows that the correlation for instrument/siting differences varies from 1 to 0.2 at zero distance and the model accuracy is a little optimistic.

        As I have noted elsewhere the error analysis is important in the bigger picture. [To be clear here I am trying to give an idea of what is going on in the general scheme and am not judging details of the effort.]

        ############

        That isn’t good. I suspect if an area was saturated with a couple of hundred precision well sited stations and the result was compared to the “computed temperature field” there would be significant differences.

        Maybe, maybe not. There are a lot of variables. Someone else could say:

        “I suspect if an area was saturated with a couple of hundred precision well sited stations and the result was compared to the “computed temperature field” there would be good agreement.”

        Location, location, location… :O) You pick, say, Salt Lake County in Utah and they pick a rural county in eastern Iowa. Load them up with instrumentation, and measure away. I suspect both parties would walk away feeling vindicated. Suspicion can help in teasing out problems but is limited in resolving them.

        ############

        Again, an objective engineering team should look at the situation and establish a data processing standard because with data this lousy there is a lot of opportunity for adjustment hanky-panky.

        LMAO. Ain’t no human objective; they’re human.

        ‘Hanky-panky’? Come on! You just undermine your objectivity with stuff like that.

      • TE

        If stations were really so highly correlated over large distances, one would not observe all the small scale variation that occurs at finer resolutions.

        I should have been more specific above in regard to this comment. Visually the small scale (distance) variations are reflected in the width of the cloud in the figure and the fact that the correlation function is a single line. Yes, there is error but then again kriging is an estimator.

        So what can one say or speculate about that? Well, if you happened to have followed some exchanges in earlier posts you might have noticed some references to accounting for physiographic effects, land cover, drainage winds, and that sort of thing. These can and do occur at regional and smaller scales.

        I think it is particularly important to keep in mind that (in BEST at least) the kriging is not directly calculating/estimating the temperature anomalies but is acting on the residuals from the regression of temperature against latitude and altitude [nonlinear dependencies]. After the kriging is done, one builds the local temperature estimates from the regression models and the kriged residual field, and then the anomalies. A lot of the smaller-scale variation is likely buried in the regression even before tackling the residual kriging. Indeed, looking at the USA alone (and at individual years) the effect of scale on the correlation is evident. Time is another matter, for some day in the future.
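        A toy sketch of that two-stage structure, with synthetic stations and inverse-distance weighting standing in for the actual kriging (all names and numbers are illustrative assumptions, not BEST's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stations: temperature depends on latitude and altitude,
# plus a smooth residual field that the second stage must recover.
n = 200
lat = rng.uniform(30.0, 50.0, n)
alt_km = rng.uniform(0.0, 3.0, n)
resid_field = 0.5 * np.sin(lat / 3.0)             # stand-in spatial residual
temp = 30.0 - 0.6 * lat - 6.5 * alt_km + resid_field

# Stage 1: regress temperature on latitude and altitude.
X = np.column_stack([np.ones(n), lat, alt_km])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
residuals = temp - X @ beta

# Stage 2 (stand-in for kriging): inverse-distance-weighted interpolation
# of the residuals at a target point, then rebuild the local estimate.
lat0, alt0 = 42.0, 1.0
w = 1.0 / (np.abs(lat - lat0) + 0.1)
resid_hat = np.sum(w * residuals) / np.sum(w)
temp_hat = np.array([1.0, lat0, alt0]) @ beta + resid_hat
```

        The point of the sketch is only the division of labor: the regression soaks up the latitude and altitude dependence, and the interpolation step then only has to handle what is left over.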

        Now that I have kicked up some sand, what to do? Well, it seems to me that if one wants to assimilate the nature and magnitude of the effects of estimation, one needs to tease apart the BEST error analysis along with the estimation. Yeah, we are talking about a lot of effort. All of this is clearly still work in progress. OK.

      • LMAO. Ain’t no human objective; they’re human.

        ‘Hanky-panky’? Come on! You just undermine your objectivity with stuff like that.

        Well, there are obviously a number of factors to correct for. One would expect that the trend of the corrections on historic data would be trend neutral. Instead they are temporally positive. Claiming that 1910 has cooled 0.26°C relative to 2000 since 2008, and that an adjustment equal to 60% of the original trend is justified, is indefensible.

        In 2018 or so the adjustments will exceed the original difference. At that point we should fire everyone at NASA and NCDC that is responsible for producing the adjusted data. The “we have to adjust data to make it better” dance has gone on long enough.

        A specification for temperature data handling should be written and the processing responsibility turned over to a private sector engineering organization.

      • TE

        “If stations were really so highly correlated over large distances, one would not observe all the small scale variation that occurs at finer resolutions.”

        Wrong.

        1. The stations ARE highly correlated over large distances. The way you prove they are NOT is by doing math, not writing sentences.

        2. The small scale variations are the result of different factors
        a) different stations
        b) different methods

      • PA

        ‘That isn’t good. I suspect if an area was saturated with a couple of hundred precision well sited stations and the result was compared to the “computed temperature field” there would be significant differences.”

        Err no.

        There is a simple thing you can do.

        Take the thousands of sites in the US and hold out 100 of them
        Hold out the CRN.
        CRN are the gold standard according to the skeptics.

        Build a field using the thousands of horrible sites.

        Use the field to predict the temperature at the Gold standard.

        Compare.

        1. Will there be differences? Yes.
        2. Will they be “significant?” No.

        Recall the metric we are most interested in: Change in temperature over time.
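        The hold-out procedure above can be sketched on synthetic data; here linear interpolation stands in for the field-building step, and the held-out "gold standard" readings are treated as noise-free (all illustrative assumptions, not the actual CRN workflow):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D "temperature field": a smooth signal, observed by a
# large network of noisy ("horrible") stations.
n = 1000
x = np.sort(rng.uniform(0.0, 100.0, n))
field = 10.0 + 2.0 * np.sin(x / 15.0)
noisy_obs = field + rng.normal(0.0, 0.5, n)

# Hold out 100 stations as the gold standard.
held = rng.choice(n, 100, replace=False)
mask = np.ones(n, dtype=bool)
mask[held] = False

# Build a field from the remaining stations (linear interpolation as a
# stand-in for kriging) and predict at the held-out locations.
pred = np.interp(x[held], x[mask], noisy_obs[mask])
rmse = np.sqrt(np.mean((pred - field[held]) ** 2))
bias = np.mean(pred - field[held])
```

        With these invented numbers the prediction error at the held-out sites stays well below the single-station noise in trend terms and shows no systematic bias, which is the sense in which the differences, while real, need not be "significant."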

      • “To be clear, do you mean you attempted to reproduce H&L’s result using their original data or using BEST’s larger database?”

        somebody got the clue!

        Willis used a chart that can’t be checked because the original data used to generate it was not published with the paper.

        bad willis.

        See what Nick Stokes says to understand the other problems with Willis’ attempted gotcha.

      • “This seems a remarkably lax standard by which to adjust reported (or fill in absent) temperature data.”

        Another person who doesn’t understand.

        There are times when 1200 km will give you a high bias and times
        when it will give you a low bias. It all depends on what happens in the region where you don’t sample.

        If you use 250 km you see the SAME PROBLEM… sometimes the bias is high, sometimes it is low.

        What this says is that you want an interpolation method that REDUCES bias… there will always be some bias.

      • “But we do observe such variation a la UDEL:”

        pretty funny?

        UDEL has temperatures for the south pole before any measurements were made there.

        Oops.

      • PA.

        A specification for temperature data handling should be written and the processing responsibility turned over to a private sector engineering organization.

        Having worked largely in the private sector I see that point of view and have no problem with it. Personally I would substitute ‘qualified’ for ‘engineering’; both scientists and engineers are limited in seeing their own biases and exude too much confidence, leading to bad hearing, IMO. There are some niche aspects to the scheme. Of course the meaning of ‘independent’ might be a little contentious. Just a little.

        Critique the data all you like (and that is appropriate here), but as for me, I am attempting to help with the methodology a little. It still seems to baffle folks.

        mw

      • PA, MWG, WE, SM, TE — I rather enjoyed the discussion of the correlation function that was used in the kriging.

      • 1+2=2 & 3+4+5=15. I say, let there be more math.
        ==========

      • matthewmarler

        Yeah it is interesting stuff and one hasn’t even gotten to the interpolation, i.e., kriging. Thanks PA, WE, SM, TE, …

      • TE

        Regarding the graph (CONUS) see

        mwgrant | November 16, 2015 at 12:39 pm |
        https://judithcurry.com/2015/11/13/week-in-review-science-edition-27/#comment-743868

      • Steven Mosher:

        There are times when 1200km will give you a high bias and times when it will give you a low bias. It all depends on what happens in the region where you dont sample.

        As stated by you, “what happens” is unknown (i.e., you didn’t sample). Therefore you are attempting to fill in or project an assumed temp onto the blank spots on the map.

        H&L performed various tests to determine the degree to which they could rely upon infilling across various distances. Shorter distances, other things being equal, should produce stronger correlation but at the expense of apparent global coverage. As stated in H&L 1987:

        If the 1200-km limit described above, which is somewhat arbitrary, is reduced to 800 km, the global area coverage by the stations in recent decades is reduced from about 80% to about 65%.

        H&L elected to stick with 1200 km which produced an average explanatory power of only 0.33 across approximately 40% of the earth’s surface (the tropics) — and not much better than that across a significant portion of the rest of the globe.

        In addition, even utilizing the 1200 km method, the southern hemisphere does not achieve 50% “coverage” until about 1940 in the H&L study.

        I also have wondered how much of their correlation in higher latitudes could be driven by residual (that is, unsuccessfully filtered) seasonal fluctuations, given that the more poorly correlating tropical zone lacks large seasonal temperature drivers. H&L’s analysis relies upon annual averages, but they do point out that, at monthly resolution levels:

        …although the seasonal cycle has been removed to first order, the effect of changes in the seasonal cycle are still present.

        Happy to be pointed toward papers that address these matters.

      • Nick Stokes:

        Another reason is that H&L only allowed pairs with 50 years of common data, vs BEST 10.

        In the graph at issue H&L used 50 years of common data but elsewhere in the paper they used 20 years of common data overlap within the 1200 km range because “a limit greater than 20 years caused a significant reduction in the global area with station coverage.”

      • “As stated by you, “what happens” is unknown (i.e., you didn’t sample). Therefore you are attempting to fill in or project an assumed temp onto the blank spots on the map.”

        1. technically what happens 1 meter from a thermometer is Unknown!
        what happens 10cm away is unknown! all that is known is what
        happens AT the thermometer.
        2. By definition we predict what is not measured. When I tell you the
        average swede is 6.1257 feet tall I am making a prediction
        about all unmeasured swedes. I am predicting that 6.1257 will
        minimize the error of prediction.
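        The claim that the average minimizes prediction error can be made precise: for squared error, no constant guess beats the sample mean (the median plays the same role for absolute error). A small illustration with a hypothetical population (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)
heights = rng.normal(6.1, 0.3, 10_000)   # hypothetical population of heights

best = heights.mean()
mse_best = np.mean((heights - best) ** 2)

# Any other constant guess has a larger mean squared error,
# since MSE(c) = Var(h) + (c - mean)^2.
for guess in (best - 0.2, best + 0.2, 6.0, 6.5):
    assert np.mean((heights - guess) ** 2) > mse_best
```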

        “H&L performed various tests to determine the degree to which they could rely upon infilling across various distances. Shorter distances, other things being equal, should produce stronger correlation but at the expense of apparent global coverage. As stated in H&L 1987:

        If the 1200-km limit described above, which is somewhat arbitrary, is reduced to 800 km, the global area coverage by the stations in recent decades is reduced from about 80% to about 65%.

        H&L elected to stick with 1200 km which produced an average explanatory power of only 0.33 across approximately 40% of the earth’s surface (the tropics) — and not much better than that across a significant portion of the rest of the globe.

        In addition, even utilizing the 1200 km method, the southern hemisphere does not achieve 50% “coverage” until about 1940 in the H&L study.

        I also have wondered how much of their correlation in higher latitudes could be driven by residual (that is, unsuccessfully filtered) seasonal fluctuations given that more poorly correlating tropical zone lacks large seasonal temperature drivers. H&L’s analysis relies upon annual averages but they do point out that, at monthly resolution levels:

        …although the seasonal cycle has been removed to first order, the effect of changes in the seasonal cycle are still present.

        Happy to be pointed toward papers that address these matters.”

        The point is this. The H&L method doesn’t use all the information in the data to do the interpolation. The only information they use is latitude and longitude (and the latter doesn’t tell you much). Next, they don’t use all the data. They only use long stations.

        As to papers that address these matters, there isn’t a lot, because the differences in approaches don’t lead to significant differences in the final estimates. I refer to these types of problems as having “technical” interest. That is, guys like Nick Stokes and mwgrant and a few others (Carrick) and me find them interesting NOT because they will overturn the science of global warming (that science was true before the first temperature series was ever constructed); we find them interesting because
        a) the details are cool
        b) some corners of regional climate change may get more light
        with better approaches.

      • “Shorter distances, other things being equal, should produce stronger correlation but at the expense of apparent global coverage”

        yes but shorter distances can lead to a HIGH warming bias.. as with NCDC this year

      • TE..

        Nice chart of CONUS. Too bad they won’t release the data; going on 3 years now.

        bad science.

      • PA

        “Huh?

        Two duplicate weather stations, side by each, have only a 0.874 correlation? As a side note the besties used to claim a 0.8802 correlation but the same is more different than it used to be”

        #######################

        NO.

        IF they were duplicates there would be no difference.

        Here is what you do.

        You plot correlation versus distance using the actual data.
        Then you fit that data to a model. The model will have an intercept at zero distance.

        Now, picture this: in all your raw data you will have stations that are close together (say a few km apart) BUT differ in temperature.
        When you fit the correlation you will see that the correlation at distance zero is less than 1.

        That difference from 1 is due to all the possible differences between stations… sensor change, asphalt, noise.. or the total error.

        It’s a different approach to estimating all the error that could impact a site.

        The other approach is bottoms up.

        1. you detail all the sensor changes
        2. you detail all the site differences
        3. you detail all the observation changes
        4. you ASSUME you got all those changes correct.
        5. You estimate the error for all those changes
        6. you combine the errors.

        bottoms up versus top down.

        In top down you are assuming that the correlation model you fit to the observations is correct.
        you assume that it is correct from your shortest actual distance between stations ( say a couple of km ) down to zero.
        You assume that the difference from 1 is a result of all error processes.

        Time permitting, I might put together a subset of only those stations that are really close to each other.
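        A quick synthetic illustration of why the fitted correlation at zero distance falls below 1: two co-located stations share a common signal but carry independent station-level errors, so they correlate at roughly signal variance over total variance. The variances below are invented, chosen so the expected correlation lands near the quoted 0.874:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two co-located stations: same true signal, independent station-level
# errors (sensor drift, siting, noise).
n = 5000
signal = rng.normal(0.0, 1.0, n)
a = signal + rng.normal(0.0, 0.4, n)
b = signal + rng.normal(0.0, 0.4, n)

# Expected correlation = 1 / (1 + 0.4**2) ≈ 0.862, strictly below 1.
r = np.corrcoef(a, b)[0, 1]
```

        This is the top-down idea in miniature: the shortfall from 1 is a single summary of all the per-station error processes, without having to enumerate them.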

    • “It is an estimate of the correlation that one expects to see between two typical weather stations placed at the same location.”

      Thanks, Mosh. I’ve heard you make this claim before, but I fear that I simply don’t understand why two stations next to each other would happen to have a most exact correlation of 0.874 with each other?

      ###############
      Err, I didn’t say that they would… quoting my words should have helped you.

      1. They would NOT happen to have this EXACT correlation.
      2. It is an ESTIMATE of what one would EXPECT to see for two
      TYPICAL weather stations at the same location.

      The difference from 1 is a result of all error processes and noise.

      So for example… if you have two co-located stations and one has an instrument change and the other does not
      One goes out of calibration and the other does not
      One changes time of observation and the other does not.

      When I tell you that the average swede weighs 189.54678 lbs
      I am not telling you that every swede weighs this exact amount.
      I am telling you that this estimate minimizes the error. Don’t repeat the phony skeptical “false precision” arguments; they are straw men.

  26. Johnathan Jones

    Meanwhile scientist frogs are freezing and boiling in pans of water to death consecutively year after year this decade.

  27. Johnathan Jones

    The saddest thing about scientists sitting in a pan (or orb) of water being heated on a stove is not that they can’t recognize decades of year-after-year consecutive record hot and cold temperatures, no! It is that more of the 7.5 billion people on earth understand frogs screaming that steadily increasing water temps immobilize them from jumping than the language scientists speak. It’s like they can’t really talk at all. Then we could ask them: hey, after year after year of consecutive record hot and cold temperatures, do you really think the world is just going to heat up or cool down and kill us, or is something else going on? Like, water is a refrigerant that carries retained heat off the surface up to space in a day, to keep temps at levels life can exist at? I know many are scratching their heads after reading “ordinary speak” used to address scientific principles, but don’t underestimate it as a language of science. As the Earth retains more heat from a more insulative atmosphere, more cooling events transpire; what would that be called? Planetary refrigerant cooling, via the refrigerant known as water, theory? I mean really, those smartaxes trying to prove warming or cooling via science gobbledyspeak just ain’t getting the read from the billions, or the tens of thousands in the media. Ribit, rebit, ribit… wt..f, over?

    • The silence is of the tomb. The echoes within the chamber are fantasies. The barrier between the machined imagination in the core and the glorious Gaiaous nature surrounding it is impermeable.

  28. Would someone please show me that there is a merciful god.

    • Rev 3:20 Behold, I stand at the door, and knock: if any man hear my voice, and open the door, I will come in to him, and will sup with him, and he with me.

    • Don’t blame this on God. Blame it on the people who did it.

      Andrew

    • RiHo08

      As mankind slowly reveals the myriad layers of the climate puzzle, which have in concert conspired to constrain climatic range and conserve/replenish the resources required for the system to continue for millions if not billions of years, in contrast with the fate of Mars for instance, one may want to come to grips with a single question:

      Is this an incredible series of lucky dice rolls, or are we the benefactors of an amazing and unearned grace?

      The choice is yours to make, and will affect you more than it will the world around you.

    • RiHo08,

      I’m still alive and enjoying a quiet life.

      Will that do?

      Cheers.

    • All you offer are words. Is that sufficient? I believe not.

      Our President offers…what? more of the same: bad behavior. Rewards in the political arena.

      Divine intervention is not only unlikely; according to pundits, climate scientists, and those with a malevolent agenda, they will succeed. More’s the pity.

  29. “The Nation: Cold Sun Rising”
    Quotes:
    “The last time we became concerned about cooler temperatures – possibly dangerously cooler – was in the 1970s.”

    Conflating regional with global: it was very warm in Europe from 1970 to late ’76 (positive NAO); solar minima have increased negative NAO.

    “But that’s not the only cooling effect of a solar minimum. A dim sun doesn’t deflect cosmic rays away from Earth as efficiently as a bright sun. So, when these rays enter our atmosphere, they seed clouds, which in turn cool our planet even more and increase precipitation in the form of rain, snow and hail.”

    During the coldest run of years of Dalton in CET (1807-1817), El Nino frequency roughly doubled. That decreased rainfall across many continental interiors. The same in the colder years of the Gleissberg Minimum (1885-1895), increased negative NAO and El Nino, and a warm AMO, giving increased drought.

    • one problem.

      you cant find ANY observational evidence that clouds increase when GCRs increase.

      • Another problem; not enough blackboard room or time to explain clouds, a la Stevens @ Ringberg.
        ==================

      • At a conference.

      • “you cant find ANY observational evidence that clouds increase when GCRs increase.”

        Good, as there was increased continental interior drought through the stronger negative NAO periods during the solar minima.

      • one problem.

        you cant find ANY observational evidence that clouds increase when GCRs increase.

        As I understand it, that’s not what the theory predicts.

        The theory is that cloud albedo increases with increasing GCR because more CCN ( cloud condensation nuclei ) change the droplet distribution ( greater number of droplets but with smaller size ). This makes clouds more reflective, but not greater in number or coverage.

      • The theory is that cloud albedo increases with increasing GCR because more CCN ( cloud condensation nuclei ) change the droplet distribution ( greater number of droplets but with smaller size ).

        Of course, there’s not overwhelming evidence of this, either.

  30. stevenreincarnated

    It’s pretty much over once the climate community starts blaming cooling on global warming. The populace is going to shove them in little cars and point the way to Barnum and Bailey.

    • Words mean exactly what they want them to mean, nothing more, nothing less.

      • A shutdown of the AMOC has long been discussed. Poking a stick at an angry beast.

      • Poking a stick at an angry beast.

        Good creative writing, not so much for science.

      • In the IPCC Fifth Assessment Report, it was reported that it is very unlikely that the AMOC will undergo a rapid transition (high confidence).

      • JCH | November 14, 2015 at 3:29 pm |
        A shutdown of the AMOC has long been discussed. Poking a stick at an angry beast.

        I have poked water before and unless there is a fish in there it is pretty unsatisfying.

        From my extensive experience poking water with a stick your worry about it attacking is unjustified.

      • Lol… I am not worried. It is funny that the Water Chef constantly posted comments that claimed abrupt climate change – the shutdown of the AMOC – was in the cards this century, and skeptics rarely said a word… apparently because they thought the over worried chef was a fellow skeptic.

      • Never really made too much sense, just another speculation from the calamity crowd.

        If polar seas are covered with ice, they can’t generate deep(er) water.
        If polar seas incur less sea ice, they generate more deep(er) waters.

        AGW could generate greater circulation just as likely as it would result in reduced circulation.

      • JCH | November 14, 2015 at 4:40 pm |
        Lol… I am not worried. It is funny that the Water Chef constantly posted comments that claimed abrupt climate change – the shutdown of the AMOC – was in the cards this century, and skeptics rarely said a word… apparently because they thought the over worried chef was a fellow skeptic.

        Well, it is nice they are trying to measure it…

        But that doesn’t tell us anything about the AMOC trend in the 20th century.

        So there are two issues:
        1. What is the uncertainty with the new measurement?
        2. What is normal?

        The problem with the Arctic ice, sea level, temperature, etc. is we don’t really have comparable data going back to 1900. We can’t distinguish between cyclical behavior, new problems, old problems.

    • stevenreincarnated

      The question would be what poked it in the MWP to create a LIA. I’ll throw in a little face paint as they leave.

  31. My apologies for being wildly off topic, but in the spirit of the Entente cordiale with our French friends across the Channel at this difficult time, can I combine the comic silliness of the British, in the form of Mr Bean, with perhaps the most evocative of all French songs, ‘La Mer,’ which epitomises the special character of the French nation.

    All you need to know is that Mr Bean has spent the entire holiday trying to get to the South of France and unexpectedly succeeds at the very end, in a charming homage to the special character of the French nation. Turn up loud!

    To skip ad go to bottom right hand side of the video.

    Vive la France!

    tonyb

  32. Here is an interesting paper on increases in evapotranspiration associated with climate change:

    http://www.nature.com/articles/srep15956

  33. Willis Eschenbach

    Steven Mosher | November 13, 2015 at 5:34 pm |

    We note that the correlation in the limit of zero distance, R0 = 0.874, has a natural and important physical interpretation. It is an estimate of the correlation that one expects to see between two typical weather stations placed at the same location.

    Steven Mosher | November 15, 2015 at 11:22 am

    ”Thanks, Mosh. I’ve heard you make this claim before, but I fear that I simply don’t understand why two stations next to each other would happen to have a most exact correlation of 0.874 with each other.”

    Simple: That is what we observe.

    That’s great. Which station pairs at zero distance did you use for the “observations”?

    I ask because I doubt greatly that it is from “observations”. I suspect instead that a) it is a result of your kriging calculations, and b) you have never tested it against station pairs at zero distance.

    But I’m happy to be proven wrong, all you have to do is identify the station pairs that you used to test your accurate-to-three-decimals claim.

    look up nugget

    Gotta repeat for the Nth time, your cryptic posting style sucks. Look up “nugget”? Say what? If you have a link, post it. If you don’t, please don’t play cute, it’s not doing your reputation any good.

    w.

    • Willis Eschenbach

      Mosh, let me add that your cryptic posting style is frustrating, but not because it lacks content. It is frustrating precisely because I know that the few cryptic words generally represent interesting ideas that I’d like to hear about …

      w.

  34. Does charge gradient power wind?

    The last step in this chain of reasoning seems crazy to me. Curious what people make of it.

  35. It has been difficult convincing the public of climate models here on Earth, but
    today the Smithsonian’s Dr. Kenneth M. Towe claimed that solar eruptions are 100% predictable.

    https://www.researchgate.net/post/Is_there_convincing_evidence_of_AGW#view=5648f4566225ffef5c8b45c0

  36. Does anyone here know anything about the Group on Earth Observations? They just had a big meeting to approve their ten year plan. The report says “GEO is a voluntary partnership consisting of Members and POs that are coordinating efforts to build the Global Earth Observation System of Systems (GEOSS). GEO Members are the European Commission (EC) and any UN member state that formally endorses the GEOSS 10-Year Implementation Plan. POs are intergovernmental, international and regional organizations with a mandate in Earth observations (EO) or related activities who have formally endorsed the GEOSS 10-Year Implementation Plan and been approved by the GEO Plenary. Currently, GEO consists of 99 Member countries and the EC and 87 POs.”
    http://www.iisd.ca/geo/12/html/enbplus220num5e.html

    Sounds interesting.

  37. New Climate Model Predicts Likelihood Of Greenland Ice Melt, Sea Level Rise, And Dangerous Temperatures
    “The model is based on idealized representations of societal, technological and policy factors,” said lead researcher Jeremy Fyke, of the Los Alamos National Laboratory.

    “global temperature rise sufficient to lose the Greenland Ice Sheet and generate seven meters of long-term sea level rise, or tropical region warming to a level that is deadly to humans and other mammals.”
    http://www.ladailypost.com/content/new-climate-model-predicts-likelihood-greenland-ice-melt-sea-level-rise-and-dangerous

    • “On the other hand”, he said, “there will be over a million square kilometres of extra arable land available, in Greenland alone.”

      “Models show that after accounting for isostatic rebound effects, sea levels will actually drop.” he added.

      “It’s actually better than we thought, and I’m off to the pub for a self congratulatory drink.”

      Cheers.

  38. Judith, did you see Tamsin Edwards’ new work?

    http://www.bbc.com/news/science-environment-34859398