Reviewing the Climate Science Special Report

by Judith Curry

Calling on CE Denizens to review the Climate Science Special Report:  nominate an official reviewer, or participate in the CE Crowdsourced Review.

Pat Michaels recently posted an article at CE, National Climate Assessment and the Trump administration. His post describes the process and history of the national assessments and voices Michaels’ concerns.

The first of these reports has been released in draft form: the US Global Change Research Program Climate Science Special Report (CSSR) Fifth-Order Draft.

A good summary of what has been going on with this Report and the reactions to it is provided in an article by Andy Revkin. The entire article is well worth reading; some excerpts are provided below:

Earlier this month, someone involved in the government’s latest report on climate change provided The New York Times with a copy of the version submitted to the Trump administration for final approval. The main intent of the leak, according to several people tracking the report, was to complicate any attempt to suppress the study or water down its findings.

The report was submitted in late June and the Trump administration has broad authority to review its findings. Any one of a number of government agencies can block its release, which is ultimately subject to presidential review.

Some of the scientists involved in preparing the document expressed concern that it might never see the light of day.

They said there are signs that the Trump administration would subject the draft climate report to a “red team” vetting process in which a group of scientists would be invited to vigorously question its premises.

In many ways, the 669-page “Climate Science Special Report” is utterly unremarkable. It is a review of existing science that concludes human activities are largely responsible for the warming of the planet. Worsening climatic and coastal impacts are almost inevitable unless the world’s industrial nations significantly reduce emissions of greenhouse gases.

What makes the report significant now is the challenge it poses to a White House that has been moving aggressively to reverse the Obama administration’s policies and rules on climate change.

Trump administration officials declined to comment on the climate science report as long as it is in draft form.

Drafted and reviewed by dozens of scientists within and outside government and endorsed earlier this year by the independent National Academy of Sciences, the report details findings drawn from a host of studies that are as close to certainties as science can produce.

There is some reasonable material in the report, but the hype is typified by this tweet by Katharine Hayhoe:

Official review process

David Wojick provided the following text:

From the USGCRP June Newsletter:

Scientific reviews. Once the TSU has conducted a technical edit of the First Order Draft for consistency and clarity (and the authors have approved those revisions), it will be sent to USGCRP member agency principals for the second of at least six review periods. A public comment period and concurrent review by a committee of experts at the National Academy of Sciences is anticipated for Fall 2017. In addition, we will be soliciting nominations for Review Editors to ensure that all submitted comments are appropriately addressed. More information can be found below, under Upcoming Opportunities to Engage.

The National Academies calls for reviewers [link]; nominations due by August 25, 2017.

NOAA is also soliciting nominations:

NOAA, on behalf of USGCRP, is soliciting nominations for Review Editors for the Fourth National Climate Assessment (NCA4). Refer to the NCA4 Outline (accessible via http://www.globalchange.gov/content/nca4-outline) for a sense of the areas of expertise we seek.

The report will adhere to the Information Quality Act requirements (http://www.cio.noaa.gov/services_programs/info_quality.html) for quality, transparency, and accessibility as appropriate for a Highly Influential Scientific Assessment (HISA).

Nominations should be submitted via the web address specified below (https://contribute.globalchange.gov/) and must be received by September 8, 2017.

Submit your nominations!

CE Crowdsourced Review

I am not sure how the official reviewers will be selected by the USGCRP, but I am not too optimistic that this will provide a very critical review.

A red team effort is needed, with people selected outside of the USGCRP establishment.  It seems possible that the Trump Administration will soon convene an adversarial review of the Climate Science Special Report.  To give any formal Red Team a leg up on their work, I’m proposing that the CE Denizens conduct a crowdsourced review of the Report.  I’m hoping that the collection of comments posted here might receive more attention than public comments submitted to the USGCRP.

So it’s pay-up time for the Denizens. It’s time to put some effort into critiquing this report. I’m asking for your help in identifying false, misleading, incomplete and/or overconfident statements in the draft. In your comments, please point to specific figures or text in the CSSR draft, with criticism backed up by citations of published data, models, or analyses from the technical blogosphere (including previous CE posts).

A good example of a useful critique already in hand is Paul Homewood’s criticism of CSSR Figure ES.5.

I look forward to seeing your responses and the comments they induce.

Moderation note:  Moderation on this thread will be STRICTLY enforced. I’m posting a parallel Discussion Thread for broader comments and the usual hot air, sniping, etc. This thread needs to be kept STRICTLY technical.

319 responses to “Reviewing the Climate Science Special Report”


  2. Claims the only significant natural climate drivers in the industrial era are changes in solar irradiance, volcanic eruptions, and ENSO. Fails to mention the AMO, and the indirect solar variability driving it.
    https://www.linkedin.com/pulse/association-between-sunspot-cycles-amo-ulric-lyons

  3. I got as far as p. 26. Don’t know whether I can stand going back for more.

    Comments on 5th draft of Climate Change Report

    Almost the very first sentence of the report (p. 13, line 15) gives an idea of the care with which it was put together. We are told that the global temperature has gone up “by more than 1.2°F (0.7°F)” (sic).
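    A quick arithmetic check suggests the parenthetical value was meant to be in degrees Celsius:

    $$\Delta T_{\mathrm{C}} = \tfrac{5}{9}\,\Delta T_{\mathrm{F}} = \tfrac{5}{9} \times 1.2 \approx 0.7\,^{\circ}\mathrm{C},$$

    so “1.2°F (0.7°C)” is almost certainly what the report intended.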
    It is stated in the box between lines 9 and 15 that the global temperature in 2016 “surpassed 2015”—but there is no indication of the uncertainty or whether the difference was significant.
    The claim is made that temperatures in recent decades have been much higher, and have risen faster, than at any time in the past 1700 years. But some estimates of temperatures in 1000-1300 are higher than present. Also, why limit it to 1700 years? Temperatures were probably higher during the Holocene Optimum 6000-9000 years ago. Also, proxies of temperature change are often low resolution, possibly masking much faster wobbles, so the claim that temperatures have risen faster recently cannot be validated. Finally, temperature rose equally fast in the 1910-1940 period, yet CO2 was much lower then and cannot be solely responsible for the increase.
    Only the surface temperature is shown. There is no discussion of the problems with surface temperature (tremendous drop in thermometers in the last few decades, poor siting, poor coverage in Africa and other areas.)
    Temperatures from satellites, balloons, and radiosondes are not mentioned in this section. It is no good saying that is because they are not at the surface—the greenhouse gas theory (GHG) states that temperatures should be increasing faster in the troposphere.
    Please discuss the (missing) tropical tropospheric Hot Spot and explain why GHG theory should not be considered falsified thereby.
    p. 14 lines 7-14. The statement that natural variability can not account for the temperature rise ignores the obvious point that natural variability has indeed accounted for much greater rises and falls of temperature in the past. As long as GHG theory has no explanation for these major changes, it is not logical to say that natural changes could not have produced the observed temperature increase. There is an apparent 1000-year cycle (peaks in the Roman and Medieval periods) with valleys in between (e.g., the Little Ice Age.) Perhaps this thousand-year cycle is responsible for the warming occurring since about 1750.
    p. 17. The discussion of temperature in the US (2.6% of global area) is irrelevant to the main argument—we are discussing GLOBAL warming. (See p. 24, particularly lines 22-23, for a very good discussion why US temperatures are irrelevant.) The statement (lines 5 and 6) that surface and satellite data are consistent in showing rapid warming in the US is arguable, since even the satellite measurements (RSS 4.0 and UAH 6.0) are now showing departures from the previous agreement. Also, balloon and radiosonde measurements are not mentioned. Are they also “consistent”?
    p. 17 lines 9-15. This section considers estimates of future increases. Since the models have consistently run hot, is there any reason advanced for why we should trust them now? What improvements have been made? When will at least one model be validated according to normal engineering practice?
    p. 17 lines 16-21. Glad to see the urban heat island is mentioned, after many years when it was ignored. Is there evidence that UHI is properly considered in the temperature maps? For example, many airports are more “urban” (larger, more impermeable surfaces) than they were before.
    p. 20, Figure ES-5. Considering the great loss in thermometer numbers occurring in the past decades, it would be desirable to accept only the best stations with the longest records in comparing temperature extremes. Was this done? For example, since so many high records were set in the 1930s, it would seem that only stations including those years should be considered.
    p. 22. “Tornado activity has become more variable”. What does this even mean? Please provide an indication of the tornado strength (i.e., high wind speeds, economic damages, with proper allowance for the greater satellite and airplane coverage, inflation, etc.)
    There is no mention of hurricane history in this section. Please provide number of landfalls (particularly level 3 and higher), economic damages, etc. over time.
    pp.22-23. This section consists only of projections. See my previous remarks about why we can not trust models.
    p. 24, line 3. “Box ES-1” stands alone. Is there a missing section here? Or is the next section meant to be in the box? OK, I now see that Box ES-1 ends on p. 25.

    • Thank you for reciting the usual denialist talking points; it offers a good opportunity to rebut them again.
      Basically: why are you raising spurious objections to the report, when your objections have already been repeatedly debunked in the literature? Do you expect this report to repeat all the previous debunkings, just because you decided not to read the peer-reviewed literature?

      Re: “The claim is made that temperatures in recent decades have been much higher, and have risen faster, than at any time in the past 1700 years. But some estimates of temperatures in 1000-1300 are higher than present. Also, why limit it to 1700 years?”

      Cite said estimates. In the meantime, here are some other estimates showing the hockey stick pattern:

      “Continental-scale temperature variability during the last two millennia”
      “A global multiproxy database for temperature reconstructions of the Common Era”
      “Medieval Warm Period, Little Ice Age and 20th century temperature variability from Chesapeake Bay”
      “Inter-hemispheric temperature variability over the past millennium”
      “Robustness of the Mann, Bradley, Hughes reconstruction of Northern Hemisphere surface temperatures: Examination of criticisms based on the nature and processing of proxy climate evidence”
      “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia”
      “Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate”
      “A Reconstruction of Regional and Global Temperature for the Past 11,300 Years”

      Re: “Temperatures were probably higher during the Holocene Optimum 6000-9000 years ago.”

      So what? Past non-anthropogenic climate change doesn’t change the fact that most of the recent global warming was caused by humans through the production of anthropogenic greenhouse gases. To say otherwise is as ridiculous as saying that humans can’t cause forest fires now, since humans didn’t cause a forest fire that occurred hundreds of years ago:

      “These examples illustrate that different climate changes in the past had different causes. The fact that natural factors caused climate changes in the past does not mean that the current climate change is natural. By analogy, the fact that forest fires have long been caused naturally by lightning strikes does not mean that fires cannot also be caused by a careless camper.”
      https://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-6-1.html

      “Critics have argued that, if temperatures were as warm or warmer than current conditions before the onset of anthropogenic forcing, this would provide evidence that “natural” fluctuations alone could explain current conditions, since greenhouse gases were only ~280 ppmv during Medieval time (versus 400 ppmv today).
      […]
      With the increase in irradiance and a decline in explosive volcanism in the early 20th century, global temperatures might then have returned to an unperturbed level similar to that of the MQP [Medieval Quiet Period], but *the rapid rise in anthropogenic greenhouse gases propelled temperatures well beyond that level, as positive anthropogenic radiative forcing overwhelmed natural variability (Myhre et al., 2013).”*
      https://www.researchgate.net/publication/291523102_The_Medieval_Quiet_Period

      Re: “There is no discussion of the problems with surface temperature (tremendous drop in thermometers in the last few decades, poor siting, poor coverage in Africa and other areas.)”

      Those issues have been dealt with to death in the literature, with the surface record confirmed using multiple sources. For example, try reading the following:

      “Independent confirmation of global land warming without the use of station temperatures”
      “A global multiproxy database for temperature reconstructions of the Common Era”
      “Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends”

      Re: “the greenhouse gas theory (GHG) states that temperatures should be increasing faster in the troposphere.
      Please discuss the (missing) tropical tropospheric Hot Spot and explain why GHG theory should not be considered falsified thereby.”

      The hot spot occurs with any large tropical surface warming (especially over the oceans), whether that warming is caused by El Nino, increased CO2, or increased solar output. So no, the hot spot is not a GHG fingerprint. And the hot spot has been found multiple times, with mid- to upper tropical tropospheric warming being greater than warming nearer the tropical surface. For example:

      In satellite data:
      #1 : “Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends”
      #2 : “Temperature trends at the surface and in the troposphere”
      #3 : “Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies”, table 4
      #4 : “Comparing tropospheric warming in climate models and satellite data”, figure 9B

      In radiosonde (weather balloon) data:
      #5 : “Internal variability in simulated and observed tropical tropospheric temperature trends”, figures 2c and 4c
      #6 : “Atmospheric changes through 2012 as shown by iteratively homogenized radiosonde temperature and wind data (IUKv2)”, figure 1 and 2
      #7 : “New estimates of tropical mean temperature trend profiles from zonal mean historical radiosonde and pilot balloon wind shear observations”, figure 9
      #8 : “Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations”, figure 3 and table 1

      In re-analyses:
      #9 : “Detection and analysis of an amplified warming of the Sahara Desert”, figure 7
      #10 : “Westward shift of western North Pacific tropical cyclogenesis”, figure 4b
      #11 : “Influence of tropical tropopause layer cooling on Atlantic hurricane activity”, figure 4
      #12 : “Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim”, figure 23 and page 351

      Re: “The statement that natural variability can not account for the temperature rise ignores the obvious point that natural variability has indeed accounted for much greater rises and falls of temperature in the past”

      Let me know what non-GHG driver is known to cause all of the following things scientists have evidence of:

      1) Tropospheric warming
      2) Stratospheric cooling
      3) Mesospheric cooling
      4) Thermospheric cooling
      5) Post-1970s increase in radiation absorption at a wavenumber of ~666 cycles per cm (see the conversion after this list)
      6) Ocean acidification, due to ocean uptake of CO2
      7) Decreased atmospheric ratios of C14 and C13 isotopes of CO2
      8) Estimates of climate sensitivity (e.g., from paleoclimate data) in combination with the observed CO2 increase, resulting in estimates where most of the post-1950s global warming is caused by CO2
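      On item 5, a worked conversion (standard spectroscopy, not from the report) shows why that wavenumber matters:

      $$\lambda = \frac{1}{\tilde{\nu}} = \frac{1}{666\ \mathrm{cm^{-1}}} \approx 15\ \mu\mathrm{m},$$

      i.e., the center of the well-known CO2 absorption band.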

      Re: “Since the models have consistently run hot, is there any reason advanced for why we should trust them now?”

      Addressed in the literature. This is less of a problem with models, and more of an issue of stuff like observational uncertainty and error in inputted forcings. For example, read the following:

      “Comparing tropospheric warming in climate models and satellite data”
      “Causes of differences in model and satellite tropospheric warming rates”
      “Internal variability in simulated and observed tropical tropospheric temperature trends”

      Re: “Glad to see the urban heat island is mentioned, after many years when it was ignored. Is there evidence that UHI is properly considered in the temperature maps?”

      Again, that issue has been addressed to death in the peer-reviewed literature. For example, warming remains after homogenization that addresses UHI:

      “Quantifying the effect of urbanization on U.S. Historical Climatology Network temperature records”
      “Evaluating the impact of U.S. Historical Climatology Network homogenization using the U.S. Climate Reference Network”
      “Urban heat island effects on estimates of observed climate change”
      “Assessment of urban versus rural in situ surface temperatures in the contiguous United States: No difference found”
      “Urbanization effects in large-scale temperature records, with an emphasis on China”
      “Urbanization effects on observed surface air temperature trends in north China”
      “Correcting urban bias in large-scale temperature records in China, 1980–2009”
      “Contribution of urbanization to warming in China”

      Really, at this point I’m not interested in the rest of what you have to say, when you keep repeating long-debunked denialist talking points.

      • I’m not convinced that you are speaking as a scientist. You could very well have a social science degree, with an advanced divinity degree, and make the same presentation, ala Al Gore.

        You want to be a change-agent, that is an activist, for a cause that you believe to be righteous. Good for you! You reveal your sentiment, as well as your scientific ignorance, by labeling skeptics of manmade C02 as threats to world stability, peace and harmony, as “denialists”.

        You support Mann, Bradley and Hughes 98. It required an arcane statistical analytical methodology called principal components analysis. Bradley and Hughes knew nothing about it. Mann said he did, but he had no statistics degree. No one ever explained why, in the advanced scientific age of “experts”, no PhD statistician with specific credentials in PCA was invited to be a coauthor on a world-changing landmark paper. No one at Nature explained why, in the modern world of experts, Nature’s editors did not demand that a statistics expert be made a coauthor on a paper that required such expertise in order to be credible. No one at Nature asked why the first author, a postdoctoral fellow at U Mass, had not yet received his Yale PhD. “Postdoctoral fellow” traditionally meant someone who had a PhD in hand and was doing work subsequent to receiving it, not someone who had failed to receive it and was doing “postdoctoral work” in order to be deemed qualified to be conferred his PhD.

        Then, after MBH’s PCA was proved erroneous, i.e. bad science, subsequent “good science” replicated bad science’s findings. In the Olde Days of Science, such an outcome was inconceivable.

        The scientific truth is, the earth experiences periodic and quasi-periodic periods of warming and cooling. We are well within the bounds of natural periods of warming at our current time. We are nowhere close to approaching the climates of times in which Crocodilians inhabited northern Canada.

        Life forms are supposed to proliferate and survive where nature’s changes enable/encourage them to do so. If climate change makes life more enjoyable to humans in the arboreal north and far south, we can migrate towards the poles, and when global climate cools, we can migrate again towards the equator. If you (or your progeny) can’t, that is what natural selection’s extinction is about.

      • Maybe Mann et al. were just lucky, but the latest work by PAGES2k still supports their result from nearly 20 years ago.
        https://img.newatlas.com/pages-climate-graph-1.jpeg?auto=format%2Ccompress&ch=Width%2CDPR&fit=max&h=700&q=60&w=616&s=548e9f1d1dca14e8fa321e768fc878e3

      • I suppose the question of whether paleo-climate hockey sticks are better science than MBH’s “Nature trick” or simply less clumsily constructed bad science is not to be revealed anytime soon, unless one already knows the answer by religious doctrine. If tree-ring proxies cannot detect modern warming, how could they preclude instances of paleo-warming? The PAGES2k press statements that the last decades erased 2000 years of cooling forget to mention that they cannot rule out this rate of warming having also happened several times in the past. Pre-global-thermometer, our view is smeary and blurred.

        The MBH hockey stick was Dr. Richard Muller’s inspiration to investigate climate and start Berkeley Earth.

      • By conservative estimates, the 21st century average may be 2 C above pre-industrial. Visualize what that would look like on those graphs.

  4. richardswarthout

    The CSSR draft does not adequately consider the MWP and the scientific relevant records of that time:

    ‘The Medieval Warm Period (MWP) is generally thought to have occurred from about 950–1250, during the European Middle Ages.[4] In 1965 Hubert Lamb, one of the first paleoclimatologists, published research based on data from botany, historical document research and meteorology, combined with records indicating prevailing temperature and rainfall in England around 1200 and around 1600. He proposed, “Evidence has been accumulating in many fields of investigation pointing to a notably warm climate in many parts of the world, that lasted a few centuries around 1000–1200 AD, and was followed by a decline of temperature levels till between 1500 and 1700 the coldest phase since the last ice age occurred.”‘

    https://en.m.wikipedia.org/wiki/Medieval_Warm_Period

    Richard

  5. richardswarthout

    A further note on my previous comment:

    The CSSR draft relies, too much, on questionable paleoclimate data. For the MWP the historical recorded data should be considered much more reliable.

    Richard

    • Please cite the primary source (paper or data) that ESTABLISHES
      that historical recorded data is more reliable (accurate and precise) than paleo data.
      Please cite the sources (paper or data) that illustrate the problems with the paleo data the report relies on.

      Reviewing is not easy.

      you actually have to KNOW SOMETHING to review well, not just have opinions.

      • Steven,

        You still operate as if today’s science establishment is problem free. What, tens if not hundreds of thousands of PhDs who need to earn a living? To the point that science now thinks it can not only tell us how many angels can dance on the head of a pin, but also determine which dances they are performing. Who doesn’t believe that models and statistical manipulation can tell us everything?

      • It would be nice if the purveyors of paleo data based on proxies showed the correlation of observed data to modeled data during the calibration period. It would be even nicer if these agreed with one another. Since calibration of proxies depends on comparison with measured data, proxies can hardly be better than measured data.
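        To make that request concrete, here is a minimal sketch (synthetic data, hypothetical calibration window) of the calibration-period statistics being asked for:

        ```python
        import numpy as np

        # Minimal sketch with synthetic data: report the calibration-period
        # correlation and the regression used to turn proxy values into temperature.
        rng = np.random.default_rng(0)
        years = np.arange(1850, 2001)
        temp = 0.005 * (years - 1850) + rng.normal(0.0, 0.15, years.size)  # instrumental series (synthetic)
        proxy = 2.0 * temp + rng.normal(0.0, 0.4, years.size)              # proxy = scaled signal + noise

        cal = years >= 1900                           # hypothetical calibration window
        r = np.corrcoef(proxy[cal], temp[cal])[0, 1]  # calibration-period correlation
        slope, intercept = np.polyfit(proxy[cal], temp[cal], 1)

        print(f"calibration r = {r:.2f}")
        print(f"reconstruction: T = {slope:.2f} * proxy + {intercept:.2f}")
        ```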

    • Yes richard, 3 things that speak ter serfs re a global MWP:

      (1) Glacier retreats, Greenland colonization and perma-frost
      farms, silver mines uncovered in Switzerland, record of Glacier
      retreat in Western Antarctica based on radio-carbon dating.

      (2) Tony Brown’s collated historical data of the period, lotsa’
      cross referencing.

      (3) That Climategate email showing politics of cli-sci, ‘We hafta’
      git rid of the Medieval Warm Period.’ Et voila!
      Down
      the
      memery
      whole….

      O/t Say, richard, a new post on Serf Under-ground on free will
      and more from Guilio Tononi. Hope yer look in. bts

    • My favorite MWP study was by Oppo at Woods Hole which addressed the Indopacific Warm Pool and found that temperatures there during the MWP were comparable to today’s.
      http://www.whoi.edu/main/news-releases/2009?tid=3622&cid=59106
      https://www.nature.com/nature/journal/v460/n7259/full/nature08233.html

  6. Richard;
    “For the MWP the historical recorded data should be considered much more reliable.”

    What “historical recorded data” do you have in mind?

  7. alanlonghurst

    Figure on p. 13. The mean of any series of anomalies must be 0.00 over the baseline period, which in this case it’s clearly not! They’ve cut out earlier years to emphasise warming – to be expected, I suppose.
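    A minimal sketch of the baseline convention (synthetic data, hypothetical 1951-1980 reference period): anomalies average to zero over the reference period by construction, but not necessarily over the full plotted span.

    ```python
    import numpy as np

    # Anomalies are defined relative to a baseline period, so they average to
    # zero over that baseline, not necessarily over the whole series.
    rng = np.random.default_rng(0)
    years = np.arange(1880, 2017)
    temps = 14.0 + 0.008 * (years - 1880) + rng.normal(0.0, 0.1, years.size)  # synthetic record

    baseline = (years >= 1951) & (years <= 1980)  # hypothetical reference period
    anomalies = temps - temps[baseline].mean()

    print(anomalies[baseline].mean())  # ~0 by construction
    print(anomalies.mean())            # generally nonzero over the full span
    ```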

  8. alanlonghurst

    Figure on p. 14. Mean solar irradiation at 1.25 W/m2 seems odd. Units problem? Usually expressed as around 1360-1363 W/m2 at top of atmosphere.
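    One possible reconciliation (assuming the figure plots an irradiance anomaly and a planetary albedo of about 0.3): top-of-atmosphere TSI converts to a global-mean absorbed flux via

    $$F_{\mathrm{abs}} = \frac{S_0}{4}(1-\alpha) \approx \frac{1361}{4} \times 0.7 \approx 238\ \mathrm{W/m^2},$$

    so a TSI variation of 1.25 W/m2 corresponds to only about 1.25 × 0.7/4 ≈ 0.22 W/m2 of global-mean forcing. Checking the figure caption for which quantity is plotted would settle the units question.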

    I’ll stack up further comments!

  9. Figure 1.6 is interesting. The caption describes the colors wrongly. But it makes a CMIP5 model average from a combination of ocean and land, rather than the usual bottom layer air temperature. This counters a criticism of the usual comparisons with land/ocean observations, which don’t correspond to the air temps cited. Fig 1.6 here seems to compare like with like, and it is a better match. On this basis, models are not running warm.

    https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/08/fig1.6.png

    • The Richardson method was discussed at https://judithcurry.com/2016/06/28/towards-reconciling-climate-sensitivity-estimates-from-climate-models-and-observatiions/. Also re-basing does help hide the fact that the models run hot in both their projections and in the historic period.

    • Just ask them to show the regional model output for the same years.
      That should be fun!

      • Surely you can’t build a global picture without knowing the local, regional, hemispherical data. You build a house from the foundations up, not the roof down.

        Tonyb

    • However, there seems little evidence that the noticeably faster warming of 2 m marine air temperature than sea-surface temperature in CMIP5 models exists in the real climate system. There is much to be said for using SST rather than marine air temperature, but unless the climate science community changes to doing so in the warming projections that are taken into account by politicians, it seems more appropriate to focus on, and make comparisons with, model projections based on 2 m air temperature for the ocean, if necessary adjusting observed SST onto a 2 m air temperature basis by reference to observation-based evidence.

  10. Clarification: the nomination links I list above are for reviewers of the NCA4, not the CSSR. This is a chance for skeptics to try to get on the inside of the National Assessment process.

    • David, do you have a link to the NCA4 draft document? All I have found is the outline, which is clearly different from the Special Report.

      • David Wojick

        No NCA4 draft has been released. They are shooting for October for the public comment draft to be released.

        The CSSR is intended to be edited into just one chapter of NCA4, the science chapter. But NCA4 will have about 28 chapters, so it is mostly about US impacts, by sector and region, as per the outline. Given that the CSSR report is about the same size as NCA3, I expect it to get cut way down in NCA4.

  11. Again, this period in time in the climate is not unique as far as the rates of temperature rise and the absolute warmth, both of which pale in comparison to temperature changes since the end of the last glacial maximum.

    In addition, it has been shown that when CO2 levels have been lower, such as during the Medieval Warm Period, temperatures were at least as high if not higher than they are currently.

    In addition, satellite data and model initialization data show the temperature trend since 1998 is essentially flat, and the temperature trends thus far for year 2017 are cooler than for year 2016, and I am convinced this is just the beginning.

    Until (which is NOT going to happen) global temperatures exceed Holocene Optimum temperatures, AGW proponents have nothing to stand on.

    Another point is that the basic premise of AGW theory, which is that a positive feedback would occur between CO2 and water vapor and would show up as a lower tropospheric tropical hotspot, has NOT materialized, not to mention their call for the evolution to a more positive AO/NAO is not happening.

    • And some further denialist talking points to debunk.

      Re: “Until (which is NOT going to happen) global temperatures exceed Holocene Optimum temperatures, AGW proponents have nothing to stand on. Again, this period in time in the climate is not unique as far as the rates of temperature rise and the absolute warmth, both of which pale in comparison to temperature changes since the end of the last glacial maximum. In addition, it has been shown that when CO2 levels have been lower, such as during the Medieval Warm Period, temperatures were at least as high if not higher than they are currently.”

      First, post-1950s warming doesn’t need to be as great as past warming, in order for post-1950s warming to have significant effects. To say otherwise is as ridiculous as saying that cancer needs to kill as many people as global wars (or vice versa) in order for cancer (or global wars) to have significant effects.

      Second, the hockey stick pattern shows the MWP is unlikely to have been as warm as today:
      “Continental-scale temperature variability during the last two millennia”
      “A global multiproxy database for temperature reconstructions of the Common Era”
      “Medieval Warm Period, Little Ice Age and 20th century temperature variability from Chesapeake Bay”
      “Inter-hemispheric temperature variability over the past millennium”
      “Robustness of the Mann, Bradley, Hughes reconstruction of Northern Hemisphere surface temperatures: Examination of criticisms based on the nature and processing of proxy climate evidence”
      “Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia”
      “Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate”
      “A Reconstruction of Regional and Global Temperature for the Past 11,300 Years”

      Re: “In addition, satellite data and model initialization data show the temperature trend since 1998 is essentially flat”

      First, satellite data analyses show warming. Actually read the scientific literature:
      “Tropospheric warming over the past two decades”
      “Comparing tropospheric warming in climate models and satellite data”
      “Sensitivity of satellite-derived tropospheric temperature trends to the diurnal cycle adjustment”
      “A satellite-derived lower tropospheric atmospheric temperature dataset using an optimized adjustment for diurnal effects”

      Second, stop engaging in the ridiculous cherry-picking, where you choose the strong El Nino year of 1998 as your start-point, without a corresponding El Nino near your end-point. Scientifically-minded people aren’t falling for the game denialists like you play, where you pretend your short-term trend reveals long-term climate, and not primarily the effects of shorter-term factors (like ENSO):

      “Given the results of this nuanced analysis, we conclude that claims that the global mean temperature has not changed in recent decades are not supported by evidence. In addition, our nuanced analysis gives much needed rigor to the claim that using 1998 as a reference year amounts to *“cherry picking”* [Leber, 2014, Stover, 2014], see also Supplemental Section for detailed discussions).”
      https://statistics.stanford.edu/sites/default/files/2015-16.pdf

      “Other claims of ‘‘no significant warming over the last X years’’ are also sensitive to the choice of starting point and analysis time scale.”
      http://journals.ametsoc.org/doi/pdf/10.1175/JCLI-D-16-0333.1

      “Trends starting in 1997 or 1998 are particularly biased with respect to the global trend. The issue is exacerbated by the strong El Niño event of 1997–1998, which also tends to suppress trends starting during those years.”
      http://onlinelibrary.wiley.com/doi/10.1002/qj.2297/full

      “The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”
      http://www.remss.com/blog/recent-slowing-rise-global-temperatures
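      The start-point sensitivity is easy to demonstrate. A minimal sketch with synthetic data (an assumed underlying trend of 0.17 °C/decade plus a hypothetical 1998 spike; not actual observations):

      ```python
      import numpy as np

      # Starting a linear fit at a warm El Nino spike typically suppresses the
      # fitted trend relative to the underlying one.
      rng = np.random.default_rng(0)
      years = np.arange(1979, 2018)
      temps = 0.017 * (years - 1979) + rng.normal(0.0, 0.05, years.size)  # trend + noise
      temps[years == 1998] += 0.4                                         # hypothetical El Nino spike

      for start in (1979, 1998):
          sel = years >= start
          slope = np.polyfit(years[sel], temps[sel], 1)[0]
          print(f"trend from {start}: {10 * slope:+.3f} C/decade")
      ```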

      Re: “the temperature trends thus far for year 2017 are cooler than for year 2016 and I am convinced this is just the beginning.”

      First, drop your end point bias:
      “Unusually cold winters, a slowing in upward global temperatures, or an increase in Arctic sea ice extent are often falsely cast as here-and-now disconfirmation of the scientific consensus on climate change. Such conclusions are examples of *“end point bias,”* the well documented psychological tendency to interpret a recent short-term fluctuation as a reversal of a long-term trend.”
      http://www.tandfonline.com/doi/full/10.1080/17524032.2016.1241814?scroll=top&needAccess=true

      Second, you’re the guy who (famously) said on Roy Spencer’s blog that this would be a decade of global cooling. So far, you’re wrong. Thus, I really have no reason to take your predictions seriously, especially when you repeatedly contradict scientific evidence.

      Re: “Another point is that the basic premise of AGW theory, which is that a positive feedback would occur between CO2 and water vapor and would show up as a lower tropospheric tropical hotspot, has NOT materialized”

      First, the hot spot is a sign of the lapse rate feedback, which is a negative feedback, not a positive one (in contrast to positive feedback from water vapor):
      “Positive feedback in climate: stabilization or runaway, illustrated by a simple experiment”
      “Physical mechanisms of tropical climate feedbacks investigated using temperature and moisture trends”
      “Regional variation of the tropical water vapor and lapse rate feedbacks”
      “An assessment of direct radiative forcing, radiative adjustments, and radiative feedbacks in coupled ocean–atmosphere models”

      Second, there’s plenty of evidence of positive water vapor feedback. For example:
      “Global water vapor trend from 1988 to 2011 and its diurnal asymmetry based on GPS, radiosonde, and microwave satellite measurements”
      “Observations of climate feedbacks over 2000–10 and comparisons to climate models”
      “Anthropogenic greenhouse forcing and strong water vapor feedback increase temperature in Europe”

      Third, there’s plenty of evidence of the hot spot. For example:

      In satellite data:
      #1 : “Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends”
      #2 : “Temperature trends at the surface and in the troposphere”
      #3 : “Removing diurnal cycle contamination in satellite-derived tropospheric temperatures: understanding tropical tropospheric trend discrepancies”, table 4
      #4 : “Comparing tropospheric warming in climate models and satellite data”, figure 9B

      In radiosonde (weather balloon) data:
      #5 : “Internal variability in simulated and observed tropical tropospheric temperature trends”, figures 2c and 4c
      #6 : “Atmospheric changes through 2012 as shown by iteratively homogenized radiosonde temperature and wind data (IUKv2)”, figure 1 and 2
      #7 : “New estimates of tropical mean temperature trend profiles from zonal mean historical radiosonde and pilot balloon wind shear observations”, figure 9
      #8 : “Reexamining the warming in the tropical upper troposphere: Models versus radiosonde observations”, figure 3 and table 1

      In re-analyses:
      #9 : “Detection and analysis of an amplified warming of the Sahara Desert”, figure 7
      #10 : “Westward shift of western North Pacific tropical cyclogenesis”, figure 4b
      #11 : “Influence of tropical tropopause layer cooling on Atlantic hurricane activity”, figure 4
      #12 : “Estimating low-frequency variability and trends in atmospheric temperature using ERA-Interim”, figure 23 and page 351

      • “First, post-1950s warming doesn’t need to be as great as past warming, in order for post-1950s warming to have significant effects. To say otherwise is as ridiculous as saying that cancer needs to kill as many people as global wars (or vice versa) in order for cancer (or global wars) to have significant effects.”

        Actually it does matter, in the context of warning people of physical consequences. If you start by saying “temperatures are rising” and add “but not as much as [a previous period where we don’t know of any historical climate assault on mankind]”, you can’t then argue “and sea level will rise”, for instance.

        Now of course you can argue that there will be “significant effects” (and there will be, but voting citizens will not take “significant” for an answer). Fact is, US citizens voted for someone who was an impossible vote for anyone thinking there actually was a serious climate threat. Anyone worried about having Trump-like leadership should understand what didn’t work in the debate previously and up their game. Apparently people want to compare “cost to prevent GW” vs “cost to let GW happen”, and so far not enough of them are convinced that the latter is the bigger threat to elect a leader on that agenda, at least in the US. That’s the starting point of every new discussion on the topic.

  12. Yes, this is the correct version

  13. I have the same question. There appear to be 2 distinct documents: the CSSR and the 5th draft of the Climate Change Report. The structure and content are not the same. I am inclined to dig into the CCR5 rather than the CSSR, but not sure of the implications of that choice. Not inclined to do both.

  14. The ECS evaluation on pages 110 and 111 should be falsified.
    This is most important because the whole climate case rests on a high ECS.
    Too complex to put details in WordPress comments:
    See this document:
    CE Crowdsourced Review ECS and Feedback.pdf http://bit.ly/2v7klYu

    • I, for one, would like to see discussion of what you’ve shown. It’s over my head to critique it, but I’d like to hear what other people who are competent in the field have to say about what you’ve written there.

      As you say, ECS is a critical parameter for justifying any mitigation policy, and for estimating social cost of carbon, and in fact whether global warming will be net beneficial or net damaging.

      • @Peter Lang
        So far I have never received any reasoned critique of what I’ve written about IPCC’s ECS estimate.
        The 97% consensus has been served up many times, but that doesn’t count (as I’m no denier, I would belong to that cohort).
        Thus, I would welcome critical comments.

    • Michel,

      My interests are in policy, not the down-in-the-weeds science. So I cannot critique your method. I want to know what number to use for ECS in the Integrated Assessment Models (IAMs). I lean towards accepting Nic Lewis’s estimate of 1.65-2.0, say 1.8 or 2.0, as the best central estimate to use for policy analysis.

      I doubt global warming is harmful. I suspect global warming up to 4C may be more beneficial than harmful. I explain some reasons in a series of comments starting here: https://judithcurry.com/2016/11/25/week-in-review-science-and-policy-edition-3/#comment-826494 (and other comments elsewhere on CE about the results of IAM analyses of social cost of carbon).

      • @Peter Lang
        A higher-than-down-in-the-weeds approach, unscientific, but offering some hint towards your question:
        We know that climate models run too hot; this is seen on this diagram:
        http://blog.mr-int.ch/wp-content/uploads/2016/02/cmip5_90_models_global_tsfc_vs_obs1.jpg
        Compared with reality, the calculated temperature increase is exaggerated by a factor of 2 to 3.
        (don’t look too much at the “average” black curve; there is no reality in calculating such a mean, as any one model could be 100% right while all the others are inadequate. Most probably, all these models are insufficient and inaccurate).
        Over the time scale of this simulation, the climate system should respond according to Transient Climate Response (TCR) rather than to ECS’s decadal or centennial response.
        IPCC deems ECS to be “likely in the range 1.5°C to 4.5°C (high confidence)”.
        In the AR5 report, table 9.5, an average value of 3.2 °C ±1.2 is given; and TCR is estimated at 1.8 °C ± 0.6. These were estimated by CMIP5 models.
        Applying the “exaggeration factor” seen on the graph to IPCC’s ECS, it would be at 1.1-1.6 °C, and TCR at 0.6-1.3 °C.
        All of that is totally hypothetical and probably as wrong as any other estimate. But, and this is the secret of climate science, nobody knows better.
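        For the record, the arithmetic behind the quoted ECS range appears to be the AR5 central value divided by the asserted exaggeration factor (a reading of the comment, not an endorsement of the scaling):

        $$\mathrm{ECS}_{\mathrm{scaled}} \in \left[\tfrac{3.2}{3},\ \tfrac{3.2}{2}\right] \approx [1.1,\ 1.6]\ ^{\circ}\mathrm{C},$$

        with the lower TCR bound similarly matching 1.8/3 = 0.6 °C.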

      • Not sure why MichelR clings to that old chart:

        https://www.climate-lab-book.ac.uk/files/2014/01/fig-nearterm_all_UPDATE_2017-1024×509.png

        Source: https://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

        Perhaps the uptick post 2012?

        ***

        > Compared with reality

        Reality ought to be a bit more complex than adjusted modulz data from the top of the atmosphere.

      • I wonder what 2018 and later will look like. If it drops down again, we’ll know 2016 was an El Nino surge. If it doesn’t (well, not as much as the 2013 level), climate science will gain a substantial credibility boost for many people, me included.

    • Michel,

      I’ve just seen your reply to my last comment, with the temperature graph. That you show a temperature graph rather than a graph of net impacts (e.g., overall cost-benefit) versus temperature illustrates exactly the point I was trying to make to you. Temperature change is not a measure of impacts. It is irrelevant for policy analysis and for justifying expenditure on policy. I suspect you did not refer to the link I gave you, so you do not understand the issue. I’ll say it again: temperature change is irrelevant – only the impacts of temperature change are relevant.

      • @Peter Lang
        With this graph I just showed another, quite simplistic, approach to understand the exaggerations stemming from current models.
        Your point on what counts, temperature or the overall [economic] impact that a temperature rise would have on our living conditions:
        This requires a double speculation.
        First, on the magnitude of temperature increase (as a function of the emitted CO2, which is itself speculative). This is what I pointed to.
        And second, on a series of possible impacts, more or less quantifiable, positive and negative, which may materialise, or not, in a near or far future (plus the question: which interest rate to take to estimate NPV, to be able to compare scenarios?).
        Good luck for this part! Your comments on the post that you indicated show how tricky it is to make predictions, in particular into the future. The Stern report was sufficiently controversial.

        In any case, if point 1 is of lesser importance than claimed by IPCC, then point 2 shall not be that worrisome. But the cost of implementing futile and off-target mitigation measures will be quite real if (and this is a big if) they are implemented in the short term.

      • Michel,

        I am afraid you have demonstrated a complete lack of understanding of how policy analysis is done and what information is needed to evaluate whether a policy is justifiable. Good luck yourself trying to get any rational economists to take notice of your babblings about temperature change when you can’t provide the numbers needed to justify expenditure on policies. You clearly haven’t a clue whether temperature changes would be harmful or beneficial. Good luck with that!

  15. Section 4.2, page 186, is a very brief and incomplete description of the emissions pathways used in the climate models. This section needs to be reviewed by individuals familiar with fossil fuel resources and economics.

    I realize most audiences tend towards the cornucopian (endless resources) side of the spectrum; however, in recent months we have seen comments by industry heavyweights mentioning issues such as a forthcoming peak oil event. We are also seeing a serious disconnect between the production forecasts and resulting emissions, the destruction of the Venezuelan oil reservoirs (which will lead to a large reduction in that country’s oil reserves), drilling out of the Bakken sweet spots, and similar topics which do need to be discussed.

    The point has to be made that government policy can’t be made based on reports which treat this critical issue with such neglect; a review needs to be done by people without a political or business agenda. I mention this because the fossil fuel resource/reserve subject is treated with a lot of bias. There are very few players who don’t have a hidden agenda, or aren’t trying to cover for liability caused by agents such as the Democratic state attorneys general with their “Exxon knew” lawsuits, etc.

    • Any chance you could elaborate on this phrase:

      … the destruction of the Venezuelan oil reservoirs (which will lead to a large reduction in that country’s oil reserves)

      ?

  16. “Assuming storm characteristics do not change, sea level rise will increase the frequency and extent of extreme flooding associated with coastal storms, such as hurricanes and nor’easters (very high confidence)… However, there is low confidence in the magnitude of the increase in intensity and the associated flood risk amplification, and these effects could be offset or amplified by other factors, such as changes in storm frequency or tracks.” (p. 515-6)

    Does this analysis not inspire confidence?

  17. Having just browsed it so far, it seems to exemplify the major shortcomings of overall descriptions coming from the convinced. There is too little from the wider space-climate and paleoclimatology communities. It is very weak on solar-climatic coupling (offering constraints on modeling), and it disregards records older than the Common Era: in fact, Javier’s whole account, which I have called for and very much welcomed as it has been laid out (thanks, Javier). It’s half climatology. They illustrate how/why the 97% is rather likely around 70% (very roughly), and the 95% around 50% (also very roughly).

  18. This statement in Cloud feedbacks on p. 113 is unsupportable, highly misleading, and contradicted by AR5.

    “The net radiative effect of cloud feedbacks is positive over the industrial era, with an assessed value of +0.27 ± 0.42 W/m2 per 1.8°F (1°C) warming (Vial et al. 2013).”

    Vial 2013 relates purely to CMIP5 model behaviour, but the statement is not qualified by starting “In models,”.

    The report omits the key conclusion in AR5 (Section 7.2.5.7 Observational Constraints on Global Cloud Feedback) that attempts to “seek observable aspects of present-day cloud behaviour that reveal cloud feedback” had not succeeded.
    AR5 concluded at the end of paragraph 2 of the section: “In summary, there is no evidence of a robust link between any of the noted observables and the global feedback, though some apparent connections are tantalizing and are being studied further.”

  19. The statement on p. 122, lines 24-26, is out of date:
    “Multiple observational and modeling studies have concluded that increasing the number of aerosols in the atmosphere increases cloud albedo and lifetime, adding to the negative forcing (aerosol–cloud microphysical interactions).”

    Some recent studies have found that, in much higher resolution models, increasing the number of aerosols in the atmosphere does not increase cloud lifetime: Seifert et al. 2015, doi:10.1002/2015MS000489; Zhou et al. 2017, doi:10.5194/acp-17-21-2017.

    • And don’t forget Stevens (2017), “Clouds are unfazed by haze” http://www.nature.com/nature/journal/v546/n7659/full/546483a.html, about the (only modelled) aerosol-cloud interactions.

    • Nice review, Nic.

      It’s funny what you find at higher resolutions. In fusion physics (another field plagued with poorly performing models) there was a critique of electrically accelerated designs that claimed to show they could never produce power (and was probably responsible for a number of funding decisions), but a decade or two later it was shown that (with e.g. self-consistent collisional treatment of the ion distribution function) a lot of those conclusions turned out to be unwarranted.

  20. This statement on p. 122, lines 9-11, is totally unjustified:
    “While significant uncertainty remains in the quantification of aerosol ERF, we assess with high confidence that aerosols offset about half of the positive forcing by anthropogenic CO2”.

    In 2011 CO2 forcing was 1.8 W/m2, relative to preindustrial, with fractional uncertainty that is small relative to that for aerosol forcing. Although AR5 assessed median aerosol forcing at -0.9 W/m2, its 5-95% bounds for aerosol forcing were -1.9 to -0.1 W/m2. On that basis one could only say with 90% confidence that aerosol forcing offsets between ~5% and ~105% of CO2 forcing – hardly “about half”.
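    The offset fractions follow directly from the quoted AR5 bounds:

    $$\frac{0.1}{1.8} \approx 5.6\%, \qquad \frac{1.9}{1.8} \approx 105.6\%,$$

    hence the “~5%” to “~105%” range.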

    • How does one get probabilistic confidence intervals for this sort of known unknown? These intervals are normally a property of statistical sampling of a population, which is not the case here. Are they subjective in this case? Or based on model ranges, which is the same thing.

      • The AR5 forcing uncertainty ranges are subjective, based on ‘expert judgement’. They are not true confidence intervals. I think the AR5 authors used a mixture of model-based, observationally-based, and composite model-observation results in arriving at their judgements.

    • This is interesting:
      «The aerosol forcing, inferred as a residual in the atmospheric energy balance, is estimated as −0.8 ± 0.4 W m−2 for the 1980s and early 1990s. It increases in the late 1990s, as is required for consistency with little surface warming over the last decade.»

      Ref: «Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008», John A. Church et al., 2011.

      In other words, the amount of aerosol seems to have been used as a fudge factor to account for the lack of warming.

  21. Re sea level rise chapter (Ch 12), specifically Fig 12.2 and discussion in section 12.3:

    “The rate of rise in the last century, about 14 cm/century, was greater than during any preceding century in at least 2,800 years (Kopp et al. 2016; Figure 12.2b)”

    This is an accurate citation of Kopp et al. However, the implication is that the rapid sea level rise (since 1800, actually) is caused by humans. The rate of sea level rise during the first half of the 20th century was comparable to that in the 2nd half of the 20th century, when any contribution from CO2 was small.
    https://www.nature.com/nature/journal/v517/n7535/full/nature14093.html

    http://www.carbonbrief.org/media/367997/hay-et-al-2015-fig2.png

    • Please tell the whole story from Hay et al. (2015) (the abstract):

      “…and find a rate of GMSL rise from 1901 to 1990 of 1.2 ± 0.2 millimetres per year (90% confidence interval)…”

      and;

      “Our analysis, which combines tide gauge records with physics-based and model-derived geometries of the various contributing signals, also indicates that GMSL rose at a rate of 3.0 ± 0.7 millimetres per year between 1993 and 2010, consistent with prior estimates from tide gauge records. The increase in rate relative to the 1901–90 trend is accordingly larger than previously thought; this revision may affect some projections11 of future sea-level rise.”

      Are you suggesting there is no notable difference between the first half of the 20th century and the latest 20-30 years? Why stop in the 20th century, when we have 16 years more of data?

      • Rune

        The number of tidal gauges that were operating from the same place during the period you quote is very small. The number of gauges themselves is very small, especially in the southern hemisphere. They show a widely variable rate of sea level change, some up, some down. We do not have anything that could be called a reliable global record until the second half of the 20th century.

        You are right to wonder what has happened to sea levels this century.

        According to the University of Colorado, official keepers of the records, the rate of increase has slowed in the last decade, not sped up. Fasullo et al., writing in 2016, have speculated as to why the expected acceleration has not taken place in this paper:

        http://www.nature.com/articles/srep31245

        You are aware that there are many places in the world where sea level is falling, although isostatic variance complicates matters?

        You are aware of the chain of castles from the sixteenth century built around Britain’s Welsh coast where the sea gate is now high and dry?
        Tonyb

      • TonyB, sea level rise on the time scale we are discussing now – some hundred years – has almost nothing to do with geology. Regions in northern parts, like the inner Baltic, still have sea level fall of up to 8 mm/year due to land rise after the last ice age; other parts, like New Orleans, have more rapid sea level rise due to land subsidence. There are also regions, like the eastern coast of North America and other parts of the world, that have temporary changes due to ocean oscillations (currents).

        The overall global rise is due to global warming: for the most part thermal expansion, a lesser part due to melting of land ice, plus several other causes with even lesser effect, such as expanded use of fresh water, etc.

      • Rune

        Some places in the world are cooling, some are warming. Similarly with sea level: some areas are seeing a drop in sea levels, in other places it is rising, complicated by isostatic reaction.

        We lose the nuances when we average everything out globally, and it is the reasons for those nuances that are the interesting thing.

        As for the rate of sea level rise, it appears to be decreasing. Of course, if you won’t accept Fasullo’s satellite data, will you also want to reject satellite data on Arctic ice extent?

        Tonyb

      • I have no idea where “Fasullo’s satellite data” comes in here. My very simple approach to this question is: if the global temperature goes up, global sea level will rise; if global temperature goes down, sea level will fall. No way around that.

        No global warming, no sea level rise, and vice versa.

      • Judith recently linked this paper that may have some bearing on the satellite sea-level rise issue. Punchline: it’s accelerating.
        http://www.nature.com/news/satellite-snafu-masked-true-sea-level-rise-for-decades-1.22312?WT.ec_id=NEWSDAILY-20170717

      • jimd

        There are always excuses when things don’t go as the models suggest they should, and previously reliable data has to be shown to be wrong.

        ‘He presented the as-yet-unpublished analysis on 13 July in New York City at a conference sponsored by the World Climate Research Programme and the International Oceanographic Commission, among others.’

        We had the same thing with Pages.

        Quite how these glaring mistakes in so many fields of supposedly robust and expensive climate science are never caught until they start to contradict the narrative is surprising. There have been decades to catch any mistakes.

        tonyb

      • Jim

        What with continual and sometimes fundamental adjustments to our understanding of paleo records, global temperatures, sea level rise, ice levels, sea temperatures, etc., I have come to the conclusion that many aspects of climate science are still at far too early a stage in their development to provide definitive answers as to what has happened to the world’s climate over the duration of the Holocene, or what is likely to happen in future.

        Science is meant to evolve, of course, but our understanding of the earth, and the technology we have to measure it, still seem much too basic to serve the purpose. Disappointing, bearing in mind the huge amounts of money and resources spent.

        tonyb

      • The new data lines up with independent estimates from tide gauges and Greenland ice melt rates while the old data didn’t, so that part is better.

      • Geoff Sherrington

        Rune
        You simply cannot state that present observed sea level change is due to temperature alone, and does not involve geology, without knowing that the bottom 50% of the oceans is a stable container. There is too little observation of the deep oceans to assume their constancy for sea level purposes. We know that there is movement of deep ocean floors, but we have insufficient reliable, quantitative data. Also, land erosion and sub-sea sedimentation are qualitatively known, but not adequately quantified, effects. Geoff

      • Geoff Sherrington

        The ‘anomaly’ method of temperature expression should not be used in a report of this gravity. Absolute temperatures are the appropriate metric.
        For example, one body of literature used a 1951-1980 reference period. Another body of literature asserts that global warming from GHG effects commenced about 1950. Therefore, temperature effects earlier than 1950 cannot be anomalies on the same basis, because the physical basis changed after 1950, if both bodies of work are accepted. (They remain unsettled.)
        Also, the estimation of errors is treated differently for the anomaly method as opposed to the absolute method. This is an unscientific distinction. The errors in absolute mode must apply in anomaly mode, because averaging and subtraction as performed are not steps with the ability to reduce error in this case. Geoff.
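
        To make the error-propagation point concrete, here is a minimal sketch (all numbers invented) of how an anomaly and its uncertainty follow from absolute readings under the standard independent-error assumption; on that assumption the anomaly’s uncertainty can never be smaller than that of a single absolute reading:

          import numpy as np

          rng = np.random.default_rng(0)

          # Hypothetical station: true absolute temperature ~15.0 C,
          # measurement error 0.5 C (1 sigma) -- invented numbers.
          sigma = 0.5
          baseline_years = 30                      # a 1951-1980-style reference period
          obs_baseline = 15.0 + rng.normal(0, sigma, baseline_years)
          obs_today = 15.3 + rng.normal(0, sigma)  # one new absolute reading

          anomaly = obs_today - obs_baseline.mean()

          # Independent-error propagation: the anomaly inherits the full
          # uncertainty of the absolute reading plus that of the baseline
          # mean, added in quadrature.
          sigma_baseline_mean = sigma / np.sqrt(baseline_years)
          sigma_anomaly = np.sqrt(sigma**2 + sigma_baseline_mean**2)

          print(f"anomaly = {anomaly:+.2f} C +/- {sigma_anomaly:.2f} C (1 sigma)")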

      • > Absolute temperatures are the appropriate metric.

        You might like CliveB’s take on this:

        https://twitter.com/clivehbest/status/897066268985085952

        Hint: it may not exist.

    • This comment does not take a position on the rate of rise; rather, it is intended to spotlight the level of uncertainty identified by some papers on acceleration rates, or the lack thereof. Some of these papers, even those that favor acceleration, admit to mathematical contortions to reach their conclusions.

      Wenzel et al (2014) says in part: “Although the acceleration found for the global mean, +0.0042 ± 0.0092 mm/yr, is not significant, local values range from -0.1 mm/yr in the central Indian Ocean to +0.1 mm/yr in the western tropical Pacific and east of Japan.”

      Wenzel further says “…there is still a disagreement whether there is an acceleration in global sea level rise … or not.”

      It is an excellent discussion of the issues involved and the complexity of the calculations, and it identifies those papers not supporting acceleration.

      Note the error bars on the acceleration above: +0.0042 ± 0.0092 mm/yr. Is that cause for confidence?

      Watson (2016) provides an excellent discussion of all the difficulties in arriving at a rate of acceleration. The paper states in part: “…at the 95% confidence level, no consistent or substantial evidence (yet) exists that recent rates of rise are higher or abnormal in the context of the historical records available for the US…”. The real value of the paper, however, is the in-depth discussion of why caution should be taken in believing any value for acceleration.

      Parker & Ollier (2017) states in part: “We show that the sea level rise estimates for California as well as [those of] the IPCC for the entire world are up to one order of magnitude larger than what is extrapolated from present sea level rise rates and accelerations based on tide gauge sets…”

      While Haigh et al (2014) support acceleration of SLR, they rightly go into extensive discussion of the challenges of computing its amount. They admit: “…a consensus has yet to be reached about the existence and significance of any further acceleration in recent years, which would be indicative of a high sea level projection pathway.”

      Larsen & Clark (2006) “There is no clear proportional exponential increase in the rate of sea level rise. If proportionality exists among sea level, atmospheric CO2, and temperature, there may be a significant time lag before the anthropogenic increase in the rate of sea level rise occurs.”

      Given all the influences on local and regional sea level rates and the need to remove inter-annual and multi-decadal oscillations how can it be so easy to have such certainty about the rate of global sea level acceleration?
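
      To make the error-bar question concrete, here is a minimal sketch (synthetic series, invented noise level) of how such acceleration estimates are commonly extracted: fit a quadratic to the sea-level series; the acceleration is twice the quadratic coefficient, and its standard error comes from the fit covariance:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic annual GMSL series (mm): 1.5 mm/yr trend, zero true
        # acceleration, 5 mm of interannual noise -- all numbers invented.
        years = np.arange(1900, 2017)
        t = years - years.mean()
        gmsl = 1.5 * t + rng.normal(0, 5, t.size)

        # Quadratic fit gmsl ~ c2*t**2 + c1*t + c0; acceleration = 2*c2.
        coeffs, cov = np.polyfit(t, gmsl, deg=2, cov=True)
        accel = 2 * coeffs[0]
        accel_se = 2 * np.sqrt(cov[0, 0])

        # ~95% interval (normal approximation); with realistic noise it
        # easily spans zero, i.e. "not significant".
        print(f"acceleration = {accel:+.4f} +/- {1.96 * accel_se:.4f} mm/yr^2")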

      • Ceresco kid

        Excellent comment. You said

        ‘Given all the influences on local and regional sea level rates and the need to remove inter-annual and multi-decadal oscillations how can it be so easy to have such certainty about the rate of global sea level acceleration.’

        This is what I have the greatest difficulty with. We know that sea level is actually decreasing in places. We know that in some places temperatures have been falling, not rising.

        To determine a ‘global’ average for almost any climate measure is a bit like having a global average telephone number. The state does not exist, and whether it is a useful measure is doubtful, as we miss the nuances of the many aspects not conforming to the ‘global average.’

        tonyb

      • Ceresco kid
        Concur that it is a useful comment on sea level rise uncertainty.

        tonyb, any thoughts on future sea level rise in Part 2, for the Roman high stand?

        Considering the 120 meter rise from the last glacial maximum, 18,000 to roughly 7,000 years ago, the 0.01 mm to 3 mm per year is likely to be survivable with adaptive measures.
        Scott

    • Dr. Curry ==> It is my understanding that PSMSL records are not explicitly corrected for vertical land movement, which at many tide gauge sites is substantial (for instance, NY City’s Battery site has a negative vertical movement of 1.3 mm/yr according to Dr. Richard Snay of NOAA CORS – personal communication).

      It seems inconceivable that global sea level rise estimates are still being made from uncorrected data, with a known major confounding factor being ignored. There are sites with paired, sufficiently long-term Continuously Operating Reference Stations which could be used – and only these should be. The land movements can be fully as large as the suspected rise or fall of mean sea level at a location.

      Vertical Land Movement, which can be “up” or “down”, moves the ruler!

      • Kip

        Land movement is something I often mention when talking about sea level rise; often it can be greater than the change in water volume.

        However, there is a new complication, inasmuch as the estimates for land rise or fall – due to glacial action, earthquakes, etc. – are now deemed to be highly questionable.

        It seems we know, with certainty, very few climate parameters. Those we think we do know seem to get adjusted to conform to what was expected of them.

        Physical proof, such as farming in the past at higher altitudes than is possible now, or tree lines at higher altitudes than now, or mines only now becoming visible as ice melts, all seems to be ignored or ridiculed.

        tonyb

      • Kip, see my recent post on SLR, acceleration, and closure. IIRC, there are 146 PSMSL tide gauges with a differential GPS land motion correction in decent proximity. About half have sufficiently long records (>60 years) to be useful for SLR trends. These show ~2.2 mm/year and NO acceleration. The sample is of course Northern Hemisphere biased, but there is no reason to think that significantly biases the result. My own closure estimate (calculated in the guest post) is 2.3 mm/year. Alternatively, Cazenave’s peer-reviewed 2014 estimate is 2.2 mm/yr. So closure triangulates well to the land-motion-corrected PSMSL estimate.
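
        A minimal sketch of the correction being described, with invented numbers: fit the relative trend the gauge alone sees, then add the GPS-derived vertical land motion to recover the geocentric rate (a subsiding site inflates the raw trend; a rebounding one deflates it):

          import numpy as np

          rng = np.random.default_rng(2)

          # Hypothetical 60-year monthly tide-gauge record (mm), relative to the land.
          t_years = np.arange(60 * 12) / 12.0
          true_geocentric_rate = 2.2     # mm/yr, invented
          vlm = -1.3                     # mm/yr vertical land motion (subsiding, Battery-like)

          # Relative sea level = geocentric rise minus land motion, plus noise.
          rsl = (true_geocentric_rate - vlm) * t_years + rng.normal(0, 20, t_years.size)

          raw_rate = np.polyfit(t_years, rsl, 1)[0]  # trend the gauge alone sees (~3.5)
          corrected_rate = raw_rate + vlm            # add land motion back (~2.2)

          print(f"raw gauge trend:     {raw_rate:+.2f} mm/yr")
          print(f"VLM-corrected trend: {corrected_rate:+.2f} mm/yr")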

      • Rud

        Speaking of Cazenave, I enjoyed rereading that 2014 paper this morning. While I didn’t mention it above, it reminded me of all the complexities in understanding SLR. A good read.

      • Rud – TonyB ==> Yes, the PSMSL database contains about 1800 tide stations, and only ~75 of them can reliably be used for global sea level rise calculations. Of those 75, it is uncertain how many are affected by entirely local or regional sea level effects (like that occurring on the US Southeast Atlantic coast, where ocean circulation has been found to be piling up water for the last decade). Others of the 75 may have similar purely local effects.

        Makes the whole thing a difficult issue — I’m not sure how to resolve it.

  22. I’ve deleted numerous comments that should have been posted on the discussion thread.

  23. I have written a post suggesting an orientation for judging these assessments, and illustrate with three examples.

    In starting to read the draft Climate Assessment reports, it strikes me that the difference between alarmists and others is not so much in the data or facts, but in the perspective through which one sees and interprets the information. From experience over the last few years, I suggest that readers of these reports need to be alert for two errors that crop up often. The general impairments are stated below, followed by some examples for illustration.

    CO2 Alarm is Myopic: Claiming CO2 causes dangerous global warming is too simplistic. CO2 is but one factor among many other forces and processes interacting to make weather and climate.
    CO2 Alarm is Lopsided: CO2 forcing is too small to have the overblown effect claimed for it. Other factors are orders of magnitude larger than the potential of CO2 to influence the climate system.

    Examples:
    Greenland Ice Sheet Melting
    Movement of Ecological Life Zones
    Rising Temperatures
    https://rclutz.wordpress.com/2017/08/21/impaired-climate-vision/

  24. The Information Quality Act requires that federal agencies: (A) issue guidelines ensuring and maximizing the quality, objectivity, utility, and integrity of information (including statistical information) disseminated by the agency.

    NOAA informs us that:
    «The report will adhere to the Information Quality Act requirements (http://www.cio.noaa.gov/ services_ programs/ info_ quality.html) for quality, transparency, and accessibility as appropriate for a Highly Influential Scientific Assessment (HISA).»

    The NOAA information quality guidelines contain the following definitions:

    Objectivity is defined in the following way:
    «Objectivity consists of two distinct elements: presentation and substance. The presentation element includes whether disseminated information is presented in an accurate, clear, complete, and unbiased manner and in a proper context. The substance element involves a focus on ensuring accurate, reliable, and unbiased information. In a scientific, financial, or statistical context, the original and supporting data shall be generated, and the analytic results shall be developed, using sound statistical and research methods.»

    Transparency is defined in the following way:
    «Transparency is not defined in the OMB Guidelines, but the Supplementary Information to the OMB Guidelines indicates (p. 8456) that “transparency” is at the heart of the reproducibility standard. The Guidelines state that «The purpose of the reproducibility standard is to cultivate a consistent agency commitment to transparency about how analytic results are generated: the specific data used, the various assumptions employed, the specific analytic methods applied, and the statistical procedures employed. If sufficient transparency is achieved on each of these matters, then an analytic result should meet the reproducibility standard.» In other words, transparency – and ultimately reproducibility – is a matter of showing how you got the results you got.

    It follows from the Information Quality Act, and the invitation by NOAA, that the report should be both objective and transparent.

    However, in the introduction to the report it is stated that the report is based on the following approach:
    «The approach used to represent the extent of understanding represented in the Key Findings is done through two metrics:
    Confidence in the validity of a finding based on the type, amount, quality, strength, and consistency of evidence (such as mechanistic understanding, theory, data, models, and expert judgment); the skill, range, and consistency of model projections; and the degree of agreement within the body of literature.
    Likelihood, or probability of an effect or impact occurring, is based on measures of uncertainty expressed probabilistically (in other words, based on the degree of understanding or knowledge, e.g., resulting from evaluating statistical analyses of observations or model»

    Further, «The terminology used in the report associated with these metrics is shown in Figure 2. This language is based on that used in NCA3 (Melillo et al. 2014), the IPCC’s Fifth Assessment Report (IPCC 2013a), and most recently the USGCRP Climate and Health assessment (USGCRP 2016). Wherever used, the confidence and likelihood statements are italicized.»

    It should be quite clear that levels of confidence and likelihood derived in the manner described will necessarily be subjective. The confidence and the likelihood will be in the eye of the beholder, whether that beholder is an individual or a group. In any case, there is no way that a finding derived in a subjective manner and expressed as confidence and likelihood can be objective and reproducible.

    (Be aware that ‘likelihood’ should not be confused with a reproducible frequency distribution that can be derived by sound statistics from repeated comparison of predictions and measurements against a traceable reference.)

    Further, even though the IPCC has introduced the approach of assigning subjective levels of confidence and subjective likelihoods to its findings, there is no sound scientific or philosophical basis for expressing scientific findings in that way. The introduction and adoption of that approach by the IPCC should not be regarded as a sufficient and sound scientific basis for it.

    As the report and its findings are at least partly subjective, it is therefore my opinion that it gives a partly false impression to state that the report fulfils the requirements of the Information Quality Act for scientific objectivity, transparency, and reproducibility.

  25. The so-called Representative Concentration Pathway RCP8.5 and the consequences of that scenario are widely referred to throughout the report (70 mentions). The RCP8.5 scenario was also referred to many times throughout the Summary for Policymakers in IPCC WGI AR5.

    The following quotes from the IPCC report are informative about the scenarios:
    «Following the decisions made by the Panel during the scoping and outline approval, a set of new scenarios, the Representative Concentration Pathways, are used across all three Working Groups for projections of climate change over the 21st century.»
    IPCC;WGI;AR5;Preface vii

    «For the Fifth Assessment Report of IPCC, the scientific community has defined a set of four new scenarios, denoted Representative Concentration Pathways (RCPs, see Glossary). They are identified by their approximate total radiative forcing in year 2100 relative to 1750: 2.6 W m-2 for RCP2.6, 4.5 W m-2 for RCP4.5, 6.0 W m-2 for RCP6.0, and 8.5 W m-2 for RCP8.5. … These four RCPs include one mitigation scenario leading to a very low forcing level (RCP2.6), two stabilization scenarios (RCP4.5 and RCP6), and one scenario with very high greenhouse gas emissions (RCP8.5). The RCPs can thus represent a range of 21st century climate policies»
- IPCC;AR5:WGI;SPM; Box SPM.1: Representative Concentration Pathways (RCPs)

    Note that a reasonable interpretation of that section is that if no climate policy is imposed upon the people, then RCP8.5 is a likely scenario. Another reasonable interpretation is that a policy to curb CO2 emissions is required to get RCP6.0, which is referred to as a stabilization scenario.

    IPCC also stated:
    “..Representative Concentration Pathways, are referred to as pathways in order to emphasize that they are not definitive scenarios, but rather internally consistent sets of time-dependent forcing projections that could potentially be realized with more than one underlying socioeconomic scenario. … They are representative in that they are one of several different scenarios, sampling the full range of published scenarios (including mitigation scenarios) at the time they were defined, that have similar RF and emissions characteristics. … The primary objective of these scenarios is to provide all the input variables necessary to run comprehensive climate models in order to reach a target RF … »
    IPCC;WGI;AR5;Page 1045

    However, according to the following paper, both RCP8.5 and RCP6.0 seem to be exaggerated: The implications of fossil fuel supply constraints on climate change projections: A supply-driven analysis

    From the abstract: «The emission scenarios used by the IPCC and by mainstream climate scientists are largely derived from the predicted demand for fossil fuels, and in our view take insufficient consideration of the constrained emissions that are likely due to the depletion of these fuels. This paper, by contrast, takes a supply-side view of CO2 emission, and generates two supply-driven emission scenarios based on a comprehensive investigation of likely long-term pathways of fossil fuel production drawn from peer-reviewed literature published since 2000. The potential rapid increases in the supply of the non-conventional fossil fuels are also investigated. Climate projections calculated in this paper indicate that the future atmospheric CO2 concentration will not exceed 610 ppm in this century; and that the increase in global surface temperature will be lower than 2.6 DegC compared to pre-industrial level even if there is a significant increase in the production of non-conventional fossil fuels. Our results indicate therefore that the IPCC’s climate projections overestimate the upper-bound of climate change.»

    Figure 2 and Figure 4 in the linked paper indicate that both RCP8.5 and RCP6.0 may be unrealistic (SD-PC = supply-driven peak conventional fossil fuels scenario; SD-PCU = supply-driven peak conventional and non-conventional fossil fuels scenario).

    It is clear from pages 206 and 207 that the authors of the report have no reservations about using RCP8.5 as a likely scenario. It is even mentioned that emissions may exceed the RCP8.5 scenario. Under the heading «Major uncertainties» the authors even state: None.

    Judging by the paper referred to above, it may be that large parts of the report are biased by what appear to be unrealistic emission scenarios. In any case, emission scenarios based on demand that seems to exceed known reserves and estimated resources deserve some scrutiny. Supply-driven scenarios should at least deserve a mention.
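
    As a toy illustration of what a supply-side constraint means (a generic Hubbert-style logistic curve with invented numbers, not the linked paper’s model), cumulative extraction, and hence cumulative emissions, is capped by the assumed ultimately recoverable resource:

      import numpy as np

      # Toy Hubbert (logistic) production curve. URR and peak year are
      # invented placeholders, not values from the supply-driven paper.
      urr = 1500.0         # ultimately recoverable resource, GtC (hypothetical)
      peak_year = 2035.0
      width = 25.0         # controls how sharp the production peak is

      years = np.arange(2000, 2101)
      cumulative = urr / (1.0 + np.exp(-(years - peak_year) / width))
      production = np.gradient(cumulative, years)   # annual production = derivative

      print(f"cumulative extraction by 2100: {cumulative[-1]:.0f} GtC (capped below URR = {urr:.0f})")
      print(f"peak production: {production.max():.1f} GtC/yr around {years[production.argmax()]}")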

    • Actually, the IPCC source material makes it clear that RCP8.5 is pretty unlikely.

      Riahi et al. (2011), “RCP 8.5—A scenario of comparatively high greenhouse gas emissions”, which describes the development of the scenario, states: “Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case”. Further details of the “conservative” assumptions are covered in the text.

      The accompanying overview paper, van Vuuren et al (2011), “The representative concentration pathways: an overview”, makes it abundantly clear what RCP8.5 is and what “conservative” means in this context: “… RCP8.5 leads to a forcing level near the 90th percentile for the baseline [no climate policy] scenarios, but a recent literature review was still able to identify around 40 scenarios with a similar forcing level.”

      As you note, “business as usual” refers to what happens with no climate policy – something that is increasingly unlikely given ongoing international agreements – and not, as is regularly implied, something to be expected.

  26. Whoever is reviewing the temperature section of the report will notice on pages 289 and 291 that they compare the current warm period of 1986-2016 to the mixed warm-and-cool period of 1901-1960. They eliminate the data from the cool period of 1961-1985. This creates a spurious warming trend.

    For example, if a station had exactly the same average temperature over 1901-1960 as over 1961-2016, the calculation they used would still show warming instead of no warming.

    What they should have done is compare the current warm period, 1986-2016 (30 years), to the previous 30-year warm period. That is the information the public should be made aware of – not cherry-picked comparison periods.

    Side note: the Crosbyton, TX station shows virtually no difference between the current warm period (1986-2016) and the previous warm period. Just one data point.
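
    A minimal numeric illustration of the base-period arithmetic (invented station values; whether the resulting difference is “spurious” is of course the point under dispute):

      import numpy as np

      # Invented annual means (deg C) for a hypothetical station.
      early_1901_1960 = np.full(60, 15.0)                 # early-period mean 15.0 C
      cool_1961_1985 = np.full(25, 14.5)                  # cool interlude
      warm_1986_2016 = np.full(31, 15.0 + 0.5 * 25 / 31)  # chosen so the halves match

      # The two long periods have identical means ("no change") ...
      print(early_1901_1960.mean())                                   # 15.0
      print(np.concatenate([cool_1961_1985, warm_1986_2016]).mean())  # 15.0

      # ... yet the report-style comparison (1986-2016 minus the 1901-1960
      # baseline) reports warming, because 1961-1985 drops out of both terms.
      print(warm_1986_2016.mean() - early_1901_1960.mean())           # ~ +0.40 C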

    • Hansen, in his climate dice work, uses 1951-1980 as a reference period. It works well because there is little trend. When looking at how temperature distributions change over time, trends complicate things by spreading the distribution, which is why he chose that period; he also chose it because there are people old enough to still identify with that timeframe.

  27. The entirety of chapter 2’s conclusions can be shot down with two simple observations. No need to slog through all the warmunist pal-reviewed paper details pointing to flaw after flaw.
    1. AR4 WG1 SPM fig SPM.4 said the warming from ~1920-1945 was not AGW; there was not enough rise in GHGs. Mostly natural. Yet chapter 2 concludes the statistically indistinguishable warming from ~1976-2000 is AGW. The logical flaw is that natural variation cannot be presumed to have stopped in 1975. Provably flawed attribution.
    2. Except for a now-cooled 2015-16 El Nino blip, it hasn’t warmed this century except by Karlization (or, newly, Mearsation). Yet this century comprises ~35% of the total rise in atmospheric CO2 since 1958 (Keeling curve).

    Separately, both these observations can also be used to fully discredit CMIP5, upon which much else in the 5OD depends. Details in my recent guest post “Why Models Run Hot” at WUWT. Essentially the ‘why’ behind Christy’s ‘what’ in congressional testimony on 29 March 2017.

    • Your logical flaw is that you are trying to count natural variation twice, by adding its effect on top of the ±0.1 C that is there in the background. Yes, this variation is large when GHGs only contributed about 0.2 C in the first half of the century, but no, it can’t be added again in the second half, where GHGs have contributed 0.6-0.7 C, because its contribution is the same as in the first half, not larger. Think of it as a noise level of 0.1 C. By itself it can’t exceed 0.1 C warmer than 1900 by much. The only way to get beyond that noise level relative to 1900 is to add a forcing.

      • Wrong.
        First, in asserting natural variance is only ±0.1.
        Which bit of 92% to 123% did you miss?
        Which means NV can cause a 31% variation in the AGW result.
        1.2 F from 1951 to 2000. That’s 0.4 of variation, or a third of the result assumed.
        Not to mention the real range of NV, which has to be greater than what was assumed, as you only have one realization to judge by. It could easily be twice as large, statistically.
        Note the paper used these figures for recent warming, not the early 1900s.

      • If the warming since 1950 is 0.6-0.7 C, your 31% is 0.2 C, which is ±0.1 C because it can be positive or negative, of course. No inconsistency there. Lovejoy puts the standard deviation on millennial scales at 0.2 C, but those solar and volcanic effects would have been noticeable if they had occurred since 1950.

      • JimD, I made no logic flaw. You did. See Jennifer Marohasy’s new paper using artificial neural networks (ANN, a big-data AI method) trained on six proxies over AD 50-1830 to predict 1831-2000. Two summaries are at GWPF; the full paper is at her blog. The upshot is that the ~1975-2000 warming is at most ~20% GHG and the rest natural variation. The ANN even picked up the post-1980 proxy divergence problem. It is the first credible attribution analysis I have seen since I started looking in 2011. Rather than picking out AGW, it picks out natural variation based purely on analysis of observational temperature proxies. No statistics, just the ANN. Interestingly, they are using the same ANN, trained on past Australian weather data, to predict rainfall a month out. The program is consistently beating the BoM GCM intermediate one-month forecast.

      • Rud, can you provide a link to this? I saw GWPF, but clicked on a few things there and didn’t find the paper.

      • The logical flaw is comparing an early century warming with a late century warming at the same rate on top of that and saying therefore the later one can be natural. This is a classic and deeply flawed skeptical meme the way you phrased it, but perhaps you didn’t mean it to sound like all the others. If that ANN is anything like the recent AR fitting exercise we saw here, it is also not a good way to understand trends at a fundamental level.

      • jennifermarohasy.com/2017/08/recent-warming-natural/

      • Thanks Ordvic. Only just returned to the thread, and you beat me to it. I should have put the direct link in my comment rather than the three indirect hints.

      • ristvan, yes, I got to it through Tallbloke’s shop, which links the paper and the article on her website.

      • The one thing I wondered about is why it declines after 1980. I asked her on her website, but my comment is still in moderation (new commenter, I suppose).

      • Thank you from me as well, ordvic. Since this new paper may receive intense scrutiny, there would be some benefit in a very objective analysis, free of the usual ‘score one for the home team.’

      • The Jennifer Marohasy climate A.I.-derived ECS looks like a valid approach to understanding CO2 sensitivity. I hope they continue the research with a much wider geographic range of proxy input data. Since the A.I. system as configured does not seem too sensitive to proxy genre (tree ring, pollen, sediment, ice cores), there should be additional analysis using Asian, African, polar and global volcanic-event proxies to confirm the 0.6 C projection. I consider this could be a good ‘proof of concept’ example of A.I. in climate research.
        I would like to have seen a little more on the hardware/software configuration required for this analysis.

      • I have known Jennifer Marohasy as a motivated and fairly lightweight climate blogger. I do like this study – the linked (by methodology) hydrological papers less so.

        “Examination of many of these proxy temperature records shows they typically consist of complex oscillations or cycles about a mean value, with the amplitude and structure of the temperature signal depending on the geographical location considered. In the pre-industrial era, these oscillations represent the compound effect of natural phenomena both internal (e.g. North Atlantic Oscillation, El Nino Southern Oscillation) and external (e.g. solar, volcanic activity).” https://authors.elsevier.com/a/1VXfK7tTUKabVA

        If you hit a climate problem with a Fourier-analysis hammer, you will get sine waves. If data such as long-term Nile River flows are analysed from a Hurst perspective, there is a different result – one resembling step functions.

        “The story of British hydrologist and civil servant H.E. Hurst who earned the nickname “Abu Nil”, Father of the Nile, for his 62 year career of measuring and studying the river is now fairly well known. Pondering an 847 year record of Nile overflow data, Hurst noticed that the series was persistent in the sense that heavy flood years tended to be followed by heavier than average flood years while below average flood years were typically followed by light flood years. Working from an ancient formula on optimal dam design he devised the equation: log(R/S) = K*log(N/2) where R is the range of the time series, S is the standard deviation of year-to-year flood measurements and N is the number of years in the series.” http://blog.revolutionanalytics.com/2014/09/intro-to-long-memory-herodotus-hurst-and-h.html
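
        For the curious, here is a minimal sketch of the classical rescaled-range recipe behind that formula (a textbook version, not Hurst’s exact procedure): regress log(R/S) against log(n) over several window sizes, and the slope is Hurst’s K (Mandelbrot’s H):

          import numpy as np

          def rs_hurst(series, window_sizes):
              """Estimate the Hurst exponent via classical R/S analysis."""
              log_n, log_rs = [], []
              for n in window_sizes:
                  rs_vals = []
                  for start in range(0, len(series) - n + 1, n):
                      w = series[start:start + n]
                      dev = np.cumsum(w - w.mean())   # cumulative departures from the mean
                      r = dev.max() - dev.min()       # range of cumulative departures
                      s = w.std(ddof=1)               # standard deviation of the window
                      if s > 0:
                          rs_vals.append(r / s)
                  log_n.append(np.log(n))
                  log_rs.append(np.log(np.mean(rs_vals)))
              # Slope of log(R/S) versus log(n) is the Hurst exponent.
              return np.polyfit(log_n, log_rs, 1)[0]

          rng = np.random.default_rng(3)
          white = rng.normal(size=4096)               # uncorrelated noise: expect H ~ 0.5
          print(f"H (white noise) ~ {rs_hurst(white, [16, 32, 64, 128, 256, 512]):.2f}")
          # Persistent series such as Nile flows give H well above 0.5.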

        Mandelbrot rediscovered and popularized Hurst’s work in the 1960s – and renamed K to H. Persistence is a period of flows that hover about a mean with a specific variance – and then the system shifts to another flow regime with a different mean and variance.

        Persistence can be easily seen in modern data on the globally coupled ENSO quasi standing wave in Earth’s spatio/temporal chaotic system.

        https://www.esrl.noaa.gov/psd/enso/mei/ts.gif

        There is a sharp delineation between a La Nina dominated regime (blue) and an El Nino (red) dominated regime in 1976/77. The Pacific Ocean shifted again to a somewhat cooler state after 1998.

        Transitions in climate are typically abrupt. There are external triggers at thresholds and an immense and complex response in planetary ocean and atmospheric flow fields. There is no comfort for skeptics here – shifts are unpredictable and may be extreme. The next Pacific Ocean climate shift – if it runs true to the form shown over a millennium – is due in a 2018-2028 window. Predicting it more precisely – with its global implications for rainfall, biology, and atmosphere and ocean heat content – is impossible. And anthropogenic greenhouse gases may perturb the climate system flows sufficiently to cause some change, minor or major, in the global pattern of coupled quasi standing waves.

        It seems clear to me that spatio-temporal chaos is the not-so-new climate paradigm – one that provides a far more powerful theoretical underpinning for consideration of climate processes than cyclomania ever could.

      • Ordvic: read her blog, read the paper, read the MSM article quoting her. You point to the proxy/instrumental divergence she notes, the one that led to the ‘hide the decline’ shenanigans. I don’t think we really know why the apparent divergence exists. My own view is that (1) the temporal resolution of proxies is coarser than that of instruments (the same thing that tripped up Marcott 2013; see the first of my two guest posts here on him at the time), and (2) the divergence needs to be assessed with reference to the region from which the proxy is taken, not GAST. For example, for the Southern Hemisphere, is there a divergence from the instrument records for Tasmania, New Zealand’s South Island, and southernmost South America? Dunno. The divergence problem may simply be an artifact of all the GAST infilling that Tony Heller and Paul Homewood like to illustrate, such as above-average rising temperatures for central Africa, where there are literally no GHCN stations.

        A separate comment on this new paper for you and Judy. She finds maybe 0.2 C non-natural out of 0.85 C total, so something less than 25% attributable to AGW, something more than 75% to natural variation. Very credible.
        But I personally do not think the final portion of her paper, on sensitivity, is credible at all. (1) I don’t see how she derives her ECS estimate, and the paper is not clear. (2) The no-feedbacks, CO2-only ECS is 1.1-1.2 C. That is beyond dispute. Monckton’s own numbers drawn from the IPCC give 1.16 C. It is highly unlikely, due to water vapor, that the feedbacks are net negative. Lindzen and Choi 2011 (revised) is flawed. And lab spectroscopy results are not credible in a real-Earth context. In the real world, cloud feedback is likely neutral or negative, not significantly positive as in CMIP5 and AR5. See essay Cloudy Clouds. Water vapor feedback is maybe half of CMIP5 and AR5 (giving an implied Bode f of ~0.5). See essay Humidity is still Wet, and also my guest post on the math of Monckton’s ‘Irreducibly Simple Equation’ paper; the last section of that post uses my simplification of his equation to estimate a net Bode f of 0.25-0.3 (from observational first principles) and calculate a resulting ECS of ~1.7. Nowhere near this new paper’s asserted, but IMO unexplained, 0.6.

      • Rud, thanks for that information. I thought the paper would explain it as you say. I had to be the ignorant one to pose the question anyway.

      • Why have they not applied this analysis to every surface station and extrapolated from that, to see if it has any predictive value? Thermometers are better than proxies and would serve as a fundamental test of their assumptions. I think it would be magical if you could take 100 years of New York temperature readings from 1900-2000 and predict the next twenty from them. However, that would have global warming built in, so it might do quite well. As it is, she has in effect said you can tell what trees are going to do in the next century based on just a few centuries of past data. Incredible stuff! Get on that prediction method if you believe it. How does a tree in a given location know what global climate will do to it? Somehow it does. Is this some Gaia thing? I think it is another form of cyclomania. Or maybe someone can explain why it is not, because I have not read it much and could have missed something.

      • Jim D:
        “The ANN was trained for the period 50 to 1830; test period was 1830 to 2000.”
        Not sure I understand it. About 1,800 years would allow whatever it is to reveal itself better than 75-year time frames would.
        Assume the ANN is learning from a chaotic system. It learns past breaks from one wing of the Lorenz butterfly to the other. Given enough time, it can learn that formula to 2 decimal places (yes, I made that up). However, this assumes a few butterflies. There are many butterflies. Since the paper did it by proxy locations, I assume the ANN could learn each location’s tendencies, which seems more tractable than global ones. For instance, understanding 6 big regional butterflies for starters. Next, the six butterflies know of each other. Then they communicate with each other.

        “…there is a different chaotic oscillator like the Lorenz butterfly at every point of space (so there is an infinity of them) and that they are all coupled strongly with each other in a non linear and time dependent way.”

        I wish them luck with their paper. It’s been said, skeptics need better ideas.

      • Ragnaar, so do you think New York’s thermometer data can be trained to predict the future at New York? Or if we got the tree rings from the New York vicinity up to 1850, could that have predicted the temperature there today? Call me skeptical. It would be a miracle. It is overly simplistic to say just the data at one site can predict the future at that site. The world is connected.

      • Jim D:

        “New York’s thermometer data can be trained to predict the future at New York?”

        Shorter time frames would be more difficult. I was suggesting AI can learn past regime shifts on many different time scales. For instance, a 30-year regime as well as a 2-year regime, both existing at the same time, along with a 10- and a 15-year regime. What we want is a formula for everything to 20 decimal places. Given an infinite length of time to train, we could have that. The Lorenz butterfly is driven by a formula, and to correctly model such a system it’s likely we need many decimal places, or we will miss the jumps to the other lobes.

        Reflecting on my previous remark: Tsonis 2007 used 5 indices, I think. Now we could have 5 regional paleo series, with 5 ANNs each training on 1 of the 5 series. We then try to visualize the ANNs communicating. For instance, the ENSO region communicates with a lot of the planet by warming the GMST during an El Nino. It generally commands the GMST to warm and then cool. It is not important whether ENSO actually commands or is a global driver; it’s meant to show communication. A specific ANN could learn, to some extent, what its temperature should be from the others. A specific location is partly determined by its own area and partly determined by the rest of the planet.

        It also occurred to me to mention the Karl discussion as explained by Mosher. There, too, is an attempt to reach over distances, to connect two or more separated stations. I think this is in the neighborhood of the alleged communication I speak of. For instance, is there an assumed communication, via atmospheric temperatures (the temperature of a parcel of air), in what Karl did? To me, a moving parcel of air communicates the temperature of one location to another. We could also substitute energy for temperature.

      • What about volcanoes, or solar variations that are irregular on century scales rather than just a combination of frequencies? The training period needs to be independent of the verification period, and you need to see how many proxies they discarded to find some that fit. They won’t say that, of course. They don’t say these are the only ones they tried. I’m skeptical.

      • Jim D:

        I don’t think there is much of an answer for volcanoes. The ANN could learn that once in a while there is a large eruption and incorporate that into the long-term average. Call it a very weak permanent vector arrow in the GMST.

        Solar, I don’t know. I’d say there is some kind of roughly 11-year cycle, but that alone I don’t think is the answer.

    • You train neural nets to proxies.
      Input is proxy. Output is temperature.
      All you prove is that the much maligned proxies aren’t so bad. You’ve proved nothing about causation.
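
      That division of labor is easy to see in a minimal sketch (synthetic proxy and temperature series; sklearn’s off-the-shelf MLP stands in for whatever network the paper actually used): train on an early window, predict a later one. A good fit mainly demonstrates that the proxy tracks temperature, not what caused the temperature:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)

        # Synthetic "temperature" (slow oscillation plus noise) and a noisy
        # proxy of it -- both series are invented for illustration.
        years = np.arange(50, 2001)
        temp = 0.3 * np.sin(2 * np.pi * years / 1000) + 0.1 * rng.normal(size=years.size)
        proxy = 0.8 * temp + 0.05 * rng.normal(size=years.size)

        train = years <= 1830          # train on AD 50-1830, as the paper reportedly did
        test = ~train                  # predict 1831-2000

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(proxy[train].reshape(-1, 1), temp[train])
        pred = model.predict(proxy[test].reshape(-1, 1))

        print(f"test RMSE: {np.sqrt(np.mean((pred - temp[test])**2)):.3f}")
        # Success here shows proxy fidelity; it says nothing about attribution.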

  28. IMO, we shouldn’t apply an isostatic rebound correction. We aren’t concerned with the volume of the sea. We are only concerned with the level of the sea vis-à-vis the level of the shore.

    • For climate purposes you want sea level relative to geostationary land, over long records (>60 years). Isostatic rebound (and slump) models are a very crude geostationary approximation. The 2013 revision to GRACE-plus-modeled-isostasy, after differential GPS became available for Antarctica itself, showed the isostatic model was off by ~4x, resulting in far too high an ice loss estimate. McIntyre covered this in detail at Climate Audit a few years ago.
      The only legit way is long-record, differential-GPS land-motion-corrected tide gauges. See my guest post here, ‘SLR, acceleration, and closure’, for details.

  29. Interestingly, a search of the document shows that the discussion of the role of the sun generally, and solar activity in particular, is confined to “solar irradiance” (e.g., the word “sunspot” can be found only twice, and in each instance only as a word in the title of a reference), as discussed in Chapter 2.

    Chapter 2 can pretty much be seen as nothing more than a statement of fact, with no objective evidence to back up the claim other than its being in accord with claims made in previous Assessment Reports by the IPCC, as follows:

    —–

    Assessment of confidence based on evidence and agreement, including short description of nature of evidence and level of agreement

    There is very high confidence that anthropogenic radiative forcing exceeds natural forcing over the industrial era based on quantitative assessments of known radiative forcing components. Assessments of the natural forcings of solar irradiance changes and volcanic activity show with very high confidence that both forcings are small over the industrial era relative to total anthropogenic forcing. Total anthropogenic forcing is assessed to have become larger and more positive during the industrial era, while natural forcings show no similar trend. (see, pg. 21)

    —–

    Also interesting is that the above concludes with no mention at all of the Grand Maximum occurring in the latter half of the 20th century. Nor is it mentioned that the US’ ‘anthropogenic forcing’ has gotten smaller: greenhouse gas emissions in the United States fell in 2012 to their lowest level in 17 years, according to a report from the Environmental Protection Agency, says MIT (“Recent data clarify the trends causing emissions to shrink to their lowest level since 1995.” ~Mike Orcutt, May 6, 2014).

  30. 1. The increased-precipitation maps for the USA show an increase on the left-hand side only.
    This is consistent with natural variation rather than with climate change due to increased temperature from increased CO2,
    which should cause a band of increased precipitation across the USA at the appropriate latitude, presumably more middle and north.
    The fact that USA precipitation charts showing the opposite of what should occur are palmed off as support shows that the scientists are looking for a result (increased precipitation) to bolster their argument, but do not understand where that precipitation should be (slightly further from the equator, in a band).
    Using longitudinal strips is a real mistake of desperation and ignorance.
    Hope this helps.

  31. NCA4 uses the base period of 1901-1960 for the analyses I referenced. They did not use the base period of 1951-1980. Their methodology creates spurious warming by eliminating the cool period of 1961-1985.

  32. Since my previous (admittedly somewhat tongue-in-cheek) comment ended up on the cutting room floor, let’s try again. A brief extract from the “Arctic” section of the CSSR “Final draft”:

    Because of its high sensitivity to radiative forcing and its role in amplifying warming, the Arctic cryosphere is a key indicator of the global climate state. Accelerated melting of multiyear sea ice, mass loss from the Greenland Ice Sheet, reduction of terrestrial snow cover, and permafrost degradation are stark examples of the rapid Arctic-wide response to global warming. These local Arctic changes influence global sea level, ocean salinity, the carbon cycle, and potentially atmospheric and oceanic circulation patterns. Arctic climate change has altered the global climate in the past and will influence climate in the future.

    Surely there’s nothing to quibble about there?

    • Gads of quibbles.

      Because of its high sensitivity to radiative forcing and its role in amplifying warming, the Arctic cryosphere is a key indicator of the global climate state.
      Run-on statement. Sensitivity makes a key indicator.
      Role in amplifying does not make it an indicator.

      Accelerated melting of multiyear sea ice
      And how does one distinguish between ice melted in place and ice lost through the Fram Strait?
      And why is multi-year ice pushed against the Canadian Archipelago, not circling around the presumably cooler north pole?
      Arctic sea ice is ripe for confirmation bias.

      mass loss from the Greenland Ice Sheet
      A few years of satellite records can also invite confirmation bias.
      Should Greenland be losing or gaining ice with respect to global warming?
      Ice core data indicate that accumulation rates and temperature are positively correlated over the longer term. Why wouldn’t this persist?

      reduction of terrestrial snow cover
      Fall and winter snow cover have increased, of course, while spring snow cover has decreased. Perhaps this is related to global warming, perhaps not, but it’s also the stuff of confirmation bias.

      Arctic climate change has altered the global climate in the past and will influence climate in the future.

      What’s also missing from the statement is this:
      An icier Arctic necessarily corresponds to an increased pole to equator temperature gradient and a more extreme climate.
      A less icy Arctic necessarily corresponds to a decreased pole to equator temperature gradient and a less extreme climate.

      You can argue that you want an icier Arctic, but you are also arguing for more extreme climate.

      • TE – A couple of quibbles in return.

        Surely “ice lost through the Fram Strait” melts?

        Re your hobby horse about the “decreased pole to equator temperature gradient”, have you visited South West England recently? An anecdotal example:

        http://www.bbc.co.uk/news/uk-england-cornwall-40650406

        Looks like it leads to greater precipitation, both solid and liquid? Where’s Tony B when you need him?

      • Jim

        Having studied a thousand years of English weather via a variety of sources, including the archives and library of the Met Office, I can say these flash floods can be seen throughout our history. Many places have been destroyed by them and landscapes changed. Who can forget the Lynton flood disaster in north Devon, only some 60 years ago?

        Generally the climate was wetter than today for large periods of our past, but things seemed to settle down in Victorian times.

        So I don’t think this is out of the ordinary. However, I was on the EA flood defence committee for some 10 years, so I got involved with Boscastle and the Somerset Levels flooding.

        Cornwall, and Devon to a lesser extent, have a problem, and it relates to very steep-sided valleys.

        Low-pressure systems bringing heavy rainfall seem to get stuck over these very small areas and keep circulating and renewing themselves over the high ground. Flooding is worse if trees block the watercourse, or bridges restrict flow, or the tide blocks one end, as was the problem at Boscastle.

        We reckoned there were some 60 such valleys in Cornwall and around 40 in Devon. We have one a couple of miles away and we deliberately chose not to live in the nice village at the bottom of the valley.

        I haven’t been following the discussion, so I don’t know if this confirms or denies any comment by TE.

        Tonyb

      • Thanks Tony,

        So what do you make of this recent Met Office paper in Nature on the “High risk of unprecedented UK rainfall in the current climate”?

        https://www.nature.com/articles/s41467-017-00275-3

        In winter 2013/14 a succession of storms hit the UK leading to record rainfall and flooding in many regions including south east England. In the Thames river valley there was widespread flooding, with clean-up costs of over £1 billion. There was no observational precedent for this level of rainfall. Here we present analysis of a large ensemble of high-resolution initialised climate simulations to show that this event could have been anticipated, and that in the current climate there remains a high chance of exceeding the observed record monthly rainfall totals in many regions of the UK.

      • Jim

        I had many battles with the EA over this, and the Met Office is as bad.

        Generally, as regards rainfall, they do not have much knowledge of events prior to around 1950 or so, from when they have good records such as those of the Lynton flood. So they are not really looking at more than a few decades of data on a national scale, although some local spot records are better.

        They do have access to the UEA rainfall data compiled by Phil Jones, some of which goes back to around 1800 or so, but this is not a true national record, as of course by definition the wettest places were not inhabited – because they were jolly wet!

        Another problem was the manner in which the rainfall data were collected. The means of doing so was not consistent, and the quality of the observations was highly variable. The observers ranged from socialites to the main engineer on Brunel’s railway here in the West Country around 1850. The latter would be reliable, the former not.

        I met the Met Office guy at Dawlish when the railway broke, which I guess is what the report you cite is referring to?

        I wrote an article about it as I lived a couple of miles away. I will dig it out and post it here.
        Tonyb

      • TE – While Tony’s digging out that article of his, perhaps you’d care to comment on another recent learned journal article?

        http://tos.org/oceanography/article/winter-2015-16-a-turning-point-in-enso-based-seasonal-forecasts

        The inability of the global climate models to predict large-scale climate anomalies likely results from the models’ over-sensitivity to tropical forcing. We argue that Arctic influences were also important in causing the observed weather patterns of winter 2015/16, in particular, diminished Arctic sea ice cover, extreme warm Arctic temperatures, and extensive Siberian snow cover. The weak response of the models to Arctic forcing contributed to seasonal forecast errors. To improve seasonal climate forecasts, we recommend complementing the influence of the tropical ocean with contributions from Arctic factors.

        It’s now past my bedtime (UTC). I’ll catch you both on the morrow.

      • Hi jim

        Sorry I took so long, but the article was so good I had to read it all again. :)

        https://wattsupwiththat.com/2014/02/09/black-swans-dispatches-from-the-front-line-of-climate-change/

        It’s worth reading all the comments as well. As I suspected, it was about the same series of storm events as the Met Office paper you cite, and indeed I referenced that paper myself in the article. It was very superficial, I think.

        My research was probably much more detailed than the Met Office item, as it focused on just one event.

        However, they do not do history! As the custodian of their archives will tell you, the trouble is that for many modern researchers, if an item is not digitised it does not exist. A great deal of material has yet to be digitised and passes under the radar of many people.

        Hope you enjoy the piece

        Tonyb

      • tonyb “…the trouble is that for many modern researchers if an item is not digitised it does not exist.”

        I was amused to see Key Finding 1 of Chapter 1: “The global climate continues to change rapidly compared to the pace of natural variations in climate that have occurred throughout Earth’s history” (page 58).

        As evidence they cite previous assessments, but curiously, given that the claim is about “Earth’s history”, it references only the observational data (Appendix 1).

        I suspect the point needs to be extended to say that if it hasn’t been included in a model it can also be ignored.

      • I’ll return the favour if I may, Tony. My own humble reportage of the flooded Somerset Levels and washed-away railway lines:

        http://econnexus.org/the-weather-report-from-soggy-south-west-england/

        Sorry about the video. Network Rail appear to have pulled it. I’ll see if I can locate an adequate replacement.

        You won’t find any pertinent comments from me on your WUWT article however. Anything with my name on it falls straight to the cutting room floor over there!

      • Surely “ice lost through the Fram Strait” melts?

        Yes, that’s the point – in warmer or colder mean temperature regimes, ice transported toward the equator melts. So the anomalous portion that melted because of transport, which is not well known, is not indicative.
        That anomaly could be zero, or even negative, of course.
        But we do know a lot of ice melts this way because of buoy drift.

        http://psc.apl.washington.edu/northpole/pngs/Allyears_buoy_drifts.png

        We also know that the maximum ice thickness is a narrow pile which is visibly pushed up against the Canadian Archipelago. Were thermodynamics more significant, I for one would expect more symmetry in the distribution. But the distribution is quite asymmetric, suggesting the fluid dynamics are more important than, though not exclusive of, the thermodynamics.

        Re your hobby horse about the “decreased pole to equator temperature gradient”,

        Ha-ha. Unfortunately, many participants in the “climate change debate” (present company on CE excluded, of course), even, or perhaps especially, the well educated, are ignorant of basic atmospheric science.

      • You’ve lost me TE.

        Of course thermodynamics is important when discussing sea ice. Here’s a primer for you:

        https://nsidc.org/cryosphere/seaice/processes/thermodynamics.html

        Once MYI is gone, whether via the Fram, Nares, CAA or “melting in place” as you put it, it remains gone. And it’s not coming back except via thermodynamics to start with. What “distribution” are you referring to?

      • “Of course thermodynamics is important when discussing sea ice.”
        I never wrote that it wasn’t.
        What I wrote was that the distribution of ice suggests that fluid dynamics have been more significant than thermodynamics.

        What “distribution” are you referring to?
        The 5 m ice along the Canadian Archipelago is not there because it’s much colder there; it’s there because the wind blew it there, where it piled up.
        https://www7320.nrlssc.navy.mil/hycomARC/navo/arcticictn/nowcast/ictn2017082218_2017082000_046_arcticictn.001.gif

        Wind also blew a lot of ice out through the Fram Strait. More than usual? There’s not really a long-term record to say.

        “Once MYI is gone, whether via the Fram, Nares, CAA or “melting in place” as you put it, it remains gone. And it’s not coming back except via thermodynamics to start with.”

        Simply, perhaps, ICE = FREEZING – MELTING – ADVECTION

        If ADVECTION (out of the Arctic) decreases, ice accumulates.
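
        As a toy iteration of that identity (all rates invented, units arbitrary):

          # Toy Arctic ice budget per the identity ICE = FREEZING - MELTING - ADVECTION.
          # Rates are invented illustrative numbers (10^3 km^3 per year).
          def run(years, freezing=20.0, melting=18.0, advection=2.0, ice0=15.0):
              ice = ice0
              for _ in range(years):
                  ice += freezing - melting - advection
              return ice

          print(run(10))                   # balanced budget: ice stays at 15.0
          print(run(10, advection=1.5))    # less export, same thermodynamics: 20.0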

      • TE – “If ADVECTION ( out of the Arctic ) decreases, ice accumulates.”

        r/Arctic/Central Arctic Basin/

        Not if future winters are anything like last winter. A picture’s worth a thousand words, but you can’t embed them here. See:

        http://GreatWhiteCon.info/2017/06/facts-about-the-arctic-in-june-2017/#Jun-11

        Plus the largely MYI-free “pack” is much more mobile than in the good old days.

      • Tony – “It’s been a very cool August”

        In the Gulf of Mexico?

      • Jim

        Sorry Jim, but it has been just over 1 degree Celsius cooler than normal this August, up to Monday of this week.

        Here is the Met Office blog, which goes up to 15 August:

        https://blog.metoffice.gov.uk/

        So that downpour in Ireland came in a cool August. Climate change, then, or do you concede TE’s supposition?

        Tonyb

      • “If ADVECTION ( out of the Arctic ) decreases, ice accumulates.”

        Not if future winters are anything like last winter. A picture’s worth a thousand words, but you can’t embed them here. See:

        ?
        Wintertime Arctic Ocean temperatures rise because of the latent heat of freezing. The high temperatures are indicative of greater ice formation. So high winter anomalies indicate more freezing, not more melting. Now, you could imagine that summertime melt is increasing faster than a theoretical positive advection anomaly is decreasing. Perhaps, but I don’t think the two have been distinguished.

        I’m not arguing that Arctic Sea ice loss is inconsistent with AGW. It’s very consistent, and to the extent that one wants a “less extreme climate”, one has to have a reduction in Arctic sea ice.

        But I am pointing out that you can’t and don’t know exactly how much of the decline is from advection of ice out of the Arctic and how much is from advection of warm moist air or sea water in to the Arctic.

        And given the excitement level, many are predisposed to confirmation bias that all the change is from AGW.

    • Tony – It seems Network Rail have deleted every last one of the time lapse videos they made of their repairs to the main line at Dawlish.

      Moving on from real live black swans to the Somerset Levels flooding in early 2014, your article makes no mention of them. Here’s the UKMO on that thorny topic:

      http://www.metoffice.gov.uk/binaries/content/assets/mohippo/pdf/1/2/recent_storms_briefing_final_slr_20140211.pdf

      There is an increasing body of evidence that shows that extreme daily rainfall rates are becoming more intense, and that the rate of increase is consistent with what is expected from fundamental physics. Although formal attribution is still challenging, it is possible to identify a contribution from climate change for some major flooding events, as the recent paper by Peterson et al. (2013) on the attribution of extremes showed. It is worth emphasizing that there is no evidence to counter the basic premise that a warmer world will lead to more intense daily and hourly heavy rain events.

      Presumably you disagree?
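
      For readers wondering what “fundamental physics” the UKMO briefing is appealing to, it is usually taken to be Clausius–Clapeyron scaling of saturation vapor pressure, roughly 6–7% per degree of warming. A minimal check using the standard Magnus approximation (the constants are the published Magnus ones; the example temperatures are arbitrary):

```python
import math

def e_sat(t_c):
    """Saturation vapor pressure (hPa) via the Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# How much more water vapor can saturated air hold per degree of warming?
for t in (10.0, 20.0):
    pct_per_k = 100.0 * (e_sat(t + 1.0) / e_sat(t) - 1.0)
    print(f"{t:.0f} C: +{pct_per_k:.1f}% per K")

# ~6-7%/K is the Clausius-Clapeyron rate usually cited as the expected
# scaling of heavy-rain intensity, all other factors held equal.
```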

      • Some more anecdotal evidence?

        http://www.bbc.co.uk/news/uk-northern-ireland-41019610

        More than 100 people had to be rescued in Northern Ireland after being trapped by overnight flooding.

        Firefighters rescued 93 people who were trapped by flood water in their cars or homes. At one point, the fire service got an emergency call every 45 seconds.

        They should be battening down the hatches in Texas as we speak.

      • Jim

        Thanks for your various links. Whilst you will know the history, other readers here may not.

        The Somerset Levels were in effect a great inland sea. King Alfred was forced to hide in this corner of England to escape the Vikings. As the area was only accessible by punts and causeways it was a good hiding place, and it was here that he was supposed to have burnt the cakes.

        http://www.steveroberts.org.uk/mediapool/139/1397887/data/King_Alfred.pdf

        The area was gradually drained by a series of dikes and pumps and became fertile farmland. The population increased but, perhaps unfortunately, did not grasp the significance of the Anglo-Saxon names for some of the areas, for example ‘great island’, which signified that it was, at other times, an island rather than a perpetually dry place.

        Around 2004 the EA came to our committee and said that to meet EU directives they were intending to turn large parts of the Levels into a nature reserve. We asked how they intended to do this and were told: by turning off the pumps. Dikes would also be allowed to fall into disuse. We protested, and the following year they came back with a further plan which was all of a piece with their idea of managed retreat. This would mean permanent flooding rather than the occasional winter flooding as part of a water meadow scheme.

        We told them it was a harebrained idea to try to create a nature reserve here, to which they replied that it was a Political issue.

        So they turned off the pumps, then sold some of them. It was inevitable that in the first wet winter or summer the land would return to a semblance of what it was in King Alfred’s time, and sure enough that’s what happened.

        Those living on the islands were OK. The roads and railway lines were not, nor was some of the newer housing. Nothing to do with wetter weather, merely humans ignoring topography and weather for political and green reasons.

        As for ploughing downhill, that was an EU health and safety directive. So instead of ploughing along the contours (with a danger of the tractor toppling over) they ploughed downhill, with the inevitable result of soil washing into rivers, which ironically the EA would not dredge because of habitat reasons. You couldn’t make it up.

        Tonyb

      • Jim

        It’s been a very cool August, so if you are suggesting this is climate change, that surely contradicts the thrust of your arguments to TE?

        Downpours constantly happen, but an inquiry will determine whether the processes worked. For example: were rivers kept clear of sediment, ditches cleared, drains not clogged with debris, trees cleared from near bridges, etc.?

        Tonyb

  33. Jim, in general not. Anyway, what is missed is a discussion of the reasons for the retreat, i.e. of the SIE after 2007. See this blogpost https://www.climate-lab-book.ac.uk/2017/gmst-arctic-changes/ which suggests that a bigger part is played by internal variability. IMO in some sections of the report in question the attribution is a little bit too certain.

    • fc, in my opinion attribution is FAR too certain, and this should be one of the principal criticisms. See my comments above. Regards.

  34. In addition, AGW theory originally said the NAO/AO would evolve into a more positive mode, which would suggest a more zonal atmospheric circulation and a cold Arctic, while lower latitudes would be milder.

    I do not think this has happened.

  35. Pingback: US Climate Science Special Report, 2017 | Turbulent Eddies

  36. Dr. Curry, et al., I have composed an evidenced list of objections to the executive summary here.

    Synopsis:

    There are likely benefits of warming wrt agricultural productivity, energy use, human health, water resources, and natural ecosystems.

    Multi-decadal internal fluctuations are likely not negligible as the report implies.

    The 5C warming by the end of the century is unsupported by evidence and should be withdrawn.

    Global carbon dioxide emissions have decelerated since 2013 correlating with decelerating population trends.

    Graphical depictions of hindcast temperature correlations are misleading, particularly when models are tuned to past observations.

    In contradiction of the report, heat waves have decreased in the US for over a century.

    The report misplaces confidence in global extremes when most of the world lacks a persistent and consistent observational database from which to assess trends in extremes.

    The report contradicts the physics of models indicating a decrease of temperature variability in association with global warming.

    The report fails to note observations of accumulated cyclone energy, which do not support the claim of increased tropical cyclone energy with warming.

    I find a fair amount of confirmation bias, adherence to unsupported extremes, and misplaced confidence throughout.

    • Nice contribution. Fernandoleanme made the same observation differently upthread. Somewhere (I will have to search it out, don’t remember offhand) I wrote a brief comparison of the AR4 scenarios (with IIRC A2 or A1B being BAU) to the four AR5 RCPs, with RCP8.5 implied as BAU because of the way it is used in AR5 and elsewhere. The point of the writing was that AR4 BAU is somewhere between RCP4.5 and RCP6.0, and nowhere near RCP8.5. And IMO the switch was deliberate, to make AR4 incomparable to AR5 in order to hide AR4’s failed predictions and make AR5 intentionally scarier. All well hidden in prior-year working committee documents just ‘accepted’ by AR4 and AR5 without illumination.
      Found it before posting. It was in the first draft of essay Hiding the Pause for ebook Blowing Smoke; those scenario paragraphs did not make the final draft because, albeit interesting and ‘correct’, they were only an aside, not central to the essay’s simple central thesis, which was proven just using visual comparisons of exactly comparable published figures from successive IPCC reports. I tried to keep obiter dicta out of the essays. This was judged an obiter dictum.

      • Above in a comment I mentioned van Vuuren et al (2011) “The representative concentration pathways: an overview” that describes the genesis of the RCPs.

        “The four selected RCPs were considered to be representative of the literature, and included one mitigation scenario leading to a very low forcing level (RCP2.6), two medium stabilization scenarios (RCP4.5/RCP6) and one very high baseline emission scenarios (RCP8.5). … The Fourth Assessment Report (AR4) identified only 6 scenarios that lead to forcing levels below 3 W/m2, but by now more than 20 scenarios in the literature lead to similar forcing levels as RCP2.6. RCP4.5 corresponds to the ‘category IV’ scenarios in AR4 (containing the far majority of the scenarios assessed in AR4, i.e. 118). The number of mitigation scenarios leading to 6 W/m2 in the literature is relatively low (around 10)—but at the same time many baseline scenarios (no climate policy) correspond to this forcing level. Finally, RCP8.5 leads to a forcing level near the 90th percentile for the baseline scenarios, but a recent literature review was still able to identify around 40 scenarios with a similar forcing level.”

      • Figure ES.3 exhibits observed annual CO2 emissions continuing to accelerate.

        Since 2013, though, CO2 emissions have decelerated:
        https://turbulenteddies.files.wordpress.com/2017/08/globalcarbonemissions21.png

        Now, they’ve decelerated over a few years before, but never with sustained growth at today’s demographic profile.

        The USGCRP should include the latest secular deceleration of CO2 emissions as part of the reality that emissions may continue to fall irrespective of advocated policy.
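
        TE’s deceleration claim is simple to check arithmetically. The emission figures below are rounded, illustrative values in the ballpark of published Global Carbon Project estimates, not authoritative data:

```python
# Year-over-year growth of global fossil-fuel CO2 emissions (GtCO2/yr).
# Values are rounded, illustrative figures roughly in line with Global
# Carbon Project estimates -- treat them as a sketch, not as data.
emissions = {
    2010: 33.1, 2011: 34.4, 2012: 34.9, 2013: 35.3,
    2014: 35.5, 2015: 35.5, 2016: 35.8,
}

years = sorted(emissions)
for y0, y1 in zip(years, years[1:]):
    growth = 100.0 * (emissions[y1] - emissions[y0]) / emissions[y0]
    print(f"{y0}->{y1}: {growth:+.1f}%/yr")

# Growth of a few percent per year early in the decade falls to roughly
# zero to one percent after 2013 -- a deceleration, though a few years
# is a short record.
```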

        BTW, is this paper intended to inform or to advocate? Mentions of the Paris Accord are clearly advocacy.

  37. Whoever is reviewing the precipitation section of the report will notice on pages 322 and 324 that it compares the current period for precipitation, 1986-2015, to the base period of 1901-1960. The base period of 1901-1960 included the severe droughts of the 1930s and the 1950s. They conveniently select a period of drought to be compared to a period not influenced by drought. Again, as with temperature, the period of 1961-1984 is eliminated from the analysis.

    On page 325 the graphs use the period 1958-2016. They also used the starting point of 1958 in the 3rd NCA report. In that report they noted it was the beginning of an upward trend in precipitation. Tony Heller has analyzed this starting point in the past and has shown that it was a low point in precipitation. Once again the 4th NCA report cherry-picks starting points and base periods to create large changes in precipitation and temperature.

    • To clarify, Tony Heller showed that the average percentage of days with more than 2 inches of rain at all US HCN stations was at its low point in 1958. That is why the NCA authors used this as a starting point for their analysis.

      • Steven Mosher

        HCN… oh Christ.
        Precipitation records. Very inhomogeneous.

        Never use HCN.

        For any reason. Go to the source data and do a proper job.

      • The HCN is part of a global effort to assemble a network of more or less reliable stations for weather and climate monitoring.

        http://cdiac.ornl.gov/epubs/ndp/ushcn/ushcn.html

        The real problem is extreme hydrological variability over millennia – of which the 20th century is a small and relatively balmy subset.

      • Mosher:

        Too funny.

        The NCA 3rd report states that they began the study at the beginning of a trend and ended the study at the end of a trend. NCA3 did not include any data from 1901-1957. I guess precipitation studies start in 1958. NCA4 also begins its study in 1958. Go figure.

        The US HCN data supports the fact that the NCA 3rd report began the study at the beginning of a trend (cherry-picked the starting point).

        The difference is that an independent private citizen shows all of the data prior to 1958, so that American citizens are informed of the historical climate record. Something you will not find in NCA3 or NCA4.

        The government and university climate scientists cherry-pick start dates and end dates, and eliminate decades of data. And we pay them with our tax dollars. Go figure.

        HCN… oh Christ.
        Precipitation records. Very inhomogeneous.
        Never use HCN.
        For any reason. Go to the source data and do a proper job.

        ???

        HCN is peer-reviewed, QC’d raw data.

        Precipitation data are likely not homogeneous because precipitation itself is not homogeneous. The rain shaft of a heavy thunderstorm can be a kilometer across or less, skewing local distributions.

        More problematic is all the missing data in the raw record, which is not correctable.

  38. Another red team target should be the fact that increased precipitation means increased negative feedback.

    • I find myself descending into the abyss of uncertainty.

      “increased precipitation means increased negative feedback”

      Most recently, NOAA and the Army Corps of Engineers (ACE) were unable to assess precipitation over a 6-month period related to Great Lakes water levels. All models, all forecasts, all advisories have been wrong for the 2017 Great Lakes water level assessments, according to the ACE Detroit District reports. Why is that? Could it be that the models are all wrong, despite NOAA’s and ACE’s trust in them? No one foresaw the water levels observed today just 6 months ago. Why is that? And then there are the ancient mariners of the Great Lakes, who say: “water levels go in cycles and… they don’t know.” History; experience; carefully programmed computer models; and still…? We still don’t know. All this money spent on predicting the future and… we don’t know.

  39. Throughout the report, there are numerous examples of a poor citation practice that is illustrated by the following example:

    «Description of evidence base The Key Finding and supporting text summarizes extensive evidence documented in the climate science literature and are similar to statements made in previous national (NCA3; Melillo et al., 2014) and international (IPCC 2013) assessments. The analyses of past trends and future projections in extreme events and the fact that models tend to underestimate the observed trends are also well substantiated through more recent peer-reviewed literature as well (Seneviratne et al. 2014; Arnell and Gosling 2016; Wuebbles et al. 2014; Kunkel and Frankson 2015; Easterling et al. 2016; Donat et al. 2016; Berghuijs et al. 2016; Fischer and Knutti 2016).»
- Page 59

    In the particular example quoted above, no fewer than 8 papers are listed, but the significant data, information or statements in these papers are not clearly identified or quoted. This kind of citation practice effectively minimizes transparency, utility, and scrutiny by the public. It is also reasonable to assume that this kind of citation practice complicates the job for reviewers and that reviewers, in practice, may not be able to fully scrutinize the report.

    Under the Information Quality Act, NOAA is supposed to: «(A) issue its own guidelines ensuring and maximizing the quality, objectivity, utility, and integrity of information disseminated by the agency;»

    Utility is defined in the following way:
    «Utility refers to the usefulness of the information to its intended users, including the public. In assessing the usefulness of information that the agency disseminates to the public, NOAA considers the uses of the information not only from its own perspective but also from the perspective of the public. As a result, when transparency of information is relevant for assessing the information’s usefulness from the public’s perspective, NOAA takes care to ensure that transparency has been addressed in its review of the information.»

    In this case, transparency of information is clearly important from the public’s perspective. Hence, the citation practice used in this report does not seem to be fully in accordance with the Information Quality Act.

    Obviously and unfortunately, it can be argued that this citation practice is widely used. The same practice is also used by the IPCC. That may be true, but that argument does not help to provide the transparency that the public can reasonably expect from a federal agency.

    As an aside, the finding of the following paper may serve as one documented illustration of the risks of poor citation practices:
    «Analyses showed that 48.9% of publications misinterpreted the conclusions of cited sources, with a bias towards claiming jellyfish populations are increasing, with a single review having the most influence on the network. Collectively, these disparities resulted in a network based on unsubstantiated statements and citation threads. As a community, we must ensure our statements about scientific findings in general, are accurately substantiated and carefully communicated such that incorrect perceptions, as in the case of jellyfish blooms, do not develop in the absence of rigorous testing.»
    Flawed citation practices facilitate the unsubstantiated perception of a global trend toward increased jellyfish blooms (Marina Sanz-Martín et al. 2016)

    • I think it is somewhat worse than you say. I happened to look at the citations for Chpt 1 Key Findings 5 (page 62): “Natural variability, including El Niño events and other recurring patterns of ocean–atmosphere interactions, impact temperature and precipitation, especially regionally, over months to years. The global influence of natural variability, however, is limited to a small fraction of observed climate trends over decades.”

      In describing the evidence base they state: “The role of natural variability in climate trends has been extensively discussed in the peer-reviewed literature (e.g., Karl et al. 2015; Rahmstorf et al. 2015; Lewandowsky et al. 2016; Mears and Wentz 2016; Trenberth et al. 2014; Santer et al. 2017a,b)”

      Karl et al. 2015 deals with a reanalysis of past temperatures with no discussion of natural variability; Rahmstorf et al. 2015 is a modelling study based on reconstructions that shows results that empirical studies have called into doubt; Lewandowsky et al. 2016 does some trend analysis of the pause and declares it natural, and tests experts’ ability to recognise pauses; Mears and Wentz 2016 deals with satellite period recalibrations and is silent on natural variability; Trenberth et al. 2014 deals with measurement of precipitation and the difficulties of separating out natural variation; and Santer et al. 2017a,b deal with the fit between satellite troposphere temps and models and trends in them, suggesting respectively that the models only run about 1.7 times as hot and that recent trends had not been flat.

      The overwhelming sense from this selection of literature is that it shows recent temperatures are not anomalous when compared with models, rather than directly addressing natural variability. So it is not an extensive discussion of the role of natural variability in climate trends (which I would have focused largely on the variability of pre-mid-20th-century data).

  40. Chapter 6 Temperature Changes in the US.
    Key findings.

    4. Extreme temperatures in the contiguous United States are projected to increase even more than average temperatures.

    This claim is contradicted on the next page, 6.1.1 para. 4, where elaboration on this point reveals “In most regions average minimum temperature increased at a slightly higher rate than average maximum temperature.” This means that since the start of recorded temperatures in the US (and also in the world), winter extremes have become milder and killing frosts less frequent to a larger extent than summer heat waves have become more frequent. In short, this means a less extreme (not more extreme) trend in climate.

    This narrowing of the diurnal temperature range can only logically result in the average temperature increasing more than the extremes, the exact opposite of the stated key point headline.

    • They imply that the distribution widens, which does allow their statement to be true for the extreme ends that represent the trend in annual maxima. So they are comparing Figures 6.7 and 6.8 with this statement.

      • That makes the report egregiously in error, because theory indicates a decrease of temperature variability with AGW:

        1979, Manabe and Wetherald: “The reduction of meridional temperature gradient appears to reduce not only the eddy kinetic energy, but also the variance of temperature in the lower model troposphere.”

        1995, in “Climate Change 1995: The Science of Climate Change”, the IPCC cites numerous examples of decreasing variability with increasing greenhouse gases.

        2013, Huntingford et al. also demonstrated the decrease of temperature variability from an increase in greenhouse-gas-induced warming in CMIP5 models.

        2014, Screen found that Arctic amplification decreases temperature variance in northern mid- to high-latitudes.

        2015, similar to Manabe and Wetherald, Schneider, Bischoff and Plotka found: “Scaling arguments show that the reduction of meridional potential temperature gradients that accompanies polar amplification of global warming leads to a reduction of the synoptic temperature variance near the surface.”

        And US observations indicate a decrease of temperature variability:
        https://turbulenteddies.files.wordpress.com/2017/03/figure3.png

        The report should correct this error.

      • Just from the shifting of a bell curve, the probability of hot extremes increases rapidly, so I am not surprised that the annual maximum temperature will shift faster than the mean. With a shift of one standard deviation, one in 100 year events become one in 10 year events. Those records will fall faster.
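
        Jim D’s return-period arithmetic checks out for a pure mean shift of a normal distribution. A minimal sketch (assuming Gaussian annual extremes purely for illustration; requires scipy):

```python
# Shift a normal distribution of annual extremes by one standard
# deviation and see what happens to the old 1-in-100-year threshold.
from scipy.stats import norm

threshold = norm.ppf(0.99)        # old 1-in-100-yr level, in sigma units
shift = 1.0                       # assumed mean warming of one sigma

p_new = 1.0 - norm.cdf(threshold - shift)
print(f"threshold: {threshold:.2f} sigma")
print(f"new exceedance probability: {p_new:.3f}")
print(f"new return period: ~{1.0 / p_new:.0f} years")   # ~11 years

# A one-sigma mean shift alone turns a 1-in-100-yr event into roughly a
# 1-in-10-yr event, with the variance held fixed.
```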

      • You can compare their Tables 6.4 and 6.5 to see what their statement is about. If you have quantifications of that type, maybe you need to point to them. Have they compared the change in the annual mean to the change in the annual maximum, or are you talking about apples and oranges here?

      • Just from the shifting of a bell curve, the probability of hot extremes increases rapidly, so I am not surprised that the annual maximum temperature will shift faster than the mean.

        If temperature variance remained constant, one would expect high temperature extremes to increase at the same rate that mean temperatures increased.

        But that is not what’s observed over the longer term past, nor modeled by physics of the future.

        In the US, maximum temperature variance has decreased, remarkably at a rate greater than that at which the mean has increased!

        It’s not just this report that needs correcting.
        The IPCC considers changes in mean and variance but remarkably omits the case, case X below, that is both observed and modeled:
        http://climatewatcher.webs.com/IPCC_EXTREMES-3.png
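
        TE’s “case X” is easy to illustrate with the same Gaussian toy model. The mean-shift and variance numbers below are arbitrary choices, picked only to show that a variance reduction can outweigh a mean increase in the hot tail:

```python
# "Case X": mean warms, but variance shrinks enough that hot extremes
# become rarer. Numbers are arbitrary illustrative choices, in units of
# the original distribution's standard deviation. Requires scipy.
from scipy.stats import norm

threshold = norm.ppf(0.99)     # old 1-in-100-yr hot threshold
mean_shift = 0.3               # modest warming of the mean
sigma_new = 0.7                # variance reduction: sigma 1.0 -> 0.7

p_old = 1.0 - norm.cdf(threshold)
p_new = 1.0 - norm.cdf((threshold - mean_shift) / sigma_new)
print(f"P(exceed) before: {p_old:.4f}")   # 0.0100
print(f"P(exceed) after:  {p_new:.4f}")   # ~0.0019

# The old threshold now sits ~2.9 of the new sigmas above the new mean,
# so hot exceedances become rarer despite the warming of the mean.
```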

      • No, you can also see that from the report on the trend so far, Table 6.1. Looks like you are not checking what they actually say, and I have to report it to you.

      • No, you can also see that from the report on the trend so far, Table 6.1. Looks like you are not checking what they actually say, and I have to report it to you.

        Figure 6.1 represents changes in the mean, not changes in the extremes.

        Where temperatures have been measured consistently and persistently for more than a century, which is mostly the US, extreme temperatures have decreased because temperature variance has decreased more than mean temperatures have increased.

        That temperature variance has decreased is predicted by climate models and physical considerations.

        So by observation and theory, the USGCRP report is in error.

      • Maybe you didn’t have time to check 6.4 and 6.5 yet. What they say about the future lines up with those tables. If you have different numbers for that, you need to find them.

      • Jim D, considering that the US and world temperature records show a net decrease in extreme temperatures, it would behoove the report to explain why models project this longstanding trend to reverse.

        Not only does past observation contradict the report’s projection, but so do the dynamics of greenhouse gases, which are hypothesized to limit both incoming and outgoing radiation at the top of the atmosphere. The gases’ greater opacity to outgoing radiation should impede nighttime and winter cooling, but also (to a lesser extent) impede warming on clear summer days. Record high summer temperatures would only be achieved by starting from a higher daily low in the morning.

        The report’s projection of a steeper trend in extreme temperatures than in average temperatures is scientifically unfounded. Further, the point that winters should become less extreme in the north, with fewer killing frosts in the south, seems intentionally obscured. For example, figure 6.5 shows the dramatic increase in historical record high temperatures. These are shown in red, easily misleading one to think they are hotter summer days instead of being primarily unseasonably mild winter days. The chapter does reveal that the 1930s still far outstrip the current US climate in record highs and heat waves.

        Jim D, do you know why the report’s model projections in table 6.4 and 6.5 have the lowest warming in the US southeast? If land temperatures are hypothesized to lead ocean temperature due to lower heat content, and the US’s prevailing wind is west to east, shouldn’t the west have the lowest increase? It seems the models are somewhat extrapolating the pattern of the US record rather than projecting hypothesized physics.

      • It is interesting why extremes get hotter faster than the mean in the future. It may be related to extended drought and heatwave periods because typically extremes are associated with those. It is commonly suggested that this will happen in addition to increased extreme rainfall events. We have just reached 1 C of warming so we don’t have a long enough record to see that signal, but when we reach 4 C, it should be obvious enough.

        As for the other question, the SE is the moistest part of the US. This tends to protect it more from the extreme temperatures, even if the heat index can be really uncomfortable. The western US tends to be drier, except the NW.

      • Jim D, I think all must agree that there are enough conflicting climatic effects that one could give a rationalization for just about any pattern. However, figure 6.1 shows a trend map of the US, and it seems to contradict your explanation. The entire east coast, from Florida to Maine, has warmed more than the drier, less marine-affected, central southern states. My explanation would be that the east coast areas have seen the most population growth and development, making their stations more prone to picking up micro-climate and local land-effect warming (which are proven to decrease diurnal temperature range [Oke]) rather than greenhouse warming. Care to take another shot?

      • The fastest warming areas of the world like central Russia and Canada have little population, so that removes that as a factor. China has not been warming faster than average despite their rapid growth of urbanization. You should look at a global map of warming rate, and you would see no correlation to population growth. On the other hand some of the least warming areas are populated and this may be attributed to pollution aerosols.

      • Don’t you think we ought to solve the attribution problem for a measly 1C of warming before we go throwing around 4C of warming? (You don’t get there by fitting the Keeling curve to GISS and conveniently ignoring the solar data…)

      • Maybe you didn’t have time to check 6.4 and 6.5 yet. What they say about the future lines up with those tables. If you have different numbers for that, you need to find them.

        Figure 6.4 depicts the long term decrease in ‘Warm Spells’ and the ‘Heat Wave Magnitude Index’, which is consistent with the long term decrease of hot days and hot months in the US. So Figure 6.4 contradicts the report’s claim of increasing extreme temperatures.

        Figure 6.5 can be misleading – it depicts the ratio of high to low extremes, not the frequency of either high or low extremes. Imagine that long ago there were 10 low temp records and 10 high temp records. Then imagine that recently there were 0 low temp records and 1 high temp record. The long term US data, which I pointed out above, are consistent with this. There are fewer high temperature extremes and many fewer low temperature extremes. The reduction of cold extremes is larger than that of hot extremes (because both an increase of the mean and a reduction of variance tend to reduce low temperature extremes, while only a reduction of variance reduces high temperature extremes). But the data are clear – reduced extreme temperatures (a toy version of this record counting is sketched below).

        So, again, the report is in error both with long term observation and theory.
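
        As promised, a toy version of TE’s record-counting example, with his hypothetical counts (the zero low-temperature records replaced by 0.1 so the ratio stays defined):

```python
# TE's toy counts: the ratio of record highs to record lows can rise
# sharply even while both kinds of record become rarer. The 0 low
# records is replaced by 0.1 so the ratio stays defined.
past = {"highs": 10.0, "lows": 10.0}
recent = {"highs": 1.0, "lows": 0.1}

for label, counts in (("past", past), ("recent", recent)):
    ratio = counts["highs"] / counts["lows"]
    total = counts["highs"] + counts["lows"]
    print(f"{label}: high/low ratio = {ratio:.0f}, total = {total:.1f}")

# The ratio goes from 1 to 10 while total records collapse from 20 to
# ~1.1 -- so a ratio plot alone cannot show whether extremes increased.
```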

      • TE, I was talking about Tables 6.4 and 6.5 which back up the statement they made. Ron Graf, they comment on the numbers they present, not anything beyond that. As I keep saying, if there are other numbers from the CMIP5 or other projections, you need to present them, but so far despite all the outcry as though they exist, I haven’t seen them.

      • fonzie, and which way do you think the solar trend has been since 1950? Negative if sunspots are anything to go by.

      • Jim D: “As I keep saying, if there are other numbers from the CMIP5 or other projections, you need to present them…”

        Jim, my point is that the models to date show no skill in predicting the future. So why give them weight? In the case of the southeast cooling, I found today that I was not the first to ask why.

        The entire southeast is cooling over the past 120 years, and not one single climate model in every run we have ever checked has been able to reproduce that, not once out of 50 some odd. – Dr. John Christy’s sworn testimony to the House Committee on Government Reform July 20, 2006.

        So, for the models not to be predicting a cooler southeast in the future just demonstrates that the model skill is really just the investigators’ expectations being projected onto their prop (which is quite alarming). I do not have my own projections because I do not claim to have created a crystal ball with which I could persuade. The observed cooling of the southeastern US was not predicted and has no good physical explanation beyond natural variability, what used to be called climate change (before that label meant human-caused).

      • You make a statement that is equivalent to having to prove the future before you believe models. As it is, measured warming rates correspond to an effective 2.3 C per doubling taking the CO2 and temperature record since 1950 and this is in line with what the science and, yes, even models, would predict. Only skeptics are surprised that the warming has been a degree for half a doubling.
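
        The arithmetic behind a figure like Jim D’s 2.3 C per doubling is straightforward. The CO2 and temperature inputs below are approximate round numbers (roughly the Mauna Loa CO2 change and observed warming since 1950), used only to show the calculation:

```python
# Effective sensitivity: S_eff = dT / log2(C_end / C_start).
# Inputs are approximate round figures, not authoritative data, and the
# method folds every non-CO2 forcing into the CO2 term.
import math

co2_1950, co2_now = 310.0, 405.0   # ppm, approximate
dT = 0.9                           # K of warming since 1950, approximate

doublings = math.log2(co2_now / co2_1950)
s_eff = dT / doublings
print(f"{doublings:.2f} doublings -> S_eff ~ {s_eff:.1f} K per doubling")

# ~0.39 doublings and ~0.9 K give roughly 2.3 K per doubling; with
# slightly different inputs the number moves, which is the crux of the
# attribution dispute upthread.
```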

      • The entire southeast is cooling over the past 120 years, and not one single climate model in every run we have ever checked has been able to reproduce that, not once out of 50 some odd.

        This is unremarkable:

        http://news.nationalgeographic.com/news/2014/05/140509-global-warming-hole-southeast-climate-change-science/#/79586.jpg

      • JCH

        Much has been made by some that the Medieval Warm Period is not a true global phenomenon, since there is an absence of evidence of homogeneous warming in all corners of the globe. With this evidence of cooling in the Southeast US, I wonder if the term Global Warming should be amended by using an asterisk after Warming, such as was suggested when Roger Maris broke Babe Ruth’s home run record. Or maybe something like Global Warming (Well, Almost).

        Having a map like this begs the obvious question: how many other regions show similar cooling for the last 117 years?

      • Cersco kid

        The answer is one third.

        I did the work on this back in 2010

        https://wattsupwiththat.com/2010/09/04/in-search-of-cooling-trends/

        I sent the work to Muller some time before he published BEST, and he agreed that some one third of global stations showed a cooling trend.

        Mosh will be along soon to give you all sorts of caveats. I have suggested to him that it would be very interesting to see his data showing any stations that had been cooling for some time but I don’t think he ever has.

        As I posted here a year or so ago, CET has been cooling since around 2000, but with some notably warm years mixed in.

        I don’t know why climate researchers don’t look at the nuances of the climate instead of averaging everything; then they would see these trends within a trend.

        Tonyb

      • In the case of the SE USA, people living there would have noticed a large increase in haze as automobile traffic increased in the 50’s, along with the oil industry there. The humid environment amplifies the effect of aerosols. It’s an exceptional region, with one of the largest cooling trends during the so-called “global dimming” period of the 60’s, which may be largely attributed to manmade aerosols. The downstream west Atlantic also cooled, perhaps because it was downstream of the pollution rather than due to an AMO cycle, because it cooled less than the eastern US.

      • Jim, I appreciate that you have discarded your earlier hypothesis that the greater humidity in the southeast was the cause of its cooling. Let’s explore your current explanation: sun-blocking pollution aerosols. The CSSR also tenders this possibility (along with every other on p272 ln4). But you point out something the report carefully avoided: the prevailing wind cooled the ocean from this land cool spot. This is not supposed to happen. As the report explains, global sea surface temperature rose less than land temperature because of greater heat content. [p268 ln 35] But not only this: the air generally cooled as it swept across the entire North American continent to get to the southeast, and then it even cooled the Atlantic, which was supposed to be cooler than land. I am confused as to how that happened and am even more interested in your reply.

        The CSSR acknowledges that no model predicted southeast cooling. [p272 ln3] Certainly all models are fed RCPs that include the correct aerosols. The emissions and land use of the southeast US are certainly well understood. The “warming hole” seems also to be a hole in the “settled science.”

      • OK, you seem interested in how humid environments tend to amplify aerosol cooling effects and how that affected the temperature of the downstream Atlantic. Which part of that should I explain further? Let’s look at the 1960’s anomaly relative to the 20th century (which is also cooler if you just use 1901-1950 as a baseline). The Atlantic pattern looks like a downstream continuation of the American cool anomaly. I say they are related.
        https://data.giss.nasa.gov/tmp/gistemp/NMAPS/tmp_GHCN_GISS_ERSSTv5_1200km_Anom112_1961_1970_1901_2000_100__180_90_0__2_/amaps.png
        Yes, the oceans warm less quickly in general under global forcing, but this is a case of a localized forced cooling due to blocked solar radiation, kind of like volcanoes cool the ocean too. Are you surprised that a cloud of aerosols can cool the ocean?

      • And, yes, while there was a temporary cooling in the SE in the 60’s, now it is participating in the warming at the same rate as the rest of the US, so I don’t know what Christy is talking about.
        https://data.giss.nasa.gov/tmp/gistemp/NMAPS/tmp_GHCN_GISS_ERSSTv5_1200km_Anom112_2007_2016_1951_1980_100__180_90_0__2_/amaps.png

      • Another panelist refuted Dr. Christy in this way –
        “Yes. Looking at one very small location or region to try to infer, climate models are not capable of resolving at the level of one city or one small region at this point, so the issue of one small region in California disagreeing with some inference about what–climate models talk about things on larger scales, continental, southeast United States, that kind of a scale it can talk about. It can’t talk about at the county level or the sub-State level. I mean, that is not what we are able to do, so I don’t think that we can disprove climate model simulations by looking at temperature records in one location. That is basically what I would say. So I don’t think that those kinds of studies refute climate model predictions in any way.” – This was Dr. Curry

      • All skeptics have to do to establish the MWP is do the work. For some reason they don’t. Have no idea why. Probably the same reason nobody does the work for “SE cooling is meaningful” nonsense.

        It was going away in 2012; by now, who cares. It’s not a big issue.

      • ‘If it disagrees with experiment (experience) its WRAWNG.’

      • Steven Mosher

        beth;
        There was Feynman in Theory (as he explains science to freshmen) and
        Feynman in PRACTICE… what he actually did as an expert.

        “Well, right from the beginning it was apparent that Ray was measuring fewer neutrino events than I had predicted. He came to Caltech in early 1968 to spend a week with me while he and I wrote our papers up describing for me a refined calculation, for him the first measurement of the rate in his tank. It was clear that the rate that he was getting was a factor of three smaller than I was predicting, and that was a very serious problem.
        There was a famous meeting at Caltech, just a few physicists—Dick Feynman, Murray Gell-Mann, Willie Fowler, Bob Christie, and a couple of others—in a small meeting room, where Ray presented his results and I presented my calculations of what he should have measured. There was some discussion of it afterwards, and it was pretty inconclusive. There was a discrepancy; it looked like one of us was wrong. I was very visibly depressed, I guess, and Dick Feynman asked me after the meeting if I would like to go for a walk. We just went for a walk, and he talked to me about inconsequential things, personal things, which was very unusual for him, to spend his time in quite idle conversation; it never happened to me in the many years that I knew him that he did that before or afterwards. And only toward the end of the walk, which lasted over an hour, he told me, “Look, I saw that after this talk you were depressed, and I just wanted to tell you that I don’t think you have any reason to be depressed. We’ve heard what you did, and nobody’s found anything wrong with your calculations. I don’t know why Davis’s result doesn’t agree with your calculations, but you shouldn’t be discouraged, because maybe you’ve done something important, we don’t know. I don’t know what the explanation is, but you shouldn’t feel discouraged.”
        For me I think of all of the walks or conversations I have had in my professional life, that was the most important, because I was a young man without tenure, and [while] I’d done many calculations by that time, this was the one that was most visible and people had paid the most attention to, and it looked like it was wrong. I really was feeling very, very, very discouraged. And for a person whom I so enormously admired, Dick Feynman, to tell me “You haven’t done anything that’s visibly wrong, maybe you’ve done something important”—for me that was a huge boost.”

        In the end the Theory proved correct, despite the apparent contradiction with experiments. It took decades to untangle.

      • The report says: “The cause of this “warming hole,” or lack of long-term warming remains uncertain…” [p 272 ln 4].

        Jim, are you suggesting the CSSR is in need of your expertise? Remember, none of the IPCC’s 50+ CMIP models can be as skilled as your physics if your analysis is correct. That might be discouraging considering your unblinking faith in anything the models or their creators say. Only one of you must be right, which I imagine is a dilemma for you. (Think on that).

        JCH, the warming hole is a big enough deal for the CSSR to cite several studies that attempted, inconclusively, to explain it. In science, when you dismiss evidence that contradicts your hypothesis as “unimportant”, that is, well – bad science. What do you think of Jim’s implied claim that climate models do not give enough attention to aerosol forcing? I respect your knowledge on this.

        JCH, I agree that the warming hole is superfluous. That is because I think the US climate history is superfluous to projecting US climate future. In a chaotic global system the most meaningful statistics are global ones. And that goes for all regions. Currently the models do not show skill in reproducing the insignificant warming of the Antarctic. This is another mystery. But it would be scientifically unfounded to program the models to project less warming in this area in the future relative to the world without a proven physical basis. This seems like what they are doing by projecting less warming in the SE USA.

        Jim, regarding your quoting of Dr. Curry’s testimony: for those who are unaware, in 2006 she was a rising star in climate science, proposing that climate change would lead to increased hurricane frequency and intensity. She was at the hearing fresh off Hurricane Katrina to warn the Congress that it will be worse than we thought. Dr. Curry’s realization over the following years of the institutional biases that were driving her and her colleagues should be hailed as a model of scientific ethics. And her ability to follow her conscience into the gauntlet of hazing and disdain from “the team” should be lauded as an example of true scientific heroism.

        Jim, if you read on in the 2006 hearing you will have seen Dr. Pielke and Dr. Christy predicting that focusing on the science debate would not be fruitful, and that in ten years the debate would not have changed. They instead suggested focusing on what is now called “no regrets” policies that are beneficial regardless of the attribution of future climate. The premise that the CSSR report, resting on a subjective collection of questionable or anecdotal facts, is productive when the need for sustainable forms of energy and for building coastal resiliency is generally accepted flies in the face of reason. It just causes division and puts the coin of science at risk.

      • That “warming hole” is not the SE USA, so you have now deftly or clumsily changed the subject to the non-warming area thousands of miles away off Greenland. This is an area whose temperature is sustained by fresh meltwater from Greenland, and as such it is not warming up, due to a continuous supply. It certainly is a valid subject for the CSSR, as it may portend an AMOC slowdown if this area expands any more, as recent tendencies have shown. Note also that Hansen’s proposed meltwater pulse episodes from Greenland simulate such a cold core as a first stage.
        Also if you want to say that Dr. Curry’s testimony doesn’t pertain, neither does Christy’s. There has been a further IPCC round since 2006, and you need to find something more recent. He doesn’t bring this subject up anymore, does he?

      • The USA color map is mottled. Perhaps Mississippi and Alabama are the only places where the colored segments were not predicted by climate models, but I suspect there are lots of colored segments on the map that are not predicted by climate models. Lol, I also suspect Feynman might want to know why the always-wrong spoon benders are obsessed with not seeing the overwhelming lack of neutral/negative numbers versus the all-positive colors completely dominating the map. If this was sports, the mothers would be cryin’ for the mercy rule.

        http://i.imgur.com/TPu8IIo.png

      • Jim D, speaking of “deftly or clumsily changed the subject”, I never mentioned Greenland or the AMOC or other warming holes. Back to the “warming hole” in the SE USA, a term coined for it by Folland 2001: the CSSR is quite clear that it is an open question under extensive investigation.

        Some studies conclude that changes in anthropogenic aerosols have played a crucial role (e.g., Leibensperger et al. 2012; Yu et al. 2014), whereas other studies infer a possible large role for atmospheric circulation (Abatzoglou et al. 2007), internal climate variability (e.g., Meehl et al. 2012; Knutson et al. 2013 [1]), and changes in land use (e.g., Goldstein et al. 2009; Xu et al. 2015).

        One should notice that the CSSR does not bite on the attractive explanation that the cooling is from anthropogenic aerosols, probably for the same reasons JCH declined my invitation.

        1) Models already use anthropogenic aerosols for the heavy lifting of pumping up climate sensitivity by simulated masking of GHG effects, with great exaggeration according to both skeptics and the non-skeptic aerosol expert Dr. Bjorn Stevens of the Max Planck Institute. This is evidenced in the large excursions in modeled global temperature after volcanic eruptions, which are absent from the observational record.

        2) If anthropogenic aerosols were so potent as to erase 120 years of anthropogenic warming, well — problem solved.

        And JCH, Christy did not give up his curiosity about the warming hole. He studied it and published a paper on it in 2015. And although it is the most recent investigation, the CSSR neglects to include it among the nine others it cites. Christy’s hypothesis is that the warming hole is an example of the temperature record being biased by local surface land use and micro-climate, which means a significant amount of the land station record is really just warming near the thermometers, not in the larger atmosphere.

        Christy’s hypothesis is supported by the UAH satellite temperature record of the lower atmosphere since 1979, which has been diverging from the station record. The CSSR isolates the UAH record in dismissive fashion by stating:

        The UAH record likely has a lower trend because it differs from the other satellite products in treatment of target temperatures from the NOAA-9 satellite as well as in the correction for diurnal drift (Po-Chedley et al. 2015).

        [p 269 ln 27]

        This seems to be another way of saying, “We know why UAH is lower. Move along.” I suppose it would be too messy to tell the story of UAH and RSS. I will let someone else tell it. I would suggest the CSSR contact Roy Spencer.

      • The version of the CSSR that I have does not mention the warming hole in the SE USA context, only the Greenland one. Yes, there were not enough observations in the 60’s and 70’s to say what role increasing aerosols had when it temporarily cooled in that region before warming resumed, but similar things have happened more recently in China for example. Plus, as I plotted above, the warming in the SE has matched the rest of the US since the 1951-1980 baseline, so is it an issue at all? The aerosols are still there and not changing, but the warming is occurring anyway because of the GHG trend dominance since 1950. When dealing with forcing factors, it is their trends that matter.

      • > Christy’s hypothesis is that its an example of temperature record being biased by local surface land use and micro-climate, which means there is a significant amount of the land station record that is really just warming near the thermometers, not in the larger atmosphere.

        A quote from JohnC’s mouth would be nice, RonG, unless you’re happy with suggesting that Alabama did not warm as much as elsewhere in America because it didn’t build as many buildings as elsewhere.

        I would add this quote to my Contrarian Matrix and even add your name in my Colophon.

      • “Christy’s hypothesis is supported by the UAH satellite temperature record of the lower atmosphere since 1979, which has been diverging from the station record. ”

        UAH Tropospheric temps are also diverging from radiosonde data … since ~2000,
        when the new AMSU sat took over.

        http://postmyimage.com/img2/792_UAHRatpacvalidation2.png

      • UAH post-2000 trends had a major downward adjustment with UAH6 to match RSS, but since then RSS has had a major upward adjustment, more closely matching the old UAH5, so I think they are still diddling about, and I would hold off on any conclusions from satellite data.

      • Willard: “A quote from JohnC’s mouth would be nice…”

        Streets, parking lots and rooftops absorb heat during the day and release it into the atmosphere at night, also causing turbulence. Irrigation increases dry soil’s ability to hold heat and releases a powerful greenhouse gas (water vapor) into the lowest levels of the atmosphere over dry and desert areas. That’s the short list.

        When the cool layer of air near the surface is disturbed, warmer air aloft is drawn down to the surface.

        All of those cause real changes in the local climate, raising local surface temperatures, especially at night, by amounts large enough to be noticed both by weather station thermometers and by people living in some of those areas.

        But none of those changes has anything to do with widespread climate change in the deep atmosphere over large areas of the globe, such as might be seen if caused by increased concentrations of carbon dioxide in the atmosphere.

        https://www.sciencedaily.com/releases/2016/05/160502111616.htm

      • You may need to work a bit harder than that, RonG. The first paragraph rehearses “but UHI”, while the last amounts to saying “because UHI, let’s go satellites-only.”

        I’ve read that (science by) press release about daily highs. I also read the source material, which proves the article you quote is a copy-paste of the press release. I also read his Congress testimony. I still can’t pinpoint what you call “Christy’s hypothesis,” nor your extrapolation from it.

        Let me try myself, using the testimony:

        (1) modulz don’t represent real-world bulk temperature trends

        (2) modulz perform best when not including extra GHGs

        (3) modulz are showing about twice the sensitivity to GHGs than calculations based on real, empirical data

        The first claim rests on MMH10, which opens a can of worms auditors shut a while ago.

        The second claim rests on WCA16, which deserves due diligence. Interestingly, Christy’s testimony doesn’t mention whether it has been “McKitricked”, if you allow me the pun.

        The third claim stretches the bounds of justified disingenuousness a bit too much to my taste.

        You’ll note that none of these claims imply anything regarding UHI.

        I duly submit you have (2) in mind.

        Am I correct?

      • Willard, if you have trouble following me please remember many readers here don’t know Willardeze, including myself.

        The CSSR Chapter 6, which is the only chapter I have read so far, does not represent the spectrum of scientific facts or opinions on climate trends and projections. It is biased in every fashion, including its emphasized key findings, which are non-informative beyond what anyone has seen in the news: the climate is warming.

        Key finding 1 states that the satellite record is consistent with the ground record, ignoring the fact that both independent satellite groups, along with radiosondes (balloons), had been diverging from the ground record trend until one of the satellite groups (RSS) decided to abandon the newest satellite in favor of an older one (in a decaying orbit) because it gives a higher temperature. I don’t know who is correct, UAH or RSS, but that issue should not be papered over.

        Key finding 1 states that the current decade is the warmest in 1500 years. I think that after the MBH98/99 affair (use of contaminated proxies, proprietary analysis techniques giving 30x weight to “conforming” proxies, truncation of those same proxies where they were non-conforming, and sneaky substitution of the thermometer record in place of present-day proxy non-performers, all sailing through peer review and shrugged off after exposure by citizen-based scrutiny), there needs to be a footnote there. A few Climate Audit articles on tree rings, including the most recent one, should be required reading before any “medium confidence” about the hockey stick shaft (paleoclimate prior to 1900).

        Key finding 3, with its inflated model projections, should be replaced with the common-sense statement that we know, within a broad 300% range of error (under scientific dispute), that warming will continue as trended to present if fossil fuel use is not systematically replaced over the next century.

        Key finding 4: “Extreme temperatures in the contiguous United States are projected to increase even more than average temperatures.” This is completely without scientific basis. I just read Thorne (2016), and surprisingly there has been no global trend in diurnal temperature range (DTR) since 1980. I say surprisingly because there is a robust narrowing of DTR from 1950-1980 or 1900-1980. If the greenhouse effect were causing the warming, physics would predict that impeding outgoing nighttime cooling would raise the low temperature more than the high. Thorne does not dare come out and say this, but comes close.

        The nature of DTR variability is important from a theoretical perspective for myriad reasons, including for understanding microclimate impacts, changes in stratification propensity, and the nature of changes within the deeper boundary layer [e.g., Christy et al., 2009; Pielke and Matsui, 2005; Zhou and Ren, 2011; Parker, 2006; Steeneveld et al., 2011; McNider et al., 2012], and potentially as a determinant between forcings that have different short wave and long wave radiative fingerprints but may otherwise be similar [e.g., Jackson and Forster, 2013; Wang and Dickinson, 2013].

        http://onlinelibrary.wiley.com/doi/10.1002/2015JD024583/full

      • Beth,

        Your 60s video here: https://judithcurry.com/2017/08/20/reviewing-the-climate-science-special-report/#comment-857095, is about science. It can’t be applied to ideological and religious dogma, like ‘catastrophic/dangerous climate change’, unfortunately. :(

      • Beth and Peter, you are absolutely on point that the traditional orthodoxy of validating a hypothesis (guess) against observation of nature (data) has seemingly been given a dispensation by climate science. Instead, validation is being performed by models that have not been validated themselves. We frequently see observations coming under suspicion when found in conflict with the hypothesis, instead of the other way around. The moral of The Emperor’s New Clothes is for all time, wherever there is power. It’s part of the basis for groupthink.

        Dr. Curry and others who question many of the politically popular climate science conclusions do so through the orthodoxy of scientific ethics. And this is why Dr. Curry testified before Congress this year advising that we spend more money on gathering good data from nature (not models).

        Thorne (2016) calls for recovery and digitization efforts on worldwide daily maximum and minimum temperature data in order to improve resolution of trends in diurnal temperature range. If the same is done for degree of cloudiness, precipitation, wind and other metrics, fingerprints can be developed that can perhaps accurately identify CO2-caused climate effects separately from solar, ocean current, cloud and aerosol effects, as well as their feedback interactions.

        For the greatest improvement in the advancement of accurate climate science, the investigation must be strictly protected from political coercion as well as policed against cronyism.

        Dr. Curry and many other scientists have been targeted by activists, politicians and even colleagues, all fanning each other’s flames. Michael E. Mann called Dr. Curry a “climate denier” in his official statement for his congressional testimony this year. When Dr. Curry, sitting on the same panel of witnesses, pointed out his statement to him, he had the audacity to deny it. The chairman had to read back his opening statement. The slur was so much a part of Mann’s everyday lexicon that he apparently used it in writing without thought or consideration.

        Congressman Raul Grijalva in 2015 had Dr. Curry and other “skeptical” scientists investigated for ties to fossil-fuel interests, having their employers interrogated and subpoenaed for documents. This directly led one of Grijalva’s targets, Roger Pielke Jr., to leave climate science, although there was absolutely no finding of conflicts. He simply deemed it too dangerous for his career and young family to stay in the field.

        Had funding from outside interests been found for any of the targeted scientists, we know the likely result would have been a very visible destruction of their careers and lives. For example, Senator Sheldon Whitehouse has been calling for RICO prosecution of any “merchants of doubt.”

        On the other side, scientists who spout the party line can pocket millions in public grants, like Dr. Jagadish Shukla, who employed his family members and funded philanthropies in his native country to the tune of millions over ten years before being exposed by Stephen McIntyre on Climate Audit, through a tip from, ironically, Roger Pielke Jr.

        Shukla quietly closed up shop last year.

        Dr. Bjorn Stevens, whom I mentioned earlier, is an IPCC AR5 author and expert in atmospheric aerosols who came out with a paper in 2015 that severely cut the mean estimate of the cooling effect of aerosols by eliminating the upper ranges from plausibility. The problem is that climate models rely on those high ranges to make the atmosphere look more sensitive to CO2: basically, the models compute CO2 forcing minus aerosol forcing to equal the temperature change, so lowering the effect of one lowers the effect of the other. For this shameful act Stevens was compelled to defend himself, not scientifically but politically, with a sort of loyalty pledge to the climate non-skeptics.
        http://www.mpimet.mpg.de/fileadmin/grafik/presse/News/AerosolForcing-Statement-BjornStevens.pdf

        The answer to progress in reliable climate science is study results and reports that can be trusted by all sides. This can only be done by Red Team protocols. https://judithcurry.com/2017/07/09/framing-the-challenge-for-the-climate-red-team/

      • Ron, just from observations GHGs account for over 100% of the net forcing (aerosols subtract), and forcing itself accounts for over 100% of the net warming, given the positive OHC trend that measures the majority of the imbalance. This is just observations, and the quicker the skeptics realize this, the better for them. Instead of basing their argument on models, they need to look at observations. The effective observation-based warming rate for the last 60 years is over 2 C per CO2 doubling, and that is a period that has seen 75% of the warming during 75% of the CO2 increase. Nature is giving us the most obvious signal possible.
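
        For reference, here is a minimal sketch of the arithmetic behind an “effective warming rate per doubling” of this kind, with illustrative round numbers of my own rather than figures from the comment:

```python
import math

# Illustrative round numbers (assumptions): CO2 rose from ~315 ppm (1957)
# to ~405 ppm (2017), with ~0.9 C of observed surface warming over that span.
co2_start, co2_end = 315.0, 405.0   # ppm
delta_t = 0.9                       # C

doublings = math.log2(co2_end / co2_start)   # fraction of a CO2 doubling so far
effective = delta_t / doublings              # C per doubling (transient, not ECS)
print(f"{doublings:.2f} doublings -> {effective:.1f} C per doubling")
```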

      • Jim D: You make a statement that is equivalent to having to prove the future before you believe models

        Yes, models ought to be tested against out-of-sample data before they are believed. If a model is claimed to be accurate enough for policies to be enacted to affect the future, then it has to have been shown to predict the future sufficiently accurately.

        Unless there is something provably wrong with the accruing data (e.g., one of the plates from the Eddington 1919 expedition), misfits of model to data disconfirm the model.

        Selections of subsets of data are tricky. If CO2 did not cause the early 20th century increase in global mean temperature, what did? Is it continuing or did it stop? (what’s the evidence for each?) Whatever it was/is, did the same process, more indirectly and sparsely measured, create the evidence of the other post Ice Age oscillations?

        Some day there may be an accurate set of forecasts of global mean temperature for a century; if that ever happens, we might believe, and base policy on, the projections for the next 50 years.

      • Natural variation (especially solar ones) can account for the swing from the relatively cold 1910 to the hot 1940, as compared to 1900 for example. These kinds of 0.2 C swings were going on in the 19th century too. What started to add uniquely during the 20th century was a trend, and it is known why. Without anthropogenic forcing, we would still be within 0.2 C of the 1900 temperature. However, no one is surprised that we are now nearly a degree warmer than that, except some skeptics who just can’t understand where the warming could have come from all of a sudden.

      • Jim D, your summary of the consensus belief can be refined further to simply: “Our measurements fit our hypothesis therefore we must be right.”

        Wisely skeptical scientists, however, know that is not all you need to be right, and they know this usually from the school of hard knocks, including the lessons of Feynman’s famously quoted lecture.

        1) One must deal with the observations that don’t fit. You can’t toss them out as “unremarkable” or unimportant, e.g. warming holes that no model can reproduce.

        2) One can’t make up the rules of the experiment after one collects and studies the data, or discard or truncate data that doesn’t fit, as many tree-ring scholars have done.

        3) One can’t hide their data and methods from rival investigators who challenge one’s published findings, as was the rule before Climategate.

        4) One can’t publish predictions that fail to materialize and not accept the discredit that follows. In 2005 the consensus of hypothesis believers predicted more frequent and more extreme tropical cyclones; they were wrong. The model-predicted tropospheric and tropical hotspots have not been observed. Between 2012 and 2015 there was an accelerating blizzard of papers to explain “the pause.” Finally, Huang (2015) and Karl (2015) changed the observations to fit the hypothesis to solve the problem.

        5) One can’t win a scientific debate by vilifying and smearing the opposition.

        6) Even if the core of one’s hypothesis is hardly disputed, that does not provide license to expand claims beyond that limited core. The radiative transfer properties of CO2 explain a trivial 1.2 C of warming per doubling of CO2, and this is spread over a several-hundred-year period.

        7) One must embrace the challenge of untangling or controlling confounding influences. The history of the consensus’s handling of the Urban Heat Island (UHI) and other non-climate artifacts in the climate records is abominable. The effect was completely ignored (despite being a known influence) until Karl (1988), which found a 0.1 C non-climate bias in the yearly average temperature of small towns, and 10–25 times that in big cities. The answer to this known bias, starting in the 1960s, was to move stations out of cities to airports and water-treatment plants. The problem is that each station then simply experienced a reset and repeat of the bias as urbanization consumed the new location. Jones (2016) does not cite Karl (1988) and dismisses UHI as non-detectable. Ren (2014) says China has observed 0.047 C/decade of UHI. That’s a rate of 0.47 C per century, which is significant against a total warming of 0.77 C per century. “Non-detectable?”

        8) Complaining that proof is impossible or takes too much time is a cop-out. We are taking reasonable action already, and credible scientific evidence is possible. There are plenty of unique characteristics of GHG forcing that can be examined, including the effect on diurnal temperature range (DTR). Zhou (2010) presents a challenge to climate models: a 12-member climate-model ensemble simulated only 22% of the global DTR trend for 1950–1999. Is the rest UHI, which also decreases DTR? This is work that needs to be done, and information that should be represented in reports like the CSSR. Do you not agree?

      • Your argument amounts to not understanding anything unless we can account for temperatures in every square mile of the earth’s surface for every decade since 1850. No, 2 W/m2 of CO2 forcing is expected to have the one degree of warming effect we have seen, just from physics, and from observations of other responses, like those to volcanoes and the solar 11-year cycle, that give us nice calibration datasets. You prefer to ignore other evidence like this that supports AGW, and just dismiss it all as not even possibly right. You’re stuck endlessly because some detail can’t be proven by existing data, and never will be, because the data just aren’t there to judge between hypotheses. Will we ever know the albedo changes or aerosol effects in the pre-satellite southeast? No. That doesn’t mean that there aren’t explanations in terms of those things.

      • Jim D: Natural variation (especially solar ones) can account for the swing from the relatively cold 1910 to the hot 1940, as compared to 1900 for example. These kinds of 0.2 C swings were going on in the 19th century too.

        This is a species of argument that one can often read: a short segment of one long time series (e.g., an epoch of solar activity) can “explain” a short segment of another long time series (e.g., global mean temperature) once you know the epochs over which they are correlated. The problem, as pointed out repeatedly, is that any two time series that are independent of each other but autocorrelated (each with itself) are sometimes correlated. Thus you need to compare very long segments of measures, not selected short segments. Autocorrelated time series exhibit periodicities, so any attempt to understand putative causal relations between the time series requires that each be observed over many periods.
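
        As a toy numerical illustration of that point (my own sketch, not a climate calculation): two series that are independent by construction, but each autocorrelated, will routinely show large correlations over short windows.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, phi=0.9):
    """One autocorrelated AR(1) series."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

a, b = ar1(2000), ar1(2000)   # independent of each other by construction

window = 30
short_r = [np.corrcoef(a[i:i + window], b[i:i + window])[0, 1]
           for i in range(0, len(a) - window, window)]

print(f"full-series correlation: {np.corrcoef(a, b)[0, 1]:.2f}")   # near zero
print(f"short windows with |r| > 0.5: {np.mean(np.abs(short_r) > 0.5):.0%}")
```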

        For the specific cases of global mean temperature oscillations and their diverse causes, the relevant time series have not been well measured over much of the Earth’s surface for very long. Actual surface temperature records are spotty over the past couple of centuries and absent before; proxies are available for at most the MWP and RWP; ice-core proxies are available for a few selected regions of the Earth. Evidence for explanatory solar variation is even more spotty. Evidence for natural feedback mechanisms with short periods has been accumulated, short here meaning 100 years and less; evidence for a natural feedback mechanism with a period of about 1,000 years is weak (thermohaline circulation, anybody?).

        It looks to me like evidence for a natural period of about 1,000 years is increasing, but evidence for particular mechanisms remains “spotty”, or totally inadequate. People have made models with the periods entailed, and without them; people have made models that have partial support in the evidence, but no track record of successful prediction against out-of-sample data.

      • Autocorrelation is not de facto an explanation; it is a result of natural and unnatural variations. Just studying the statistical properties of temperature time series is taxonomy devoid of physics. If 1910 to 1940 was a period of increasing solar strength, use that information to help explain why the temperature rose, plausibly due to that effect combined with CO2. Science is about explanations, not statistical taxonomies.

      • Jim D: The effective observation-based warming rate for the last 60 years is over 2 C per CO2 doubling, and that is a period that has seen 75% of the warming during 75% of the CO2 increase.

        Besides the fact that natural variation is not completely characterized, natural heat transfer from the surface to the troposphere is not completely characterized. Upward radiation increases largely in accordance with the Stefan–Boltzmann law; rainfall increases about 4%–6% per °C of warming (review by O’Gorman et al.); water vapor pressure increases about 6% per °C; where studied, CAPE × rainfall rate increases 7% per °C; the change in the rate of transfer by dry thermals I have not come across in the literature. Unaccounted for is how these relatively large increases in energy-transfer rates could have been powered by the relatively minor increase in DWLWIR accompanying the CO2 increase that is the putative cause of the surface temperature increase (4 W/m^2 per doubling of CO2 concentration; 3 W/m^2 on your accounting).

      • You seem surprised that 4 W/m2 can cause 2–3 C of surface warming. There are some whole back-to-basics threads that went through all that. Short answer: to eject the extra 4 W/m2 you need about 1 C of surface warming, which however also puts more water vapor in the atmosphere (7% per degree), adding to the 4 W/m2; when things settle after the feedbacks, the surface is 2–3 C warmer and the water-vapor greenhouse effect is greater too. It is very hard to come up with mechanisms by which warming does not increase the water vapor in the atmosphere, because the ocean acts as a reservoir.
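
        A minimal sketch of that feedback bookkeeping; the Planck response and the feedback fractions below are illustrative assumptions, not numbers from this thread:

```python
# Toy feedback arithmetic (illustrative parameters):
forcing = 4.0            # W/m2, roughly one CO2 doubling
planck = 3.2             # W/m2 per K, assumed no-feedback (Planck) response
dt0 = forcing / planck   # ~1.25 C of warming before feedbacks

# If feedbacks (water vapor, etc.) return a fraction f of each increment of
# warming as additional forcing, the geometric series sums to 1 / (1 - f).
for f in (0.0, 0.4, 0.5, 0.6):
    print(f"feedback fraction {f:.1f}: equilibrium warming ~ {dt0 / (1 - f):.1f} C")
```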

      • Jim D: You seem surprised that 4 W/m2 can cause 2-3 C of surface warming.

        What I wrote was that 4 W/m^2 could not power the increased transfer of energy that would accompany the increased surface warming. That’s one of the holes in the Swiss cheese of the CO2 theory of warming.

      • You mean that you personally don’t understand the science behind that effect from that forcing, and you infer that others don’t understand it too despite the physics that has pointed to effects like that since Arrhenius over a century ago.

      • Jim D: Science is about explanations, not statistical taxonomies.

        Much of the debate about CO2 is about which apparent statistical associations require explanation; and another “much” of the debate is about what constitutes adequate explanation.

        You write as though you do not understand, or do not accept, that independent time series may appear to be causally related because they are sometimes correlated.

      • Fluctuations due to the CO2 increase have appeared as expected over the last few decades. They have been predictable. What would have been surprising is if they didn’t materialize because the GHG effect is well understood at a quantifiable level.

      • Re Feynman and the scientific method. Fascinating chapter
        by Ray Jayawardhana in ‘The Neutrino Hunters.’ Guess
        when you’re chasing the elusive, you don’t give up too soon.

        Peering into the heart of a star for the first time is a feat in
        itself. No wonder Feynman encouraged Bahcall to keep
        going. And their research and Bahcall’s calculations were
        later vindicated by the independent findings of a team in
        Japan using water instead of dry cleaning fluid to trap
        particles. Lots of questioning and tests in the process,
        and it led to new physics.

        https://books.google.com.au/books?id=bLxpAwAAQBAJ&pg=PT55&lpg=PT55&dq=Caltech+Feynman+Ray+-+Neutrino+calculations&so

      • Jim D: “Fluctuations due to the CO2 increase have appeared as expected over the last few decades. They have been predictable.”

        Predictable in hindsight is not the same as scientific prediction. Searching for correlations is an early stage of hypothesis formation. Validation comes after predictive skill has been demonstrated and reproduced under critical scrutiny.

        Jim, the global mean temperature meandering upward with the assistance of several rounds of data enhancements on the favored data set only demands even higher skill before validation. Excluding satellite data until one of the indexes was altered to be more in line, and then dismissing the index not brought into line as an outlier, is an example of investigator bias by the CSSR.

        Jim D: “What would have been surprising is if they didn’t materialize because the GHG effect is well understood at a quantifiable level.”

        Jim, in fact there are contrary surprises to model predictions. These include the warming hole and the lack of the tropical and tropospheric hotspots. The latter makes sense because EGHE warming should be driven by the troposphere and dampened by land and sea uptake; the satellite index of the troposphere should be the leading index, not the dampened one.

        Before Nov. 1, 2011, both the leading skeptic Richard Lindzen and the leading non-skeptic Gavin Schmidt were in agreement that the lower troposphere should be warming at 1.4 times the rate of the surface. Surprisingly, the reverse was being observed: satellites analyzed by both UAH and RSS were sensing only ~75% of the warming rate of the stations and buoys at the surface, indicating perhaps that the surface record is contaminated by non-climate effects like urbanization. In a Climate Audit post of November 1, 2011, a fascinating discussion of this contradiction ensues between climate scientist Robert Way, Steven Mosher of Berkeley Earth, Gavin Schmidt and Stephen McIntyre, in which Schmidt agrees that the GISS models warm the atmosphere at a higher rate than the sea surface but at about the same rate as land. It’s an astounding conversation considering this is one of the most fundamental pieces of evidence upon which the whole trillions of dollars rests.

      • Search elsewhere on this thread for my thoughts on UAH and RSS. There are reasons not to trust them when their trend since 1998 changes by 200% with one adjustment and they currently disagree with each other by a similar amount. Which one do you favor? They can’t both be right. Also, with the land warming twice as fast as the ocean since 1980, what does that tell you? Have you thought of the implications of such a clear signal? Under AGW this is about what would be expected from external forcing.

      • Jim, in searching your comments I do not find a rebuttal of my assertion that Key Finding #4 of Chapter 6 is wrong.

        Key finding 4 “Extreme temperatures in the contiguous United States are projected to increase even more than average temperatures.”

        Warming from GHGs or from UHI predominantly raises low temps. Only increased incoming shortwave (solar) increases, in observations or in models, daily or annual high temps more than daily or annual average temps.

      • Ron, I did answer in another sub-thread. Tables 6.4 and 6.5 are the ones you are looking for.

      • Jim, those CSSR tables reference Pierce et al. 2014, which looks at local climate history, downscales global climate model output, and distributes it appropriately, assuming that the climate relationships that exist now will persist. Pierce creates a new statistical method, LOCA, and admits it needs further testing, but they claim it is an improvement on prior methods due to flaws discovered in their results.

        In no part of Pierce 2014 do they make any claim that future extremes will exceed the trend in average temperature. It is plausible that the CSSR confused Pierce 2014’s one-in-20-year extreme with an annual extreme.

        Possibly the climate models are biased. Possibly the downscaling to regional forecasts is biased. But by the time that result is scaled up to a CSSR key point, there’s a 99.9% certainty it’s biased, because it contradicts both the hypothesis and the observations upon which the models were built.

      • So when the Tables say the warmest day of the year by mid-century changes by more than the average temperature, do you have trouble with what the Table says, or how it is interpreted? Key Finding 4 says what the Tables presented say as far as I can tell. Completely consistent.

      • The CSSR references Pierce et al. 2014, which introduces a method for downscaling GCMs to make regional projections. Pierce 2014 does not make any projections. If there is another Pierce 2014, I can’t find it. The one I cited, which introduces the LOCA method, was published in Dec. 2014.

        If the CSSR did their own analysis it is not referenced. And I would challenge it if it was cited and published.

        Jim, from your years of blogging on the GHE, have you ever seen a claim that the physics should increase Tmax more than Tavg? If so, where?

      • That implies that the distribution will spread. I have not heard anything for or against that. I think Hansen’s dice paper was misinterpreted and criticized for suggesting the distribution is spreading. Note that rapid climate change in itself leads to a spread of a distribution because it is a moving bell curve that would be wider than a more stationary baseline one. Hansen had to detrend annual data to remove this effect.

      • Jim D:
        “Also with the land warming twice as fast as the ocean since 1980…”

        Anyone have a definition of the depth of the SSTs as used in the temperature series? We can do better counting if we know what that depth is.

      • It is very shallow because it should represent the ocean surface air temperature too.

      • Wow – and satellites preferably, and ships’ intake otherwise.

      • Yes, nothing deeper than would be affected by air temperatures.

      • Jim, does the GHE make the Earth’s DTR more or less like that of the Moon? Hint: the Moon has a DTR of hundreds of degrees. There is no question that the GHE should decrease DTR, and it does so in models. The fact that DTR declined five times more steeply than in models from 1950–1980, and then showed zero trend from 1980 to the present, is an indication of contamination and improper adjustments in the land station record.

        The sea surface records are yet more unreliable. There was little documentation of attempts to calibrate the different bucket methods against engine-intake readings as the observation type changed, while both had major corrections applied to them post hoc, in different amounts at different times, the last being by Karl (2015) citing Huang (2015). As the ship-intake and bucket methods were being completely replaced by buoys and Argo floats from ~1970–2014, climate administrators took a last parting shot at ships by assuming they were warm-biased worse than thought and applied yet another correction, effectively cooling the past and erasing the “pause” in warming from 1998–2014. Huang’s methodology was to transfer the trend of nighttime marine air-temperature records onto the sea surface. These air temperatures had themselves been warmed by a trend assumed to match the increase in measurement height from a decadal trend toward larger (taller) ships. There was no correction for DTR, however: the fact that average marine air temperature increased at a lower rate than nighttime marine air temperature over the last century, as modeled, was overlooked.

      • They are not talking about an increase in the diurnal range. It is about extremes, and a rapidly changing climate broadens the distribution, making extremes more extreme as a direct function of how quickly the climate is changing. Think of averaging a moving bell curve and comparing it to a relatively stationary one as a baseline.
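
        A toy simulation of that “moving bell curve” point (my own sketch, with an exaggerated trend so the effect is easy to see):

```python
import numpy as np

rng = np.random.default_rng(1)
years, days = 30, 365

# Stationary baseline: noise around a fixed mean.
baseline = rng.normal(0.0, 1.0, size=(years, days))

# Trending period: identical noise statistics, but the mean drifts upward
# (an exaggerated 0.1 per "year" so the widening is visible in a toy example).
moving = rng.normal(0.0, 1.0, size=(years, days)) + 0.1 * np.arange(years)[:, None]

print(f"baseline pooled std: {baseline.std():.2f}")   # ~1.0
print(f"trending pooled std: {moving.std():.2f}")     # ~1.3: wider, so more 'extremes'
```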

      • Let’s finish the string by re-quoting the key finding #4 of Chap. 6

        4. Extreme temperatures in the contiguous United States are projected to increase even more than average temperatures.[bold emphasis added]

      • Supported by Tables 6.4 and 6.5.

  41. The entire climate change dialogue reminds me of another controversy in which I was deeply embroiled for a brief period, viz. “cold fusion.” After claims of large energy gains made the headlines, the actual calculations showed that minuscule “differences” (aka measurement errors) were magnified by artful chartsmanship. The whole climate alarmist community strikes me as cold fusion made good.

  43. This discussion has only to do with “Extreme Precipitation” as on page 47, and the extent to which CO2 might drive such extremes.

    On page 112, line 21 we find the words “Warm air holds more moisture —-”. This sentence was not written by someone with an understanding of physical chemistry. The amount of water vapor predicted by Clausius–Clapeyron has (to first order) nothing to do with any holding power of the atmosphere and everything to do with the temperature and the heat of evaporation of water.

    Let’s do a sanity check on increased precipitation.

    From https://en.wikipedia.org/wiki/Radiative_forcing, Modtran says 3.4 W/m^2 for 300–600 ppm CO2, which suggests 5 ln(c/c0) for the total column forcing due to CO2. Since CO2 in 1900 was 290 ppm and is now ~400 ppm, the total forcing is 1.6 W/m^2.

    To increase evaporation from the sea, some part of the total forcing would have to go to water evaporation. Given that 1 mm/day of rain requires 29 W/m^2 to sustain, if all the forcing went to evaporation, the maximum expected increase in precipitation over the last 110 years is 1 mm/day × 1.6/29 = 0.055 mm/day.

    Based on the above 29 W/m^2 per 1 mm/day of rain and a global 1000 mm of rain per year (~2.7 mm/day), the heat required for evaporation is ~80 W/m^2, or about 40% of the incoming 200 W/m^2 of insolation.

    Since about 40% of solar insolation presently goes to evaporation, it seems unlikely that CO2 back-radiation would be any more efficient, so we are left with 40% × 0.055 mm/day, or ~0.02 mm/day, as the maximum likely increase in precipitation over the last 100 years, and arguably half or so of this in the last 30 years.
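
    For checking, the arithmetic above can be reproduced in a few lines (the latent-heat value is my assumption; the other figures follow the comment):

```python
import math

# Forcing: ~5 ln(c/c0) W/m2, calibrated from Modtran's 3.4 W/m2 per doubling.
F2x = 3.4 / math.log(600 / 300)        # ~4.9 W/m2 per e-folding of CO2
forcing = F2x * math.log(400 / 290)    # ~1.6 W/m2 since ~1900

# Energy cost of rain: latent heat ~2.5e6 J/kg spread over a day gives the
# ~29 W/m2 needed to sustain 1 mm/day of precipitation.
w_per_mm_day = 2.5e6 / 86400           # ~28.9 W/m2

upper_bound = forcing / w_per_mm_day   # if ALL forcing went to evaporation
scaled = 0.4 * upper_bound             # scaled by the ~40% evaporative share
print(f"upper bound: {upper_bound:.3f} mm/day; scaled: {scaled:.3f} mm/day")
```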

    The paper Asadieh and Krakauer (2015) is mentioned several times. It asserts that annual daily-maximum precipitation has changed by 8.5% (5.73 mm) in the last 110 years. Yet they do not find a concomitant change at lower percentiles or in the average. Given the spatially continuous nature of CO2 forcing, it is hard to imagine a CO2-driven mechanism whereby the maxima are changed but the density of lower values of the distribution is not. It seems more likely that Asadieh and Krakauer (2015) are observing “shot noise” from the discrete nature of rain storms. In any case, it is critical that we all get to see the actual distribution of data from which conclusions are drawn, so that we may evaluate the utility of the various methods used. In particular, it seems unlikely that any rainfall data are “normally” distributed, but no distribution analysis is presented.

    Here is the “dog that didn’t bark in the night” argument: there are many agencies that monitor rainfall and river flow. If there had been any climatological increase in water flow, it would have made front-page headlines in the NYT.

    The summary is that climate science has not made the case that it is observing extreme precipitation driven by CO2.

  44. Also, page 23, “the incidence of large forest fires in the western United States and Alaska has increased since the early 1980s”

    This is a gross and misleading error of context, which the USGCRP should retract.

    The few decades cited are an extremely brief period, of course, but more importantly the statement implies that climate change is the most significant, or even a somewhat significant, factor in American fires. This is false. Grazing and fire suppression, especially since aircraft, have changed fire frequency. The chart below of Jemez Forest fires in New Mexico is consistent with the long-lived tree-scar record in California and with ash in lake-sediment records around the US. There were many more fires a thousand years ago than in recent centuries, and many more fires two centuries ago than in recent decades.

    http://d1vn86fw4xmcz1.cloudfront.net/content/royptb/371/1696/20150168/F2.large.jpg

    (http://fireecologyjournal.org/docs/Journal/pdf/Volume05/Issue03/swetnam-84.pdf)

  45. A few comments regarding chapter 2 : Physical Drivers of Climate Change :

    § 2.1, p 99, line 16: if there were no atmosphere (nor oceans), the mean temperature on earth would be much lower than the −18°C (15°C − 33°C) supposed here. As a comparison, the mean temperature of the moon is around −85°C (188 K) (with 7% albedo), and there is no reason why the earth’s temperature (with 30% albedo) would be higher.
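
    The uniform-sphere arithmetic behind the −18°C figure, and the reason a body with a large day/night contrast can have a much lower mean, can be sketched as follows (a standard textbook calculation, not taken from the report):

```python
# Effective radiating temperature of a uniform sphere vs. the mean
# temperature of a non-uniform body (standard textbook balance):
S = 1361.0        # W/m2, solar constant
sigma = 5.67e-8   # W/m2/K^4, Stefan-Boltzmann constant

def t_eff(albedo):
    """Temperature of a uniform sphere re-emitting the sunlight it absorbs."""
    return (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25

print(f"earth-like, albedo 0.30: {t_eff(0.30):.0f} K")   # ~255 K (-18 C)
print(f"moon-like,  albedo 0.07: {t_eff(0.07):.0f} K")   # ~273 K

# Because emission scales as T^4, a slowly rotating body like the Moon, with
# a huge day/night contrast, has a mean temperature far below t_eff, which
# is the discrepancy the comment points to.
```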

    § 2.6.1, p 112, line 21: “Warmer air holds more moisture … about 7% more per °C”: this also means 7% more latent heat withdrawn from the oceans (71% of the earth’s surface), i.e., ~6 W/m2/°C over the oceans, or ~4 W/m2/°C globally, carried directly to the tops of clouds through evaporation/condensation (to be compared with the 0.6 ± 0.4 W/m2/°C negative feedback on line 36). This should be mentioned: water does not enforce warming, it regulates it.

    More generally speaking :

    1/ The assumption that GHGs can radiate towards the ground (from cold to hot) is pseudo-science: all gases radiate at the wavelengths of their absorption bands, … where they are opaque unless they are sufficiently diluted:
    – for CO2: from the stratosphere to the cosmos
    – for H2O: from the tops of clouds to the cosmos
    They cannot radiate towards the ground through their own opacity: Fig 2.1, p 128, forget those 342 W/m2 of GHG backward radiation.

    2/ Gravity causes a thermal gradient, from 220 K at the tropopause altitude to 288 K at the ground: 9.8 °C/km in dry air, 6.5 °C/km in the standard atmosphere (well known to alpinists and aviators; the standardised value for civil aviation, to prevent freezing risks). It is a direct application of the Laplace law (PV^gamma = const, with gamma = 0.28 for dry air, 0.19 for standard (wet) atmospheric air).
    This law also applies to Venus, with its 92 bar pressure and 470°C at the bottom of its atmosphere (composed of CO2).
    This gradient operates reversibly in both directions: heating downwards, cooling upwards.
    It has nothing to do with GHGs or radiation.
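
    The 9.8 °C/km dry-air figure quoted above can be checked from the standard relation g/cp:

```python
# Dry adiabatic lapse rate from the standard relation g / cp:
g = 9.81      # m/s^2, gravitational acceleration
cp = 1004.0   # J/(kg K), specific heat of dry air at constant pressure
print(f"dry adiabatic lapse rate: {g / cp * 1000:.1f} K/km")   # ~9.8 K/km
```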

  46. “It is often assumed that higher-resolution, more complex, and more up-to-date models will perform better and/or produce more robust projections than previous-generation models.”
    Page 195, section 4.3.1 line 12

    This may be assumed, but it is generally completely wrong. There has never been a theory of convergence developed for climate models. Higher resolution only produces a more accurate result once a simulation is inside the zone of convergence, which relies on Taylor’s theorem and the calculus of limits. Such convergence theories are also limited to numerical simulation methods like finite elements, which provide both the framework and the theory for such estimates. In general there is no such theory for the finite-difference models used in climate models, except in the unrealistic and impossibly narrow case of smooth geometries and forcing functions.

    In addition, the majority of parameterized models are hard-coded for specific resolutions; therefore, not only is there no theory of their convergence to an ideal solution under refinement, but mechanizing such a convergence test is utterly impossible. As we do not completely understand the cascade of scales within models, it is possible that these arbitrary parameterizations completely destroy any convergence within the more amenable surrounding PDE solvers of basic fluid mechanics.

    In more sophisticated modelling environments, such as Sandia National Lab’s engineering divisions, the massively parallel simulations and the codes that implement them are tested by both:
    1) rigorous finite element analysis; and
    2) plugging a known function (satisfying the boundary conditions) into the differential operator, collecting the residual as the right-hand side, and running the solver with this residual and the boundary conditions to test its convergence to the known original function, including grid-resolution studies and confirmation of the correct convergence rate predicted by theory. (A minimal sketch of this “manufactured solution” test follows.)
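
    As an illustration only, here is a toy version of test (2) for a 1-D Poisson problem; this is my own sketch, not code from Sandia or from any climate model:

```python
import numpy as np

# Manufactured solution: choose u(x) = sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# so f = -u'' = pi^2 sin(pi x) is the right-hand side fed to the solver.
def max_error(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)               # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)             # manufactured right-hand side
    A = (np.diag(2.0 * np.ones(n))               # standard second-order
         - np.diag(np.ones(n - 1), 1)            # finite-difference Laplacian
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# The error should shrink ~4x for each halving of h: the theoretical O(h^2) rate.
for n in (16, 32, 64, 128):
    print(f"n = {n:4d}   max error = {max_error(n):.3e}")
```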

    No such analysis has ever been performed on climate models. They have not been designed with such tests in mind, and, in fact, the general culture of climate modelling within atmospheric science labs is dismissive of the importance of such tests, relying instead on a ‘the simulation is the model’ approach.

    In addition, the generally hyperbolic nature of climate models makes them particularly divergent when discretized. Such models are likely pointwise accurate for only a few months, and surely divergent to 100% error within a year or less of their initial conditions.

    Despite assertions within the climate community that pointwise accuracy is not needed for convergence of metrics like ‘average temperature’, no such statement is mathematically sound, nor has there ever been a mathematical theory that supports or confirms such conclusions. In general, error analysis shows that such generalized metrics may suffer a slower rate of degradation than pointwise solutions, but once the pointwise solution is more than a few percent in error, all metrics cease to be useful.

    Therefore the simulations are completely unreliable for long-term conclusions, and their output should be considered essentially random.

    (I know the full text may not be useful, but maybe there is a useful idea in there somewhere. Anyone in numerical analysis of PDEs would be able to provide a more ‘authoritative’ statement if they wanted to participate.)

  47. Regarding the expression of uncertainty, it is stated in the Climate Science Special Report section ´Treatment of Uncertainties: Likelihoods, Confidence, and Risk Framing´ that: «In this report, likelihood is the chance of occurrence of an effect or impact based on measures of uncertainty expressed probabilistically (in other words, based on statistical analysis of observations or model results or on expert judgment).» – Page 7

    «Traceable Accounts for each Key Finding 1) document the process and rationale the authors used in reaching the conclusions in their Key Finding, 2) provide additional information to readers about the quality of the information used, 3) allow traceability to resources and data, …» – Page 7

    «Often, the underlying studies provided their own estimates of uncertainty and confidence intervals. When available, these confidence intervals were assessed by the authors in making their own expert judgments. For specific descriptions of the process by which the author team came to agreement on the Key Findings and the assessment of confidence and likelihood, see the Traceable Accounts in each chapter.» – page 8

    Even though it does not seem to be well known within climate science, there exists one widely accepted international standard for the expression of uncertainty: the Guide to the Expression of Uncertainty in Measurement. The standard is referred to neither in IPCC WGI AR5 nor in the CSSR report.

    The standard covers two fundamentally different types of uncertainty estimate:
    «2.3.2 Type A evaluation (of uncertainty)
    method of evaluation of uncertainty by the statistical analysis of series of observations
    2.3.3 Type B evaluation (of uncertainty)
    method of evaluation of uncertainty by means other than the statistical analysis of series of observations»

    The following seven organizations supported the development of the Guide to the Expression of Uncertainty, which is published in their name:
    BIPM: Bureau International des Poids et Mesures
    IEC: International Electrotechnical Commission
    IFCC: International Federation of Clinical Chemistry
    ISO: International Organization for Standardization
    IUPAC: International Union of Pure and Applied Chemistry
    IUPAP: International Union of Pure and Applied Physics
    OIML: International Organization of Legal Metrology

    The guideline is applicable to a very wide range of uncertainty estimates, and it is hard to see why it should not also be applicable to the climate industry in general and the CSSR report in particular.

    The guide provides the following guidelines for reporting uncertainty:

    Ref.: 7.2.3 in the Guide to the Expression of Uncertainty …:
    Simply put, the result of an estimate should be reported by:
    – giving a full description of how the measurand Y is defined;
    – stating the result of the measurement as Y = y ± U;
    – giving the units of y and U;
    – giving the approximate level of confidence associated with the interval y ± U;
    – stating how the level of confidence was determined.
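
    As an illustration of that reporting form, here is the feedback estimate quoted later in this comment restated per the Guide, with a normal coverage factor assumed (an assumption the CSSR itself does not state):

```python
# GUM-style report of the CSSR feedback figure quoted below:
# 0.48 +/- 0.13 C/(W/m2) with a 90% uncertainty range.
y, U = 0.48, 0.13   # estimate and expanded uncertainty, C/(W/m2)
k = 1.645           # assumed coverage factor for ~90% confidence (normal dist.)
print(f"lambda = ({y:.2f} +/- {U:.2f}) C/(W/m2), k = {k}, ~90% coverage")
print(f"implied standard uncertainty u = U/k = {U / k:.3f} C/(W/m2)")
```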

    However, from the following section it should be clear that even though some kind of judgment is applied in estimating uncertainty, the scientist is not free to keep the arguments and inferences behind that judgment secret and report only the final estimate:

    Ref.: 7.1.4  in Guide to the expression of uncertainty …:
    «Although in practice the amount of information necessary to document a measurement result depends on its intended use, the basic principle of what is required remains unchanged: when reporting the result of a measurement and its uncertainty, it is preferable to err on the side of providing too much information rather than too little. For example, one should
    a) describe clearly the methods used to calculate the measurement result and its uncertainty from the experimental observations and input data;
    b) list all uncertainty components and document fully how they were evaluated;
    c) present the data analysis in such a way that each of its important steps can be readily followed and the calculation of the reported result can be independently repeated if necessary;
    d) give all corrections and constants used in the analysis and their sources.
    A test of the foregoing list is to ask oneself “Have I provided enough information in a sufficiently clear manner that my result can be updated in the future if new information or data become available?”»

    By section 7.1.4 it seems clear that an uncertainty estimate that is reported in accordance with this guideline should also satisfy the requirements of the Information Quality Act for utility, transparency, and reproducibility.

    If we now turn to the uncertainty estimates that are provided in the CSSR report, and that are promised (on page 8) to be described in the ´Traceable Account´ sections, it rapidly becomes clear that the reporting of the uncertainty estimates generally does not comply with the Guide to the Expression of Uncertainty in Measurement. Hence, it also seems that the uncertainty estimates largely fail to be transparent and reproducible in accordance with the Information Quality Act.

    Anyone who has another impression might want to test that impression by trying to repeat the following uncertainty estimate: «Overall, the global warming response to ERF includes a substantial amplification from feedbacks, with a model mean of 0.86°F/(W/m2) (0.48°C/[W/m2]) with a 90% uncertainty range of ±0.23°F/(W/m2) (±0.13°C/[W/m2]) (as derived from climate sensitivity parameter in Table 9.5 of Flato et al. [2013] combined with methodology of Bony et al. [2006]).» Ref CSSR – Page 111

    Please tell us about the experience.

    • Geoff Sherrington

      SorF
      Your comments are very close to some I was composing for a post here. Can I please have your permission to refer to them, with attribution, in a more general post I hope to make in the next day or two? sherro1 at optusnet dot com dot au

  48. Figure 5.1 – Supposedly, this report reflects modern scientific advancement with respect to climate and the atmosphere.

    Yet, the mythical and quite wrong “three cell circulation” is depicted.

    An article titled “There is no such thing as a Hadley Cell!” needs authoring, and soon.

  49. I happen to have some experience with estimation of uncertainty. The CSSR report claims to provide a traceable account for the information it provides. The thing is that the report does not provide what it claims.

    That is easily seen by anyone who tries to find out how the uncertainty estimates in the above quote about effective radiative forcing were derived. Section 7.1.4 of the Guide to the Expression of Uncertainty lists what to look for. Flato et al. [2013] and Bony et al. [2006] are both freely available. A search for the term ´uncertainty´ in Bony et al. demonstrates that the information required for a traceable uncertainty estimate is not provided. There is no way an independent reviewer can verify the uncertainty estimates; the necessary information simply isn´t there.

    By the Information Quality Act, the report should have provided transparent and reproducible information, which it doesn´t in this example. Hence, the report is not in accordance with the Information Quality Act.

    In my view, there is a glaring contradiction between the claimed traceability and transparency that the report purports to provide and what it actually provides. (Ref. also another comment I made about the poor citation practice that is widely used in the report).

  50. Alas, this voluminous report quite successfully exemplifies a classic case of tasseographic misdirection.

    Consider two planets with 255 K surface temperatures, one with a fully developed greenhouse atmosphere, the other lacking any atmosphere. In steady state, both must exhibit thermal radiation densities of order 240 W/m2. In one case this radiation is 100% free energy; in the other, 0%, all free energy initially emitted from the surface having been dissipated as weather.

    I know of no climate scientist in the full Lindzen–Gore spectrum who grasps the significance of this distinction, and I suspect it may well take more than a generation for western science to realize that the theorem of radiative–convective equilibria, the cornerstone of current climate science, is no more than a rather thinly disguised PMMII (perpetual motion machine of the second kind).

  51. Dr. Curry —
    In a separate post, please provide suggestions of names you would like to see as reviewers for the two reports. I see the USGCRP deadline is today, so I’ll just make up some names, but NCA4 is Sept. 8.
    In it, please indicate whether there is a limit to how many reviewers one person can nominate. Is it counterproductive to suggest several? Do they have to be US citizens?
    The USGCRP page focuses on economic consequences rather than climatology. Are they not looking for input on the climate issues per se?
    Thanks!

  52. A different way of thinking of climate – order emerging out of disorder in Earth’s turbulent flow fields

    https://www.facebook.com/Australian.Iriai/posts/1396879590428299

    • Added some video.
      https://www.facebook.com/Australian.Iriai/posts/1396314623818129

      Land care and energy innovation are doors to brighter futures.

    • “Finally, Lorenz’s theory of the atmosphere (and ocean) as a chaotic system raises fundamental, but unanswered questions about how much the uncertainties in climate-change projections can be reduced. In 1969, Lorenz [30] wrote: ‘Perhaps we can visualize the day when all of the relevant physical principles will be perfectly known. It may then still not be possible to express these principles as mathematical equations which can be solved by digital computers. We may believe, for example, that the motion of the unsaturated portion of the atmosphere is governed by the Navier–Stokes equations, but to use these equations properly we should have to describe each turbulent eddy—a task far beyond the capacity of the largest computer. We must therefore express the pertinent statistical properties of turbulent eddies as functions of the larger-scale motions. We do not yet know how to do this, nor have we proven that the desired functions exist’. Thirty years later, this problem remains unsolved, and may possibly be unsolvable.” Julia Slingo and Tim Palmer 2011

  53. I hope I am not repeating someone else’s comment, because I have just skimmed the comments, but what needs to be done is to have a one-page, or at most two-page, summary of the most important points. There should be an explanation that the summary compresses subtleties in the science but is fundamentally correct. Politicians, staff people for public figures, and interested lay people need something they can rely on as being fundamentally correct that everyone can understand. Unless the findings are simplified in a way that lay people can easily understand, the report will have little impact, because 98% of the population will not bother to read it. As someone who has conducted many jury trials, I know that basic, honest simplicity is the key to persuasion.

    An example of something that is needlessly complicated is temperature anomalies. Undoubtedly they add scientific accuracy, but what should be done is to simply compare, for instance, this year’s temperatures to last year’s, so that people can get a quick gauge of whether temperatures went up or down in any one year. Then the anomalies can be added. I can guarantee that 98% of the public is confused by anomalies.

    If someone can shorten the executive summary to 5–7 pages, I will volunteer to further shorten it to 1 or 2 pages. I skimmed the executive summary of the draft report, and at 22 pages it is way too long for the general public. In my mind, there is nothing wrong with an accurate, simple summary for the public running parallel to a more formal summary that more extensively discusses the subtleties. I will email you directly, and if you think what I am suggesting is useful, you can respond.

    JD

    • Roger Knights

      How about both—a summary and a two-page summary of the summary? (Cream of gumbo soup, so to speak.)

  54. Geoff Sherrington

    JD
    It is not scientifically correct to assume that the temperature anomaly method adds scientific accuracy (over the raw-data method). You cannot improve accuracy by numerical cosmetics like subtracting two large numbers, each accurate to ±10%, to obtain a smaller difference number that also claims ±10%. The accuracy is dictated by the error of the original observation.
    Please be careful when offering to condense writing, lest you reject salient points and retain dubious ones. That is already done in buckets in the draft report, to its discredit. One example is the dancing around the inability to narrow the climate sensitivity spread after 30 years of study, as described in IPCC AR5. Geoff
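
    A toy propagation-of-error check on the subtraction point above, with made-up readings; for a difference of independent measurements the absolute uncertainties add in quadrature:

```python
import math

# Illustrative readings (assumed numbers, not real station data):
a, u_a = 288.0, 0.5   # K: a reading and its absolute uncertainty
b, u_b = 287.5, 0.5   # K: the baseline and its absolute uncertainty

anomaly = a - b
u_anomaly = math.hypot(u_a, u_b)   # quadrature sum ~0.71 K, never below u_a

print(f"anomaly = {anomaly:.2f} +/- {u_anomaly:.2f} K")
print(f"relative error of the anomaly: {u_anomaly / abs(anomaly):.0%}")   # huge
```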

  55. Geoff, anything I would write would be reviewed by people more knowledgeable than me, and I would defer to their expertise. Roy Spencer reports monthly temperatures in terms of anomalies; I relied on his reputation in assuming that they serve a practical, scientific purpose in the field of climate science. My point in the previous post was that I would not use anomalies because they are too difficult for the public at large to understand. The best way to report temperatures is for others to decide.

    Again, my major point is that for the report to have the greatest influence, a short, accurate summary understandable by the general public needs to be written. Of course, it doesn’t have to be written by me.

    JD

  56. jddohio: I don’t know what you mean by the “general public”, but I’m a climate-science layman, and I don’t see much difference between reporting anomalies and reporting raw temperatures. Both are subject to math abuse such as scaling distortions. I like the way Roy Spencer does it, but you have to make sure you understand what the base period is. Nothing unusual there.

    As to whether 98% would not read the summary if it’s too long: I don’t think 2% of the public will read any of it regardless of length. But that’s not the point. The people who should read it, and who actually read it, should get an accurate view of the underlying science – warts, gaps and all. What I’ve read doesn’t meet that test. The CSSR is not supposed to be an advocacy document; it’s too important for that – too much policy is based on it. And presenting complexity in plain English is done every day; there is no excuse for publishing a study that a reasonably educated person cannot understand.

    One of the main purposes of red-teaming, which I have learned here and elsewhere, is to identify and correct group-think. The group that put this document together are of like mind and are likely to promote the “consensus” without really intending to do so. That’s the whole point. I always prefer to believe that those performing a task are being conscientious and are doing the best they can until proven otherwise. I would hope that the authors of the CSSR would see a red-team approach in that light, and that the red-teamers would do the same.

  57. scraft1: “As to whether 98% would not read the summary if it’s too long, I don’t think 2% of the public will read any of it regardless of length. But that’s not the point. The people who should read it and who actually read it should get an accurate view of the underlying science – warts, gaps and all. What I’ve read doesn’t meet that test. The CSSR is not supposed to be an advocacy document. [I agree that the actual report should be 100% accurate.] It’s too important for that – too much policy is based on it. And presenting complexity in plain English is done every day, and there is no excuse for publishing a study that a reasonably educated person cannot understand.”

    My experience as a lawyer is exactly the opposite of yours. For instance, the US Supreme Court limits petitions for certiorari to 9,000 words (the way 99% of the cases that are heard get accepted; the limitation is in Rule 33, about 20–25 pages). Also, Rule 16 of the Ohio Rules of Appellate Procedure states in part that each brief (the legal argument made by a party) shall contain:

    “(5)A statement of the case briefly describing the nature of the case, the course of proceedings, and the disposition in the court below. …
    (8) A conclusion briefly stating the precise relief sought.”

    Judges, who deal with many complex issues, are continually telling lawyers to keep it short and to the point. They don’t want to see 100-page briefs or 500-page briefs. If judges demand brevity in very complex cases, there is no harm in adding a brief summary to a detailed, accurate, complex and long report. If practical action is to be taken, elected representatives and their staffs have to understand the main points. They will not read the CSSR, nor 40-page summaries, and even if they did, most of them could not understand the subtleties.

    Also, your statement that “there is no excuse for publishing a study that a reasonably educated person cannot understand.” is wrong. Much of climate science is based on statistics, which 99% of the public doesn’t understand.

    JD

    • “Much of climate science is based on statistics, which 99% of the public doesn’t understand.”
      That is no excuse for publishing a study that is not objective, transparent, and reproducible for the remaining 1 %, in accordance with the Information Quality Act. The task of the authors is not to form an opinion on behalf of the public.

      • S or F: “That is no excuse for publishing a study that is not objective, transparent, and reproducible for the remaining 1 %, in accordance with the Information Quality Act.”

        You should read my comment. I agree with that. There is nothing inconsistent about having a report written for the 1% who will understand the scientific subtleties and condensing that more detailed, subtle report for the other 99%. In the condensed report, you would make clear that some subtleties are not being addressed and that, where there are issues of interpretation or application between the two reports, the longer, more detailed report controls.

        Unless we are going to limit the vote to the 1% who can understand the subtleties, we have to also deal with the other 99%, many of whom will vote in elections. Also, I will point out that if the writers of the report don’t condense it, others, such as newspaper reporters, definitely will. Why not write the most accurate condensed report possible, instead of letting others twist the meaning of the report in things such as blog posts and tweets, which will inevitably happen?

        JD

  58. I agree with JD. Particularly his point, RE: “The people who should read it and who actually read it should get an accurate view of the underlying science – warts, gaps and all.”

    Having been paid to both summarize and visualize complicated concepts (for corporate communications, government agencies, and broadcast TV) job one is always determining your target audience. Everything else is aimed at that target.

    Or you miss.

    • Mr. Rubble, if you agree with JD then you also agree with me, since the quote is from my comment.

      Also, JD disagrees with my statement that a reasonably educated person should be able to digest a report on climate science. Who do you think the target audience is for the CSSR, which is what we’re supposed to be talking about here? It certainly is not climate scientists: it is the reasonably educated policy maker. The CSSR is an advocacy document, summarizing the “real” science for science neophytes.

  59. Geoff Sherrington

    (Somebody has to do this).
    In scientific reality, the CSSR fifth-order draft (“the Draft”) is a poorly written document reflecting poor science.
    It is largely built around the three legs of AGW – Anthropogenic, Global and Warming.
    In essence, the Draft does not set out a scientific case for A, G or W. When dealing with A, it assumes G and W are established, and so forth, as if A, G and W were settled science when they are hypotheses. The credibility of A, G and W can be supported by many observations; but it can also be questioned by other observations, about which less has been written.
    The Draft continues the momentum of the IPCC Paris arrangement without reference to its rejection by the President. The sections of the Draft implicitly questioned by this rejection should be made into an addendum to avoid the propagation of stale opinion.
    The Draft is too long. There is little value gained from the inclusion of many references that have marginal bearing on the key features of climate in the USA. Recall Einstein: when told of the publication of a book entitled ‘100 Authors Against Einstein’, he replied, “Why one hundred? If I were wrong, one would have been enough.”
    The Draft lacks a critical consideration of key objections to the AGW science, several of which have potential to render nugatory much of the Draft narrative. Here are some objections using mostly Australian data for examples.

    Anthropogenic?
    There is no settled method for separating global climate effects into anthropogenic and natural components.
    It is not even possible to attribute portions, such as 10% anthropogenic and 90% natural. The separation is hard to make because there is no parallel earth on which to perform experiments. No matter how hard attribution is to show, it is scientifically premature to assume that there is a significant anthropogenic contribution to global climate. (The Draft is already heavy with premature expectation.) There is a known natural contribution, because climate, whenever measured adequately, is interpreted as being forever in a state of change, including before the time of mankind.
    Until an anthropogenic global contribution is confirmed and quantified, there is no case for global policies that regulate human activities in order to ‘correct’ or ‘prevent’ a ‘wrong climate’.
    Global?

    While the Draft is specifically about the USA, there are global climate mechanisms that cannot act on one land area alone. Therefore, global effects need consideration. The Draft suffers from a scarcity of data from parts of the globe less reported, less inhabited or less scientific than the USA, such as the oceans of the southern hemisphere, much of Africa and the Antarctic.
    If there is global warming, the naïve expectation would be that warming is a slow, steady, near-uniform process that leaves no place unwarmed. This is not happening. Here are two examples: one of temperatures from remote Macquarie Island in the vast Southern Ocean, and one from Sydney and Melbourne, which together house nearly half of all Australians. They show places without global warming by the criteria selected here.
    http://www.geoffstuff.com/macquarie_gw.jpg
    http://www.geoffstuff.com/century_days_sydmelb.jpg
    (These examples are meant to be representative of many needing explanation, not inclusive of all objections. Both ask “Where is the global warming?”)

    Warming?
    The Draft depends on warming for many of the climate processes described therein. Warming is a pervasive theme in the Draft, even for cooling: it is invoked to explain more future droughts, more future floods, and how both coexist.
    In the Draft, almost all warming processes are dominated by greenhouse gases, most notably carbon dioxide, CO2. The warming ability of CO2 is defined by the temperature change arising from a change in atmospheric CO2 content.
    From the 5th IPCC report (2014) –
    “The equilibrium climate sensitivity quantifies the response of the climate system to constant radiative forcing on multicentury time scales. It is defined as the change in global mean surface temperature at equilibrium that is caused by a doubling of the atmospheric CO2 concentration. Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence).[16]
    [16] No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.”
    Once errors are properly applied to the published ECS estimates, a value of zero degrees C is not excluded. This means that science has been unable to tell us if a doubling of CO2 will lead to no global temperature change, or a change of up to 5 °C.
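    For scale, the textbook relation between an assumed ECS value S and equilibrium warming for a concentration ratio C/C0 (an illustration, not a calculation from the Draft) is

        \Delta T_{\mathrm{eq}} = S \cdot \frac{\ln(C/C_0)}{\ln 2}

    so the quoted range S = 1.5 °C to 4.5 °C maps a doubling (C/C0 = 2) onto anything from 1.5 °C to 4.5 °C of eventual warming; the factor-of-three spread in S carries straight through to any projection built on it.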
    The whole hypothesis of global warming still awaits proof of validity.
    At the heart of alleged warming is the reconstructed temperature of the past: from proxy studies, from historical records and observations, and from instrumental sources, mainly thermometry on the ground and satellite microwave sounding from orbit. Numerous accusations of data tampering have been made, and not all have been resolved. Therefore, it is scientifically prudent to place little weight yet on historic temperature series, especially those earlier than the satellite era starting about 1980. The satellite series do show some warming in the 21st century, but it is plausibly caused by non-anthropogenic processes such as the redistribution of retained ocean heat (as El Niño events cause).
    Models.
    Most large-scale modelling results have not passed acceptance against standards of adequate scientific rigour. The mismatch between modelled and observed temperatures, summarised in this graph and other versions of it, has caused discussion rather than enlightenment.
    http://www.geoffstuff.com/spencer_mismatch.png
    http://www.drroyspencer.com/2014/02/95-of-climate-models-agree-the-observations-must-be-wrong/

    Accuracy, Precision, Confidence etc.
    From a blog comment here:
    http://www.geoffstuff.com/blog_sorf.docx
    Climate research has long resisted adopting the international standards and procedures that yield properly constructed confidence limits and error bounds. Had they been applied, much published climate research would not have been printed, because it would have been apparent that the results were too uncertain. An example is the energy balance measurements at Top of Atmosphere, where the raw observational data have a very wide spread compared to the significance of their variability. How does one confidently extract a significant 0.1 unit signal from a raw data spread of 15 units? (A sketch of the relevant arithmetic follows the links below.)
    http://www.geoffstuff.com/toa_problem.jpg
    http://www.geoffstuff.com/kandl_toa_paper.pdf
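    One textbook answer is that averaging N independent, unbiased measurements shrinks the random error as 1/sqrt(N). A minimal sketch of that arithmetic, treating the 15-unit spread as a 1-sigma figure (an assumption); the caveat is that shared calibration offsets do not average away:

        import math

        sigma = 15.0   # assumed 1-sigma spread of the raw TOA values (W/m^2)
        signal = 0.1   # size of the signal to be resolved (W/m^2)
        z = 2.0        # rough two-sided 95% criterion

        # Require z * sigma / sqrt(n) <= signal, i.e. n >= (z * sigma / signal)^2
        n_required = math.ceil((z * sigma / signal) ** 2)
        print(n_required)  # 90,000 independent samples needed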
    Another example is the rejection of much data on ocean temperatures after the deployment of the state-of-the-art Argo system. Will the Argo data be rejected in turn when the next new system makes its debut?
    http://onlinelibrary.wiley.com/doi/10.1029/2007JC004517/pdf
    http://www.pas.rochester.edu/~douglass/papers/KD_InPress_final.pdf

    Approved authors.
    The Draft asserts at p. 385 of its “Extreme Storms” chapter:
    “Human activities have contributed substantially to observed ocean–atmosphere variability in the Atlantic Ocean (medium confidence), and these changes have contributed to the observed upward trend in North Atlantic hurricane activity since the 1970s (medium confidence)”.
    This broadly contrasts with climate disaster graphs like this –
    http://www.geoffstuff.com/cli_disasters.jpg
    An author of this graph is Roger Pielke Jr., who is author, co-author or co-editor of seven books, including The Honest Broker: Making Sense of Science in Policy and Politics (Cambridge University Press, 2007), The Climate Fix: What Scientists and Politicians Won’t Tell You About Global Warming (Basic Books, 2010) and The Rightful Place of Science: Disasters and Climate Change (Consortium for Science, Policy & Outcomes, 2014).
    Despite this record and an international reputation in climate damage research, Dr Pielke receives little mention in the Draft (one co-authored paper from 1999). There are many other cases where the Draft ignores authors of prominent, relevant works while citing many lesser-known authors. This pattern invites an accusation of preferential treatment for those who support AGW hypotheses, even of deliberate censorship to suppress publications by authors who do not “sing from the right hymn sheet.” The statement quoted above could be correct only through careful wording and an oblique view of the evidence.

    Taken overall, the Draft should be withdrawn and re-written objectively, with the corrections and improvements indicated above; but most importantly, with a neutral presentation of data and of deductions that often conflict. The USA can only benefit from a proper scientific exposition.

  60. The entire chapter on temperature in the United States is predicated on highly dubious assertions passed off as scientific knowledge of “very high confidence.” It starts with the claim that the regression trend (1896–2016) of contiguous-USA station data is 1 degree Celsius, is followed by the projection of another 1.6 to 4.1 degrees of warming by the late years of the century, and is topped off by the warning that temperature extremes will rise even more drastically.

    Nowhere does one find any acknowledgement of the lack of compensation for UHI effects and land use changes that (along with Karlization) profoundly biased the data base, nor of the uncertainty of making long-range forecasts in the face of strong, but irregular, multidecadal variability that is nowhere near adequately modeled. All in all, Chapter 6 reads as if it were written by bureaucrats whose scientific sophistication doesn’t extend much beyond blind faith in number crunching and the physical preconceptions of social scientists.

  62. Judith Curry,

    In my experience with this kind of complex, multidisciplinary problem, like the recent climate warming, most of the activated people seem to believe sincerely in certain views related to the causation of climate warming, without any evidence of whether they are right or wrong in reality. However, to be right one needs evidence based on pragmatic logic, i.e. on observations in reality relating effects to causes in accordance with natural laws.

    CSSR: ”There is very high confidence that the global climate change of the past 50 years is primarily due to human activities, given the evidence base and remaining uncertainties (IPCC 2013). Recent changes have been consistently attributed in large part to human factors across a very broad range of climate system characteristics.”

    In reviewing any report on climate change, one should be able to scrutinize whether, for example, observed climate warming is caused by natural factors or by the generally claimed anthropogenic CO2 emissions, rather than resting on mere ideological or institutional belief. The Paris agreement, for instance, seems to be based only on the institutional belief of UN politicians, in accordance with the hypothetical results of climate models; its consequences can be forecast to be as disastrous as the CO2 cuts under the Kyoto protocol have already proved to be.

    As to the influence of anthropogenic CO2 emissions from fossil fuels: they pose no threat of global warming. I have presented the evidence needed, e.g. in my earlier comment https://judithcurry.com/2017/02/10/the-new-climate-denial/#comment-838229 :

    “In the same way as water from a small ditch has only a minimal influence on lake flooding compared to all the water entering the lake, the small share of 4% CO2 from fossil fuels in the total CO2 emissions to the atmosphere has only a minimal influence on the total increase of CO2 content in the atmosphere.

    This proves that CO2 from fossil fuels has recently controlled at most only about 4% of the total increase of recent CO2 content in the atmosphere; https://judithcurry.com/2011/08/04/carbon-cycle-questions/#comment-198992. In the same comment I have argued that warming of sea-surface water in the oceans – especially in the areas where the sea-surface CO2 sinks are – dominates the recent increase of CO2 content in the atmosphere.

    Even if we solve for the share of warming caused by the total increase of CO2 content in the atmosphere, it is so minimal that it cannot be found in reality. In my link https://judithcurry.com/2017/07/19/stealth-advocasy-a-survey-of-weathercasters-views-on-climate-change/#comment-854996 I have argued: ‘ – – – trends of increase of CO2 content in the atmosphere follow trends of climate temperature and not vice versa; i.e., according to geological observations during the last 100 million years, in 10-million-year periods, CO2 contents in the atmosphere have followed changes of climate temperature; during glacials and interglacials, trends of CO2 content in the atmosphere have followed trends of temperature changes in climate; and the last nearly two decades show that CO2 content in the atmosphere has been increasing with a lag after climate warming. All this should convince anyone that climate sensitivity – i.e. the warming of climate as CO2 content in the atmosphere doubles – is so minimal that it cannot be distinguished from zero.’ And because, in addition, the share of anthropogenic CO2 emissions in the total increase of CO2 content in the atmosphere is minimal, it is unreasonable to try to mitigate climate warming by cutting man-made CO2 emissions.”

    As the share of anthropogenic CO2 from fossil fuels in the total increase of CO2 content in the atmosphere is minimal, and as, according to observations in reality, the total increase of CO2 content in the atmosphere follows climate warming and not vice versa, increases of CO2 content in the atmosphere are dominated by natural factors.

    • As I see it, the main mistake in the climate model results adopted by the IPCC concerns the parameters, which have been obtained by circular argumentation. For instance, the recent increase of CO2 content in the atmosphere seems wrongly to have been assumed to be entirely anthropogenic CO2 from fossil fuels, bypassing the fact that the CO2 content of the atmosphere is the result of a dynamic balance between all CO2 emissions from sources to the atmosphere and all CO2 absorptions from the atmosphere into CO2 sinks, in which the share of anthropogenic CO2 emissions is minimal compared to the total CO2 emissions to the atmosphere.

      As to the recent global warming of climate, all of its potential causes seem nowadays too complicated to be explained completely. Instead we can concentrate only on establishing, well enough, the role of anthropogenic CO2 emissions in the recent climate warming. The target of this is to get an answer to Judith Curry’s key question: ”how much warming is caused by humans is THE key issue of debate”.

      In my comment above I have tried to state understandably why the share of anthropogenic CO2 emissions from fossil fuels is minimal in the recent increase of CO2 content in the atmosphere: in the same way as water from a small ditch has only a minimal influence on lake flooding compared to all the water entering the lake, the small share of about 4% CO2 from fossil fuels in the recent total CO2 emissions to the atmosphere has had only a minimal influence on the total increase of CO2 content in the atmosphere. In addition, if total CO2 emissions are greater than total CO2 absorptions, the CO2 content of the atmosphere increases; if they are less, it decreases.
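      The dynamic-balance bookkeeping appealed to here can be written in one line. A minimal sketch in Python, with assumed round numbers of roughly the right magnitude (GtC/yr); note that the bookkeeping by itself does not settle how the net increase should be attributed, which is the point in dispute:

        # Atmospheric CO2 changes by (all sources) minus (all sinks).
        natural_emissions = 210.0  # assumed gross natural flux to atmosphere, GtC/yr
        anthro_emissions = 9.0     # assumed fossil-fuel flux, GtC/yr
        total_uptake = 214.0       # assumed gross uptake by all sinks, GtC/yr

        net_change = natural_emissions + anthro_emissions - total_uptake
        print(net_change)  # +5.0 GtC/yr accumulates in the atmosphere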

      My interpretation is that, in general, commentators have difficulties understanding the mechanism by which the CO2 content of the atmosphere changes. For instance, even IPCC scientists – without any evidence – seem to assume that all of the recent increase of CO2 content in the atmosphere is caused by anthropogenic CO2 emissions. But, according to natural laws, by striving for dynamic balance, all CO2 emissions from CO2 sources to the atmosphere and all CO2 absorptions from the atmosphere into CO2 sinks together determine the CO2 content of the atmosphere, and the result above proves that the recent share of anthropogenic CO2 emissions from fossil fuels is, at most, only about 4% of the total increase of CO2 content in the atmosphere.

      The minimal share of anthropogenic CO2 from fossil fuels in the recent total increase of CO2 in the atmosphere already proves that it is no threat of climate warming. In addition, geological observations from the last 100 million years and observations during present decades prove that trends of CO2 content in the atmosphere follow trends of climate temperature, and not vice versa.

  63. Emailed comments from Clyde Spencer:

    Following is my detailed critique of the Executive Summary:

    Executive Summary, p. 9 (13), map on page: Geographic quasi-political regions are a poor choice for analysis. Physiographic or climatological regions should be used, to avoid mixed regions obscuring the effects being looked for.

    Executive Summary, p. 13 (17), lines 9-14: “…2015 surpassed 2014 by a wide margin;…” The statement is misleading. It is not quantitative, and it suggests large percentage or whole-degree temperature changes, when the actual difference between the two years is of questionable precision.

    Executive Summary, p. 13 (17), graph at bottom: the graph does not have error bars or any measure of uncertainty associated with it. The surface temperature map on the right shows an absence of data from the Polar Regions, calling into question the accuracy of the graph on the left.

    Executive Summary, p. 14 (18), Lines 13 & 14: “We find no convincing evidence for natural cycles in the observational record that could explain the observed changes in climate.” Yet there was an increase in temperatures from the end of the Little Ice Age that is not attributed to anthropogenic influences. Whatever was/is responsible for that warming is a “natural cycle” that explains at least part of the recent warming, if not the majority of it.

    Executive Summary, p. 14 (18), graph at bottom: The graph is contradictory in that it shows (red bar) the purported “Human-caused” radiative forcing for the period 1750-2011 CE; however, it had previously been claimed that significant anthropogenic warming occurred between 1951-2010, with emphasis on the period 1986-2016. Why is the period 1750-1951 included in the range? It is explained in more detail in the text below, but so verbose an explanation probably will not be read by the typical reader of the Executive Summary.

    Executive Summary, p. 15 (19), Lines 22-23: “With significant reductions in the emissions…could be limited to 3.6°F (2°C) or less.” The point should be made that these are predictions from models that even the modelers admit are running hot. Further, it is generally acknowledged that the Arctic is warming 2X to 3X as fast as the global average. Therefore, the Arctic changes bias the average and make the situation for the US appear more severe than it will be, assuming the forecasts are accurate.

    Executive Summary, p. 16 (20): Something should be said here about the probabilities of the various RCPs and their assumptions, such as resource availability limiting the most pessimistic (RCP8.5) scenario, and the potential for technology to affect these scenarios.

    Executive Summary, p. 17 (21), Lines 16-21: The role of UHI is addressed, but it isn’t made clear how this relates to consumption of fossil fuels or generation of CO2.

    Executive Summary, p. 19 (23), Lines 5-6: “Some extremes…or of longer duration,…” The word “some” is used, and suggests to me more than a couple, but only temperature and precipitation are provided as examples.

    Executive Summary, p. 20 (24), graph at top: If the average temperature is increasing, one should expect more record highs and fewer record lows. What isn’t stated is by how much the record is broken and whether there is a change in the diurnal range. One could get excited about a lot of red in this graph, but if it is only a change that a human doesn’t perceive, is it important?
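    A toy simulation illustrates the stated expectation (a sketch; the trend and noise magnitudes are arbitrary assumptions, and the diurnal-range question is untouched):

        import numpy as np

        rng = np.random.default_rng(0)
        years = 120
        # assumed +1 degree-per-century trend with 0.5-degree interannual noise
        t = 0.01 * np.arange(years) + rng.normal(0.0, 0.5, years)

        # count running-record highs and lows after the first year
        highs = sum(t[i] > t[:i].max() for i in range(1, years))
        lows = sum(t[i] < t[:i].min() for i in range(1, years))
        print(highs, lows)  # typically prints noticeably more highs than lows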

    Executive Summary, p. 23 (27), lines 27-32: “Both physics and numerical modeling…in the Atlantic…” Yet, these predictions don’t seem to be borne out by observations! Is this an indication that there is something very wrong with the underlying assumptions of the physics and modeling?
    Executive Summary, p. 25 (29), lines 1-2: “…, although confidence is presently low regarding the magnitude of the human contribution relative to natural variability (Ch. 5).” I can’t help but wonder why, when the confidence about the human contribution to the change in the tropics is low, the overall confidence of human impact is much higher. That would only seem to make sense if the influence of the tropics on the overall global climate were negligible.

    Executive Summary, p. 25 (29), Line 11: “Oceans Are…Becoming More Acidic” Logically, something can only become “more” of what it is. If it is acidic, it can become more acidic. If it is alkaline (which the ocean is) then it cannot become more acidic. I know that this has become entrenched in the literature, but we should push back because it misrepresents the situation.

    Executive Summary, p. 26 (30), Line 3: “Ocean heat content has increased at all depths since the 1960s…” How can that be true of the deep ocean when it is isolated from the surface by a distinct thermocline and has a circulation time of about a millennium?

    Executive Summary, p. 26 (30), Lines 12-13: “Human-caused climate change has made a substantial contribution to GMSL rise since 1900 (high confidence)…” The word “substantial” should be replaced with a more quantitative assessment. I haven’t jumped ahead to read Chapter 12. However, if they haven’t taken into account water withdrawn from aquifers for irrigation, and the volume displacement of river sediment, then I don’t think that their “high confidence” is warranted. If you haven’t seen David Middleton’s critique of GMSL (along with other things), you should link to it.

    Executive Summary, p. 28 (32), Lines 3-4: “Almost all U.S. coastlines…in response to Antarctic ice loss,…” Is there a net loss of ice in Antarctica? I was of the opinion that ice was accumulating.

    Executive Summary, p. 28 (32), Lines 8-10: “…,the number of tidal floods each year…have increased 5- to 10-fold…in several U.S. coastal cities…” To what extent can this be attributed to expansion of the cities and building on what was formerly recognized as land at risk?

    Executive Summary, p. 29, Lines 21-22: Again the pejorative term “more acidic” is used. They refer to “potential detrimental impacts” but don’t provide an assessment of the probability.

    Executive Summary, p. 29, Lines 22-25: They claim “Higher-latitude systems typically have a lower buffering capacity against changing acidity…” It isn’t immediately obvious to me why the capacity should be lower. I suspect that they are actually talking about CO2 being more soluble in cold water, thus exerting a greater forcing on lowering pH. [If you haven’t read my article in WUWT, http://wattsupwiththat.com/2015/09/15/are-the-oceans-becoming-more-acidic/ , I would humbly recommend you do so.] In any event, one is not “changing acidity,” but rather changing alkalinity. I think that a more objective term for the complex changes is carbonation of the seawater.

    Executive Summary, p. 29, Lines 26-27: “Under RCP8.5, the global average surface ocean acidity is projected to increase by 100% to 150% (high confidence).” To what are they comparing the so-called acidity? What is the base measurement? Because the water is alkaline, it can’t increase its ‘acidity’ 100%! That is like saying that if a block of ice is at -20°C and the temperature increases to -10°C it has increased its liquidity by 50%. I suspect that they are playing games with the hydrogen ion concentration. However, they should address the expected consequences of a logarithmic change of something that has a range of 15 orders of magnitude.
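    For what it is worth, the arithmetic presumably behind the “100%” figure can be written out. Assuming, for illustration, the commonly cited RCP8.5 surface-ocean projection of a pH drop from about 8.1 to about 7.8 (these values are an assumption, not a quote from the Draft), the hydrogen-ion ratio is

        \frac{[\mathrm{H^+}]_{\mathrm{new}}}{[\mathrm{H^+}]_{\mathrm{old}}} = 10^{\,\mathrm{pH}_{\mathrm{old}} - \mathrm{pH}_{\mathrm{new}}} = 10^{0.3} \approx 2

    i.e. a doubling, or “100% increase,” in [H+], even though the water stays alkaline (pH > 7) throughout. This supports the suspicion that the percentage refers to hydrogen-ion concentration on a logarithmic scale rather than to acidity in any everyday sense.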

    Executive Summary, p. 29, Lines 28-29: The upwelling of low-pH water along the coasts is totally unrelated to anthropogenic CO2! It is not even an issue of climate! It is water that is hundreds of years old that has been enriched in CO2 as a result of oxidation of biogenic material falling through the water column.

    Executive Summary, p. 30, Lines 3-9: It appears that they are indirectly trying to blame ‘dead zones’ resulting from fertilizer-induced algal blooms on anthropogenic CO2. I think that this should be stricken from the text.

    Executive Summary, p. 31, Lines 16-20: They assert that “Human activities have likely contributed to more than half of the observed September sea ice decline since 1979 (high confidence).” They don’t acknowledge that we may be looking at a portion of a long-term cycle, yet confidently assign “high confidence!”

    Executive Summary, p. 33, Line 13: “Human activities are now the dominant cause of observed trends in climate.” They categorically assign responsibility without mentioning that the claim is based on general circulation model (GCM) experiments, models that are known to run hot and have no demonstrated skill in forecasting step increases or a pause in temperature rise.

    Executive Summary, p. 33, Lines 27-29: “…no climate analog for this century any time in the last 50 million years…” Other than the fact that 50 million is a big number that may appear scary, what is so special about a period of time that represents 1% of the history of the Earth?

    Executive Summary, p. 35, Lines 23-33: The specter of a so-called ‘Tipping Point’ is raised. The Earth has been much warmer (PETM) and much colder (multiple Snowball Earth events), strongly suggesting that the Earth has self-correcting systems. Were that not so, we wouldn’t be here! Therefore, the Tipping Point scenario should be presented as a vanishingly-low-probability event, rather than as a finite one on which humans have a linear influence.

    Executive Summary, p. 35, Lines 33-36: They speculate about how humans might initiate or force positive feedback loops. Yet, compared to the Milankovitch Cycles, or Plate Tectonics altering air and ocean circulation, it would seem that the probability of human influence is small.

    Executive Summary, p. 36, Lines 13-18: “…, the systematic tendency…to underestimate temperature changes during warm paleoclimates suggests that climate models are more likely to underestimate than to overestimate the amount of long-term future change…” The underestimation may be because the GCMs are trained on recent historical data, not geologic historical data. The recent trend to run warm suggests exactly the opposite for the future. Indeed the “very high confidence” that “…future changes outside the range projected by climate models cannot be ruled out” must obviously include the possibility that the future could be cooler.

    What I have read here is the typical scare mongering that we have come to expect from those with an advocacy position. A question that hasn’t even been asked, let alone answered, is what would an optimum climate for Earth be? What is the probability that after 4.5 billion years of changes, Earth had an optimum climate just prior to industrialization? Clearly, an optimum climate is not one with half the Northern Hemisphere covered in ice! If the optimum were an Earth warmer than currently, what justification is there for an attempt to prevent Earth from warming? I think that there is an issue of addressing priorities.

    Clyde Spencer

  64. Regarding aerosols, it is stated in the report that:
    Major uncertainties
    «Aerosol–cloud interactions are the largest source of uncertainty in both aerosol and total anthropogenic radiative forcing. These include the microphysical effects of aerosols on clouds and changes in clouds that result from the rapid response to absorption of sunlight by aerosols. This finding, consistent across previous assessments … , is due to poor understanding of how both natural and anthropogenic aerosol emissions have changed and how changing aerosol concentrations and composition affect cloud properties (albedo and lifetime) … . From a theoretical standpoint, aerosol–cloud interactions are complex, and using observations to isolate the effects of aerosols on clouds is complicated by the fact that other factors (for example, the thermodynamic state of the atmosphere) also strongly influence cloud properties. Further, changes in aerosol properties and the atmospheric thermodynamic state are often correlated and interact in non-linear ways ….» – Page 123
    (References are here replaced with «…» to increase the readability)

    Nevertheless it is concluded that:
    Summary sentence or paragraph that integrates the above information
    «… While significant uncertainty remains in the quantification of aerosol ERF, we assess with high confidence that aerosols offset about half of the positive forcing by anthropogenic CO2 and about a third of the forcing by all well-mixed anthropogenic GHGs. The fraction of GHG forcing that is offset by aerosols has been decreasing over recent decades, as aerosol forcing has leveled off while GHG forcing continues to increase.»
    – Page 124

    That is a very significant statement to express with high confidence, and it is clearly remarkable in light of the stated uncertainty related to aerosol forcing. Such a summary statement deserves traceability to its basis. However, the report provides no clearly identified and traceable account for it, even though the stated purpose of a traceable account is to document «the supporting evidence, process, and rationale the authors used in reaching these conclusions», and to provide «traceability to resources and data».

  65. Recalling Einstein’s statement that a single scientist could disprove his work, I came across this:
    https://www.omicsonline.org/open-access/new-insights-on-the-physical-nature-of-the-atmospheric-greenhouse-effect-deduced-from-an-empirical-planetary-temperature-model.pdf
    I see in it a claimed falsification of the greenhouse hypothesis. I am not able to review it properly, but it looks solid to me so far. If it is, the whole CSSR is based on a misunderstanding.

  66. Email from Clyde Spencer:
    Chapter 1. Our Globally Changing Climate

    Chap. 1. Page 38, Paragraph 1., Line 3: They don’t define the word “rapidly.” One of the weaknesses of climatology is an avoidance of quantitative assessments in favor of subjective descriptions.
    Chap. 1. Page 38, Paragraph 2., Lines 10-19: Again, “extreme heat and heavy precipitation” are subjectively described as increasing in “frequency and intensity.” No quantitative values are provided, and whether those alleged increases are significant, statistically or existentially, is left up to the imagination of the reader. That is, “extreme” isn’t defined for either heat or precipitation. They claim that models tend to underestimate the trends for these parameters. Yet, from my reading I’ve been left with the impression that the models are running hot and aren’t able to predict deceleration of warming. My impression of model predictions of precipitation is that the various models were even contradictory as to where increases or decreases were expected. It would seem that their assessments of very high confidence and high confidence are overly optimistic.
    Chap. 1. Page 38, Paragraph 3., Lines 25-32: “…and we find no convincing evidence for natural cycles in the observational record that could explain the observed changes in climate.” One might ask if they have heard of Occam’s Razor and the Little Ice Age. One could argue that the current warming trend might just be a continuation of what started at the end of the Maunder Minimum, albeit with some help from Man. Carl Sagan was fond of saying that extraordinary claims require extraordinary proof. It isn’t enough to just say that “We can’t think of anything other than anthropogenic influences.”
    Chap. 1. Page 38, Paragraph 4., Lines 33-37: They say, “Global climate is projected to continue to change over this century and beyond.” Forecasting is always fraught with difficulties, so (absent any known cyclical influences) extrapolating from the past is the best we can do. So, their statement does not provide any great insight on the issue. They state, with very high confidence, that the “remaining uncertainty” in the sensitivity of warming to “greenhouse (heat-trapping) gases” is one of the primary determinants in predicting the “…magnitude of climate change beyond the next few decades…” They then go on to state with high confidence (on the next page) “Without major reductions in these emissions, the increase in annual average global temperatures relative to preindustrial times could reach 9°F (5°C) or more by the end of this century.” These statements appear contradictory! That is, they are prescribing a solution that they have high confidence in, while they admit, with even higher confidence, that they aren’t sure what the impact will be of climate sensitivity.
    Chap. 1.1, Page 39, Lines 21-22: “Evidence for a changing climate abounds, from the top of the atmosphere to the depths of the oceans.” Few contest that the climate has been changing – that is what climate does! What is at issue is to what extent humans are responsible. For that matter, it isn’t even unequivocally demonstrated that glacier retreat is the result of increasing global temperatures. There are reasons to believe that a reduction in cloudiness, or decreased albedo of snow from particulates settling on the snow, play a larger role than average global temperature changes.
    Chap. 1.1, Page 39, Lines 21-22: “Over the last century, there are no alternative explanations supported by the evidence that are either credible or that can contribute more than marginally to the observed patterns.” Yet, whatever caused the recent Little Ice Age, and the warming afterwards, before CO2 increased significantly, probably is still lurking in the background. Because it hasn’t been identified, the logic being appealed to is not unlike primitives, who don’t understand plate tectonics, explaining volcanic eruptions as the work of gods requiring appeasement. The subsequent listing of things that have been excluded does not mean that the only other working hypothesis – anthropogenic influences – is the only possible working hypothesis.
    Chap. 1.1, Page 40, Lines 3-4: “There are no apparent [identified or generally accepted] natural cycles in the observational record that can explain the recent changes in climate (e.g., PAGES 2K Consortium 2013; Marcott et al. 2013).” Correlations of increasing temperatures with increasing use of fossil fuels is suspicious, but not definitive with respect to causality. One could argue that increasing temperatures and widespread use of fossil fuels have created a world in which the survival rate of humans has improved, thus contributing to further increases in fossil fuel use.
    Chap. 1.1, Page 40, Lines 4-6: “In addition, natural cycles within the Earth’s climate system can only redistribute heat; they cannot be responsible for the observed increase in the overall heat content of the climate system (Church et al. 2011).” This claim ignores the obvious fact that, during the Paleocene-Eocene Thermal Maximum, temperatures were higher than today. It is difficult to assess whether the rise in temperature was as rapid, because proxy measures of history inherently have lower temporal resolution than modern measurements.
    Chap. 1.1, Page 40, Lines 9-13: “Known human activities quite reasonably explain what has happened without the need for other factors. Internal variability and forcing factors other than human activities cannot explain what is happening and there are no suggested factors, even speculative ones, that can explain the timing or magnitude and that would somehow cancel out the role of human factors (Anderson et al. 2012).” The message comes through that, because over the last six decades we have seen modest increases in average global temperatures (<<1°C), the authors believe that only humans seem to be a reasonable explanation.
    Chap. 1.1, Page 40, Lines 21-22: “Climate, on the other hand, is the statistics of weather—meaning not just average values but also the prevalence and intensity of extremes—as observed over a period of decades.” Yet, no analysis is presented about those extremes. Average annual global temperatures are presented as the average of monthly averages. This averaging suppresses extremes – filters them out! Based on published extreme Earth temperatures, it would appear that the standard deviation of global temperatures is several tens of degrees. Yet, government agencies cite average annual anomalies to hundredths and even thousandths of a degree, without citing the uncertainty, which is probably orders of magnitude larger. [ https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/ ]
    Chap. 1.3, Page 41, Lines 21-22: “Global average temperature is not expected to increase smoothly over time in response to the human warming influences, because the warming trend is superimposed on natural variability…” One would expect natural variability to be random and transitory; however, one of the things that is striking about the temperature record is that the changes appear step-like, being flat for a while, then increasing at a slope similar to past increases, then becoming flat again. The GCMs don’t reflect this behavior, begging the question as to whether the models are adequate for their task and should be given as much credence as they often are.
    Chap. 1.3, Page 41, Lines 32-36: “Even so, 16 of the 17 warmest years in the instrumental record…occurred in the period from 2001 to 2016…” If one has any time series where the variable being measured has an increasing trend, one would expect most if not all of the high values to be at the end of the series. This isn’t rocket science, as the saying goes! They are really stating the obvious. They have already made the point that the average global temperature has been increasing for at least decades. In making this statement, one is led to believe that they are seeking a subjective scare effect on the reader.
    Chap. 1.3, Page 42, Line 31: “Regional climate variability is important…” Indeed, more attention should be paid to how the classic climate zones are responding to the climate change. The Arctic, and to a lesser extent, the Antarctic are the only areas that typically get special treatment from the global averages.
    Chap. 1.3, Page 43, Line 20: They are now claiming that a period of 25 years is necessary to characterize a ‘hiatus,’ thus pushing the limits of the definition of climate.
    Chap. 1.3, Page 43, Line 22: In their support of the denial of a hiatus, they are of course citing the work of Karl (2015). However, they neglect to mention that the practice of adjusting superior modern buoy data to agree with inferior ship engine room temperatures is highly contentious! In my mind, it is also inappropriate to conflate sea-surface temperatures (taken at variable depths) with surface air temperatures, usually taken at a fixed height above the ground. Each data set has a story to tell, and conflating them hides information.
    Chap. 1.3, Page 44, Lines 6-12: After basically denying a hiatus, they re-characterize it as a “slowdown,” and implicitly acknowledge that the CMIP3 and CMIP5 ‘projections’ are running warm.
    Chap. 1.3, Page 45, Lines 1-10: They then reinforce the point that the models run warm by noting that only during the anomalous 2015-2016 El Niño event did the model projections agree better with reality.

    • Ah, the sort of distortions I’d expect from a contributor to the denialist website WattsUpWithThat. I’ll highlight a few of Spencer’s errors/distortions.

      Re: “One of the weaknesses of climatology is an avoidance of quantitative assessments in favor of subjective descriptions.”

      Anyone who reads papers in the climate sciences knows they are often filled with quantitative assessments, statistics, etc. So Spencer is being misleading.

      Re: “They claim that models tend to underestimate the trends for these parameters. Yet, from my reading I’ve been left with the impression that the models are running hot and aren’t able to predict deceleration of warming. My impression of model predictions of precipitation is that the various models were even contradictory as to where increases or decreases were expected.”

      Then he should spend less time on denialist blogs, and more time reading the scientific literature. Then, for example, he’d know some of the evidence of models under-estimating dryland expansion and increased aridity:

      “Comparison of dryland climate change in observations and CMIP5 simulations”
      “Accelerated dryland expansion under climate change”

      Of course, the models are doing better in other areas:

      “Observed heavy precipitation increase confirms theory and early models”

      Re: “Yet, whatever caused the recent Little Ice Age, and the warming afterwards, before CO2 increased significantly, probably is still lurking in the background. Because it hasn’t been identified, the logic being appealed to is not unlike primitives, who don’t understand plate tectonics, explaining volcanic eruptions as the work of gods requiring appeasement.”

      Spencer’s reasoning is as fallacious as saying that:
      “You can’t say the defendant killed the victim. After all, people have died of natural causes in the distant past, and you haven’t explained the cause of all those deaths.”

      Even the IPCC derides the shoddy reasoning Spencer uses:

      “These examples illustrate that different climate changes in the past had different causes. The fact that natural factors caused climate changes in the past does not mean that the current climate change is natural. By analogy, the fact that forest fires have long been caused naturally by lightning strikes does not mean that fires cannot also be caused by a careless camper.”
      https://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-6-1.html

      Anyway, we can have evidence of what caused a fire or what caused someone’s death without having to explain every past forest fire or every past death. I don’t know how many times this needs to be explained to folks like Spencer: causal attribution for a particular event does not require explanation of every past event. Just as there can be multiple lines of evidence pointing to a particular cause of a particular fire, or to a particular defendant as the killer of a particular person, multiple lines of evidence point to increased CO2 as being the cause of most of the post-1950s global warming. If Spencer thinks otherwise, then I challenge him to give a non-GHG cause that’s known to cause all of the following things scientists have evidence of:

      1) Tropospheric warming + surface warming
      2) Stratospheric cooling
      3) Mesospheric cooling
      4) Thermospheric cooling
      5) Post-1970s increase in radiation absorption at a wavenumber of ~666 cycles per cm
      6) Ocean acidification, due to ocean uptake of CO2
      7) Decreased atmospheric ratios of C14 and C13 isotopes of CO2
      8) Estimates of climate sensitivity (ex: from paleoclimate data) in combination with the observed CO2 increase seen, resulting in estimates where most of the post-1950s global warming is caused by CO2

      Re: “Correlations of increasing temperatures with increasing use of fossil fuels is suspicious, but not definitive with respect to causality.”

      No need for us to pretend that the causal case is just based on correlation; that’s a tired myth many denialists keep offering. Instead, the causal case is a cumulative one, like:

      1) correlation +
      2) well-evidenced mechanism +
      3) primacy, where the proposed cause occurs before the effect +
      4) robustness of the correlation under multiple tests/conditions +
      5) experimental evidence that adding the cause subsequently results in the effect +
      6) exclusion of other likely causes +
      7) specificity, where the effect has hallmarks of the cause (ex: the observed combination of tropospheric warming and stratospheric cooling is a hallmark of greenhouse-gas-induced warming, not of warming from solar forcing)
      + ….
      = causation.

      You can also apply Mill’s methods (for causal inference) to show that CO2 causes warming, or you can use some of the Bradford Hill criteria (which I did above), or you can use statistical tests from Granger causality, or… Take the following papers, for example:

      “On the causal structure between CO2 and global temperature”
      “A contribution to attribution of recent global warming by out-of-sample Granger causality analysis”
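      For readers unfamiliar with the Granger-style tests such papers employ, here is a minimal sketch on synthetic data, using statsmodels (illustrative only; real attribution studies work with observed series and careful preprocessing):

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(1)
        n = 500
        x = rng.normal(size=n)  # stand-in "driver" series
        y = np.zeros(n)
        for t in range(1, n):
            # y responds to lagged x plus noise, so x should Granger-cause y
            y[t] = 0.6 * x[t - 1] + rng.normal(scale=0.5)

        # Tests whether the SECOND column helps predict the FIRST column.
        grangercausalitytests(np.column_stack([y, x]), maxlag=2)
        # Small p-values on the reported F-tests indicate x Granger-causes y.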

      Re: “we have seen modest increases in average global temperatures (<<1°C)"

      The change is not modest, just because Spencer claims it is. He needs to provide evidence that the effects of such warming are modest. Otherwise, he's akin to a person who says that a drop in systemic blood pH from 7.40 to 7.05 is "modest" just because such a drop seems small to him (by the way, such a drop would be disastrous metabolic acidosis).

      Re: "Yet, government agencies cite average annual anomalies to hundredths and even thousandths of a degree, without citing the uncertainty, which is probably orders of magnitude larger."

      How about looking up peer-reviewed research on the uncertainty, instead of just making up claims regarding orders of magnitude? If he did, then he'd know the amount of warming is greater than the uncertainty. For example, see:

      "Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set"
      "Further Exploring and Quantifying Uncertainties for Extended Reconstructed Sea Surface Temperature (ERSST) Version 4 (v4)"
      "A review of uncertainty in in situ measurements and data sets of sea surface temperature"
      "The Reliability of Global and Hemispheric Surface Temperature Records"
      "How accurately do we know the temperature of the surface of the earth?"

      Re: "In their support of the denial of a hiatus, they are of course citing the work of Karl (2015). However, they neglect to mention that the practice of adjusting superior modern buoy data to agree with inferior ship engine room temperatures is highly contentious!"

      Another denialist distortion: whether you adjust the ship data or the buoy data, THAT HAS NO EFFECT ON THE OVERALL TEMPERATURE ANOMALY. I don't know how many times this needs to be explained to denialists before it finally sinks in. Anyway, Hausfather et al. already did a solid job of validating the corrections done in Karl et al. 2015. For the curious, the first paper below is Karl et al. 2015, the second paper goes over some of the relevant corrections made by Karl et al. for the Extended Reconstructed Sea Surface Temperature version 4 (ERSSTv4) record, and the third paper is Hausfather et al.'s validation of the corrections:

      "Possible artifacts of data biases in the recent global surface warming hiatus"
      "Extended Reconstructed Sea Surface Temperature Version 4 (ERSST.v4). Part I: Upgrades and intercomparisons"
      "Assessing recent warming using instrumentally homogeneous sea surface temperature records" (figures 1 and S2)

      And here's a run-down of some of the ludicrous, paranoid conspiracy theories denialists have been making about Karl et al.'s corrections:

      "How a culture clash at NOAA led to a flap over a high-profile warming pause study"
      potholer54's Youtube video: "NOAA vs Mail on Sunday — FACT CHECK"
      "Politicians shouldn't troll through scientists' e-mails"
      "How an interoffice spat erupted into a climate-change furor"

      Dealing with the rest of the errors/distortions in Spencer's piece would take too long. Suffice to say, I don't have much confidence in what he has to say; same point applies to most of the other contributors to WattsUpWithThat.

      • You can also apply Mill’s methods (for causal inference) to show that CO2 causes warming, or you can use some of the Bradford Hill criteria (which I did above), or you can use statistical tests from Granger causality, or…

        Alas, none of the ad hoc methods cited here can accomplish what well-established cross-spectrum analysis can insofar as providing an incisive basis for examining possible causal connections in complex systems. Wiener-Hopf theory tells us that the optimal transfer function between system input and output is given by the cross-spectrum.

        Causality in any physically rigorous sense can exist only if, for any chosen frequency band, the coherence is significant and the cross-spectral phase shows input leading the output. The bad news for AGW true believers is that no such result can be obtained with the iconic Mauna Loa CO2 series and realistic, unadulterated estimates of GAST. Throughout the entire baseband, either the coherence is insignificant, or the phase shows CO2 lagging temperature.
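        A minimal sketch of the coherence-and-phase calculation being described, using scipy on synthetic series (illustrative only; it assumes a simple lag and does not adjudicate the CO2/temperature question):

          import numpy as np
          from scipy import signal

          rng = np.random.default_rng(2)
          n = 4096
          x = rng.normal(size=n)
          y = np.roll(x, 5) + 0.3 * rng.normal(size=n)  # y is x delayed 5 samples, plus noise

          f, coh = signal.coherence(x, y, fs=1.0, nperseg=256)  # magnitude-squared coherence
          _, pxy = signal.csd(x, y, fs=1.0, nperseg=256)        # complex cross-spectral density
          phase = np.angle(pxy)                                 # cross-spectral phase per band

          # The criterion described above: significant coherence in a band, with the
          # phase sign showing the putative input leading the output in that band.
          print(coh.mean(), phase[:5])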

        Sic transit gloria in the church of “climate science”!

  67. Dr Curry,

    Thanks for putting up the emails from Clyde Spencer.

  68. This wouldn’t be scientific peer review, obviously. No one who understands peer review thinks it’s at the level of having a bunch of bloggers offering uninformed criticisms of a field they don’t understand, for largely politically-motivated reasons. So I won’t be doing scientific peer review. Instead, I’ll be making a comment as a layperson who admittedly doesn’t have the expertise of the people who wrote this report.

    They didn’t mention cooling of the mesosphere and thermosphere, even though these would be lines of evidence supporting their claim that increased GHG levels caused much of the recent warming:

    “A model estimate of cooling in the mesosphere and lower thermosphere due to the CO2 increase over the last 3–4 decades”
    “Evidence of CO2-induced progressive cooling of the middle atmosphere derived from radio observations”
    “Ozone and temperature decadal trends in the stratosphere, mesosphere and lower thermosphere, based on measurements from SABER on TIMED”
    “Why CO2 cools the middle atmosphere-a consolidating model perspective”
    “Effect of trends of middle atmosphere gases on the mesosphere and thermosphere”
    “How will changes in carbon dioxide and methane modify the mean structure of the mesosphere and thermosphere?”
    “Temperature trends in the midlatitude summer mesosphere”
    “Role of carbon dioxide in cooling planetary thermospheres”

  69. «In addition, natural cycles within the Earth’s climate system can only redistribute heat; they cannot be responsible for the observed increase in the overall heat content of the climate system (Church et al. 2011).»
    Page 40

    It is a bold statement that natural cycles can only redistribute heat. Is there any proven reason why random variations in clouds cannot cause random and natural variations in the overall heat content?

    It is also peculiar that the Church et al. 2011 paper (Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008) does not seem to state what is attributed to it in the report. The term «cycl» is not found; «redistr» is found only once, in another context; «natur» is not found in a relevant context. Hence, the statement does not seem to be substantiated by the stated reference. But given the poor citation practice used in the report, it is hard to say for sure.

    Anyhow, the statement that “natural cycles can only redistribute heat” seems to be unsubstantiated.

  70. «The global climate continues to change rapidly compared to the pace of the natural variations in climate that have occurred throughout Earth’s history.»
    – Page 58

    That statement gives the impression that the pace of natural variations throughout Earth’s history is known. That seems to be unsupported, as the pace of natural variations can hardly be determined accurately for past periods comparable in length to the time since humans really started to emit CO2 in the 1950s, and certainly not throughout all of Earth’s history.

  71. “Bigger problem might even be that a GCM year is 360 days”

    Now that is a can of worms.
    In the simplified world that climate models represent the Gregorian calendar of 365/366 days is not always used. For historic reasons some GCMs have been set up to have a ‘simpler’ calendar. Some models omit the leap day and use a calendar of 365 days. And a few models use a 360 day calendar in which each month is assumed to be 30 days.

    So much for yearly energy balances, then.
    Do they take the whole year’s 365 days of heat and stuff it into 360 days?
    No wonder the world is heating up.

    Does this matter to the assessments that have just been done?
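    As a rough illustration of the bookkeeping at stake (a sketch with an assumed constant flux; actual GCMs conserve energy per time step, so a shorter model year changes the number of steps, not the physics):

        SECONDS_PER_DAY = 86_400
        flux = 240.0  # assumed global-mean absorbed flux, W/m^2

        # Energy per square metre integrated over one "year" under each calendar
        for days in (360, 365, 365.25):
            print(days, flux * days * SECONDS_PER_DAY)

        # A 360-day year integrates 360/365 ~ 98.6% of the energy of a 365-day
        # year, which matters only if totals are (incorrectly) compared per
        # "year" rather than per second.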

  72. «4.3.4. Averaging, Weighting, and Selection of Global Models

    «…The results of individual climate model simulations using the same inputs can differ from each other over shorter time scales ranging from several years to several decades …. These differences are the result of normal, natural variability, as well as the various ways models characterize various small-scale processes… For this reason, multimodel simulations are generally averaged to remove the effects of randomly occurring natural variations from long-term trends and make it easier to discern the impact of external drivers, both human and natural, on the Earth’s climate. There is growing recognition of the value of large ensembles of climate model simulations in addressing uncertainty in both natural variability and scientific modeling …» – Page 199

    It is a misuse of terminology to refer to differences in model outputs as natural variability. Model outputs should not be confused with measurements of nature. The term natural variability should be reserved for variations observed in nature, not for variation or differences in the outputs of models. The quoted statements may be perceived as an indication that the authors do not clearly distinguish model output from nature.
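    Mechanically, the averaging the report describes is simple; a minimal sketch with synthetic “members” (all numbers assumed) shows the ensemble mean suppressing member-to-member spread while retaining a common forced trend:

        import numpy as np

        rng = np.random.default_rng(3)
        years, members = 100, 40
        forced = 0.02 * np.arange(years)                  # assumed common forced trend, deg C
        spread = rng.normal(0.0, 0.15, (members, years))  # assumed per-member variability
        ensemble = forced + spread

        mean = ensemble.mean(axis=0)  # member spread shrinks roughly as 1/sqrt(members)
        print(np.abs(mean - forced).max())

    Whether that member-to-member spread deserves the name “natural variability” is exactly the terminological point raised above.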

  73. Regarding natural variations and human influence.

    Within the science of measurement, the term «random» is used for: «errors in prediction that presumably arises from unpredictable or stochastic temporal and spatial variations of influence quantities. The effects of such variations, hereafter termed random effects, give rise to variations in repeated observations of the measurand.» ref: Guide to the expression of uncertainty in measurement

    The term random is also used for unpredictable or stochastic temporal and spatial variations in a measurand. The essential characteristic of the term random is that random variations cannot be predicted or quantitatively related to influence quantities in a reliable manner.

    Logically, all kinds of variations in climate observations can be divided into three categories:
    1 Variations that can be quantitatively related to natural influence quantities
    2 Variations that can be quantitatively related to human influence quantities
    3 Unpredictable or stochastic temporal and spatial variations that cannot be predicted or quantitatively related to influence quantities in a reliable manner.

    (Temporal in this context may, presumably, be variations over years, decades, or even centuries.)

    It is remarkable that category 3, «unpredictable or stochastic temporal and spatial variations», is not properly identified and acknowledged in the report, as it is a logically valid category that is distinct from categories 1 and 2.

    The omission is made explicit by the following statement: «1.1 Introduction
    .. Any explanations for the observed changes in climate must be grounded in understood physical mechanisms, appropriate in scale, and consistent in timing and direction with the long-term observed trends.»

    It is even explicitly stated that «1.1 Introduction … We find no convincing evidence that natural variability can account for the amount of global warming observed over the industrial era. Solar flux variations over the last six decades have been too small to explain the observed changes in climate … natural cycles within the Earth’s climate system can only redistribute heat; they cannot be responsible for the observed increase in the overall heat content of the climate system … »

    As if all variation had to be cyclic, and as if clouds, albedo from ice extent, and other influence quantities could not vary independently of CO2 and increase or decrease the overall heat content of the climate system.

    The tendency to attribute anything that is not understood to human activity is also evident in the following statements: «1.1 Introduction … Known human activities quite reasonably explain what has happened without the need for other factors. Internal variability and forcing factors other than human activities cannot explain what is happening and there are no suggested factors, even speculative ones, that can explain the timing or magnitude and that would somehow cancel out the role of human factors …»

    Well, one human activity identified as a factor that «quite reasonably explain[s] what has happened» seems to be anthropogenic aerosols. Anthropogenic aerosols are, in this paper at least, used as a fudge factor to explain the difference between predicted and observed warming: «The aerosol forcing, inferred as a residual in the atmospheric energy balance, is estimated as −0.8 ± 0.4 W m−2 for the 1980s and early 1990s. It increases in the late 1990s, as is required for consistency with little surface warming over the last decade. This increase is likely at least partially related to substantial increases in aerosol emissions from developing nations and moderate volcanic activity.»
    Ref: «Revisiting the Earth’s sea‐level and energy budgets from 1961 to 2008» John A. Church et. al. 2011.

    It is also remarkable that the only two natural influence quantities identified in the report are solar irradiance and volcanoes. (Ref. section 2.3.1, Natural Drivers, which has only two subsections: solar irradiance and volcanoes.) Hence, all other influence quantities, known and unknown, such as clouds and albedo from ice extent, are explicitly or implicitly treated as feedbacks on human activities.

    It is clear from the report that the effect of CO2 – and the measurements and effects of aerosols, water vapor, and clouds – are significant and highly uncertain. (Ref: easily searchable; ask me for help if preferred.) It seems as if all variations are, by default, attributed to human activities, without acknowledging that there exists a category of unpredictable or stochastic temporal and spatial variations that cannot be predicted or quantitatively related to influence quantities in a reliable manner. Anyhow, it is not reasonable to attribute all variations that are not understood, or cannot be explained, to human activities. These variations should be attributed to the category of random variations – not to the anthropogenic category. After all, that is also our main legal principle – innocent until proven guilty – learned the hard way.

    The biased, if I may say so, attribution to human-related activities is further exemplified by:
    «1.9. Recent Global Changes Relative to Paleoclimates … Before the emissions of greenhouse gases from fossil fuels and other human-related activities became a major factor over the last few centuries, the strongest drivers of climate during the last few thousand years had been volcanoes and land-use change (which has both albedo and greenhouse gas emissions effects)»

    Regarding Volcanoes it is also stated that:
    «2.3.1 Natural Drivers; Volcanoes; .. Most volcanic eruptions are minor events with the effects of emissions confined to the troposphere and only lasting for weeks to months. In contrast, explosive volcanic eruptions inject substantial amounts of sulfur dioxide (SO2) and ash into the stratosphere, which leads to significant short-term climate effects»

    As volcanoes have only short-term effects, lasting years or maybe decades, humans are effectively blamed, by the statement in 1.9, for being the strongest driver of climate during the last few thousand years – including the Roman Warm Period, the Medieval Warm Period and the Little Ice Age. If those climate periods still exist in human records.

    To make a long story short: I can be blamed for much, but I will not tolerate being blamed for unpredictable or stochastic temporal and spatial variations that cannot be predicted or quantitatively related to influence quantities in a reliable manner.

  74. I dropped out of following climate.etc for a couple of years. Today I tuned back in for the fun of it. The same pseudonyms are making the same claims as two years ago. I suspect that not much will change two years from now.

  75. Geoff Sherrington

    SorF,
    Thank you for your clear expositions here.
Among the known unknowns that could influence GAST as measured are variations in the heat flux from the lower crust through interfaces such as the ocean floor. There are inadequate measurement data to prove or disprove such a mechanism’s contribution to temperature variability, or its magnitude.
My old mate Mosher calls such effects unicorns. I call them possible mechanisms. As a geochemist, I had to be involved with geothermal depth profiles, and we were on the lookout for unusually high gradients on land (deep mines get hot and need expensive air conditioning). There is a lot more measurement and deduction to be done in the years ahead before this known effect, and its absolute variability over time, is adequately measured.
It is one of Mosher’s unicorns only in the sense that unicorns, like Santa’s reindeer, are used in imaginary stories to assist children’s understanding. Geoff.

  76. Submitted by Clyde Spencer:

    CSSR 5OD: FINAL CLEARANCE
Chap. 1.7, Page 52, Lines 17-19; “Observed rapid mass loss from West Antarctica is attributed to increased glacial discharge rates due to diminishing ice shelves from the surrounding ocean becoming warmer (Jenkins et al. 2010; Feldmann and Levermann 2015).” This conclusion is subject to question. Newton’s first law says that a body in motion will tend to stay in motion unless acted on by some outside force. The friction encountered by grounded ice is much higher than the friction experienced by floating ice. While floating ice potentially offers momentum to resist acceleration, it is already moving as fast as the grounded ice and thus offers no resistance attributable to momentum. Additionally, if the floating ice were acting to retard the forward motion of the grounded ice, one would expect fissures to be closed. Instead, what is observed is that fissures form, widen, and propagate, suggesting that they are not acting as a buttress against the grounded ice. Other explanations must be sought for the increased glacial discharge rates.
    Chap. 1.8, Page 53, Lines 23-24; While the authors acknowledge, “Local sea level rise is likely to be greater than the global average along the U.S. Atlantic and Gulf Coasts and less than the global average in most of the Pacific Northwest.”, they don’t point out that this is because the effect is not solely from an expanding ocean, but also from land changing elevation. The implication is that even if the oceans stopped expanding their volumes, apparent sea level rise would continue in those regions experiencing subsidence!
Chap. 1.9, Pages 53&54, Lines 23-24, 1-2; “Before the emissions of greenhouse gases from fossil fuels and other human-related activities became a major factor over the last few centuries, the strongest drivers of climate during the last few thousand years had been volcanoes and land-use change…“ Inasmuch as the impact of volcanoes typically lasts only a couple of years, and eruptions are relatively infrequent, they should probably be considered instruments of weather change, not climate, as climate is the average of 30 years of weather.
Chap. 1.9, Page 54, Lines 15-22; A comparison between last-interglacial (Eemian) temperatures and sea level is made. The question that is begging to be answered is why, for comparable temperatures, sea level was then much higher than it is currently.
    Chap. 1, Page 58, lines 18-20; “A substantial body of analysis comparing the observed changes to a broad range of climate simulations consistently points to the necessity of invoking human-caused changes to adequately explain the observed climate system behavior.” This reads as though, in the absence of known cause for anomalous behavior, it is assigned to anthropogenic influence routinely.
    Chap. 1, Traceable Accounts, Key Findings, Pages 58-63; All the Key Findings have under “Major Uncertainties,” a statement to the effect, “Key remaining uncertainties relate to the precise magnitude and nature of changes at global, and particularly regional, scales, and especially for extreme events and our ability to simulate and attribute such changes using climate models.” Yet, terms such as “very high confidence” are used commonly for the assessment about the statements and claims. One would think that the disclaimer about the uncertainties would preclude such high confidence!
    Chap. 1, Figures, Pages 65-73, the graphs, as is usual, are without error bars or any statement regarding the uncertainty of the values. Only the graph for the multimodel ensemble for temperature change has an associated two standard deviation range shown.
Chap. 2.1, Page 99, Lines 14-16; “…keep the near-surface air temperature about 60°F (33°C) warmer than it would be in their absence.” I believe this is wrong. It is my recollection that the average annual global temperature is about 60°F, suggesting that in the absence of water vapor and CO2 (conditions approximated in the Sahara) the average temperature would be 0°F.
    Chap. 2.3, Page 101, Lines 28-30; “The only significant natural climate drivers in the industrial era are changes in solar irradiance, volcanic eruptions, and the El Niño–Southern Oscillation.” Changes in cloudiness and hence cloud albedo should be included here! There are undoubtedly transitory changes, not unlike the infrequent injection of volcanic ash. Can coronal mass ejections supply significant energy increases to the atmosphere?
    Chap. 2.3, Page 104, Lines 21-23; They remark about seasonal variations in photosynthesis; they neglect to mention the production of biogenic CO2 as a result of microbial breakdown of plant mass, particularly in the Spring, before trees leaf out. Furthermore, they notably don’t acknowledge the outgassing of CO2 in the Tropics, shown quite clearly by the OCO-2 mapping satellite. They also don’t speak to what is surely an increase in that outgassing as the oceans warm.
Chap. 2.5, Page 110, Lines 18-23; “A metric for the former, which approximates near-term climate change from a GHG forcing, is the transient climate response (TCR), defined as the change in global mean surface temperature when the atmospheric CO2 concentration has doubled in a scenario of concentration increasing at 1% per year. The latter is given by the equilibrium climate sensitivity (ECS), defined as the change at equilibrium in annual and global mean surface temperature following a doubling of the atmospheric CO2 concentration …” The concept of climate sensitivity to CO2 doubling is introduced, but there is no mention of the natural restrictions on the effects. That is, a point will be reached – saturation – where no amount of emitted IR can penetrate the atmosphere, and further additions of CO2 will have no effect! It has been suggested that the logarithmic relationship does not properly model the effect of CO2 on temperature changes.
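For reference, the simplified fit commonly used for CO2 forcing (Myhre et al. 1998) – which the report does not restate here – is ΔF = 5.35·ln(C/C0) W/m2. A minimal sketch, in Python and for illustration only, of what “logarithmic” implies (diminishing returns per ppm, but no hard cutoff in the formula itself):

    import math

    # Simplified CO2 radiative forcing fit (Myhre et al. 1998); illustrative only.
    def co2_forcing(c_ppm, c0_ppm=278.0):
        return 5.35 * math.log(c_ppm / c0_ppm)  # W/m^2

    for c in (278, 556, 1112):  # pre-industrial, 2x, 4x
        print(c, "ppm:", round(co2_forcing(c), 2), "W/m^2")  # each doubling adds ~3.7

Whether that logarithmic form, or the suggested saturation, properly models the effect is exactly the question raised above.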
    Chap. 2.6, Page 112, Lines 5-7; “…there is generally very low confidence in our understanding of the qualitative and quantitative forcing–response relationships at the regional scale.” This is a very important statement! Inasmuch as the global state is an aggregate of the regional states, it is basically saying that, globally, forcing is not well understood.
Chap. 2.6, Page 113, Lines 4-29; Cloud feedbacks are discussed. From the discussion, I’m left with the impression that the generalizations are for sunlight near the zenith. But it should be obvious that the effects will differ with the time of day. That is to say, light hitting a cumulus cloud from the side (at a low sun angle) will scatter light towards the ground and back through the atmosphere rather than off into space. My impression is that the generalizations are not very sophisticated.
    Chap. 2.6, Page 113, Lines 30-36; This section talks about “Snow, Ice, and Surface Albedo.” Albedo is an acceptable term for diffuse reflectors such as clouds and snow, but is a poor choice to account for the behavior of specular reflectors and diffuse reflectors with a BRDF with a strong forward lobe that varies with the angle of incidence.
    Chap. 2.6, Page 114, Lines 3-8; This section seems to be relating to the supposed buttressing effect of floating ice shelves in retarding the forward motion of the ice sheets. This is rarely of significant concern in Greenland where the outlets are valley glaciers and represent a small fraction of the total ice. In Antarctica, the ice sheets are more classically the wide continental type of glacier. However, the supposed buttressing effect is exaggerated, as I’ve previously remarked, and appears to be more speculation than actual fact.
Chap. 2.6, Page 114, Lines 9-14; This section offers qualitative speculation on how other effects might impact the whole system. However, it is devoid of any quantitative assessment. No assessment of uncertainties is provided for this particular section or the one immediately preceding.
Chap. 2.6, Page 114, Lines 28-32; This section talks about how changes in ice cover may impact cloudiness. This is an important area that deserves more discussion. The Arctic is notorious for being cloudy, apparently leading the Viking explorers to invent a means of detecting the position of the sun in the sky through polarization. Most analyses of the supposed warming impact of reduced ice cover appear to ignore the cloudiness, the lack of sunlight for 4 to 6 months of the year, and the fact that open water can have a reflectivity exceeding that of snow at grazing sun angles. [https://wattsupwiththat.com/2016/09/12/why-albedo-is-the-wrong-measure-of-reflectivity-for-modeling-climate/]
Chap. 2.6, Page 115, Lines 1-20; The authors present a qualitative summary of the various potential climate feedbacks that can impact the atmospheric composition. They then notably remark “While understanding of key processes is improving, atmospheric composition feedbacks are absent or limited in many global climate modeling studies used to project future climate, though this is rapidly changing (ACC-MIP 2017). For some composition–climate feedbacks involving shorter-lived constituents, the net effects may be near-zero at the global scale while significant at local to regional scales (e.g. Raes et al. 2010; Han et al. 2013).” This is another of the unspecified uncertainties inherent in GCMs, one that might be as significant as the spatial-resolution and cloud issues.
    Chap. 2.6, Page 115, Lines 24-26; “The global mean atmospheric CO2 concentration is determined by emissions from burning fossil fuels, wildfires, and permafrost thaw balanced against CO2 uptake by the oceans and terrestrial biosphere…” What is left out is the production of cement, fermentation processes (leavened bread, alcoholic beverages, and biofuels) and chemical industries that produce CO2 as a byproduct, and biogenic production of CO2 from bacteria decomposing the cellulosic products of annual vegetation (such as tree leaves).
Chap. 2.6, Page 115, Lines 35-37; Notably, it is stated, “Significant uncertainty exists in quantification of carbon-cycle feedbacks. Differences in the assumed characteristics of the land carbon-cycle processes are the primary cause of the inter-model spread in modeling the present-day carbon cycle and a leading source of uncertainty.” Once again, their own statements seem to undermine the justification for the high certainty of most of their claims resulting from modeling!
Chap. 2.6, Page 119, Lines 21-24; “In the coming few decades, enhanced plant growth at high latitudes and its associated CO2 sink (Friedlingstein et al. 2006) are expected to partially offset the increased emissions from permafrost thaw (Schaefer et al. 2014; Schuur et al. 2015); thereafter, decomposition will dominate uptake.” No citation or justification is given for the final claim, that “thereafter, decomposition will dominate uptake.”
    Chap. 2.6, Page 120, Lines 15-16; “Satellite data are used directly to observe changes in Earth’s outgoing visible and infrared radiation.” However, light not returning in the general direction of the sun or normal to the surface of the Earth is not quantitatively characterized well because most satellites are nadir (downward) viewing. [ https://wattsupwiththat.com/2016/09/12/why-albedo-is-the-wrong-measure-of-reflectivity-for-modeling-climate/ ]
    Chap. 2.6, Page 121, Lines 3-6; “The attribution of climate change to human activities is supported by climate models, which are able to reproduce observed temperature trends when RF from human activities is included, and considerably deviate from observed trends when only natural forcings are included…” The unstated assumption here is that the supposed anthropogenic forcing is not a proxy for something(s) not yet identified, or improperly characterized mathematically.
    Chap. 2.6, Page 123, Line 3; “…global-mean aerosol RF has become more less negative…” Poor wording!
    Chap. 2.6, Page 123, Lines 22-38; Despite saying “Aerosol–cloud interactions are the largest source of uncertainty in both aerosol and total anthropogenic radiative forcing…. This finding, consistent across previous assessments…is due to poor understanding of how both natural and anthropogenic aerosol emissions have changed and how changing aerosol concentrations and composition affect cloud properties…”, they then confidently state, “There is very high confidence that aerosol radiative forcing is negative on a global, annually averaged basis,…”
    Chap. 2.6, Page 125, Lines 20-38; Perhaps the reason that this tome runs to over 600 pages is because they frequently re-state what they have previously stated!
    Chap. 2.6, Page 127, Table 2.1; The total anthropogenic radiative forcing is not quantified, apparently because some of the components are not quantified. It would probably be best to at least put bounds on the total, and/or take some guesses at the values of the individual components and assign uncertainties to reflect the confidence in the guesses.
Chap. 2.6, Page 132, Figure 2.5; The top graph purportedly shows “All Anthropogenic Gases.” However, water vapor is apparently not included. To the extent that groundwater and reservoir water, which would not otherwise be present, are supplied to arid and semiarid regions (for agriculture and urban needs), that water is anthropogenic. While it may only persist downwind for a few days, it is supplied continuously and has therefore altered the atmosphere regionally.
    Chap. 2.6, Page 134, Figure 2.7; What appears to be missing from this illustration is increased outgassing from the tropical oceans as a result of increasing temperatures, and the impact of burning to clear land.
    [ http://wattsupwiththat.com/2015/05/05/anthropogenic-global-warming-and-its-causes/ ]
Chap. 3, Page 160, Lines 6-10; “It is extremely likely that more than half of the global mean temperature increase since 1951 was caused by human influence on climate (high confidence). The likely contributions of natural forcing and internal variability to global temperature change over that period are minor (high confidence).” The use of the word “minor” is not justified; it is either careless or disingenuous. Instead, they should say “…less than half.”
    Chap. 3.2, Page 161, Lines 35-36; “This implies that the observed global warming is attributable in large part to anthropogenic forcing.” Or, alternatively, to something that is highly correlated with the presumed anthropogenic forcing. In any event, what is really being said is that the models do not behave in conformance with the empirical temperatures and a correction (‘fudge factor’) is needed. Have other possibilities been explored, such as the parameterization factors (additive constants, multipliers)? Perhaps an assumption that needs to be looked at carefully is whether temperature can be expected to increase linearly with the logarithm of the CO2 concentration.
Chap. 3.2, Page 162, Lines 4-25; The historical summary of the IPCC assessments concludes with: “Importantly, the continued long-term warming of the global climate system since the time of the first IPCC report [1990] … give more confidence in the attribution of observed warming since 1951 as being due primarily to human activity.” Unfortunately, between 1998 and at least 2015, the warming stalled and was neither predicted by the extant models nor explained. Additionally, the warming apparently stalled between about 1940 and 1970. Yet from 1958 to today, CO2 has been increasing smoothly (apart from seasonal variations). Their claim is not strongly supported by the data.
    Chap. 3.2, Page 163, Lines 14-15; “Stern and Kaufmann (2014) use another method…and again infer that ‘human activity is partially responsible for the observed rise in global temperature…’” This also is not a strong endorsement of the claim [above] “…observed warming since 1951 as being due primarily to human activity.” I doubt that there are very many who would argue with the claim by Stern and Kaufmann (2014). However, “primarily” would seem to me to be ‘a thermometer too far!’
    Chap. 3.2, Page 163, Lines 32-37; “Thus, using present models there is no known source of internal climate variability that can reproduce the observed warming over the past century without including strong positive forcing from anthropogenic greenhouse gas emissions (Figure 3.1). The modeled century-scale trend due to natural forcings (solar and volcanic) is also minor (Figure 3.1), so that, using present models, there is no known source of natural variability that can reproduce the observed global warming over the past century.” Are they repeating themselves here? A question that should be asked is, “Had there been no anthropogenic influence on temperatures, when could we have expected the warming following the LIA to have stopped, and what would have been responsible?” If the LIA can be modeled reliably, then the difference between what might have been and what is, is the attribution being sought! If the LIA cause and recovery cannot be modeled well, then any attribution to anthropogenic forcing is unsupported speculation!
Chap. 3.2, Page 165, Lines 1-4; “…, Richardson et al. conclude that the earlier studies noted above may underestimate TCR, because the surface temperature data set they used undersamples rapidly warming regions due to limited coverage and because surface water warms less than surface air.” This speculation could have been, and should have been, tested, because the specific heats of water and air are quite well known.
    Chap. 3.2, Page 165, Lines 4-6; “Gregory et al. (2015) note, within CMIP5 models, that the TCR to the second doubling of CO2 (that is, from doubling to quadrupling) is 40% higher than that for the first doubling.” I find this result to be surprising. I would expect exactly the opposite. This should be examined in detail to see if the claim can be substantiated.
    Chap. 3.3, Page 165, Lines 16-18; “Other forcings that have typically received less attention than greenhouse gases, such as land-use change, could be more important at regional scales than globally …” This strikes me as being illogical because the global result is the sum of the regional effects.
Chap. 3.3, Page 165, Lines 16-18; “…attributable human contributions were reported for warming over all continents except Antarctica.” Antarctica differs from the lower-latitude continents in lacking trees and rivers, among other things. It may be presumptuous to claim that humans are the key to Antarctica being different.
Chap. 3.3, Page 166, Lines 4-7; “Detectable anthropogenic warming since 1901 has occurred over the western and northern regions of the contiguous United States according to observations and CMIP5 models, although over the southeastern United States there has been no detectable warming trend since 1901.” I have seen a map from an unidentified dissertation that shows how the temperatures for the physiographic regions of California have changed over the last century. There appears to be a correlation between the alleged temperature increases and population changes, raising the question of whether urban heat island effects or land-use changes have been the driving force.
    Chap. 3.4, Page 169, Line 10; “In the future, as the climate change signal gets stronger compared to natural variability,…” I think that the choice of the word, “as,” instead of “if,” speaks to the bias of the authors. Their mindset is that Man is primarily responsible for the observed changes and that the changes are inevitable because of the increasing population, and more Third World countries improving their standard of living. If they were objective, and open to the possibility that anthropogenic forcing was minor, then they would not be so quick to say “as.” It is subtle choices of vocabulary, like this, that can bias readers who are unacquainted with the science and history of climatology.
    Chap. 3, Page 176, Figure 3.1; the two major graphs only have an uncertainty range for the CMIP5 model runs. They try to make the case that the observed historical anomalies fall outside the ±2SD of the model ensemble means lacking anthropogenic forcing [CO2 only?]. However, there are no uncertainties shown for the anomalies. The anomalies are computed from the annual averages of monthly averages, filtering out the extreme values that would increase the SD of the temperatures. I suspect a rigorous attention to precision and uncertainties would demonstrate an overlap of the uncertainty ranges. Graph (a) uses 35 models, and graph (b) uses 18 models. It isn’t a strict comparison of like things, and there is the potential for mischief when one set of data only has half as many models.
    Chap. 3, Page 176, Lines 14-17; They claim that after about 1980, the temperature anomalies are inconsistent with “natural forcing-only models.” I’m not all that impressed with the correlation prior to 1980, with respect to following transient excursions, especially since that is presumably the period of time the models are tuned to!
    Chap. 3, Page 177, Figure 3.2; It appears that despite labeling the orange bar “all anthropogenic forcings combined,” it is actually only (+)GHGs and (-)aerosols.
    Chap. 3, Page 177, Figure 3.3; Again, while the bottom graph is labeled “Anthropogenic Component,” it is actually only GHGs and aerosols.

  77. Submitted by Clyde Spencer:

    Chap. 4.3.4, Page 199, lines 28-30; “However, all future simulations agree that both global and regional temperatures will increase over this century in response to increasing emissions of greenhouse gases from human activities.” This is an unwarranted assumption! There are regions that have not shown increased temperatures, such as the SE USA, as mentioned above. [Also, see p. 269, lines 6&7.]
This sub-chapter talks about how ensemble averages can be weighted to, hopefully, improve the skill of the projections. However, assuming that no two model outputs are identical, logically there can only be one best model. Averaging its results with all the models that are ‘wrong’ only reduces the quality of the projections. The models should be studied to understand why some models (presumably) get better results, and those insights should be used to improve the competing models. In the end, they might all converge on one model that can be trusted.
    Chap. 4.4, Page 201, lines 1-3; “Structural uncertainty can arise because a process is not yet recognized—such as ‘tipping points’ or mechanisms of abrupt change…” Interestingly, they acknowledge the possibility of not everything being well known, with respect to catastrophic events. However, that candidness or humility is not expressed when deciding to attribute unexplained events or trends to anthropogenic forcing.
    Chap. 4.4, Page 203, lines 1-3; “As stated in the key finding, natural variability is the primary uncertainty in quantifying the amount of global temperature change over the next two decades.” Is this saying that we don’t understand the climate system as well as is typically implied, or that we just don’t know what the range of natural variability is? Either way, it calls into question the attribution to humans those events that appear to be anomalous.
Chap. 4.4, Page 204, lines 10-11; “The statement regarding the sources of uncertainty in long-term projections is based on Hawkins and Sutton (2009, 2011).” This does not appear to be a consensus opinion if only two papers by the same authors are the source.
    Chap. 6.1.1, Page 269, lines 21-23; It is claimed that recent temperature records have been set, and they are given as rankings; however, there is no mention of the magnitude of the records, the precision that is claimed, or the uncertainty associated with the absolute temperature.
    Chap. 6.1.2, Page 271, lines 1-14; This is an important revelation. In consideration of the predictions of future heat waves killing people and crops, based on average global temperature increases, this points out that the low and high temperatures are not responding equivalently. To wit, the number and intensity of heat waves is not increasing in the USA by the same amount that record cold spells are decreasing. This brings into question the veracity of the predicted future heat waves, and the fallacy of depending on global averages for predictions. They do claim that “The number of new highs has surpassed the number of new lows in 15 of the last 20 years,…”; however, they do not mention the amount by which the new highs are setting a record. Is it a statistically insignificant artifact of the processing of the temperature data?
    Chap. 6.3.2, Page 274, lines 1-33; There are some rather dire predictions being made in this sub-chapter relating to future temperature extremes. These are all based on the outputs of the extant GCMs, which haven’t been thoroughly vetted and are of questionable skill. The authors note above that “…resolution of the CMIP models does not capture orography in detail.” This is in addition to all the other caveats related to the GCMs.
Chap. 6, Page 281, Table 6.2; This clearly shows that, except for the Southwest, all of the regions in the USA have experienced declines in the warmest day of the year, of between -0.17°F and -2.22°F.
    Chap. 7.1.5, Page 306, Lines 12-13; “… [precipitation] trends identified for the U.S. regions have not been clearly attributed to anthropogenic forcing…” This is somewhat surprising, inasmuch as temperature changes seem to be attributable to humans throughout this review, and temperature is important in the hydrological cycle.
Chap. 7.2, Page 308, Lines 12-13; “Global climate models used to project precipitation changes exhibit varying degrees of fidelity in capturing the observed climatology and seasonal variations of precipitation across the United States.” This suggests to me that there are some significant structural problems with the models: they are tuned for historical temperature fidelity, and precipitation projections apparently do even worse than temperature. One possibility is that the temperature and/or precipitation input data are inadequate; another is that the ‘First Principles’ are not capturing reality well enough. An area to examine is the parameterization of energy exchanges in clouds.
    Chap. 7, Page 315, Lines 17-21; Under Major Uncertainties, they state, “A key issue is how well climate models simulate precipitation, which is one of the more challenging aspects of weather and climate simulation. In particular, comparisons of model projections for total precipitation (from both CMIP3 and CMIP5, see Sun et al. 2015) by NCA3 region show a spread of responses in some regions (for example, the Southwest) such that they are opposite from the ensemble average response.” This speaks to my criticism immediately above.
    Chap. 7, Page 316, Lines 30-33; Interestingly, they put greater confidence in model predictions of snowpack declines in the future than they do in recent observed changes. I think that this is a fundamental issue in climatology, whether models are more trustworthy than measurements.
    Chap. 8, Page 336, Lines 6-9; Notably, they state, “While by some measures, drought has decreased over much of the continental United States in association with long-term increases in precipitation, neither the precipitation increases nor inferred drought decreases have been confidently attributed to anthropogenic forcing.”
    Chap. 8, Page 355, Key Message 6; Overall, this section seems to be more objective than most of what I have read. My only criticism here is not mentioning the role of arson in Western fires, which to me, seems to have increased in recent decades.
    Chap. 9, Page 375, Lines 6-8; “Both theory and numerical modeling simulations (in general) indicate an increase in tropical cyclone (TC) intensity in a warmer world, and the models generally show an increase in the number of very intense TCs.” This claim has always struck me as counterintuitive because I learned that weather is driven by temperature and pressure differences. With the Arctic warming more rapidly than mid-latitudes, the temperature differential is decreasing.
    Chap. 11, Page 443, Lines 16-18; “average ice mass loss from Greenland was −269 Gt per year between April 2002 and April 2016, accelerating in recent years (high confidence).” This should also be stated as a percentage loss of the total ice mass, to put it into perspective.
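For perspective, using a round figure of my own rather than the report’s: the Greenland ice sheet holds roughly $2.6\times10^{6}$ Gt of ice, so $269\ \mathrm{Gt/yr} \div 2.6\times10^{6}\ \mathrm{Gt} \approx 0.01\%$ per year. Stating the loss that way would provide the missing perspective immediately.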
Chap. 11, Page 443, Lines 21-23; “Arctic-wide ice loss is expected to continue through the 21st century, very likely resulting in nearly sea ice-free late summers by the 2040s (very high confidence).” There are some who claim that the Arctic ice loss is cyclical, and that even now it might be rebounding. However, until the concern about periodicity is addressed, the “very high confidence” prediction is probably not warranted.
    Chap. 11.2.1, Page 445, Lines 20-21; “One study argues that the natural forcing has not contributed to the long-term Arctic warming in a discernable way.”
    Chap. 11.2.2, Page 447, Lines 23-25; However, the above quote appears to be contradicted by “One recent study…indicates that internal variability dominates Arctic atmospheric circulation trends, accounting for 30%–50% of the sea ice reductions since 1979, and up to 60% in September.” Thus, there really seems to be great uncertainty about what is responsible for sea ice reduction and the predictions for future extent should carry a similar uncertainty assessment. The follow-up statement (Lines 26-29), “This apparent significant contribution of natural variability to sea ice decline is consistent with the statement that likely more than half of the observed sea ice loss since 1979 has an anthropogenic contribution,” would seem to be less than unbiased.
    Chap. 11, Page 460, Lines 23-25; “It is virtually certain that Alaska glaciers have lost mass over the last 50 years, with each year since 1984 showing an annual average ice mass less than the previous year.” However, issues such as albedo reduction through accumulation of soot on the surface, reduced cloudiness, greater sublimation with increased wind, or reduced snow and increased rain are not mentioned anywhere as even possible contributors! The analysis is less than complete, with the assumption that it is primarily air temperature that is responsible.
    Chap. 11, Page 463, Lines 3-4; “Arctic sea ice and glacier mass loss impacts the United States by affecting coastal erosion in Alaska and key Alaskan fisheries through an increased vulnerability to ocean acidification.” I saw nothing in the preceding text to support this claim!
Chap. 12, Page 493, Lines 21-22; “RSL rise is likely to be greater than the global average in the U.S. Northeast and the western Gulf of Mexico.” The point should be made that this is true because of subsidence of the land, and that RSL rise would take place even in the absence of an increase in the volume of the oceans. It is covered on page 495.
    Chap. 12, Page 493, Lines 25-26; “Almost all U.S. coastlines experience more than global mean sea level rise in response to Antarctic ice loss,..” This statement doesn’t make sense. Either the grammatical construction, or the facts behind it, need to be verified by the author(s).
    Chap. 12, Page 493, Lines 28-29; “As sea levels have risen, the number of tidal floods each year that cause minor impacts (also called “nuisance floods”) have increased 5- to 10-fold since the 1960s…” This is probably a result of zoning regulations not taking into account a safety margin for people wanting beachfront property. To the extent that these “nuisance floods” are a result of RSL rise, then they could be predicted to occur in the absence of absolute sea level rise.
    Chap. 12.2, Page 495, Lines 31-32; “…, at interannual timescales, changes in ocean dynamics, density, and wind can cause substantial sea level variability in some regions.” Those who prepare tide tables are aware of several periodicities in tides, which are used routinely for preparing the tables, not all of which have attribution. I have never seen this issue addressed directly in discussions of sea level change. It certainly is not addressed here.
    Chap. 12.4, Page 499, Lines 2-3; “Mass loss from floating ice shelves does not directly affect GMSL, but does allow faster flow of ice from the ice sheet into the ocean.” No citation is provided. One sees this claim frequently; however, I believe that there is little evidence to support it. I have previously addressed this in the context of Newton’s First Law and the evidence for tension cracks where ice shelves breakaway.
    Chap. 12, Page 509, Lines 2-3; “Human-caused climate change has made a substantial [undefined] contribution to GMSL rise since 1900 (high confidence), contributing to a rate of rise that is greater than during any preceding century in at least 2,800 years (medium confidence).” This implicitly assumes that humans have made a “substantial contribution” to the global average temperature increase, which is still subject to debate.
    Chap. 12, Page 509, Lines 34-35; “…, and that 37% ± 38% out of 74% ± 22% simulated was attributable over 1900–2005.” Is this another way of saying that they have no certainty?
Chap. 13, Page 540, Lines 5-7; “…surface waters have warmed by about 1.3° ± 0.1°F (0.7° ± 0.08°C) per century…” This should read “…(0.72° ± 0.06°C)…”, correcting both the converted value and its stated uncertainty.
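The arithmetic behind the correction, in a few lines of Python (temperature differences convert with the 5/9 factor alone, with no 32° offset):

    # Convert a Fahrenheit temperature *difference* to a Celsius difference.
    def df_to_dc(delta_f):
        return delta_f * 5.0 / 9.0

    print(round(df_to_dc(1.3), 2))  # 0.72 deg C per century
    print(round(df_to_dc(0.1), 2))  # 0.06 deg C uncertainty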
Chap. 13, Page 540, Lines 19-21; “…making them more acidic (very high confidence),…” I realize that the terms “ocean acidification” and “more acidic” have become entrenched in the climatology literature over the last couple of decades. However, they are not only misnomers, but nonsensical. One cannot have more of something that doesn’t exist. That is, one does not become more angry just because one is a little less happy. The oceans are alkaline, and probably always will be. I believe the terms were selected purposely for their pejorative value with laymen. I suggest that, whenever possible, the terms should be objected to and the term “carbonation” used as a more accurate descriptive term. As to the “very high confidence,” it is my understanding that the historical data were rejected on the basis of poor sampling protocol – the same protocol used for SSTs during the same time frame and accepted for use. Having rejected the historical data, a model was constructed, and it is on the comparison of the model’s predictions for the past with modern data that the supposed “very high confidence” is based. I have concerns about the model!
Chap. 13, Page 540, Line 24; “Acidification is regionally increasing along U.S. coastal systems as a result of upwelling (for example, in the Pacific Northwest)…” Unless it can be demonstrated that increasing temperatures have increased upwelling (which has not been demonstrated), the claim is unsupportable. Upwelling is a natural process that contributes essential nutrients to the regions where the upwelling occurs. Classic, high-productivity fishing regions are often the result of upwelling and have been known since long before anthropogenic CO2 became a concern. The upwelling water is hundreds of years old and has acquired the reduced pH through oxidation of organic material drifting down through the water column; the CO2 released is then dissolved in the cold water.
Chap. 13, Page 540, Lines 28-30; “Under RCP8.5, the global average surface ocean acidity is projected to increase by 100% to 150% (high confidence).” They are making quantitative predictions without defining the quantity whose increase they are predicting. They are probably speaking of the hydrogen-ion concentration, although they do not say so.
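Assuming they do mean the hydrogen-ion concentration, the figures decode as follows. Since $[\mathrm{H^+}] = 10^{-\mathrm{pH}}$, a projected pH decline of 0.3 to 0.4 under RCP8.5 (the range commonly quoted elsewhere; the report does not give it here) corresponds to $10^{0.3}-1 \approx 100\%$ and $10^{0.4}-1 \approx 150\%$. Spelling that out would have cost one sentence.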
    Chap. 13, Page 540, Lines 34-35; “Over the last half century, major oxygen losses have occurred in inland seas, estuaries, and in the coastal and open ocean (high confidence).” They make a scary statement, but fail to associate the changes with anthropogenic CO2. It is more likely the result of agricultural runoff of phosphates and nitrates. It really is a non sequitur in the context of global warming.
Chap. 13.3.1, Page 546, Lines 21-26; “[Carbonation] causes a variety of chemical changes in seawater; an increase in the partial pressure of CO2 (pCO2,sw), dissolved inorganic carbon (DIC), and an increase in the concentration of hydrogen and bicarbonate ions and a decrease in the concentration of carbonate ions (Fig. 13.4).” A Bjerrum plot (such as the one on Wikipedia) summarizes the above statement; the situation is a little more complex than suggested by the report. It should be noted that the bicarbonate ion is fairly flat in the region of concern, and the carbonate ion is in a region of essentially linear change. There are no thresholds or ‘tipping points’ to be concerned about. The concerns about the impacts on the calcareous shells of organisms are probably exaggerated because the organisms have evolved to handle diurnal changes and upwelling events. Thus, they usually have protective coatings of chitin and/or mucous over their carbonate shells. No responsible researchers are suggesting that ocean water will ever become even neutral (pH 7), let alone actually acidic.

Chap. 13.3.1, Page 546, Lines 30-31; “Ocean surface waters have become 30% more acidic over the last 150 years…” This is cited frequently! For starters, while temperatures are commonly stated with higher precision than warranted, in this case the precision was reduced, allowing the number to be rounded up considerably and made to appear larger. Additionally, it is citing the hydrogen-ion concentration, which varies through 14 orders of magnitude. The huge range is why pH was developed. This number is not presented in context. It would take a change of nearly 1500% in the hydrogen-ion concentration to reach neutrality! Surprisingly (or perhaps not so), nothing in this presentation mentions the chemical buffering in seawater that resists changes in pH. I have written extensively on this issue and exposed the sophistry used to scare readers: http://wattsupwiththat.com/2015/09/15/are-the-oceans-becoming-more-acidic/
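The numbers are easy to check; a minimal sketch in Python, assuming a pre-industrial surface pH near 8.2 and the oft-quoted decline of about 0.1:

    def h_ion(pH):
        return 10.0 ** -pH  # hydrogen-ion concentration, mol/L

    def pct_increase(ph_from, ph_to):
        return (h_ion(ph_to) / h_ion(ph_from) - 1.0) * 100.0

    print(round(pct_increase(8.2, 8.1)))  # ~26%, rounded up to the famous "30%"
    print(round(pct_increase(8.2, 7.0)))  # ~1485%, the distance to mere neutrality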
Chap. 13.3.3, Page 547, Lines 17-20; “Deep waters that shoal with upwelling are enriched in CO2 due to uptake of anthropogenic atmospheric CO2 when last in contact with the atmosphere [typically several hundred years ago], coupled with deep water respiration processes and lack of gas exchange with the atmosphere…” What is meant by “deep water respiration processes” is that both biogenic and abiogenic processes oxidize organic material settling down through the water column. The very cold water (nearly freezing, typically) and the high pressure conspire to keep the CO2 in solution, saturating the water with CO2 and carbonic acid, though it remains alkaline. None of this is caused by Man, and the partial pressure of CO2 in the upwelled water is too high to be affected by the CO2 in the atmosphere.
    Chap. 13.3.3, Page 547, Lines 27-29; “While these processes have persisted historically, climate-induced increases in glacial melt and high-intensity precipitation events can yield larger freshwater plumes than have occurred in the past.” Except for the end of the last Glaciation! Salinity and temperature changes are probably at least as great a threat to invertebrates as the supposed pH changes.
    Chap. 13.3.3, Page 547, Lines 27-29; “Nutrient runoff can increase coastal acidification by creating conditions that enhance biological respiration. In brief, nutrient loading typically promotes phytoplankton blooms, which, when they die, are consumed by bacteria. Bacteria respire CO2 and thus bacterial blooms can result in acidification events …” Again, agricultural runoff is not responsible for temperature increases and has only a tangential relationship to the issue of anthropogenic CO2. This is another non sequitur that has been piled on to make the future seem scary.
    Chap. 13.3.4, Page 548, Lines 5-7; “Because carbonate minerals also more readily dissolve in colder waters, these waters can more regularly become undersaturated with respect to calcium carbonate whereby mineral dissolution is energetically favored.” I don’t know what the point of this statement is. If dissolution is undesirable, then increasing temperatures from global warming should ameliorate the problem.
Chap. 13, Page 558, Lines 24-25; “There is also high confidence that oceanic pH will continue to decrease.” However, there is no mention of what a reasonable lower bound on the pH would be, nor of the difficulty of reducing pH in the face of strong chemical buffering. This is not a balanced presentation!
    This report is voluminous because of the inherent structure where facts are stated in the introductions, expanded upon in the main body of the chapter, and then summarized in the Key Findings section. Thus, many statements can be found at least three times in a chapter, are cited in other related chapters, and stated in the Executive Summary. It really needs to be condensed by stating the key findings once, and referring to an appendix with details.

  78. Regarding the radiative forcing budget.
The report provides many figures that are relevant for a radiative forcing budget. However, some figures are missing and others are poorly defined. As a consequence, the budget does not seem to add up.

To elaborate this point: «Table 2.1. Global mean RF and ERF values in 2011 for the industrial era» provides many of the relevant figures. However, the current radiative imbalance (0.6 W/m2) is not stated in the report, only indicated in Figure 2.1. The temperature increase since pre-industrial times is not provided anywhere in the report, even though it is necessary to quantify the Planck feedback, the snow albedo feedback, and the cloud feedback, which all have the unit W/m2 per K. Effective Radiative Forcing in Table 2.1 may seem to include the effect of the Planck feedback from the surface, but not from the sea surface temperature – and voilà, the radiative forcing budget is pretty much obfuscated.
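
To make the point concrete, the simplest budget identity ties the missing numbers together:

$N = F - \lambda\,\Delta T \quad\Longrightarrow\quad \lambda = \frac{F - N}{\Delta T} \approx \frac{2.3 - 0.6}{0.9} \approx 1.9\ \mathrm{W/m^2\ per\ K}$

The values here are illustrative, not the report’s: F ≈ 2.3 W/m2 is the AR5 central estimate of total anthropogenic ERF for 2011, N ≈ 0.6 W/m2 is the imbalance indicated in Figure 2.1, and ΔT ≈ 0.9 K is a typical figure for warming since pre-industrial times. A reader should be able to perform this kind of check from the report’s own tables; at present, one cannot.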

Given the tremendous impact of the climate policies, one radiative forcing figure that seems essential is the radiative forcing from CO2 alone. However, this figure is lumped together with the other well-mixed greenhouse gases (CH4, N2O, and halocarbons) in Table 2.1.

Given the high confidence of the authors, they should be able to include in the report a clear tabular overview of traceable and well-defined line items in the radiative forcing budget, together with traceable uncertainty estimates.

79. The National Academy of Sciences is described above as being “independent”. Is this an honest assessment, or do they in fact receive government money?

80. I worked for the NAS for several years (1975-77). Indeed, their funding comes very heavily from Congress. For example, my project was a $5M study of EPA, a study created by Jamie Whitten, Democrat from Mississippi and an opponent of EPA. He took the money from the EPA budget. So when we visited EPA, it was sometimes a bit dicey talking to the scientists, because they would have greatly preferred that the money had gone to them. On the other hand, Whitten had nothing to do with the selection of scientists for the review panels, and that was the case for all such studies that I know about. NAS at that time jealously guarded its prerogative to select from the best scientists on a given subject. Regarding climate change, however, it may be that their selection process is skewed toward those with the highest visibility; one would have to check the makeup of the panels.

  81. Regarding model tuning and predictive skills.

The power of parameterization was famously expressed by John von Neumann: «With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.» By this he meant that one should not be impressed when a complex model fits a data set well: with enough parameters, any data set can be fitted.
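
Von Neumann’s quip is easy to reproduce in a few lines; a generic curve-fitting sketch in Python, with no connection to any particular climate model: five free parameters fit five arbitrary data points exactly.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 5)
    y = rng.normal(size=5)            # pure noise: there is nothing to "explain"

    coeffs = np.polyfit(x, y, deg=4)  # a quartic has five free parameters
    residual = np.max(np.abs(y - np.polyval(coeffs, x)))
    print(residual)                   # ~1e-15: a "perfect" fit to noise

A perfect fit, and zero predictive skill: evaluate the same polynomial at a sixth point and it will generally be wildly wrong.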

    Within climate science, parameterization and tuning are discussed in a number of papers. The following quotes are particularly relevant for the point being made in this comment:

«Kiehl (2007) … discovered the existence of an anti-correlation between the total radiative forcing and climate sensitivity in a model ensemble; high-sensitivity models were found to have a smaller total forcing and low-sensitivity models were found to have a larger forcing, yielding less cross-ensemble variation of historical warming than otherwise to be expected. … it could suggest that some models may have been inadvertently or intentionally tuned to the twentieth-century warming.»

    «There is a broad spectrum of methods to improve the model match to twentieth-century warming, ranging from simply choosing to no longer modify the value of a sensitive parameter when a match is already good for a given model …, or selecting physical parameterizations that improve the match, to explicitly tuning either forcing or feedback, both of which are uncertain and depend critically on tunable parameters …»

«In theory, tuning should be taken into account in any evaluation, intercomparison, or interpretation of the model results. … the importance of tuning is probably not advertised as it should be. It is often ignored when discussing the performances of climate models in multimodel analyses. In fact, the tuning strategy was not even part of the required documentation of the CMIP phase 5 (CMIP5) simulations.»
– “The Art and Science of Climate Model Tuning”; Hourdin et al., 2017.

    As clearly indicated, parameterization and model tuning need to be taken into account in any evaluation, intercomparison, or interpretation of model results.

However, while the first sentence of 4.3.1 Global Climate Models states that «Global climate models (GCMs) are mathematical frameworks that were originally built on fundamental equations of physics», the information that the models are extensively parameterized and tuned to match historical data is omitted from that chapter.

As there are significant differences between the models, many of the models that seem to match historical data must do so for the wrong reasons. The report should elaborate on how the risks of tuning, and the lack of testing of predictive skill, have been taken into account in establishing the authors’ confidence in the model results.

  82. Judith

A friend of mine recently forwarded me the most recent draft of the CSSR, thinking it would convince me of the urgency of the situation. I have now read the Executive Summary.

Can I just ask you to push someone in authority to require one simple change? It would be much more honest and would, I think, completely change the public’s perception when they read this Executive Summary. Please refer to Page 10 of the Front Matter.

    Please require the Confidence Levels to be amended to read as follows:

    “Very High Confidence” replaced with “Strong Evidence”
    “High Confidence” replaced with “Moderate Evidence”
    “Medium Confidence” replaced with “Suggestive Evidence”
    “Low Confidence” replaced with “Inconclusive Evidence”.

    This is an example of what bothers me as a social conservative. The “warmists” always try to oversell their position.

I cannot believe the number of times when, reading this Executive Summary, I came across “High Confidence” and had to remind myself, turning back to page 10, that what they really mean by the term “High Confidence” is “Moderate Evidence”.

It would completely change the tone of this Executive Summary to make this change. They cannot disagree with it, because my suggestion is more representative of what they mean: my wording is taken from their own definitions. Obviously, some statements in this Summary marked “Low Confidence” are laughable, because if there is low confidence, why is the statement in here at all?

• Once you have collected yourself from the shock, you may take pleasure in having a closer look at the definition of “Medium confidence”: «Suggestive evidence (a few sources, limited consistency, models incomplete, methods emerging, etc.), competing schools of thought.» (Page 10). I think that calling this «suggestive evidence» is a gross exaggeration. “Suggestive evidence” is an oxymoron: by the definition of the term there is no evidence, at least not by the common meaning of the word.

      And if you ever wonder where this perversion of science was introduced, the following document may be a clue: Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties

      As far as I can tell, this kind of expression of level of confidence is an invention by IPCC that has no sound basis in science or philosophy.

83. I agree with your comment re “Medium Confidence”. I was going to suggest “Some Evidence”; I had, obviously, read the definitions. I just wanted to propose something they could not argue with, because I was throwing their very words back at them. Is it not also interesting that the term “medium consensus” is used to describe “High Confidence”?