Week in review – science edition

by Judith Curry

A few things that caught my eye this past week.

QBO primer [link]

Periodicity disruption of a model quasibiennial oscillation of equatorial winds

Influence of the QBO on MJO prediction skill in the subseasonal-to-seasonal prediction models [link]

Combined effect of the quasi-biennial oscillation and ENSO on the MJO [link]

The Beijing Climate Center Climate System Model (BCC-CSM): the main progress from CMIP5 to CMIP6: quasi-biennial oscillation reproduced for the first time in this model.

Quasi-biennial Oscillation reproduced in the INMCM5 climate model [link]

The folks at just published a substantially improved (?) estimate of the uncertainties in their historical temperature record:

“The mid-latitudes are one big, chaotic mess”. The DYNAMITE project is seeking order in this atmospheric chaos – to see if mid-latitude storm-tracks and weather are influenced by phenomena in other regions:

A study of links between the Arctic and the midlatitude jet stream using Granger and Pearl causality

Scientists discover China has been secretly emitting banned ozone-depleting gas [link]

John Christy: Climate Models Have Been Predicting Too Much Warming

Non-uniform contribution of internal variability to recent Arctic sea ice loss: Journal of Climate:

Contrasting effects on deep convective clouds by different types of aerosols [link]

Limited capacity of tree growth to mitigate the global greenhouse effect under predicted warming [link]

Ice sheet contributions to future sea level rise from structured expert judgment [link]

Comparison of Effective Radiative Forcing Calculations Using Multiple Methods, Drivers, and Models

Evolution of mean, variance and extremes in 21st century temperatures [link]

AMOC sensitivity to surface buoyancy fluxes, explaining the differing response of AMOC to heat loss in winter and summer

Thirty years of regional climate modeling: Where are we and where are we going next? (open access)

Observation and attribution of temperature trends near the stratopause from HALOE

Reduction in northern mid-latitude 2-m temperature variability due to Arctic sea ice loss

“Impact of North Atlantic Freshwater Forcing on Pacific Meridional Overturning Circulation under Glacial and Interglacial Conditions”

Accumulation of soil carbon under elevated CO2 unaffected by warming and drought

La Nina effects on droughts can be traced back to the U.S. Civil War [link]

An update on the thermosteric sea level rise commitment to global warming (open access)

Atmospheric moisture transport and the decline in Arctic Sea ice

Rainfall in Adelaide, Sydney and Melbourne back to 1839!

The vital, life-sustaining Asian Monsoon is weakening due to air pollution from coal-fired power plants, study finds. It’s essentially one form of air pollution overpowering another. [link]

climate impacts of the volcanic eruption.

Glacial cycles simulation of the Antarctic Ice Sheet with PISM – Part 1: Boundary conditions and climatic forcing

Scientists made ‘major overestimations’ of methane emissions from oil and gas production in the United States by relying on faulty measurements, according to new research sponsored by NOAA. [link]

Assessing dynamic versus thermodynamic origin of climate model biases. Thermodynamics more relevant for temperature & precipitation biases in CESM model, especially for .

The sun is stranger than astrophysicists imagined [link]

‘Extraordinary thinning’ of ice sheets revealed deep inside Antarctica [link]

Controls on the transport of meltwater from the southern Greenland ice sheet in the Labrador Sea

Different effects of two ENSO types on Arctic surface temperature in boreal winter

Two Decades of Change at Pine Island Glacier

Long‐term measurements show little evidence for large increases in total U.S. methane emissions over the past decade

Historical extreme rainfall events in southeastern Australia (open access)

Shifting spatial and temporal patterns in the onset of seasonally snow-dominated conditions in the Northern Hemisphere, 1972 – 2017

Update of Canadian Historical Snow Survey Data and Analysis of Snow Water Equivalent Trends, 1967–2016 [link]

Is the noble gas‐based rate of ocean warming during the Younger Dryas overestimated?

Social science, technology and policy

Robert Stavins: Learning from thirty years of cap and trade [link]

Pushing small green initiatives “decreases support for substantive policies by providing false hope that problems can be tackled without imposing considerable costs” [link]

A multi-model assessment of food security implications of climate change mitigation. [link]

Federal Energy Regulatory Commission is quietly, profoundly shaping U.S. climate policy [link]

Germany’s dangerously flawed energy policies [link]

Xcel Energy Acknowledges that Natural Gas is Vital for Future Energy Economy – “The grid can’t be 100% renewable.”

Food waste is now powering British trucks [link]

Turning agricultural waste into food, packaging and more [link]

A new paper in looks at ways to cut the footprint of . [link]

Why carbon credits for forest preservation may be worse than nothing [link]

On the financial viability of negative emissions [link]

Concern for climate change is rising, but that doesn’t necessarily equate to action. Just ask someone worried about eating right and exercising enough — but who doesn’t actually make it to the gym or opt for salad over fries.

Electric vehicles are an important tool in decarbonizing transportation. However, despite the lack of tailpipes they are still far from zero-emissions. I dig into the details of the climate benefits and impacts of electric vehicles at Carbon Brief:

AMOC decline is accompanied by enhanced along the US northeast coast. Piecuch et al. on the role of atmospheric forcing in driving that relationship.

Green New Deal for farming: Address climate crisis and revitalize food system [link]

About science and scientists

Supreme Court being asked to weigh in on the Mann versus CEI libel/freedom of speech case [link]

Free university tuition – a cautionary note from Germany [link]

RIP for another giant in our field. Norman Phillips, former meteorology department head, dies at 95 | MIT News [link]

RIP Murray Gell-Mann [link] to his TED talk

It’s cut-throat: half of UK academics are stressed and 40% are thinking of leaving academia [link]

The peculiar blindness of experts: “The best forecasters view their ideas as hypotheses in need of testing. If they make a bet & lose, they embrace the logic of a loss just as they would the reinforcement of a win. This is called, in a word, learning.” [link]

“People are rewarded for being productive rather than right, for building ever upward instead of checking the foundations. These incentives allow weak studies to be published. And once enough have amassed, they create a collective perception of strength…” [link]

“Classical education involves the acquisition of culturally & scientifically useful knowledge, & fostering an ability to think critically to further understanding. Modern education, on the other hand, is accreditation by an officially sanctioned seminary.” [link]

Scientists like to think of science as self-correcting. To an alarming degree, it’s not. [link]

 

348 responses to “Week in review – science edition”

  1. structured expert judgment
    is a retreat from the scientific method,
    a retreat from the Renaissance
    a retreat from the Enlightenment
    a renunciation of Galileo and Kepler and a return to Ptolemy, epicycles and truth as doctrine established by state religion.
    Good job!

  2. “Impact of North Atlantic Freshwater Forcing on Pacific Meridional Overturning Circulation under Glacial and Interglacial Conditions” https://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-19-0065.1

    I didn’t know there was a Pacific Meridional Overturning Circulation, counterpart to the Atlantic MOC. Thanks! Except it seems the two are exclusive: you get the AMOC during an interglacial and the PMOC in a glacial period. Partly due to sea level and the opening or closing of the Bering Strait. Since MOCs are driven by the salinity-downwelling positive feedback, they are chaotically unstable and subject to fluctuation. They make the ocean basin in question an excitable medium. That might explain why climate globally is less stable during glacial than interglacial periods. The Pacific Ocean is bigger than the Atlantic and thus the PMOC more disruptive of climate than the AMOC – possibly explaining the DO events (micro-interglacials). Interesting stuff!

  3. Ireneusz Palmowski

    A solar panel malfunction kept the Fermi Telescope mostly pointed away from the sun for the last year, but workarounds have been found — just in time for solar minimum. The sun’s magnetic field lines are currently curving tidily from pole to pole; if this solar minimum is like the last, the gamma-ray signal is now at its most robust. “That’s what makes this so exciting,” Linden said. “Right now we’re just hitting the peak of solar minimum, so hopefully we’ll see higher-energy [gamma-ray] emission with a number of telescopes.”
    https://www.quantamagazine.org/gamma-ray-data-reveal-surprises-about-the-sun-20190501/?platform=hootsuite

  4. The CarbonBrief article on vehicle emissions shows the complexity of comparing gasoline powered car carbon emissions to hybrids, plug-in hybrids and EVs.

    The total number of articles here makes me wonder who in the main-stream-media reads and digests them so as to present fair, balanced and authoritative news stories.

  5. Ireneusz Palmowski

    Important clues to the origin of GCRs are provided by their energy spectra and source composition, as well as by the electromagnetic signatures they emit in various wavelengths, both in their putative acceleration sites (SNRs) and throughout the Galactic interstellar medium where they propagate.

    The characteristic signature of accelerated protons (gamma-rays from the decay of pions, produced through energetic proton collisions) was for a long time difficult to distinguish from emission of energetic electrons (bremsstrahlung or inverse Compton scattering). Recently, the pion-decay feature was detected with the Fermi Large Area Telescope in the gamma-ray spectra of two SNRs, IC 443 and W44: both of these are remnants of core-collapse supernovae, and IC 443 is a member of the OB association GEM OB1. This detection provides direct evidence that cosmic-ray protons are indeed accelerated in SNRs.
    http://www.issibern.ch/teams/galactcosray/

  6. The sun is stranger than astrophysicists imagined [link]

    “… the highest frequency waves of light, radiate from our nearest star seven times more abundantly than expected. Stranger still, despite this extreme excess of gamma rays overall, a narrow bandwidth of frequencies is curiously absent.”

    Not caused by us strange humans though, right?

    We may learn more about climate change someday but not until academia kicks the habit of obsessively blaming all ‘global heating’ on its politically-inspired superstition that a minuscule increase in atmospheric CO2 is the cause.

    • Ireneusz Palmowski

      Galactic radiation, in addition to neutrons, produces many types of radiation, including electromagnetic radiation with a wide wavelength range.

      • “Galactic Cosmic Rays and Low Clouds: Possible Reasons for Correlation Reversal
        By Svetlana Veretenenko, Maxim Ogurtsov, Markus Lindholm and Risto Jalkanen — Submitted: December 12th 2017, Reviewed: February 14th 2018, Published: August 22nd 2018,” seems apropos and au courant, e.g.,

        One of the possible mechanisms of this influence suggests an impact of galactic cosmic rays (GCRs) on the cloud cover allowing amplifying noticeably a weak signal of solar variability in the Earth’s atmosphere. Indeed, cloudiness changes can strongly modulate fluxes of both incoming short-wave solar radiation and outgoing long-wave radiation of the Earth and the atmosphere and, thus, influence significantly the radiative-thermal balance of the atmosphere.

  7. Out of respect to Gell-Mann:

    • Russell Seitz

      Out of great respect for Gell-Mann’s physics, if not his taste in archaeology, one has a duty to point Judy’s saner readers to the comprehensive fisking of Christy & Spencer’s latest polemic at ATTP:

      https://andthentheresphysics.wordpress.com/2019/05/25/models-are-failed-hypotheses/

      • Finding sanity at ATTP is about as probable as fairy dust and unicorns.

        Models have a dynamic core that does a great job of falsifying the physics. Even small changes in input parameters will produce thousands of feasible solution trajectories.

        And they do of course miss the geophysics of internal variability at any scale.

      • ATTP does nothing to cast real doubt on any of the conclusions. The post is mostly just quibbling about terminology about climate models. Christy’s paper is not a polemic either. Perhaps those who can’t evaluate actual science and who use polemic as a substitute for thought easily project that attitude on to others.

        Real Climate shows much the same thing for the troposphere in the tropics, even though their baseline period is much longer, so the divergence isn’t quite as large. Russell, do you recall what consilience is? It’s a common tool used in science but not in polemics.

      • I think this 2011 Isaac Held post about the moist adiabat and tropical warming is well worth reading. In particular, it says

        A failure of the upper troposphere to warm as much as anticipated by this simple argument would signal a destabilization of the tropics — rising parcels would experience a larger density difference with their environment, creating more intense vertical accelerations — affecting all tropical phenomena involving deep convection. I like to refer to warming following the moist adiabat as the most “conservative” possible — having the least impact on tropical meteorology.
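
        For reference, the moist adiabat Held invokes is the saturated-parcel temperature profile; a standard approximate form of its lapse rate (textbook cloud physics, not a formula taken from Held’s post) is

            \Gamma_m \;=\; g\,\frac{1 + \dfrac{L_v\, r_s}{R_d\, T}}{c_{pd} + \dfrac{L_v^{2}\, r_s\, \varepsilon}{R_d\, T^{2}}}

        where L_v is the latent heat of vaporization, r_s the saturation mixing ratio, ε = R_d/R_v ≈ 0.622, and g, R_d, c_pd the usual constants. Because r_s increases steeply with temperature, a given surface warming maps onto larger warming aloft along this profile – the upper-tropospheric amplification at issue in the model-observation comparisons discussed below.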

      • The failure of models to reproduce upper tropospheric temperature has no implication other than the failure of models.

        But there is so much wrong with the idea that opportunistic ensembles in particular are reliable that there are much bigger problems.

        https://www.nature.com/articles/s41612-018-0044-6

        https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161

        https://journals.ametsoc.org/doi/abs/10.1175/2009BAMS2752.1

        https://www.pnas.org/content/104/21/8709

      • I think, ATTP, that what you quote is not relevant to Christy’s main observations, which are becoming more and more accepted despite the desire of some to cast doubt on them.

        Held’s quote is one of those vague verbal formulations that some mistake for science. A more turbulent atmosphere could mean many things and the exact amount is important to understand. More importantly what does it mean for water vapor?

      • Russell Seitz, thank you for the link to the ATTP essay.

        Paraphrasing, how much success in forecasting is required of a model before it can be judged useful? Should the first worked-out projections be taken as reliable despite errors? (That seems to be the default attitude of promoters of the idea of AGW.) The models to date fairly consistently predict too much warming, while enjoying scattered “skill”.

      • Re: “ATTP does nothing to cast real doubt on any of the conclusions. The post is mostly just quibbling about terminology about climate models. Christy’s paper is not a polemic either. Perhaps those who can’t evaluate actual science and who use polemic as a substitute for thought easily project that attitude on to others.”

        As usual, you just offered an evidence-free diatribe that doesn’t address the central, long-acknowledged points, such as Christy contradicting himself, his failure to adequately address well-supported alternative explanations, the evidence against his claims that the data shows the models were over-estimating climate sensitivity, etc.

        https://andthentheresphysics.wordpress.com/2019/05/25/models-are-failed-hypotheses/#comment-157368

        Re: “Real climate shows much the same thing for the Troposphere in the tropics even though their baseline period is much longer so the divergence isn’t quite as large. Russell, do you recall what conciliance is? It’s a common tool used in science but not in polemics.”

        No, RealClimate did not show much the same thing. For example, Christy is notoriously bad at representing the model envelope (the range of model uncertainty that arises from differing initial conditions, internal variability, etc.), while RealClimate represents it quite well. Christy’s failure to represent this correctly allows him to exaggerate differences between the model-based projections vs. observational analyses.

        Re: “I think ATTP that what you quote is not relevant to Christy’s main observations which are becoming more and more accepted, despite the desire of some to cast doubt on them.”

        As usual, you don’t provide a shred of evidence for your claim. Why am I not surprised? Let me know when you have a shred of evidence that Christy’s claims are “becoming more and more accepted”. In the meantime, I’ll cite what some actual published research shows on this:

        “Causes of difference in model and satellite tropospheric warming rates
        […]
        It has been posited that the differences between modelled and observed tropospheric warming rates are solely attributable to a fundamental error in model sensitivity to anthropogenic greenhouse gas increases [claimed by John Christy in political testimony]. Several aspects of our results cast doubt on the ‘sensitivity error’ explanation.”

      • Just to make sure the point is not lost that Christy is probably right, here is Real Climate’s graphic. Long and obfuscatory comments distract but don’t contradict the data.

        They use a longer baselining period but the divergence is pretty clear.

      • Re: “Just to make sure the point is not lost that Christy is probably right, here is Real Climate’s graphic. Long and obfuscatory comments distract but don’t contradict the data.”

        It’s not my fault that you struggle reading long segments of texts, though that would explain why you’re willfully unfamiliar with the peer-reviewed literature. And it’s also not my fault that you never bother to address the points made.

        Anyway, you conveniently didn’t link to RealClimate on this, though you claim to be representing what they showed. We both know why:


        [ http://www.realclimate.org/index.php/archives/2016/05/comparing-models-to-the-satellite-datasets/ ]

        Oh look, it’s because RealClimate confirms what I said about Christy’s graph being misleading, Christy failing to accurately represent the model envelope, and so on.

        Anyway, let me know when you can finally address what the peer-reviewed literature shows, and when you finally decide to read it (if that ever happens):


        “Causes of difference in model and satellite tropospheric warming rates
        […]
        It has been posited that the differences between modelled and observed tropospheric warming rates are solely attributable to a fundamental error in model sensitivity to anthropogenic greenhouse gas increases [claimed by John Christy in political testimony]. Several aspects of our results cast doubt on the ‘sensitivity error’ explanation.”

      • Talk about misleading:

        Look at the upper panel: the start and end points. Combine the differences; that’s at least 0.3 C. They hid that in the rest of the plot, which looks much the same. The trend lines, which differ by about 0.3 C and are the truer comparison, are the opposite of highlighted: the black and red lines are bolded, the trend lines are not.

      • Well Atomski, those with even minimal skills can find the Real Climate post easily. It’s a permanent section on comparisons of models and data. There are several other graphics there as well showing rates of change. Your nonresponse is humorous.

        Modulo baselining, Christy’s point is confirmed by Real Climate, indicating that Christy is right.

      • Here is Real Climate’s rate calculation. Looks like about a factor of 2 disagreement. Christy got a factor of 3 roughly. But in either case, there is a pretty stark divergence. Atomski no doubt will obfuscate this too with consensus enforcement designed to discredit Christy. None of it will detract from the fact that Christy is pointing to a real problem with models.
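
        A toy illustration of the baselining point: the sketch below uses synthetic series – not Christy’s, RealClimate’s, or any real data – to show that the choice of baseline period changes the visual offset between two series, but not the difference in their trends.

          import numpy as np

          years = np.arange(1979, 2020)
          obs = 0.017 * (years - 1979)    # hypothetical "observed" series (K)
          model = 0.032 * (years - 1979)  # hypothetical model-mean series (K)

          def anomalies(series, start, end):
              # Subtract the mean over the baseline period [start, end]
              mask = (years >= start) & (years <= end)
              return series - series[mask].mean()

          for start, end in [(1979, 1983), (1979, 1998)]:
              gap = anomalies(model, start, end)[-1] - anomalies(obs, start, end)[-1]
              print(f"baseline {start}-{end}: final model-obs gap = {gap:.2f} K")

          # The trend difference (0.015 K/yr here) is identical either way; only
          # the end-point gap, and hence the visual impression, depends on the
          # baseline period.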

    • Judy’s saner readers?

      • “Following Lorenz’s seminal work on chaos theory in the 1960s, probabilistic approaches to prediction have come to dominate the science of weather and climate forecasting. This paper gives a perspective on Lorenz’s work and how it has influenced the ways in which we seek to represent uncertainty in forecasts on all lead times from hours to decades.” https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161

        What dominates here is an assumption of impossible model determinism. It always puts me in mind of Einstein’s precept about repeating the same actions. Joshua is an exemplar – although at some still more bootless level.
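
        To see Lorenz’s point concretely, here is a minimal sketch (the generic Lorenz-63 system, not any climate model): ten integrations whose initial conditions differ at the 1e-8 level end up on entirely different parts of the attractor – an ensemble of equally feasible trajectories.

          import numpy as np

          def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
              # Right-hand side of the Lorenz (1963) system.
              x, y, z = s
              return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

          def integrate(s, dt=0.01, steps=3000):
              # Fixed-step 4th-order Runge-Kutta, returning the final state.
              for _ in range(steps):
                  k1 = lorenz63(s)
                  k2 = lorenz63(s + 0.5 * dt * k1)
                  k3 = lorenz63(s + 0.5 * dt * k2)
                  k4 = lorenz63(s + dt * k3)
                  s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
              return s

          rng = np.random.default_rng(0)
          base = np.array([1.0, 1.0, 1.0])
          # Ten runs whose initial conditions differ by ~1e-8
          finals = [integrate(base + 1e-8 * rng.standard_normal(3))[0]
                    for _ in range(10)]
          print("final x across the ensemble:", np.round(finals, 2))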

  8. To supplement the link on Adelaide rainfall, this paper found no significant difference between the dry and wet periods at a Siberian location over 236 years.

    https://journals.sagepub.com/doi/full/10.1177/0959683617729450

  9. This paper studied a river basin in Mexico over 450 years and found a relationship with the PDO and AMO.
    https://link.springer.com/article/10.1007/s11069-018-3379-8

  10. I have launched my Climate Change Debate Education project
    http://ccdedu.blogspot.com
    We begin by posting just under 200 videos by Happer, Michaels, Spencer and Lindzen, arranged by length, because length is crucial to educational use. Curry videos are in process.

    At the long end there are a number of good debate videos. The short (1-10 minute) videos are good for what I call gate breaking, where a skeptical student sends them to their classmates in the face of a gate keeping alarmist teacher. See the About page for more on this.

    Ultimately I hope to have a searchable 1000 video database online, plus a lot of one page text gate breakers that I will soon publish some examples of. I am also looking for volunteers to write gate breakers. Other good stuff will be added in time.

    CCDE has been two years in the works but we finally have liftoff. Spread the word.

    Let’s get the children back.

  11. This paper found there were 29 extreme droughts and 28 extreme floods in North China from 1736 to 2000 and it identifies the decades of both during that period.
    https://www.clim-past-discuss.net/cp-2018-45/cp-2018-45.pdf

  12. Climate Change Indicators: Heavy Precipitation
    This indicator tracks the frequency of heavy precipitation events in the United States.

    [Figure 2: Unusually high annual precipitation in the contiguous 48 states, 1895–2015 — line graph.]

  13. This paper covers flooding occurrences of major European rivers since 1500 and identifies 4 periods of increased occurrences from 1540 to 1840.

    https://link.springer.com/article/10.1007/s10584-010-9816-7

  14. Federal Energy Regulatory Commission is quietly, profoundly shaping U.S. climate policy

    An expansion of America’s pipeline capacity isn’t the only infrastructure policy the environmentalist activists are opposed to. Many are also opposed to upgrading the power transmission grid in the ways that are necessary to access power from wind farms and solar farms which may be located hundreds of miles away.

    Why would environmentalists be opposed to an expanded and upgraded power transmission grid?

    Because if you can transmit power efficiently from a wind farm or a solar farm located hundreds of miles away, you can also transmit power efficiently from a gas-fired plant or a nuclear plant located hundreds of miles away.

    The sum total effect of current trends in state and federal energy policy making is that while a great expansion of wind and solar is being encouraged at the state level, the power transmission upgrades needed to efficiently transport and distribute that wind and solar power are being actively opposed by some number of environmentalists who otherwise might be expected to support the upgrades.

    The consequence of this collision between two somewhat schizophrenic energy infrastructure agendas will be that our supply of electricity will become more expensive and less reliable over time, forcing all of America’s commercial, industrial, and residential energy consumers to adopt ever more strict energy conservation measures if they are to stay in business or to remain comfortably secure in their homes.

    Xcel Energy Acknowledges that Natural Gas is Vital for Future Energy Economy – “The grid can’t be 100% renewable.”

    It is impossible to reach an 80% reduction in America’s GHG emissions by 2050 without a strong commitment to nuclear power. But nuclear remains an anathema to most environmental activists. Here and there, nuclear power may gain support from a few of the more prominent activists, but the majority will oppose nuclear to their dying breath.

    What is bound to happen over the next two decades as wind and solar continues to grow and as the option of expanding our gas-fired capacity for its load-following benefits continues to be rejected by state level energy policy makers, is that the power grid will become less and less reliable.

    The quick solution to the problem, once the reliability issues become painfully acute in the early to mid 2030’s, will be to quickly erect gas-fired peaker plants anywhere they can be serviced by LNG rail transport — but at the expense of burning up a city center or a small town every now and then whenever the inevitable rail accidents occur.

  15. Ireneusz Palmowski

    Oklahoma is again threatened by heavy thunderstorms and tornadoes.

  16. From the Granger and Pearl causality paper, another by Ghil:

    Hannart, A., Pearl, J., Otto, F. E. L., Naveau, P., & Ghil, M. (2016). Causal counterfactual theory for the attribution of weather and climate-related events. Bulletin of the American Meteorological Society, 97(1), 99–110.

    Many thanks to Dr Curry for these frequent informative updates.

    And thanks as well to the folks at Sci-Hub.

    • Abstract from Hannart et al: The emergence of clear semantics for causal claims and of a sound logic for causal reasoning is relatively recent, with the consolidation over the past decades of a coherent theoretical corpus of definitions, concepts and methods of general applicability (e.g. Pearl [2000]) which is anchored into counterfactuals. The latter corpus has proved to be of high practical interest in numerous applied fields (e.g. epidemiology, economics, social science). In spite of their rather consensual nature and proven efficacy, these definitions and methods are to a large extent not used in Detection and Attribution (D&A). This article gives a brief overview on the main concepts underpinning the causal theory and proposes some methodological extensions for the causal attribution of weather and climate-related events that are rooted into the latter. Implications for the formulation of causal claims and their uncertainty are finally discussed.

      Let me also recommend: Pearl J. (2000) Causality: Models, Reasoning and Inference, Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. Second edition is now out.

      and

      Spirtes P., C. Glymour and R. Scheines. (2000) Causation, Prediction, and Search, 2nd ed., MIT Press, Cambridge, MA.
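
      To make the Granger half of this toolkit concrete, here is a minimal sketch using statsmodels on synthetic data. The variable names (arctic, jet) are purely illustrative, and the Pearl-style counterfactual machinery recommended above is a separate framework not shown here.

          import numpy as np
          from statsmodels.tsa.stattools import grangercausalitytests

          rng = np.random.default_rng(1)
          n = 500
          arctic = rng.standard_normal(n)   # hypothetical driver series
          jet = np.zeros(n)
          for t in range(2, n):
              # "jet" responds to "arctic" with a two-step lag, plus noise
              jet[t] = 0.5 * arctic[t - 2] + rng.standard_normal()

          # Test whether the 2nd column (arctic) Granger-causes the 1st (jet)
          data = np.column_stack([jet, arctic])
          results = grangercausalitytests(data, maxlag=3, verbose=False)
          for lag, (tests, _) in results.items():
              print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.3g}")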

  17. Ireneusz Palmowski

    Relation between geomagnetic field and climate variability. Part 2: Probable mechanism
    N. Kilifarska, V. Bakhmutov, G. Melnik

    Abstract

    In this study we show that correspondence of the main structures of geomagnetic field, near surface air temperature and surface pressure in the mid-latitudes, reported previously in the 1st part of the paper, has its physical foundation. The similar pattern, found in latitude-longitude distribution of the lower stratospheric ozone and specific humidity, allows us to close the chain of causal links, and to offer a mechanism through which geomagnetic field could influence on the Earth’s climate. It starts with a geomagnetic modulation of galactic cosmic rays (GCR) and ozone production in the lower stratosphere through ion-molecular reactions initiated by GCR. The alteration of the near tropopause temperature (by O3 variations at these levels) changes the amount of water vapour in the driest part of the upper troposphere/lower stratosphere (UTLS), influencing in such a way on the radiation balance of the planet. This forcing on the climatic parameters is non-uniformly distributed over the globe, due to the heterogeneous geomagnetic field controlling energetic particles entering the Earth’s atmosphere.

  18. Ireneusz Palmowski

    The state of the upper atmosphere is largely governed by solar electromagnetic radiation, solar corpuscular fluxes, galactic cosmic rays, and electron precipitation from the radiation belts. The changes in these factors alter the atmospheric structure, composition, and dynamical characteristics at different heights.
    https://www.researchgate.net/publication/281441974_Geomagnetic_Field_and_Climate_Causal_Relations_with_Some_Atmospheric_Variables

  19. “This may imply that the knowledge of millennial and centennial variability is needed to fully understand and replicate ice age history. As we have seen, increased millennial variability decreases the length of the ice age cycles. However, the reverse is also true. This state of affairs generates a new hypothesis for the middle Pleistocene transition: a decrease in millennial variability may have caused the lengthening of ice ages. The millennial variability can legitimately be modeled as a deterministic mode, which would allow us to come up with a specific explanation of how this variability may influence ice age dynamics. Hence our completely deterministic approach makes a physically justified alternative to a popular notion that the background spectrum is merely linearly integrated noise.” Propagation of high-frequency forcing
    Mikhail Y. Verbitsky et al 2019 – https://www.earth-syst-dynam.net/10/257/2019/

  20. “Earth System Dynamics (ESD) is an international scientific journal dedicated to the publication and public discussion of studies that take an interdisciplinary perspective of the functioning of the whole Earth system and global change. The overall behaviour of the Earth system is strongly shaped by the interactions among its various component systems, such as the atmosphere, cryosphere, hydrosphere, oceans, pedosphere, lithosphere, and the inner Earth, but also by life and human activity. ESD solicits contributions that investigate these various interactions and the underlying mechanisms, ways how these can be conceptualized, modelled, and quantified, predictions of the overall system behaviour to global changes, and the impacts for its habitability, humanity, and future Earth system management by human decision making.”
    https://www.earth-system-dynamics.net/index.html

    This defines the four kinds of drought usefully for those without much hydrology.

    “Whilst it may be preferable to use soil moisture as a drought indicator, observations and simulations of precipitation are more reliable. Precipitation has a large influence on agricultural droughts and is therefore appropriate to use in attribution studies in eastern Africa, supplementing the analysis of soil moisture. The outcome of previous studies that have focussed on precipitation deficits only (e.g., Philip et al., 2018a; Uhe et al., 2018) are thus still relevant and compare well with our results here, that no consistent significant trends on droughts are found.” https://www.earth-syst-dynam-discuss.net/esd-2019-20/esd-2019-20.pdf

  21. Is climate change hysteria preventing us from dealing with plastic pollution properly?
    https://www.bbc.com/news/science-environment-43120041

  22. “People are rewarded for being productive rather than right, for building ever upward instead of checking the foundations. These incentives allow weak studies to be published. And once enough have amassed, they create a collective perception of strength…” [link]

    This is what is happening with the belief that AGW will be net harmful. This belief is probably false. But very little work has been, or is being, done to test it. Meanwhile, the policies to mitigate AGW are wasting resources that could be doing good.

    • Unfortunately, evolution and AGW are treated as equals later in the same article:

      • Keller worries that these problems will be used as ammunition to distrust science as a whole. “People ask, Well, if scientists are publishing crap, why should we believe global warming and evolution?” he says. “But there’s a real difference: Some people were skeptical about candidate genes even back in the 1990s. There was never unanimity or consensus in the way there is for human-made global warming and the theory of evolution.”

  23. In the article on Xcel Energy, what does this mean?

    “This has forced Xcel to admit that natural gas is vital for hitting its goal of eliminating all carbon emissions.”

    It could make sense if it said “reducing emissions” or referred to net carbon emissions, but they don’t seem to supply that context.

  24. “The Becker gas plant — if it’s to meet Xcel’s 2050 no-carbon goal — must eventually adopt some sort of carbon capture technology to store its green house gas emissions. Carbon capture is still a relatively nascent technology. Fowke has said Xcel is open to it — if it becomes cost-effective.

    “If we achieve our 2050 vision, we can’t be using natural gas the way we are today,” Fowke told the PUC.” http://www.startribune.com/ceo-xcel-will-likely-need-gas-or-nuclear-power-to-reach-carbon-free-goals/510298362/

  25. Ireneusz Palmowski

    Anyone who neglects the impact of long-term changes in the level of galactic radiation and the Earth’s magnetic field on the climate deceives people.

    https://www.esa.int/Our_Activities/Observing_the_Earth/Swarm/Swarm_reveals_Earth_s_changing_magnetism

    • Ireneusz Palmowski

      Sorry.
      “Checking water levels behind upstream dams, in areas hit by heavy to exceptional rainfall, it seems that pools may be near or above design capacity [or] maxed out, so the dams are forced to pass high stream flows downstream to prevent catastrophe,” Andrews said.

      The flood gauge near Ponca City, Oklahoma, on the Arkansas River crested at 22.26 feet on Friday, breaking the 1993 record of 20.11 ft.

      • David Wojick

        Indeed. Fifty years ago I told the COE flood control program that they could not accurately estimate the 100-year flood with just 100 years of data, or 200 years for that matter. (They told me to go away, and being young I did as told.)

        The reason records are constantly broken is that the records are very short compared to the cycles. This is painfully obvious when you compare the record for one day to the next, which is often different.

      • Yes I’m sorry too.

        With 100 years of data the problem is relatively simple – albeit with broad uncertainty.

        More practical design rainfall is obtained using complex derivations of synthetic rainfall intensity, frequency and duration – with areal adjustments – and routing that through specific catchments. You imagine you can do better?
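
        To make the uncertainty concrete, here is a minimal sketch of the textbook frequency-analysis approach – a Gumbel fit to synthetic annual peaks with a bootstrap interval. Real design practice, as noted above, is considerably more involved; the point is only how broad the 100-year estimate remains with a 100-year record.

          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(42)
          # Hypothetical record: 100 annual peak flows from a known Gumbel law
          peaks = stats.gumbel_r.rvs(loc=1000.0, scale=300.0, size=100,
                                     random_state=rng)

          def return_level(sample, T=100.0):
              # T-year return level = the (1 - 1/T) quantile of the fitted Gumbel
              loc, scale = stats.gumbel_r.fit(sample)
              return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

          best = return_level(peaks)
          boot = [return_level(rng.choice(peaks, size=peaks.size))
                  for _ in range(2000)]
          lo, hi = np.percentile(boot, [5, 95])
          print(f"100-yr flood estimate: {best:.0f} (90% CI {lo:.0f}-{hi:.0f})")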

    • Ireneusz Palmowski

      “The overall weather pattern that has been in place across the U.S. will continue early this week, which will bring more rounds of severe weather to the Plains,” according to AccuWeather Meteorologist Brett Rathbun.

  26. Day by day, the El Niño keeps not dying; June 19 could see more red off Peru/Niño 1+2:

    • The significance of the subsurface temperatures is the shoaling of the thermocline in the eastern Pacific. The noncommittal pattern seen for a while now seems more likely than not to break abruptly and soon.

    • Ireneusz Palmowski

      SST Anomaly 7-day Change

    • Ireneusz Palmowski

      Depth-longitude sections of anomalous equatorial ocean temperatures (°C) for the recent 13 weeks. Contour interval is 1°C. Anomalies are departures from the 1981-2000 base period means.

    • You keep posting these graphics with little to show that you understand any of it?

      There seems to be an ideological commitment to one state of ENSO or the other. It doesn’t matter. It is a chaotic oscillator – a quasi standing wave in Earth’s spatio-temporal chaotic system. It shifts abruptly from one state to another over years – and importantly in intensity and frequency on decadal to millennial scales. Seven day anomalies are a nonsense. We need to look at variability over a 1000 years or more.


      https://journals.ametsoc.org/doi/full/10.1175/JCLI-D-12-00003.1

      • Robert I Ellison: You keep posting these graphics with little to show that you understand any of it?

        Judith, why do you permit RIE’s endless insults against almost everyone? They spoil what is otherwise a good reading experience here.

      • These graphics are repeated with no explanation. We have here two versions of subsurface heat in the Pacific – and still I don’t know why.

        As for Matthew – he suggests I should apologize for saying he doesn’t understand something. That’s hardly news.

        https://judithcurry.com/2019/05/11/week-in-review-science-edition-101/#comment-893198

        But what if the insult consists of disagreeing on the substance of a scientific paper he had clearly misinterpreted?

      • Robert I Ellison: But if insult consists of disagreeing on the substance of a scientific papers he had clearly misinterpreted?

        You did not in fact comment on a disagreement. You asserted outright that I had not understood a paper that I merely quoted. And you quoted no [misinterpretation].

      • Here is your quote: “Helmholtz decomposition states that a vector field (satisfying appropriate smoothness and decay conditions) can be decomposed as the sum of the form – grad Φ + curl A where Φ is a scalar field, called scalar potential, and A is a vector field, called a vector potential.” Wikipedia

        I’m quite sure Matthew doesn’t understand what they did.

        What proposition mine was in error?

      • That was in response to “Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.” As is clear in the link to the interaction, that is clearly not the case. Nor is it an insult to say that you don’t understand what they did – bruised ego notwithstanding.

        https://judithcurry.com/2019/05/11/week-in-review-science-edition-101/#comment-893146

      • Robert I Ellison: “Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.”.

        You mean they didn’t?

      • As Javier pointed out – energy transport in the oceans is via currents. As I clearly said currents have a rotational and vector component. If you cannot yet understand the purpose of the Helmholtz decomposition my best recourse is to go back to ignoring you.

        But saying you don’t understand is not an insult.

      • From Forget and Ferreira: Analyses of ocean heat transport tend to emphasize global-scale seawater pathways and concepts such as the great ocean conveyor belt. However, it is the divergence or convergence of heat transport within an oceanic region, rather than the origin or destination of seawater transiting through that region, that is most immediately relevant to Earth’s heat budget.

        And a detail from the text: Owing to the presence of the Atlantic Meridional Overturning Circulation (AMOC), which brings relatively warm water into the Northern Hemisphere and returns colder water back into the Southern Hemisphere, Atlantic OHT is northward across the Equator and in both hemispheres [13].

        Those read to me like distinctions between heat transport and mass transport, though they are obviously co-occurring.

      • Robert I Ellison: But saying you don’t understand is not an insult.

        You still haven’t quoted anything I wrote that was in error.

        As Javier pointed out – energy transport in the oceans is via currents.

        I did not deny that, and besides, Javier wrote more: that he doubted heat flow and mass flow could be distinguished. But clearly they can be distinguished: while there is net flow of energy from the tropics to the poles, there is no net flow of water from the tropics to the poles (excepting the relatively small masses carried to Antarctica as snow — and that is likely cyclical). By basing their model for Ocean Heat Content flows on the Helmholtz decomposition of the OHC field, and estimating parameters to give a good model of measured OCH, Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.

      • “Progress can be made, however, through the decomposition of plain OHT (OHT0) into an ‘effective’ OHT that balances heat exchanges with the atmosphere (OHT∇) and a second term representing internal ocean heat loops that do not immediately affect Earth’s energy budget (OHTR,0). In mathematical terms, we carry out a Helmholtz decomposition of OHT0 into divergent (OHT∇) and rotational (OHTR,0) components.” op. cit.

        The meridional component of fluid flow has an internal energy and releases heat to the atmosphere – the loops don’t.

      • Robert I Ellison, quoting: “Progress can be made, however, through the decomposition of plain OHT (OHT0) into an ‘effective’ OHT that balances heat exchanges with the atmosphere (OHT∇) and a second term representing internal ocean heat loops that do not immediately affect Earth’s energy budget (OHTR,0). In mathematical terms, we carry out a Helmholtz decomposition of OHT0 into divergent (OHT∇) and rotational (OHTR,0) components.” op. cit.

        So they did. If I wrote anything false, be sure to quote it and let us know how it was wrong. Oh, I did type OCH and OHC for OHT — sorry.

        Decomposition method. As is classically done in Helmholtz decomposition, we separate a vector field F, defined over the global ocean, into a divergent component, Fdiv, and a rotational component, Frot, that satisfy equations (1) and (2).

            F = Fdiv + Frot   (1)
            ∇ × Fdiv = 0 and ∇ · Frot = 0   (2)

        The divergent and rotational components are associated with, respectively, a scalar potential, P, and a streamfunction, S, as expressed in equations (3) and (4).

            Fdiv = ∇P   (3)
            Frot = ∇ × S   (4)

        In practice, P is computed by solving equation (5) over the global ocean.

            ∇²P = ∇ · F   (5)

        This Poisson equation is obtained by taking the divergence of equation (1) and then substituting equations (2) and (3). The divergent component, Fdiv, then readily derives from P via equation (3), the rotational component is computed as the remainder of equation (1), Frot = F − Fdiv, and integration of the transverse component of Frot following grid line paths gives S in equation (4).
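
        For anyone following along, here is a minimal numerical sketch of such a decomposition on a doubly periodic grid. It is emphatically not F&F’s code (their solver works on a global ocean domain with land boundaries); it only illustrates the quoted steps: solve the Poisson equation (5) for P, take Fdiv = ∇P, and obtain Frot as the remainder.

          import numpy as np

          n = 128
          x = np.linspace(0, 2 * np.pi, n, endpoint=False)
          X, Y = np.meshgrid(x, x, indexing="ij")

          # Test field: grad(sin x + sin y) plus a rotational part (sin y, -sin x)
          Fx = np.cos(X) + np.sin(Y)
          Fy = np.cos(Y) - np.sin(X)

          k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)  # integer wavenumbers
          KX, KY = np.meshgrid(k, k, indexing="ij")
          K2 = KX**2 + KY**2
          K2[0, 0] = 1.0  # avoid 0/0 for the mean mode

          # Solve the Poisson equation (5): del^2 P = div F, in spectral space
          div_hat = 1j * KX * np.fft.fft2(Fx) + 1j * KY * np.fft.fft2(Fy)
          P_hat = -div_hat / K2  # spectral del^2 multiplies by -(k^2)
          P_hat[0, 0] = 0.0

          # Fdiv = grad P, eq. (3); Frot is the remainder, as in the quoted method
          Fx_div = np.fft.ifft2(1j * KX * P_hat).real
          Fy_div = np.fft.ifft2(1j * KY * P_hat).real
          Fx_rot, Fy_rot = Fx - Fx_div, Fy - Fy_div

          # Check eq. (2): the rotational remainder is (numerically) divergence-free
          chk = np.fft.ifft2(1j * KX * np.fft.fft2(Fx_rot)
                             + 1j * KY * np.fft.fft2(Fy_rot)).real
          print("max |div Frot| =", np.abs(chk).max())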

      • Copying and pasting this does nothing to show that you understand the purpose.

        The meridional component of oceanic fluid flow has an internal energy and releases heat to the atmosphere – the loops don’t.

      • Robert I Ellison: The meridional component of oceanic fluid flow has an internal energy and releases heat to the atmosphere – the loops don’t.

        True enough.

        Have I written anything false?

      • “Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.”

        No they don’t.

        Robert I Ellison: I’m quite sure Matthew doesn’t understand what they did.

        “You have no call to say that. You ought to apologize.”

        I am sorry you were wrong – but more sorry that you can’t admit it.

      • Robert I Ellison: “Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.”

        No they don’t.

        Yes they do.

        from their abstract: Analyses of ocean heat transport tend to emphasize global-scale seawater pathways and concepts such as the great ocean conveyor belt. However, it is the divergence or convergence of heat transport within an oceanic region, rather than the origin or destination of seawater transiting through that region, that is most immediately relevant to Earth’s heat budget. … However, effective inter-ocean heat transports are smaller than expected, suggesting that global-scale seawater pathways play only a minor role in Earth’s heat budget.

        Numerous concordant propositions are in the text.

        Forget and Ferreira focused on heat transport.

        I don’t know whether anyone is following us, but if so I recommend that they read the paper and the supplemental online material.

      • Nor it seems will you let it go. The fundamental purpose of the Helmholtz decomposition is to separate heat transport in meanders and eddies to derive an estimate of heat transport north or south. An effective heat transport.

        “Progress can be made, however, through the decomposition of plain OHT (OHT0) into an ‘effective’ OHT that balances heat exchanges with the atmosphere (OHT∇) and a second term representing internal ocean heat loops that do not immediately affect Earth’s energy budget (OHTR,0). In mathematical terms, we carry out a Helmholtz decomposition of OHT0 into divergent (OHT∇) and rotational (OHTR,0) components.”

        You agree with this and then what?

      • Water at any temperature has a specific energy content. It may travel in oceans currents in meanders and eddies without losing energy to the atmosphere – or into higher latitudes where heat is lost. A simple idea.

      • Robert I Ellison: Water at any temperature has a specific energy content. It may travel in oceans currents in meanders and eddies without losing energy to the atmosphere – or into higher latitudes where heat is lost. A simple idea.

        Forget and Ferreira: However, effective inter-ocean heat transports are smaller than expected, suggesting that global-scale seawater pathways play only a minor role in Earth’s heat budget.

        Those statements are not in disagreement.

        Using mathematical curls to model heat swirls, eddies and vortices; and using mathematical divs to model transverse heat transfer, F & F set up a large set of simultaneous (partial) differential equations. Then with a Matlab diffeqn solver in a Matlab nonlinear parameter estimation program, they estimated parameters to the system to provide a good fit, on the grid of measured values, of the fitted/modeled values to the measured values. From the solutions, they computed net directional flows and rotational flows of the heat, not the water, in various areas of the ocean surface. So, from the possibilities of what may happen, they modeled quantitatively, to at least a first degree of approximation, what is happening in the heat flows. fwiw, they have put the code and data on the web.

      • Despite all the words – heat transport is via fluid flow in oceans. The energy is in the mass of water until it is lost to the atmosphere. One cannot have heat transport in oceans without fluid flow – as is very obvious and simple geophysics.

        Heat transport to higher latitudes – where energy is lost – is net of ‘heat loops’ – the rotational component. The problem that was examined in the paper is that the rotational component makes net heat transport to higher latitudes in basins difficult to quantify. Hence the decomposition to divergent and rotational components.

      • “The ocean plays an important role in redistributing heat within the evolving climate system [1]. Perhaps most importantly, it transports heat from the Equator, where oceans take up heat in excess, towards higher latitudes where heat gets released to the atmosphere. This is clearly seen in observational estimates of meridional ocean heat transport (OHT) integrated all the way around the Earth, which is directed poleward in both hemispheres [2–4]. However, interpretation of measurements can be more difficult when looking at individual ocean basins and sections as seawater can loop around land masses in complicated ways without immediately affecting the atmosphere. This complexity has led to high uncertainties in regional OHT analyses [5]. Here we provide a framework to help reconcile previous estimates.” op. cit.

        All this copying and pasting of what they did – I could easily copy and paste it myself – doesn’t mean that you understand why they did it. My first degree was in engineering – I can do the math and the hydrodynamic modeling. But first, as Feynman said, it’s about seeing the jiggle jiggle jiggle. That comes in the introduction and not the methods.

      • “…he suggests I should apologized for saying he doesn’t understand something. That’s hardly news.” RIE

        Of course you insult people here, you just get away with it because your style of delivery is somewhat subtle. You may not even realize it, so comfortable are you with it.

        You’re smart, people like your work. I like your work. But I have noticed you getting away with this a bit. You’re a leader here, and there’s LOTS of redundancy and repetition of knowledge-verbiage… you’re just going to have to figure out a way to be more patient with it all – an impossible task.

      • Robert I Ellison: “The ocean plays an important role in redistributing heat within the evolving climate system [1]. Perhaps most importantly, it transports heat from the Equator, where oceans take up heat in excess, towards higher latitudes where heat gets released to the atmosphere. This is clearly seen in observational estimates of meridional ocean heat transport (OHT) integrated all the way around the Earth, which is directed poleward in both hemispheres [2–4].

        Nevertheless, Forget and Ferreira made a strong effort to distinguish between the heat transport and the mass transport, as I said, and as they said in the passages that I quoted.

      • “Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.”

        Heat transport in oceans is as internal energy in water. There is no disentangling energy from mass.

        And I have no ambition to be a ‘leader’ of Climate etc skeptical curmudgeons with crude and eccentric theories. Put on some big girl pants.

      • Robert I Ellison: Heat transport in oceans is as internal energy in water.

        Did you read their paper? Heat transport in oceans is not the only heat transport.

      • “Global ocean heat transport dominated by heat export from the tropical Pacific”

        Did you read the title?

      • Robert I Ellison: “Global ocean heat transport dominated by heat export from the tropical Pacific”

        Right, heat export from the tropical Pacific. But not strictly via the large massive ocean gyres.

      • All heat transport is via currents – effective heat transport is total transport less ‘heat loops’ that make direct measurement at sections highly uncertain.

      • Robert I Ellison: All heat transport is via currents

        That’s different from: The Pacific covers most of the global tropics and so gains most of the heat. This fact is not controversial. It is redistributed in great ocean gyres. These modes of ocean circulation have been mapped for navigation for centuries.

        Heat flow can be distinguished from mass flow. The hydrologic cycle (which includes vertical currents) is a net transporter of heat from the surface to the Cloud Condensation Layer and higher, but it is not a net transporter of water, which circulates. As I wrote earlier, there is net heat flow from the Equator and tropics to the poles, but the transport of water is in the gyres, with little to no net transfer to the poles.

        For more from Forget and Ferreira: While concepts such as the great conveyor belt are often highlighted in the OHT literature, our analysis instead emphasizes that heat redistribution within the Pacific is the largest term. This result stresses that sustained observation of the global ocean as a whole, not just at a few locations and gates separating ocean basins, is crucial to monitor and understand OHT.

        So you began with “great ocean gyres” and ended with vague “currents”.

        For my last word, I repeat the last sentence in their abstract: However, effective inter-ocean heat transports are smaller than expected, suggesting that global-scale seawater pathways play only a minor role in Earth’s heat budget.

      • “Here we use a recent gridded estimate of ocean heat transport to reveal the net effect on Earth’s heat budget, the ‘effective’ ocean heat transport, by removing internal ocean heat loops that have obscured the interpretation of measurements…. However, effective inter-ocean heat transports are smaller than expected, suggesting that global-scale seawater pathways play only a minor role in Earth’s heat budget.”

        The first sentence says what they did – estimate effective heat transport within basins which is as internal energy in mass movement of water – the second that heat transport between the Pacific, Indian and Atlantic Ocean basins plays a minor role in the global energy budget. Understood as is painfully evident in plain English – it lends no support to your incorrect contention.

        “Forget and Ferreira make a strong effort to disentangle energy transport from mass transport.”

        And frankly on a planetary scale the difference between great ocean gyres, planetary waves and currents such as the Gulf Stream are nothing more than semantics.

        https://www.nasa.gov/topics/earth/features/perpetual-ocean.html

      • And – groan – heat is transported in oceans to where it is lost to the atmosphere, and then it exits the system in the global energy budget. But then we are talking heat in the atmosphere and not heat transport in the oceans.

      • Robert I Ellison: the second that heat transport between the Pacific, Indian and Atlantic Ocean basins plays a minor role in the global energy budget.

        You made your own substitution for F&Fs phrase “seawater pathways.”

        And frankly on a planetary scale the difference between great ocean gyres, planetary waves and currents such as the Gulf Stream are nothing more than semantics.

        The semantics of science: the associations among human ideas, human language, and the knowable properties of the stuff of existence. Not to be confused with mere “renaming”! mass vs weight; weight vs density; linear momentum vs kinetic energy; heat vs temperature; latent heat vs tangible heat. Or did you mean mere renaming?

        Having first asserted that F&F did not make an attempt to distinguish heat flow from mass flow, you now assert that the distinction they arrived at is mere “semantics”.

        So, …, what did you think of the paper? Did you like the quantitative estimates of particular heat flows? Was there a mistake made in the decision to publish it?

      • Robert I Ellison: But then we are talking heat in the atmosphere and not heat transport in the oceans.

        Quite so.

        Not to be confused with: The Pacific covers most of the global tropics and so gains most of the heat. This fact is not controversial. It is redistributed in great ocean gyres. These modes of ocean circulation have been mapped for navigation for centuries.

        Forget and Ferreira focused on the heat flows not the mass flows, by modeling the OHT fields on large grids covering the oceans (computed for earlier papers). They report their results in units of energy flow: petawatts.

      • As some 92% of incoming energy ends up in oceans as heat carried on currents such as the Gulf Stream – with heat transport in oceans being the topic of the paper – and as we all know the power flux is fluid flow times specific energy… there seems to be an endless confusion.
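
        For the bookkeeping, the standard expression (textbook oceanography, not a formula from F&F’s paper) for the heat transport across a zonal section at latitude y is

            \mathrm{OHT}(y) \;=\; \rho\, c_p \iint v(x, y, z)\, \theta(x, y, z)\; dz\, dx

        where ρ is seawater density, c_p its specific heat, v the meridional velocity and θ the potential temperature: the energy flux is velocity times heat content per unit volume, integrated over the section.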

      • “You made your own substitution for F&Fs phrase “seawater pathways.”

        ‘… effective inter-ocean heat transports are smaller than expected, suggesting that global-scale seawater pathways play only a minor role in Earth’s heat budget.’

        I named the oceans. You had made such a point of repeatedly quoting this in ways that were at odds with the plain English meaning.

        “The semantics of science: the associations among human ideas, human language, and the knowable properties of the stuff of existence. Not to be confused with mere “renaming”! mass vs weight; weight vs density; linear momentum vs kinetic energy; heat vs temperature; latent heat vs tangible heat. Or did you mean mere renaming?”

        … And frankly on a planetary scale the difference between great ocean gyres, planetary waves and currents such as the Gulf Stream is nothing more than semantics… is what I said… and I linked to a NASA page.

        “Having first asserted that F&F did not make an attempt to distinguish heat flow from mass flow, you now assert that the distinction they arrived at is mere “semantics.”

        All heat transport in oceans – mostly within rather than between basins – is as internal heat in fluid flow.

        “Progress can be made, however, through the decomposition of plain OHT (OHT0) into an ‘effective’ OHT that balances heat exchanges with the atmosphere (OHT∇) and a second term representing internal ocean heat loops that do not immediately affect Earth’s energy budget (OHTR,0).”
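
        In symbols, the decomposition quoted above is just OHT0 = OHT∇ + OHTR,0 – effective transport plus internal heat loops that do not immediately affect the energy budget.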

        It is clear – however – why no progress can be made here.

      • Here is the table of air-sea heat flux, from p. 7 of the Supplemental online material of Forget and Ferreira:

        Table S8: Air-sea heat flux integrated (in PW) over the 9 regions (columns) shown in Fig. 1 for each of the eight products listed in Tab. S7 (rows). The region names are abbreviated as follows: “Pan-Arc” denotes the Pan-Arctic region; “Ind”, “Atl”, and “Pac” respectively denote the Indian, Pacific, and Atlantic sectors; the “N-”, “T-”, and “S-” prefix respectively denote the Northern, Tropical, and Southern latitude bands.

        Name In Fig. S7   Pan-Arc   T-Ind   S-Ind   N-Pac   T-Pac   S-Pac   N-Atl   T-Atl   S-Atl
        ECCO-FLUX          -0.30    +0.15   -0.32   -0.50   +1.47   -0.35   -0.60   +0.62   -0.08
        ERAi-FLUX          -0.25    +0.29   +0.19   -0.18   +1.84   +0.09   -0.42   +1.09   +0.37
        UR-FLUX            -0.45    +0.37   -0.39   -0.54   +1.58   -0.40   -0.77   +0.80   -0.01
        ERAi-CERES         -0.35    +0.22   -0.29   -0.33   +1.45   -0.23   -0.65   +0.58   +0.08
        CFSR-CERES         -0.48    +0.05   -0.52   -0.50   +1.55   -0.42   -0.84   +0.48   +0.01
        NCEP-CERES         -0.30    +0.55   -0.39   -0.54   +0.96   -0.36   -0.85   +0.56   +0.03
        JRA-CERES          -0.35    -0.20   -0.24   -0.32   +0.92   +0.01   -0.71   +0.21   +0.18
        CORE2-FLUX         -0.36    -0.15   -0.05   -0.45   +1.33   -0.26   -0.77   +0.66   +0.06

        Better formatting in the original. How much of their textual summaries, numerical summaries, and descriptions of data and methods anyone wants to discredit or ignore outright isn’t my main interest. But F&F clearly made an effort to distinguish heat flow from water flow, as in this table as just one example. Nothing in their paper can be interpreted to mean that they focused only on heat transported by ocean gyres. It is a substantive contribution to the study of heat flows (including heat export from the tropical Pacific) between regions. They also did not ignore global-scale seawater pathways but conclude from the results of their analysis and calculations that those pathways play only a minor role in the Earth’s heat budget.

        The whole paper and the supplemental information are available easily through Sci-Hub.

      • The arrows in the boxes are heat transport in oceans and between basins – blue total and red effective. The difference illustrates the problem with direct measurement of OHT. The power flux units are given as 0.01 PW – but all of the power flux is as internal energy in water as it moves across sections.

        The idea Matthew is defending to the bitter end is both wrong and trivial.

      • The data that are used by Forget and Ferreira are outputs from modeling, described in this paper: Forget, G. et al. ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation. Geosci. Model Dev. 8, 3071–3104 (2015). Updates are listed in the references as well, but this seems to be the most complete description, short of the MATLAB code and comments themselves. The Forget et al. ECCO version 4 paper is itself not an easy read. It is reportedly available openly from MIT, but I accessed it through Sci-Hub.
        One quote will have to suffice:

        Diagnosing mass, heat, and salt budgets requires snapshots of the ocean + sea-ice + snow model state (to compute the tendency terms), as well as time-averaged fluxes between snapshots (to match the tendency terms). The MITgcm flux output accounts for variations of layer thicknesses in the z∗ coordinate. Tendency terms are computed after the fact using snapshots of, e.g., η and θ (Sect. 3.1). The assembled mass, heat and salt budgets are provided online in the extensive form (in kg s^−1, J s^−1, and g s^−1, respectively) and in nctiles format (monthly, three-dimensional). The budget residuals are less than 10^−6 times the budget magnitude (a Euclidean norm is used).

        Kilograms/sec and Joules/sec are distinguishable flows, even though they are sometimes correlated.

      • “In practice, this means that for any given volume element V, the corresponding heat content is simply given by H = ρcCpθV where ρc is the constant Boussinesq density, Cp is the constant specific heat capacity and θ is the potential temperature averaged over V.” op. cit.

        The volume V has of course a mass – m – given by…. etc
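
        In code, the bookkeeping quoted above is trivial. A minimal sketch – the constants are illustrative round numbers, not F&F’s actual values:

            # Boussinesq heat content of a volume element: H = rho_c * Cp * theta * V
            rho_c = 1029.0   # constant Boussinesq density, kg m^-3 (illustrative)
            c_p = 3994.0     # constant specific heat capacity, J kg^-1 K^-1 (illustrative)

            def heat_content(theta, volume):
                # theta: mean potential temperature of the element (deg C); volume in m^3
                return rho_c * c_p * theta * volume   # joules

            def mass(volume):
                # the same element of course also has a mass, m = rho_c * V
                return rho_c * volume   # kilograms

            # A heat transport across a section is then mass transport times specific
            # energy: rho_c * c_p * theta * (volume transport in m^3 s^-1) gives watts.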

      • Robert I Ellison: The volume V has of course a mass – m – given by…. etc

        Has anyone written that heat and mass flow are always independent or unrelated, or that either water or air lacks mass? The claim at issue is that Forget and Ferreira made an effort to distinguish heat flow from mass flow.

        For another example, heat flows from the Southern Ocean over Antarctica, but the water of the Southern Ocean flows in a gyre around Antarctica.

      • Effective heat transport in oceans carries heat to higher latitudes – all as the heat content of water – where it is lost to the atmosphere. Heat transport in oceans is a function of mass transport – and F&F define the function.

        There are of course other ways that power flows – but that is not the oceanic power flux estimated in this paper.

      • Robert I Ellison: Effective heat transport in oceans carries heat to higher latitudes – all as the heat content of water – where it is lost to the atmosphere

        The way you wrote that, heat is not lost from ocean water to the atmosphere until after the ocean current has transferred it to higher latitudes. Is that what you meant? That’s different from saying that heat is lost to the atmosphere all the way as water flows from low to high latitudes.

        Note that in that sentence you distinguish heat flow from water flow — you have been maintaining resolutely that such a distinction was not made by F&F.

      • The subject of the paper is heat transport in oceans. Heat lost to the atmosphere is not heat transported in oceans.

      • Robert I. Ellison | May 31, 2019 at 4:26 pm

        You missed or ignored the heat uptake from or release to the atmosphere. It’s in the figure caption.

        The figure represents the effort to distinguish between heat flow and water flow.

      • The global energy dynamic involves energy gained from the sun, transported in water moving around the planet and lost ultimately to space as electromagnetic radiation. That would seem obvious. The boxes with the numbers in them are the purpose of the study.

      • Robert I Ellison: The boxes with the numbers in them are the purpose of the study.

        Maybe you are just having trouble with the words “distinguish” and “distinguishable”. The blue and orange arrows are parts of the effort to “distinguish”.

      • The purple heat loops and the orange ‘divergent’ flows?

      • Robert I Ellison: The purple heat loops and the orange ‘divergent’ flows?

        Sure. Read the caption.

      • You mentioned blue – get it right. The purpose of the paper was to ‘distinguish’ between divergent flows at sections and heat loops, was it not?

      • Brother Bob has gone to a lot of trouble to establish that the following 2 flow graphs can’t be distinguished.

        A: Robert I. Ellison | May 31, 2019 at 4:26 pm |

        B: Robert I. Ellison | May 21, 2019 at 6:44 pm

        I think they are distinguishable, and Forget and Ferreira made an effort to make the second distinguishable from the first.

        Sorry I can’t copy and paste the figures themselves. If I confused any readers by confusing “purple” and “blue” in arrow colors, I apologize.

      • There is one way to redirect to comments – click on the date and copy the address. Or you can click on the image itself and copy the address. Until then I have no idea what it means.

        But the core of the paper is the Helmholtz decomposition into rotational components – which have no loss of energy to the atmosphere – and divergent flows that do. This was the second thing I said on the matter – after reluctantly entering a discussion with you. After you demanded an apology for my saying you were wrong – and still are, it seems. That’s not how it works. I can say you are wrong. And clearly some things will remain a mystery to those without training and experience in Earth system science. But clearly enough is much more than enough.

      • Robert I Ellison: But the core of the paper is the Helmholtz decomposition into rotational components – that have no loss of energy to the atmosphere – and divergent flows that do.

        They performed the Helmholtz decomposition of heat flow into rotational and divergent flows, not a decomposition of water flow into rotational and divergent flows.
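
        Since the operation itself keeps being argued over: a Helmholtz decomposition splits a field into a divergence-free (rotational) part and a divergent part. A minimal numpy sketch on a doubly periodic grid – illustrative only, not F&F’s method, which works on the sphere with basin boundaries:

            import numpy as np

            def helmholtz_decompose(u, v, dx=1.0):
                # Split a doubly periodic 2-D vector field into a divergent part
                # (gradient of a potential) and a rotational, divergence-free residual.
                ny, nx = u.shape
                kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
                ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
                KX, KY = np.meshgrid(kx, ky)
                k2 = KX**2 + KY**2
                k2[0, 0] = 1.0  # avoid 0/0; the mean flow stays in the residual
                uh, vh = np.fft.fft2(u), np.fft.fft2(v)
                div_hat = 1j * KX * uh + 1j * KY * vh   # spectral divergence
                phi_hat = -div_hat / k2                 # solves laplacian(phi) = divergence
                u_div = np.real(np.fft.ifft2(1j * KX * phi_hat))
                v_div = np.real(np.fft.ifft2(1j * KY * phi_hat))
                return (u_div, v_div), (u - u_div, v - v_div)

            # Applied to a heat transport field, only the divergent part exchanges heat
            # with the atmosphere; the rotational part is the closed heat loops.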

        I can say you are wrong.

        That was not what I requested an apology for. You wrote for no reason that I did not understand what a Helmholtz decomposition was. And still it seems that you have not caught onto their distinguishing between water flow (which you graphed) and heat flow (which you also graphed).

      • Robert I Ellison, I shall let you have the last word:

        The Pacific covers most of the global tropics and so gains most of the heat. This fact is not controversial. It is redistributed in great ocean gyres. These modes of ocean circulation have been mapped for navigation for centuries.

    • All you have, ever, is insults.

      I said the predictions being made here of the end of El Niño could be premature. That was not hard to figure out.

      With Niño 1+2 rising to 1.163, the El Niño is not about to end soon.

      The PDO index went over 1.0 in April, entirely consistent with recent SAT anomalies and the rebound of El Niño, which may not end abruptly for months. And even if it does, so what? Another warmest ENSO neutral, or maybe even one of those frightening warmest-ever La Niña events. Gee.

      You called La Niña last year. That’s good – your understanding ain’t.

      • This from someone who is routinely moderated for angry, crude and abusive language?

        La Niña happens when there is cold upwelling in the eastern Pacific. A high-pressure divergent cell forms over the cold water, with positive feedbacks that strengthen trade winds over the Pacific – piling warm surface water up against Australia and Indonesia.

        At some stage atmospheric instability – the Madden-Julian Oscillation perhaps – causes the trade winds to falter, and higher geopotential water in the west flows east. It hits the coast of the Americas and dissipates north and south. The cycle begins again.

        Potential energy in the west was exhausted in the 16/17 El Niño, with little to nothing in the way of La Niña recharge since. The chances of any significant El Niño – as I said last year – are about the same as fairy dust and unicorns.



        https://www.esrl.noaa.gov/psd/enso/mei/

        Things have been meandering about in this noncommittal state for some time – but now that we have a shoaling of the thermocline in the east, the stage is set for the next La Niña. It is impossible to say just when feedbacks will kick in – but they surely will, and it happens about this time of year.

        But this is just the workings of the charge/discharge nonlinear oscillator that is ENSO. What seems to matter for climate is the abrupt modulation of the frequency and intensity of events over decades to millennia.
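
        The charge/discharge oscillator language has a standard minimal form – the recharge oscillator of Jin (1997). A toy sketch with made-up parameters, showing only the quarter-cycle lead of warm water volume over SST:

            import numpy as np

            omega = 2 * np.pi / 4.0   # ~4-year recharge/discharge cycle (1/years) - illustrative
            eps = 0.1                 # weak damping (1/years) - illustrative
            dt = 0.01                 # time step (years)

            T, h = 0.5, 0.0           # SST anomaly (K) and warm-water-volume anomaly
            history = []
            for _ in range(int(20 / dt)):
                dT = (omega * h - eps * T) * dt    # discharged warm water warms the east
                dh = (-omega * T - eps * h) * dt   # El Nino (T > 0) drains the warm pool
                T, h = T + dT, h + dh
                history.append((T, h))
            # h peaks a quarter-cycle before T: La Nina recharge sets up the next El Nino.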

        “These shifts also have a profound effect on the average global surface air temperature of the Earth. The most recent shift in the 1990s is one of the reasons that the Earth’s temperature has not risen further since 1998. The study, published in the online edition of Journal of Climate, shows the potential for long-term climate predictions.” https://www.geomar.de/en/news/article/klimavorhersagen-ueber-mehrere-jahre-moeglich/

        Any recent warming is mostly the result of decreasing low cloud cover in the Pacific – showing the mechanism at play.

        https://www.mdpi.com/2225-1154/6/3/62

      • You can ask Judith Curry: from my first day on this blog I was almost never moderated – for years and years in a row.

        I got sick and tired of your nonstop abusive behavior, so I challenged the neighborhood thug: you.

      • Filling in the gaps in your selective interpretation of ENSO entrails is not thuggery.

      • And yet, you keep guessing wrong.

      • Since the day predictions appeared that the El Niño was about to end, I have said those predictions could be premature.

        And they were. Each day has brought an increase in Niño 1+2. This El Niño, on the more modern NOAA standard, could hang in until fall 2019, maybe even winter 2020.

      • JCH: Niño 1+2 is known to be somewhat volatile – in other words, Niño 1+2 may come, may go. It’s not a good forecasting tool, though I think you know that.

      • Judy, a comment of mine is “awaiting moderation”. I swear that the content conforms to the rules here! :-)

    • JCH
      There is more to El Niño than warming of the equatorial Pacific. Personally I don’t consider the “Modoki” type of El Niño a real El Niño at all. El Niño should mean a strong excursion of the Bjerknes feedback, the positive feedback linking interruption of Peruvian upwelling with weakening or even reversal of the trade winds. This is followed by the second part of the two-stroke cycle, the reactive strong upwelling and strengthened trades called La Niña. Generalised warming of the equatorial Pacific can occur through other mechanisms, but it isn’t El Niño and it doesn’t have the global climatic warming effect of El Niño. The term “little boy” is being used too loosely, I feel. I’m not sure we’ve even had a “real” El Niño since 1999.

      The Pacific signals now – subsurface temperatures, ocean heat content decline, strong trades and rising SOI – don’t really point to El Niño. The anchovies aren’t saying anything conclusive.

  27. Periodicity disruption of a model quasi-biennial oscillation
    Antoine Renaud, Louis-Philippe Nadeau and Antoine Venaille

    The quasi-biennial oscillation (QBO) of equatorial winds on Earth is the clearest example of the spontaneous emergence of a periodic phenomenon in geophysical fluids. In recent years, observations have revealed intriguing disruptions of this regular behaviour, and different QBO-like regimes have been reported in a variety of systems. Here we show that part of the variability in mean flow reversals can be attributed to the intrinsic dynamics of wave-mean flow interactions in stratified fluids. Using a constant-in-time monochromatic wave forcing, bifurcation diagrams are mapped for a hierarchy of simplified models of the QBO, ranging from a quasilinear model to fully nonlinear simulations. The existence of new bifurcations associated with faster and shallower flow reversals, as well as a quasiperiodic route to chaos, is reported in these models. The possibility for periodicity disruptions is investigated by probing the resilience of regular wind reversals to external perturbations.

    Lots of modeling results. Sorry I could not copy/paste the figures.

  28. “It’s amazing that we were so spectacularly wrong about something we should understand really well: the sun,” said Brian Fields, a particle astrophysicist at the University of Illinois, Urbana-Champaign.

    The unexpected signal has emerged in data from the Fermi Gamma-ray Space Telescope, a NASA observatory that scans the sky from its outpost in low-Earth orbit. As more Fermi data have accrued, revealing the spectrum of gamma rays coming from the sun in ever-greater detail, the puzzles have only proliferated.

    “We just kept finding surprising things,” said Annika Peter of Ohio State University, a co-author of a recent white paper summarizing several years of findings about the solar gamma-ray signal. “It’s definitely the most surprising thing I’ve ever worked on.”

    Not only is the gamma-ray signal far stronger than a decades-old theory predicts; it also extends to much higher frequencies than predicted, and it inexplicably varies across the face of the sun and throughout the 11-year solar cycle. Then there’s the gap, which researchers call a “dip” — a lack of gamma rays with frequencies around 10 trillion trillion hertz. “The dip just defies all logic,” said Tim Linden, a particle astrophysicist at Ohio State who helped analyze the signal.

    “…we were so spectacularly wrong…” “…defies all logic…”

    And yet the establishment has Solar all figured out and everyone should just move on and accept the IPCC conclusions. Regardless of the significance of these findings, the point is that current knowledge is not terminal knowledge. Did previous generations have this level of hubris to think they had aced the exam?

  29. Ireneusz Palmowski

    I’m sorry.
    Circulation over North America is not changing. Another low from southern California will move into the center of the continent.

  30. WRT John Christy: Climate Models Have Been Predicting Too Much Warming: would anyone be interested in a temperature record from a station professionally maintained since before WW-II, free of site-movement adjustments, urban heat island effects and other molestation? Such a site shows a warming of 1 °C per 80 years.

    • Christy’s analysis addresses the large discrepancy between models and observations regarding the missing hot spot in the tropical troposphere. And he notes that early results from CMIP6 models are getting worse, not better.
      https://rclutz.wordpress.com/2019/05/26/its-models-all-the-way-down/

      • Ireneusz Palmowski

        Ron Clutz:
        “Conclusion

        So the rate of accumulation of joules of energy in the tropical troposphere is significantly less than predicted by the CMIP5 climate models. Will the next IPCC report discuss this long running mismatch? There are three possible ways they could handle the problem:
        • The observations are wrong, the models are right.
        • The forcings used in the models were wrong.
        • The models are failed hypotheses.

        I predict that the ‘failed hypothesis’ option will not be chosen. Unfortunately, that’s exactly what you should do when you follow the scientific method.”

  31. The new Beijing Climate Model is another example of the recent boom in high-quality research coming out of China. It mentioned that the new version has a lower impact of CO2, but (and this could just be me being thick) it is hard to see what their value is for warming at 800 ppm (2×CO2). Did anyone figure this out?

  32. I have just posted three one page gate breakers, to be used in the face of alarmist teachers and other gate keepers.

    One is paleo, on the topic of the Little Ice Age and Medieval Warm Period.

    One is on the role of the sun in present global warming.

    One is on extreme weather, specifically hurricanes.

    In each case I have posted two variants, one with a Google Scholar link to the recent research and one with no such link. Seeing the research might be useful in some cases, or a distraction in others, hence the choices.

    Here they are:

    https://ccdedu.blogspot.com/2019/05/is-sun-causing-global-warming.html with GS link.

    https://ccdedu.blogspot.com/2019/05/is-sun-causing-global-warming_27.html without GS link.

    https://ccdedu.blogspot.com/2019/05/are-we-coming-out-of-little-ice-age.html with GS links.

    https://ccdedu.blogspot.com/2019/05/are-we-coming-out-of-little-ice-age_27.html without GS links.

    https://ccdedu.blogspot.com/2019/05/are-hurricanes-getting-worse-or-not.html with GS link.

    https://ccdedu.blogspot.com/2019/05/are-hurricanes-getting-worse-or-not_27.html without GS link.

    • Steven Mosher

      “One of the natural causes is “emerging from the Little Ice Age” which happened a few hundred years ago”

      “emerging from the LIA” is
      NOT
      A
      CAUSE

      If you think it is a cause, what are the physical units of the “cause”?

      emerging from the LIA is an Effect. We observe warming, the question is

      1. How much of the warming is due to external forcing
      2. How much is due to internal unforced natural variation.

      essentially #2 is everything that cannot be explained by #1

      • Good Morning Steven (UTC),

        At the risk of drifting off the topic of your comment, did you by any chance receive my recent communication via Twitter?

        If so, what did you make of it?

  33. Ireneusz Palmowski

    Be warned, low over the Kansas will remain for two days.

    • Judah Cohen’s AER blog about atmospheric physics has an interesting spring summary, looking ahead to summer.

      https://www.aer.com/science-research/climate-weather/arctic-oscillation/

      A lot of detail, but essentially Arctic high-pressure systems and ridges are persisting from the winter. Going forward into summer, there is therefore a chance (this is tentative, Judah admits) that such Arctic ridges will continue and cause meridional (north-south) airflows, with the possibility of cold and unsettled weather. So it could be a miserable summer.

      • Speaking of Arctic high pressure systems, my alter ego has been attempting to draw Judith’s attention to the current prognosis via Twitter, together with Anthony Watts!

        Is there anybody in the Climate Etc. house who hasn’t been blocked at WUWT?

      • Judah Cohen’s work is very much worth following – unlike WUWT.

        The Arctic Oscillation has been negative – mostly – in April and May, with higher surface pressure at the pole relative to sub-polar sea level pressure. It is a very variable index over weeks and seasons. It should settle down somewhat over the NH summer. But there are no hard and fast rules of course. Much longer term, more negative values in both limited observations and models have been linked to low solar activity.


        https://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/ao_index.html

        “Since cold air is more dense than warm air, it causes pressure surfaces to be lower in colder air masses, while less dense, warmer air allows the pressure surfaces to be higher. Thus, heights are lower in cold air masses, and higher in warm air masses.” https://climate.ncsu.edu/images/climate/enso/geo_heights.php

        Showing geopotential heights in the negative mode.

        Warmer air over the Arctic and cooler over North America? The Pacific connection is via winds and currents. An intense Aleutian low spinning up a high pressure cell off California. That in turn spins up the North Pacific Gyre.

        Resulting in enhanced upwelling – more blue in the color enhanced satellite image – in the north-east Pacific.

        There is a polar vortex in the south as well of course – with interesting and coherent inter-hemispheric patterns of variability over decades to millennia that appear related to subtle changes in the sun.

      • I asked Judah Cohen about his Arctic prognosis over on Twitter earlier.

        Strange to relate, but it coincided with my own:

      • Robert – Actually your WUWT remark makes my point for me. Anthony and his mods actively suppress pertinent information, whilst impertinent dross is permitted to flourish. How on Earth can you sensibly discuss Maslowski’s work without allowing the man to speak for himself about his research?

        Getting back to Judah’s Arctic sea ice prognosis in his most recent article, do you concur that the summer of 2019 could prove to be another 2016 extent-wise, and potentially even another 2012?

      • So if the AO persists in a more negative mode – it might be warmer than average in the Arctic with some sea ice melt?

        AO variability might settle down over the NH summer. But I don’t predict it or sea ice on the basis of overly simplistic notions. I don’t read Jim’s blog either.

      • Robert – Evidently you cannot possibly have any conceivable idea what you are missing out on!

        A blow by blow account of the 2019 Arctic sea ice melting season at the very least.

        And the wondrous words of wisdom of Bill the Frog:

        http://GreatWhiteCon.info/tag/bill-the-frog/

      • Jim Hunt
        Why are you endlessly posting links to the incoherent Great White Con post? It contains no science and makes no sense.

      • Jim Hunt
        Are you seriously still holding out for your “Arctic death spiral”?

      • Good evening Phil (UTC),

        Have you read the collected works of Bill the Frog yet? I think you’ll find he doesn’t mention any alleged “Arctic Death Spiral”. Here’s an extract for you:

        It is clear that Mr Monckton has the ability to keep churning out virtually identical articles, and this is a skill very reminiscent of the way a butcher can keep churning out virtually identical sausages. Whilst on the subject of sausages, the famous 19th Century Prussian statesman, Otto von Bismarck, once described legislative procedures in a memorably pithy fashion, namely that … “Laws are like sausages, it is better not to see them being made”.

        You invoked the name of Judah Cohen. Robert apparently approves of his work too. I went and asked him his opinion. Judah agreed with me that abnormally persistent high pressure over the Arctic Basin during the rapidly approaching melting season may well lead to this year’s sea ice minimum extent challenging previous lows. I explicitly suggested 2016 and even 2012. Judah didn’t. I haven’t asked him that specific question yet.

        What exactly is your gripe?

      • Jim Hunt

        Yes, I did visit “Great White Con” / “Bill the Frog”, but was puzzled by its content. Much of it was retrospective attacks on WUWT and UK press articles from 2013 or 2016 pointing to the failure of Arctic ice to fulfil the promised death spiral. But why are these 2013/2016 articles relevant now?

        I did notice in one discussion at the end of the Bill The Frog page in your answer to “Kasia” a commitment to “do your best to pile on public ridicule” on Judith Curry (your own words). That’s most charming of you and I’m sure it will endear you to the host of this blog.

        @Kasia – For all I know you may well be correct about Judy Curry. Rest assured we will continue to do our level best to pile on the “public ridicule”!

      • Good Morning Phil (UTC),

        Some of Bill’s articles do indeed date from 2013 and/or 2016. What precisely was he “attacking” way back then do you suppose? What do you make of his prose style and sense of humour though?

        Regarding the “ridicule” of which you speak, and given my own extremely dry Anglo Saxon sense of humour, it all seems to be going rather well?

        Putting your cherry picked quote in context:

        It seems likely that Kasia’s first language isn’t English, does it not? Perhaps the “piling on of ridicule” should be applied to all “the rebloggers, retweeters, plagiarisers and other assorted acolytes” she mentions?

        Now if we can get back to the topic du jour, what do you make of Judah Cohen’s recent Arctic prognostications described above.

      • Bang to rights guv’nor! LOL

        You’ll have to excuse me, but I’m just about to head off for the first ever music festival of my entire life! Alice’s Wicked Tea Party, rather than Glasto. Fortunately the sun is shining :)

        More (much) later.

      • Good morning Phil (UTC),

        Has the cat got your tongue?

        Alternatively perhaps you have belatedly realised that you have been proving Kasia’s point for her?

        Any thoughts on the state of Arctic sea ice? Big Joe Bastardi seems to have got his knickers in a twist on that score:

      • Jim
        Sorry for the late reply. I’m partly banned on this site by the blocking of my most-used devices. This conversation is a bit of a misunderstanding. I was commenting to Ireneusz about Judah Cohen’s predictions about NH weather, not ice. Your replies are all about Arctic ice. In any case these two are opposite sides of the same coin: colder mid-latitude NH weather, which we’re getting, is “borrowed” from the Arctic, which in turn gets warmer. But still I don’t think the September minimum will be especially low this year, and NOAA’s prediction is the same.

      • Good morning Phil (UTC),

        I’m sure we don’t want to open that old can of worms do we? Apology accepted.

        “These two are opposite sides of the same coin: colder mid-latitude NH weather, which we’re getting, is “borrowed” from the Arctic, which in turn gets warmer.”

        That was precisely one of my points, so agreed wholeheartedly.

        “But still I don’t think the September minimum will be especially low this year, and NOAA’s prediction is the same.”

        Whereas the NSIDC seem to agree with Robert that predicting anything further out than 50 days is fraught with difficulty. Meanwhile at the present time in the actual Arctic, extent in the basin is rapidly heading into uncharted waters:

        http://GreatWhiteCon.info/2019/06/facts-about-the-arctic-in-june-2019/

  34. Why carbon credits for forest preservation may be worse than nothing [link]

    “WORSE THAN NOTHING”

    Go to your room, you’re grounded. If we said that to all the high-flying politicians in the Gore/Schwarzenegger stratosphere, perhaps that would be something. It seems these ideas are such disconnected nonsense that they are delusional. Climate science has always been about talking nonsense. Methinks it’s shrinking in relevance.

  35. “Non-uniform contribution of internal variability to recent Arctic sea ice loss”

    Low solar is driving a warmer AMO, which is not internal variability – it’s a negative feedback.

    “An enormous volcanic eruption on Iceland in 1783-84 did not cause an extreme summer heat wave in Europe. But, as Benjamin Franklin speculated, the eruption triggered an unusually cold winter, according to a Rutgers-led study.”

    The low altitude aerosols certainly would have exacerbated land surface temperatures in the summer, similar to Moscow in the summer of 2010 with forest fire smoke.

    The winter of 1783-84 had absolutely nothing to do with the eruption. Changes in wind direction were taking the aerosols away from western Europe by autumn 1783. The winter of 1783-84 had the same cause as the winters that froze the River Nile in 829 and 1010, the winters of 1600-1602, and the winter of 1962-63. They are all exactly the same type of heliocentric quadrupole configuration of the gas giants, with their synodic periods defining their intervals at 179-181 years and at 953 years. E.g. 1010 + 953 = 1963.

    Typically, larger eruptions occur on a warm burst following extremely cold northern hemisphere winter periods. In 2008, when I had identified that pattern, I predicted a major eruption for April 2010.

  36. Ireneusz Palmowski

    This time, the jet stream moves to the east.

  37. Ireneusz Palmowski

    This time, the jet stream will bring strong storms to the south of the US. Tornado Alley gets a rest.
    http://en.blitzortung.org/live_lightning_maps.php?map=30&fbclid=IwAR00_jS7dumo65K324o6KnKkWiQ0lZ6jZ1QAqFhtdghNPQMkxjDsLZoiORI

  38. Skeptics are hysterical alarmists and crackpots:
    https://newrepublic.com/article/154014/climate-deniers-hysterical-alarmists

    Makes tripe look good.

  39. Pingback: Tutkimus: maaperän hiilinielu ei herkästi häiriinny | Roskasaitti

  40. Question for Robert:
    Long-time reader here – and I applaud your clear aptitude and knowledge in regard to science – but I also tend to agree that you seem to troll others, albeit in a condescending manner as opposed to outright mockery.
    That said, having read what must be thousands of comments of yours, I’m still left guessing: what is your position on AGW?

    And please, I beg you, spare me the technical jargon and respect my layman’s understanding of the subject matter. Simply put – do you believe that AGW is the core of climate change and that we are heading towards a ‘tipping point’?

    I ask this not to disrespect you at all – on the contrary, I believe you to be quite intelligent and well versed in the matter and would like to know (for lack of a better term) your ‘position’.

    • Generally this sort of question would be an attempt at a gotcha – let’s see.

      I wrote this a couple of years ago.

      https://watertechbyrie.com/2014/06/23/the-unstable-math-of-michael-ghils-climate-sensitivity/

      I never use jargon but the fundamental mode of operation of the Earth system requires a different way of thinking about things.

      “You can see spatio-temporal chaos if you look at a fast mountain river. There will be vortexes of different sizes at different places at different times. But if you observe patiently, you will notice that there are places where there almost always are vortexes and they almost always have similar sizes – these are the quasi standing waves of the spatio-temporal chaos governing the river. If you perturb the flow, many quasi standing waves may disappear. Or very few. It depends.” https://judithcurry.com/2011/02/10/spatio-temporal-chaos/

      Tomas’ quasi standing waves in the Earth system are nonlinear oscillators like ENSO, the PDO, AMO, AO, etc. in the globally connected fluid flow field. The fundamental mathematics of the globe-spanning system – at least conceptually – is the Navier-Stokes equation of fluid flow. We know what the physics is; we just can’t solve it with sufficient accuracy.

      Anthropogenic greenhouse gases may perturb the flow – triggering an abrupt shift to a new state via ocean and atmospheric circulation, ice, clouds, biology, AMOC…

  41. It is puzzling that the actual measured increase in water vapor is ignored. WV increased about twice as fast as calculated from the vapor-pressure increase due to the temperature increase of liquid water over the period 1989-2017. The measured and extrapolated WV increase accounts for about 70% of the temperature increase 1909-2018. A DIY analysis using measured WV and two other factors calculates average global temperatures that match measurements 98+% over the period examined, 1895-2018. http://diyclimateanalysis.blogspot.com

    • Where the atmosphere meets space, all energy is electromagnetic – incoming from the Sun, and outgoing as reflected light and emitted heat. At most times incoming and outgoing energy at the top of atmosphere (TOA) are not equal, and Earth warms or cools – mostly in the oceans, which are by far the largest planetary heat store. There must of course be AGW in there somewhere, with a warmer planet tending toward energy equilibrium – so it is not directly measurable as a reduction in outgoing IR.

      Conservation of energy gives the first differential global energy equation.

      The equation can be written as: the change in ocean heat is approximately equal to energy in less energy out at the TOA.

      d(ocean heat)/dt ≈ Ein – Eout

      Ocean heat is measured by the Argo project – accessed via the ‘Global Marine Argo Atlas’. Radiant flux – a power term – is measured by the Clouds and the Earth’s Radiant Energy System (CERES) project – accessed via the CERES data products page. Keeping things in original units, a cumulative space-based power-flux imbalance – summing average monthly energy in less energy out – is compared to ocean temperature. They should of course co-vary, providing a cross-validation of the data sets.
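
      As a sketch of that cross-validation – with synthetic numbers standing in for the actual CERES and Argo downloads, whose file formats are not reproduced here:

          import numpy as np

          rng = np.random.default_rng(0)
          months = 120
          net_toa = 0.8 + 0.5 * rng.standard_normal(months)  # Ein - Eout, W m^-2 (synthetic)

          SECONDS_PER_MONTH = 2.63e6   # ~30.4 days
          EARTH_AREA = 5.1e14          # m^2

          # Cumulative space-based imbalance, in joules:
          cumulative_joules = np.cumsum(net_toa * SECONDS_PER_MONTH * EARTH_AREA)

          # With the real data, regress or correlate this series against the Argo
          # 0-2000 m ocean heat content anomaly: the two should co-vary.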

      The CERES data contains a built-in energy imbalance. The Argo data is more down to earth. With an explanation needed for recent warming, I suppose.

      “We find a marked 0.83 ± 0.41 Wm−2 reduction in global mean reflected shortwave (SW) top-of-atmosphere (TOA) flux during the three years following the hiatus that results in an increase in net energy into the climate system. A partial radiative perturbation analysis reveals that decreases in low cloud cover are the primary driver of the decrease in SW TOA flux. The regional distribution of the SW TOA flux changes associated with the decreases in low cloud cover closely matches that of sea-surface temperature warming, which shows a pattern typical of the positive phase of the Pacific Decadal Oscillation.” https://www.mdpi.com/2225-1154/6/3/62

      “Marine stratocumulus cloud decks forming over dark, subtropical oceans are regarded as the reflectors of the atmosphere.1 The decks of low clouds 1000s of km in scale reflect back to space a significant portion of the direct solar radiation and therefore dramatically increase the local albedo of areas otherwise characterized by dark oceans below.2,3 This cloud system has been shown to have two stable states: open and closed cells. Closed cell cloud systems have high cloud fraction and are usually shallower, while open cells have low cloud fraction and form thicker clouds mostly over the convective cell walls and therefore have a smaller domain average albedo.4–6 Closed cells tend to be associated with the eastern part of the subtropical oceans, forming over cold water (upwelling areas) and within a low, stable atmospheric marine boundary layer (MBL), while open cells tend to form over warmer water with a deeper MBL. Nevertheless, both states can coexist for a wide range of environmental conditions.5,7” (Koren et al, 2017).


      “Large reductions in clear-sky SW TOA flux are also found over much of the Pacific and Atlantic Oceans in the northern hemisphere. These are associated with a reduction in aerosol optical depth consistent with stricter pollution controls in China and North America.” op. cit.

      Although I suspect it is more volcanoes than a reduction in human sulfur emissions – for two reasons. One is the reduction in volcanic emissions in the post-hiatus years.

      The other is the fact that fossil fuels (and biomass burning) emit unburnt black carbon particles along with sulfur and organic carbon that, when mixed, amplify black carbon warming.

      But I suppose it could all be water vapor.

      • Dan Pangburn

        RE,
        No, only about 70% is due to the water vapor increase. The rest is solar, quantified by the proxy of SSN, plus the net of natural ocean surface temperature cycles. It is a top-down analysis; according to Spencer, “Rather than model the system from the bottom up with many building blocks, one looks at how the system as a whole behaves”. This makes global assessment doable as DIY and identifies the contributions of the main factors. What isn’t accounted for explicitly is accounted for by proxy.

        The bottom-up method used in GCMs has been predicting temperature-increase rates about twice those measured. Trying to account for all the minutiae individually, as you have described, is theoretically promising and has kept a lot of folks employed, but it just is not working.

      • I go about as top down as can be. Warming in shortwave and cooling in infrared. Then you have to ask why. And it is not water vapor.


        Most of it is cloud – most in the Pacific. I’ve seen Roy Spencer suggest it was all shortwave warming – btw – in ERB data.


        https://journals.ametsoc.org/doi/full/10.1175/JCLI3838.1

      • Dan Pangburn

        RE,
        Apparently you are not familiar with what a top-down analysis is. It is a way of avoiding getting mired in the minutiae, as your lengthy description above of numerous things to consider indicates could happen in the bottom-up type of analysis typical of GCMs.

        Water vapor is a ghg.
        Since 1988 the measured global WV trend has increased 4.4%. Extrapolated back to 1909, the approximate WV increase since then is 9%.
        At sea level, on average, WV molecules outnumber CO2 molecules 24 to 1.

        Which of these are you unaware of?

      • I’d suggest that the first differential global energy equation is a simpler way of seeing the system as a whole.

        Δ(ocean heat) ≈ Ein – Eout

        The Argo program measures ocean heat – it changes with the transient energy imbalances at the top of the atmosphere. Satellites measure change in energy in and energy out well, but are not so good at absolute values – the inter-calibration problem – so that energy imbalances at TOA are not immediately obvious. Energy in and out varies all the time. Energy in varies with Earth’s distance from the Sun on an annual basis, and with much smaller changes over longer terms due to changes in solar radiation. Outgoing energy varies with cloud, ice, water vapor, CO2, dust, aerosols, … – in both shortwave (SW) and infrared (IR) frequencies. And the planet responds with a Planck feedback to net warming or cooling.
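
        For reference, the magnitude of that Planck response follows from differentiating the Stefan–Boltzmann law at the effective emission temperature of about 255 K:

            λ(Planck) ≈ d(σT^4)/dT = 4σT^3 ≈ 4 × 5.67×10^−8 × (255)^3 ≈ 3.8 W m^−2 K^−1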

        https://watertechbyrie.com/2018/06/10/a-maximum-entropy-climate-earth-in-transient-energy-equilibrium-2/

        The system is complex and dynamic in the sense of being spatio-temporally chaotic. I don’t think you have quite cracked it.

      • Dan Pangburn

        RE,
        If you look closely at my ‘model’ you might realize that it (of course) uses conservation of energy. It is so stated in Section 17 of http://globalclimatedrivers2.blogspot.com “The basis for assessment of AGT is the first law of thermodynamics, conservation of energy, applied to the entire planet as a single entity.” In the much shorter DIY link that statement got limited to “with strict compliance with physical laws”.

        SST is what is important to climate, and that is what is considered in determining average global temperature. The Argo data is interesting but no one knows how it translates to SST. None of that matters in a top-down analysis. What does matter is that factors found to be important in the past, and which produced a good match with measurements, continue to be important in the same proportions in the future.

        I see that you persist in trying to account for the minutiae explicitly. That is never going to work as long as you continue to ignore the three primary contributors that I have identified.

        I predict that you and the world will eventually discover that CO2 has little, if any, effect on climate.

      • Everything obeys thermodynamic laws. But unless ocean heat is considered there is no way of knowing what the balance of incoming and outgoing energy is. And satellite power flux data shows where and why things are changing.

        But the predicted greenhouse-gas effect of photon scattering in the atmosphere is confirmed – and replicated – at a global scale by taking wide-spectrum snapshots of energy emissions through narrow apertures at different times.

        With temperature increases come water vapor, cloud, ice, dust and biological feedbacks in the spatio-temporally chaotic system. Against a backdrop of large intrinsic variability. I suspect that only more and longer-term data will suffice to explain these major influences on climate.

      • Dan Pangburn

        RIE,
        It appears that you might be mired in the minutiae.

        You ignored my June 1 question.

        Since 1980 the WV increase as a result of temperature increase is about 3%. The measured WV increase is about 5.3%.

        Are you still ignoring the ‘extra’ WV increase?

      • “The top-of-atmosphere (TOA) Earth radiation budget (ERB) is determined from the difference between how much energy is absorbed and emitted by the planet. Climate forcing results in an imbalance in the TOA radiation budget that has direct implications for global climate, but the large natural variability in the Earth’s radiation budget due to fluctuations in atmospheric and ocean dynamics complicates this picture.” https://link.springer.com/article/10.1007/s10712-012-9175-1

        The one not to miss is internal variability.

      • You miss the major factors and elevate a few minor ones. You assume, without any evidence, that the increase in water vapor is due to something other than atmospheric temperature feedbacks. And then ascribe most of the warming to water vapor.

        https://iopscience.iop.org/article/10.1088/1748-9326/aae018/pdf

        Even then there are too many unknowns in too few equations. Only data provides answers – and you have too little.

      • Dan Pangburn

        RIE,
        You say that I “…miss the major factors and elevate a few minor ones.” I say exactly the same thing about the ‘consensus’ method using GCMs. The GCMs have predicted about twice the rate of temperature increase as measured. That amounted to being about 0.6 K too high in 2018.

        Paraphrasing Richard Feynman: Regardless of how many experts believe a theory or how many organizations concur, if it doesn’t agree with observation, it’s wrong.

        My method allows ‘prediction’ from any time in the past. For example, prediction from 2005 results in an average global temperature prediction that is only 0.057 K lower than the 2018 5-year smoothed HadCRUT4 temperature. The match back to 1895 is 98+%.

        The assumption is that, on average, the % increase in WV is directly proportional to the increase in vapor pressure of the liquid water. I don’t know what is assumed for this in the GCMs. I don’t “…ascribe most of the warming to water vapor”. The model determines the fraction of warming that each named factor (and everything each factor is a proxy for) is responsible for.
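
        That proportionality is easy to put numbers on with the Clausius–Clapeyron relation. A minimal sketch, assuming constant latent heat (an approximation):

            import numpy as np

            L = 2.5e6    # latent heat of vaporization, J kg^-1
            Rv = 461.5   # gas constant for water vapor, J kg^-1 K^-1

            def sat_vapor_pressure_ratio(T0, T1):
                # e_s(T1)/e_s(T0) from Clausius-Clapeyron with constant L
                return np.exp((L / Rv) * (1.0 / T0 - 1.0 / T1))

            # Example: 1 K of warming near 15 C
            print(100 * (sat_vapor_pressure_ratio(288.15, 289.15) - 1.0))  # ~6.7 % per K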

        The observation that there are “…too many unknowns in too few equations” (not to mention the unknown unknowns) is IMO contributing to the miserable performance of the GCMs.

      • You keep rabbiting on about models while I focus on data. And even then you have no clue about how they actually work. Most people don’t.

        “Sensitive dependence and structural instability are humbling twin properties for chaotic dynamical systems, indicating limits about which kinds of questions are theoretically answerable.” https://www.pnas.org/content/104/21/8709

        “In 1963, Lorenz published his seminal paper on ‘Deterministic non-periodic flow’, which was to change the course of weather and climate prediction profoundly over the following decades and to embed the theory of chaos at the heart of meteorology. Indeed, it could be said that his view of the atmosphere (and subsequently also the oceans) as a chaotic system has coloured our thinking of the predictability of weather and subsequently climate from thereon.

        Lorenz was able to show that even for a simple set of nonlinear equations (1.1), the evolution of the solution could be changed by minute perturbations to the initial conditions, in other words, beyond a certain forecast lead time, there is no longer a single, deterministic solution and hence all forecasts must be treated as probabilistic. The fractionally dimensioned space occupied by the trajectories of the solutions of these nonlinear equations became known as the Lorenz attractor (figure 1), which suggests that nonlinear systems, such as the atmosphere, may exhibit regime-like structures that are, although fully deterministic, subject to abrupt and seemingly random change.” https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161

      • Dan Pangburn

        RIE,
        You present a pretty good argument for why the bottom-up approach is not working. It might be good for job security, but so far its projections pose a threat to a prosperous civilization. The top-down approach (aka emergent-structures analysis) that I have used avoids all of that complexity by simply looking at how the system as a whole behaves.

        IMO the first step is to recognize the importance of the increase of water vapor. It cools faster and farther on dry, cloudless desert nights than it does on clear nights where it is humid. This well-known observation demonstrates that water vapor is IR active at earth temperatures (i.e. a ghg), that there is a GHE and that WV is at least a substantial contributor to it.

        Average global WV (TPW) has been accurately measured by satellite and reported publicly by NASA/RSS since 1988. The numerical data for April, 2019 is at http://data.remss.com/vapor/monthly_1deg/tpw_v07r01_198801_201904.time_series.txt (last six digits are year-month). This is graphed as Figure 3 in the blog/analysis at http://globalclimatedrivers2.blogspot.com . It has been long enough to establish a trend and a rational approximate extrapolation back to 1700. Comparison of the WV trend with the CO2 trend reveals that, at low altitude, the atmosphere has gained about 6 molecules of WV for each molecule of CO2.
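
        For anyone wanting to reproduce the trend, a sketch of the calculation. The file layout assumed here – one whitespace-delimited “yyyymm value” pair per line – is a guess and should be checked against the actual download:

            import numpy as np

            data = np.loadtxt("tpw_v07r01_198801_201904.time_series.txt")
            stamps, tpw = data[:, 0], data[:, 1]

            years = stamps // 100 + ((stamps % 100) - 0.5) / 12.0  # decimal years
            slope, intercept = np.polyfit(years, tpw, 1)           # kg m^-2 per year
            pct_per_decade = 100.0 * 10.0 * slope / tpw.mean()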

        IMO the hash at WV wavenumbers in graphs of TOA radiation flux vs wavenumber results from the decline in the population of WV molecules from about 10,000 ppmv average at the surface to 32 ppmv at the top of the troposphere (about −50 °C). As the population of WV molecules thins out with increasing altitude, more of the emission from WV molecules makes it all the way to space. The hash indicates the range of altitudes (temperatures) over which the emission originated. Anything that posits that increased WV has not at least substantially contributed to warming is wrong.

        Multiple lines of compelling evidence demonstrate that CO2, in spite of being a ghg, has little if any effect on average global temperature. A likely explanation is that the comparatively small % increase in the number of absorbers at sea level (WV molecules, on average, outnumber CO2 molecules by about 24 to 1) is countered by the large % increase in emitters above the tropopause (where CO2 molecules outnumber WV molecules by about 13 to 1).

        Because WV increase is self-limiting, warming of the planet is self-limiting. The increased WV should substantially mitigate if not outright prevent a temperature decline resulting from the quiet sun.

        Combining the average global temperature increase from the added WV with a simple approximation of the influence of SST cycles and a solar influence quantified by a proxy which is the SSN anomaly results in a 98+% match with measured average global temperatures 1895-2018.

        The EXCEL file so constructed is fairly easily modified to solve using data up to any year and projecting from that year. I modified the code to project from 2005 using only data up to 2005. The projected temperature in 2018 was within 0.056 K of the measured trend.

      • “Climate is ultimately complex. Complexity begs for reductionism. With reductionism, a puzzle is studied by way of its pieces. While this approach illuminates the climate system’s components, climate’s full picture remains elusive. Understanding the pieces does not ensure understanding the collection of pieces. This conundrum motivates our study.” https://pielkeclimatesci.wordpress.com/2011/04/21/guest-post-atlantic-multidecadal-oscillation-and-northern-hemisphere%E2%80%99s-climate-variability-by-marcia-glaze-wyatt-sergey-kravtsov-and-anastasios-a-tsonis/

        LOL. What it shows is another level of dynamic complexity. But you don’t have even the components correct.

        https://physicsworld.com/a/are-our-water-vapour-emissions-warming-the-climate/

        https://link.springer.com/article/10.1007/s10712-012-9175-1

      • Dan Pangburn

        RIE,
        I get the right average global temperature (within 0.057 K of the trend after 13+ years) and your approach does not (projecting twice the measured temperature increase). Apparently you have picked the wrong ‘components’, perhaps also the wrong process/mechanism for an atmosphere containing both CO2 and WV molecules and are unaware of your mistakes.

      • “I get the right average global temperature ”

        It would be more surprising if – with your little formula – you didn’t.

      • Models can be trained – prediction is another country.

    • Thanks, DanP. Your two blogspot sites are very good indeed. The minutiae that occupy so much time and space, and not just here, ignore, as you point out, the primary questions:
      1. Does CO2 at this time at these levels control climate? and
      2. Can we control CO2?
      Since the answer to #1 is No, that moots #2 – but the answer to that is No also.
      The uncontested facts are:
      1. CO2 is not in control of climate. It is, as we will repeatedly hear, a feedback, but there is no time in the past 550 million years when CO2 change has preceded a temperature reversal.
      2. We are not in control of CO2. The natural experiment was run in 1929-1931, when human CO2 production went down by 30%, global CO2 levels did not decrease, and temperature kept rising to 1942. Temperature then declined slightly but measurably during WWII and postwar reconstruction despite our CO2 production – declined enough to raise alarms about the Oncoming Ice Age (see the covers of Time and Newsweek and ScienceNews in the early 70s).
      3. 30% of the agriculture increase since 1950 has been attributed to CO2 increase. Satellite photos show the greening of the world over time.
      4. CO2 is virtually the only GHG in the stratosphere capable of radiating IR out to space.
      5. And then there is the exponential decline in the GHG effect of CO2, noted by Arrhenius. It is at 50% in the first 20 ppm and declines exponentially after that, so that the next doubling to 800 ppm will increase its GHG effect by less than 2%.
      In sum, CO2 warms us and cools us and feeds us.
      Climate change is a given, not a problem.
      CO2 mitigation is a problem, not a solution.

  42. From WNA Newsletter: https://mailchi.mp/world-nuclear-news/17-24-31-may-2019

    International Energy Agency urges priority for nuclear power

    The OECD’s International Energy Agency (IEA) has published a report, Nuclear Power in a Clean Energy System, concluding that a failure to invest in existing and new nuclear plants in advanced economies would have profoundly adverse implications for emissions, costs and energy security. In particular, global efforts to transition to a cleaner energy system would become drastically harder and more costly. The report recommends that markets should value dispatchability, since the system costs of intermittent renewables is high. Electricity markets should properly reward nuclear power plants that provide the system services needed to maintain electricity security, including capacity availability and frequency control services.

    The IEA report, its first addressing nuclear power for nearly 20 years, says that strong policy support is needed to secure investment in existing and new nuclear plants. The focus should be on designing electricity markets in a way that values the clean energy and energy security attributes of low-carbon technologies, particularly nuclear power. It warns that without a lot of positive action there will be electricity security concerns, and a global power mix that depends largely on natural gas-fired capacity. Any significant decline in world nuclear power would give rise to the need for $1,600 billion in additional investment over the next two decades to fill the clean energy shortfall “which would end up hurting consumers through higher electricity bills.”

    The report notes that over 1971 to 2018 emissions from electricity generation would have been some 20% higher without the contribution of nuclear power. Nuclear power has provided around half of all low-carbon electricity in advanced economies – a total of 76,000 TWh, which is more than ten times the total output of wind and solar combined. For advanced economies – including the USA, Canada, the European Union and Japan – nuclear has been the biggest low-carbon source of electricity for more than 30 years and remains so today. Furthermore, any “drastic increase in renewable power generation would create serious challenges in integrating the new sources into the broader energy system.”

    The World Nuclear Association commented that “much more will be needed to achieve the target of supplying at least 25% of global electricity demand with nuclear energy by 2050 as required by the nuclear industry’s Harmony goal or even the near six-fold increase required by the IPCC ‘middle of the road’ scenario. We welcome the IEA report’s recommendation for more government interventions to secure investment in new nuclear plants.”
    WNN & IEA 28/5/19.

    • From the IEA 2015 technology roadmap.

      “R&D in ageing of systems and materials is needed to support safe, long-term operation of existing nuclear power plants (NPPs) for 60 years operation or more.

      To open up the market for small modular reactors (SMRs), governments and industry should work together to accelerate the development of SMR prototypes and the launch of construction projects (about 5 projects per design) needed to demonstrate the benefits of modular design and factory assembly.

      Public-private partnerships need to be put in place between governments and industry in order to develop demonstration projects for nuclear cogeneration in
      the area of desalination or hydrogen production.”

      https://www.iea.org/publications/freepublications/publication/Nuclear_RM_2015_FINAL_WEB_Sept_2015_V3.pdf

      Design and development has stalled for many decades, leaving a legacy of plants struggling to achieve 34% thermal efficiency in huge installations with immense capital costs.

      Advanced nuclear plants have many benefits – but beyond prototypes they need to be cost competitive.

      https://watertechbyrie.com/2019/04/08/small-modular-nuclear-reactor-promise-smr-prospects-are-good/

      • aporiac1960

        Robert,

        Something you didn’t explicitly address in your ‘SMR Prospects Are Good’ piece are the obstacles to getting timely approval for novel designs from regulators who are just not used to dealing with such challenges.

        All of the companies I know of who are working on SMRs have identified this as a major factor inhibiting progress. It is not merely the enormous cost in time and money of getting design approval, but that it is very hard to iterate designs – a critical requirement in an R&D setting – if each time you want to change something you have to enter a new approval procedure that is painfully slow and bureaucratic.

        The complaint is that for SMRs to have a chance, regulators are going to need to be much more engaged and responsive. The complaint I’ve heard from US companies is that the threshold for what qualifies as a ‘research’ reactor is far too low at 1 MW. As I understand it, even if they were ‘successful’ with a new design at this scale, the things they’d learned in the process wouldn’t necessarily translate to a commercial-scale reactor, so they may, in effect, be no closer to their final objective.

        Perhaps in the public-private partnership you mention, the most valuable ‘public’ contribution would be to modernise those parts of the nuclear industry that are, in fact, in the public province.

      • I return to General Atomics’ EM2 because it is the best of the advanced designs, and the company has such broad and deep technology expertise. These are meant to be modular installations, factory constructed and delivered on trucks. This opens the way to the generic approval on which the USA is working. GA have licensed and built a couple of helium-cooled reactors. And the Tennessee Valley Authority has recently been given approval to reduce safety zones to site boundaries for these inherently safe designs.

        https://watertechbyrie.com/2016/06/18/safe-cheap-and-abundant-energy-back-to-the-nuclear-energy-future-2/

      • The public/private partnership needed is more in terms of supporting ‘first of a kind’ deployments.

    • Peter
      It is easy to test whether the media-green establishment is honest about (a) believing that CO2 is harmful and (b) wanting to reduce CO2 emissions.

      Just observe the strength of their commitment to promote nuclear power, the only technology for reducing CO2 emissions that ever has existed and ever will exist.

      If they are pro-nuclear then, even if they’re mistaken about the catastrophic harmfulness of CO2, at least it’s an honest belief inducing an honest and rational response.

      If on the other hand they are anti-nuclear, then it means that the entire CO2-warming rhetoric is a pantomime and a gigantic lie. In that case it’s a purely political malleus maleficarum, a hammer of witches to destroy political, religious and class enemies.

    • Nuclear has a cost down the line. AFAIK no one has found a solution. Fifty years ago I got a scare, knowing that politicians are easily taken in by lofty promises but never stay around to face the fallout. Over that half century I observed from a distance how the big end-of-life issues were dealt with, with exorbitant costs passed on to the tax-paying public. Now there is a not-so-nice collection of such ‘events’ – the ones that are known.
      This was what urged this post: http://www.thebigwobble.org/2019/06/radiation-levels-up-to-6000-times.html

      • All over the world there is a legacy of novel radioactive nuclides – in oceans, air, dust. From hundreds if not thousands of accidental releases and hundreds of nuclear weapons tests.
        They will try and tell you how safe it is – but the reality is that these substances are pathogenic and long lived. Cytotoxicity depends on the mode of exposure – radiation, ingestion or inhalation. No one knows what the impacts of such wide exposure for such a long time are.

        Part of the solution is to close the fuel cycle. Recycle used fuel through multiple burns with the addition of fertile material and the removal of light fission products.

        The result is far less waste, and waste that is much shorter lived – hundreds of years to decay to background levels rather than many thousands.

      • Peter Lang

        melitamegalithic,

        On this matter you have been grossly misinformed.

        Nuclear power is the safest way to generate electricity. It always has been. The deaths caused by nuclear power accidents and routine operation are far fewer per TWh than for any other electricity generation technology.

        Waste disposal is a political problem, not a technical problem. It’s been handled safely for 60 years so far. And whereas its toxicity declines with time, the toxicity of emissions from other technologies does not. The proposed waste disposal programs are massive overkill, and totally unnecessary.

        It is easy to find radiation hot spots, but very difficult to find chemical pollution.

        The high costs of nuclear power are entirely due to the onerous costs of regulation. These have been caused by 50 years of anti-nuclear scaremongering. If not for this, the cost of nuclear power could now be around 10% of what it is, and nearly all of the world’s power would be generated by nuclear. There’d be no intermittent, unreliable generators (e.g. wind and solar).

        Here’s a short opinion piece on GWPF:
        WHAT COULD HAVE BEEN – IF NUCLEAR POWER DEPLOYMENT HAD NOT BEEN DISRUPTED https://www.thegwpf.com/what-could-have-been-if-nuclear-power-deployment-had-not-been-disrupted/

      • Peter Lang

        If each technology was required to pay insurance or compensation for the annual cost of deaths caused by that technology, the amounts they would have to pay per MWh are:

        Technology $/MWh
        Coal 141
        Natural gas 38
        Hydro 13
        Solar 4.1
        Wind 1.4
        Nuclear 0.8

        Or, if each technology is not penalized for the deaths it causes, society should subsidise nuclear $140/MWh to substitute for coal and $37/MWh to substitute for natural gas generation.
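        A minimal sketch of that arithmetic (illustrative Python, using the $/MWh figures quoted above):

        # Implied avoided death-cost when nuclear substitutes for coal or gas.
        death_cost = {"coal": 141, "gas": 38, "nuclear": 0.8}  # $/MWh, as quoted
        for fossil in ("coal", "gas"):
            saving = death_cost[fossil] - death_cost["nuclear"]
            print(f"nuclear for {fossil}: ~${saving:.0f}/MWh avoided")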

      • Peter Lang
        I admit I have always looked at that technology from a distance, a safe one. But you may have inadvertently confirmed my belief/opinion.

        Waste disposal is not a political problem, because politics doesn’t solve such problems. From a political view it’s a world problem, technically unsolved. The onerous regulations only seem to cover costs up to erection. The ‘end-of-life’ cost is still an insidious one (Sellafield, Chernobyl, Fukushima).

        I am an engineer, an old one with some wisdom acquired from bitter experience. Good intentions serve for nothing in engineering. A dose of politics is a guarantee of failure. QA documents are a guarantee that the proper checks have been bypassed and docs faked (ask EDF; my experience: ~100%). And general risk-ignorant carelessness, top to bottom (see Tokaimura for that).

        That said, all technology has its price ultimately, but one has to be realistic about the figure.

      • I believe Peter to be incorrect. It is easy enough to make coal plants, for instance, essentially pollution free – all modern plants are. Waste from light water reactors is essentially never safe. The risk to people is both insidious and incalculable.

        The problems of LWRs include cost, weapons proliferation risk, waste and safety. The size and capital cost of these plants are huge in order to attain even 34% thermal efficiency. Containing waste after using just 0.5% of the fuel’s energy content is both an impossible engineering problem and a waste of resources. With so much waste sitting in leaky drums and ponds, diversion into weapons streams of one sort or another is an ever-present potential. And there are so many vulnerable systems that disaster is an ever-present possibility.

        But there are advanced nuclear technologies being built.

      • Peter Lang

        melitamegalithic

        Thank you for your reply. Please see my responses to your points below.

        Waste disposal is not a political problem, because politics doesn’t solve such problems. From a political view it’s a world problem, technically unsolved. The onerous regulations only seem to cover costs up to erection. The ‘end-of-life’ cost is still an insidious one (Sellafield, Chernobyl, Fukushima).

        No. It’s a political problem because it is not a significant technical problem. There are many ways to demonstrate this.

        Sellafield, Chernobyl and Fukushima caused, and will cause, very few deaths compared with other technologies. The deaths caused by radiation and radioactive contamination from Fukushima are zero so far and projected to remain about zero. Known deaths from Chernobyl radiation and radioactive pollution are fewer than 100, and the estimated total is around 200 to date. Projections using the Linear No Threshold (LNT) hypothesis are way too high.

        The ranking of the electricity generation technologies in terms of life cycle deaths per TWh have been virtually unchanged for some 40 years. For example:
        Coal electricity, world avg. = 60 (50% of electricity)
        Coal electricity, China = 90
        Coal, U.S. = 15 (44% U.S. electricity)
        Natural Gas = 4 (20% global electricity)
        Solar (rooftop) = 0.44 (0.2% global electricity)
        Wind = 0.15 (1.6% global electricity)
        Hydro, world avg. = 1.4 (15% global electricity)
        Nuclear, world avg. = 0.09 (12% global electricity w/Chern&Fukush)

        [Nuclear is actually 0.04 if we use 200 deaths for Chernobyl instead of the 9000 projected by the LNT hypothesis.]

        Nuclear waste is an insignificant issue technically. It has killed no one so far and probably never will. Compared with the deaths and ecosystem damage caused by pollution from all the other technologies nuclear waste is of minor consequence.

        The onerous regulations only seem to cover costs up to erection.

        That is incorrect. The regulatory costs massively increase the operation and maintenance, decommissioning and waste management costs.

        I suggest that the regulatory requirements should be the same for all technologies. Health impacts are an important externality that should be brought into the price of electricity from all technologies. If that were done, none could compete with nuclear.

        QA documents are a guarantee that the proper checks have been bypassed and docs faked

        As is the case in every technology and every industry (medicine, hospitals, air transport, etc. etc.). But nuclear is by far the most closely watched.

        That said, all technology has its price ultimately, but one has to be realistic about the figure.

        I agree. But to be realistic we have to be unbiased. We need to investigate beyond the nonsense being spread by the anti-nuclear industry, the environmental activists, the media and entertainment industry.

        I can provide authoritative references to support all these statements if you want them.

        Did you read my 2017 paper, ‘Nuclear Power Learning and Deployment Rates; Disruption and Global Benefits Forgone’? https://www.mdpi.com/1996-1073/10/12/2169/htm . Also read the Notes in Appendix B.

      • Peter Lang

        Is radioactive waste disposal a huge problem?

        No it isn’t.

        It’s a negligible problem if considered in rational perspective. The big-picture perspective the anti-nuke activists fail to recognise is that if we allowed nuclear power (i.e., removed the irrational, unjustified impediments we’ve imposed on it), it could largely substitute for fossil fuel baseload power generation over the decades ahead, and doing so would avoid around 1 to 2 million fatalities per year worldwide. That’s the big picture.

        Nuclear waste is a relatively minor technical issue and cost impact if considered in rational perspective:

        1. The quantities of radioactive waste from power stations are trivial compared with the chemical toxic wastes we release to the environment continuously, and compared with the quantities of CO2 released.

        2. Nuclear wastes are contained and not released to the environment, whereas the toxic chemical wastes are released to the environment all over the world.

        3. Nuclear waste disposal is about 1% of the total cost of electricity generation (see Figure ES-2 in ‘The Economics of the Back End of the Nuclear Fuel Cycle’: http://www.oecd-nea.org/ndd/pubs/2013/7061-ebenfc-execsum.pdf)

        4. Nuclear waste can be stored safely at or near surface or disposed of permanently in deep geological repository (DGR). There is negligible chance of any impacts on health from DGR. I’d urge you to research this; a good place to start would be with the excellent Canadian Nuclear Waste Management Organisation’s DGR site. Start at Home and progress down to the technical reports, e.g. TR-08-10 here: http://www.nwmo.ca/dgrsitecharacterizationtechnicalreports

      • Peter Lang

        Origins, Goals, and Tactics of the U.S. Anti-Nuclear Protest Movement. Rand Corporation https://www.rand.org/pubs/notes/N2192.html

      • Peter Lang

        How to respond to a major nuclear accident.

        1. Thomas, P. Quantitative guidance on how best to respond to a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 4-15. https://www.sciencedirect.com/science/article/pii/S0957582017302665 (2018-06-29).

        2. Thomas, P.; May, J. Coping after a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 1-3. https://linkinghub.elsevier.com/retrieve/pii/S0957582017303166 (2018-06-14).

        3. Nuttall, W.J.; Ashley, S.F.; Heffron, R.J. Compensating for severe nuclear accidents: An expert elucidation. Process Safety and Environmental Protection 2017, 112, 131-142. https://www.sciencedirect.com/science/article/pii/S0957582016303032 (2018-06-29).

        4. Yumashev, D.; Johnson, P.; Thomas, P. Economically optimal strategies for medium-term recovery after a major nuclear reactor accident. Process Safety and Environmental Protection 2017, 112, 63-76. https://www.sciencedirect.com/science/article/pii/S0957582017302665?via%3Dihub (2018-06-29).

      • “The review found that many, though not all, studies of solid cancer supported the continued use of the linear no-threshold model in radiation protection. Evaluations of the principal studies of leukemia and low-dose or low dose-rate radiation exposure also lent support for the linear no-threshold model as used in protection. Ischemic heart disease, a major type of cardiovascular disease, was examined briefly, but the results of recent studies were considered too weak or inconsistent to allow firm conclusions regarding support of the linear no-threshold model. It is acknowledged that the possible risks from very low doses of low linear-energy-transfer radiation are small and uncertain and that it may never be possible to prove or disprove the validity of the linear no-threshold assumption by epidemiologic means. Nonetheless, the preponderance of recent epidemiologic data on solid cancer is supportive of the continued use of the linear no-threshold model for the purposes of radiation protection. This conclusion is in accord with judgments by other national and international scientific committees, based on somewhat older data. Currently, no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the linear no-threshold model.” https://journals.lww.com/health-physics/Abstract/2019/02000/Recent_Epidemiologic_Studies_and_the_Linear.23.aspx

        Low dose response arguments ignore that – just from natural sources – we are all beyond a radiation threshold if one exists.

      • Peter Lang

        15,896 Deaths caused by the 2011 Tōhoku Earthquake and Tsunami in Japan

        1,599 Deaths caused by the Unnecessary Evacuations following the Fukushima Nuclear Accident

        0 Radiation deaths from the Fukushima nuclear accident

        7,000,000 W.H.O. estimated DEATHS PER YEAR due to air pollution from fossil fuels, most occurring in Asia and the Western Pacific

        100,000 Estimated Deaths from Fear Induced Abortions after the Chernobyl Accident

        31 Deaths from Acute Radiation Syndrome following the Chernobyl Accident of 1986

        12,000 Deaths from the Great London Smog of 1952

        171,000 Deaths from the failure of China’s Banqiao Dam in 1975

      • Peter Lang

        Why radiation is safe & all nations should embrace nuclear technology – Professor Wade Allison

      • Peter Lang

        The Linear No-Threshold Relationship Is Inconsistent with Radiation Biologic and Experimental Data
        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2663584/

      • The disruption to deployment of nuclear power:

        What caused it?
        Cost escalation, caused by –
        increasing size and complexity, and long approval times, caused by –
        regulatory ratcheting, caused by –
        widespread concerns and fear, caused by –
        the anti-nuclear power protest movement and environmental activist groups like Greenpeace, WWF, FOE, ACF, etc.

      • I think he forgot a few unrelated disasters there. But the fact is that high-efficiency, low-emission coal plants are the logical choice for the developing world, purely on cost considerations. That saves far more lives – not least by supplanting cooking on wood and dung fires, where most lives are lost – because cheaper energy is more accessible.


        Source: ASEAN Energy Equation

        The LNT paradigm is the choice of authorities worldwide. And while there is a discussion in some quarters, it is not all that broadly supported or influential.

        Cytotoxicity depends on the ingestion route.

        The population effects of increased long term radiation exposure are unknown.

        It is necessary to evacuate the population after a nuclear disaster.

        And as far as Greenpeace and WWF are concerned – not to mention the long-suffering public – there are easier sells out there.

        Factory-made SMRs – dropped into a bunker or a mine, run without refueling for 20 or 30 years using leftover ‘nuclear waste’ – of which there is enough for hundreds of years of energy supply. Then recycle the fuel core to burn more of the energy in fissionable material, producing far more power from the same ore with far less waste, and waste that is far shorter lived – 300 as opposed to 30,000 years. And, FFS, replacing zirconium fuel cladding with silicon-carbide-coated safe fuels. Zirconium reacting with superheated steam produces hydrogen, which then explodes. In the history of bad ideas, this one gave us Chernobyl and Fukushima.

        The General Atomics version is the Energy Multiplier Module – EM2. It is a version of their helium cooled research reactor. Helium cooling instead of water means the module can be placed nearly anywhere. I’d put them into disused mine shafts. The 3% fission product waste is useless as fission bomb material – but hot enough for a few hundred years to be a concern if released. Blow the entrances and let the terrorists negotiate to be dug out.

      • Steven Mosher

        cost of disposal just got decreased
        and safety increased
        https://www.deepisolation.com/

      • Peter Lang
        Thank you for the long reply, and your interest (and others as well).
        Let me first say that I acknowledge that there are more efficient ways for humanity to destroy itself: drugs, smoking, the automobile, the gun, medicine abuse, religion, etc.
        As I think RIE also implied, deaths from non-nuclear power plants are higher, and that figure is likely to increase at a faster rate for the time being. It is, after all, a trade-off in the search for quality of life. They had to be suffered, first because of ignorance, then many times because of irresponsible recklessness. Today we think we know the extent of that.

        With nuclear I feel we have not yet begun to fathom the extent, and worse, our inability to deal with it. Caveat here: I am not adequately informed. But as in my first post on the subject, the surprises are only beginning. Such as:
        Chernobyl https://earther.gizmodo.com/why-were-confronting-the-chernobyl-disaster-now-1835235381 If I recall correctly, the coffin is deteriorating fast while the interred is still in its early stages of stink.
        Steven Mosher added a point on disposal. I do not like ‘out of sight, out of mind’ solutions. Q: will that be a source of radon leakage? https://www.radon.com/radon_facts/
        RIE mentioned again the EM2 reactor. Q: is there a chance of helium deteriorating to hydrogen in the core (hydrogen embrittlement of stressed hot ferrous metals)?
        PL’s video is from 2014. I suspect the gentleman might revise his opinion, with both Fukushima and Chernobyl.
        Call me a skeptic here.

      • Robert
        All over the world there is a legacy of novel radioactive nuclides – in oceans, air, dust.

        Insignificant in the context of global human challenges. As soon as nuclide releases decrease to background levels, they pose zero health impact and are of academic interest only.

        Remember that the linear no-threshold hypothesis is false. There is a threshold (somewhere around 30-100 mGy) below which ionizing radiation is either without biological effect or a health benefit. A huge and entirely ignored scientific literature shows cancer prevention and lifespan extension in mice and other organisms from low-dose ionizing radiation (because it provokes an enhanced immune response that does good things like destroying cancers).

        Only the fiction of the LNT hypothesis allows fictitious risk and “deaths” to be generated from trivial radioactivity in the environment. All those thousands and millions who died from Chernobyl and Fukushima (beyond the immediate victims in the Gray dose range) are statistical artefacts only, purely virtual entities. Radioactive elements are uniquely easy to measure at the level of a handful of atoms, on account of their high energy particle emissions. So you can find them everywhere, even in Antarctica. But they are not a health risk until they give you a few hundred mGy.

      • Melita
        Here is a comment in this Week’s Nature journal on the relatively low risk of nuclear, in the context of all risks:

        https://www.nature.com/articles/d41586-019-01740-3

        Losing out on the nuclear-related benefits of low carbon, small footprint, technology spin-offs, training and employment and others, is also a risk. Economic enhancement prolongs lifespan, a negative risk. (For epidemiological statisticians people are like quantum particles popping back and forth between life and death.)

      • phil salmon
        It is the link in your link that is worrying.
        Quote: “The effects aren’t always obvious or easy to trace. But researchers are now starting to see some subtle impacts that linger 30 years after the Polygon closed. Studies show elevated risks of cancer, and one published in the past year suggests that the effects of radiation on cardiovascular health might be passed down from one generation to the next.”

      • Peter Lang,
        Linking to the video in your post, Youtube promptly recommended this one: https://www.youtube.com/watch?v=ryI4TTaA7qM
        I recommend seeing it through; it’s long. I suspect the speaker there is a more reliable source and worth listening to.
        Particularly at 45:50, re complacency; and 46:23. plus at 47:00
        Compare at 01:00:00 with what actually happened elsewhere, when the first thing done was to keep all under wraps. And at 33:55 and 01:05:30

        Many years ago, in my early years in power plants, I began to realise there was a lack of tech knowledge and risk appreciation; overconfidence on the system’s immunity to failure, and consequently upper layer complacency, leading to low interest in staff training. My fears were to be realised in one blackout. Later, in a moment of reflection, I saw others more sapient than I shared my opinion. The holy icon on the wall inside the control room always had a candle lighted. Listening to the video at 45:50 brought back that memory.

      • Melitamegalithic,

        Thank you again for your reply. However, I am concerned you are being overly influenced by the anti-nuclear advocacy groups rather than properly researching and evaluating the authoritative sources.

        Did you read the links I posted on the impacts of large nuclear accidents and how best to manage them? For example, most people should not have been evacuated from Chernobyl and Fukushima.

        Starting top down, nuclear is the safest way to generate electricity by a country mile, and has been since the first power reactor began supplying electricity to the grid in 1954, i.e. 65 years ago. I posted the comparison of the deaths per TWh of electricity supplied by the technologies in a comment. This is a place to start. Excerpt from that comment:

        The ranking of the electricity generation technologies in terms of life cycle deaths per TWh have been virtually unchanged for some 40 years. For example:
        Coal, world avg. = 60 (50% of electricity)
        Coal, India = 99
        Coal, China = 90
        Coal, U.S. = 15 (44% U.S. electricity)
        Natural Gas = 4 (20% global electricity)
        Solar (rooftop) = 0.44 (0.2% global electricity)
        Wind = 0.15 (1.6% global electricity)
        Hydro, world avg. = 1.4 (15% global electricity)
        Nuclear, world avg. = 0.09 (12% global electricity w/Chern&Fukush)

        [Nuclear is actually 0.04 if we use 200 deaths for Chernobyl instead of the 9000 projected by the LNT hypothesis.]

        So, electricity generated by coal in the USA kills 150 to 400 times more people per TWh than nuclear (world average).
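        A quick check of that range from the deaths-per-TWh figures listed above (illustrative Python):

        # Ratio of US coal deaths per TWh to nuclear, with the LNT-based and
        # the ~200-death Chernobyl tallies respectively.
        coal_us = 15.0
        for nuclear in (0.09, 0.04):
            print(round(coal_us / nuclear))  # -> 167, then 375: roughly 150-400x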

      • Peter Lang

        Phil Salmon, thank you for the link to the Nature Correspondence by Wade Allison. Here is another, https://www.nature.com/articles/d41586-019-01749-8 :

        “I disagree with your view that the risks of chronic exposure to ‘low level’ radioactivity in Kazakhstan should inform debate on expanding nuclear power to reduce carbon emissions (Nature 568, 22–24; 2019). I find it alarmist and misleading.

        It is alarmist because the detonation of nuclear weapons at the Semipalatinsk test site exposed the public to much higher doses of radiation than even the most catastrophic accidents at nuclear reactors such as Chernobyl and Fukushima. It is misleading because, despite extensive research, no adverse effects of chronic exposure to low-level radiation (less than 500 millisieverts per year) have been detected (M. Tubiana et al. Radiology 251, 13–22; 2009). Safety levels are set far below this by regulators out of caution, not because there is any evidence of harm.

        The risks of nuclear energy need to be compared with the higher risks of alternative energy sources, notably fossil fuels. By replacing some generators fired by fossil fuels, nuclear energy has saved an estimated 2 million lives since 1971 (see P. A. Kharecha and J. E. Hansen Environ. Sci. Technol. 47, 4889–4895; 2013). Moreover, avoiding the risk of severe climate change requires a rapid reduction in greenhouse-gas emissions, which is not achievable without the expansion of nuclear power.”

      • There are two major fields of contemporary alarmism – radiation alarmism and the LNT hypothesis, and climate alarmism and the CAGW hypothesis. They are similar in many ways. Both are science forced to come to politically mandated conclusions. Both serve a left wing political agenda. The funny thing is that they flat out contradict and oppose each other. Nuclear generation is the only possible solution to a CO2 scare. But nuclear has already been killed by the radiation scare. But since logic and Popperian epistemological rigour have also been killed off politically, no-one seems to notice or care about this contradiction.

        I experienced the political mandate behind radiation research first hand. I studied and worked in radiation biology and detection for a number of years, and was involved in setting up a lab in Kiev to implement a new solid-state method to detect alpha radioactive contamination cheaply on a wide scale. I visited the Chernobyl site and the conference in Minsk 10 years after the accident, in 1996. I also worked at Dounreay, Sellafield and Harwell in the UK. My PhD was in radiation biokinetics and dosimetry of internal alpha-emitting nuclides, including polonium-210. Yes, as in Litvinenko. The best knowledge of this nuclide is of course in Russia, and I translated several papers from Russian for my PhD research (no Google Translate in those days). This involved working with Canadian caribou as a natural animal model, whose natural level of Pb-210 and Po-210 in their tissues is 500-3000 times higher than in human populations (from purely natural sources). But they suffer no ill effects. Once I attended a press conference by my research funding agency that was delivering an alarmist message about radiation and cancer. A government official approached me and told me bluntly to keep my mouth shut about caribou (“caribou free zone” was the phrase she used). Not long after that my funding was stopped. I left radiation research and had to leave the UK to find work – now I am working in Belgium in X-ray imaging technology.

      • Peter Lang,
        Thank you for the concern (it’s been a long time since someone was concerned about me, other than for my sanity). But be assured it’s not the case.
        In my third/final year an extra module was belatedly introduced on nuclear. The lectures were given by the head of the mechanicals himself (to just four of us). He had a career in nuclear but had to leave (apparently he ‘exposed’ himself too much; we never got to know if it was safe to be near him. But I owe the gentleman much).
        Anyway, the Gamma function in the maths of the core looked more like the business end of a gallows for an engineer. That was 1968. The ‘greens’ weren’t even conceived.
        Some 3 years later, the year I joined the power plant, over drinks in a local haunt I was introduced to the nuke specialist advising the ‘top’, a bunch of tech-dumb lawyer politicos. I was reassured – no nukes.
        What I feared then I have seen happen elsewhere, unfortunately repeatedly. I always kept a wary eye on that engineering sideline. The political class will never solve it. Don’t ever trust them.

      • Peter Lang

        melitamegalithic

        Thank you for your reply. However, I suggest we should debate the relevant facts instead of talking about our backgrounds, what we’ve worked on and who told us what.

        The political class will never solve it. Don’t ever trust them.

        The politicians make the laws. They listen to and have to address what the public wants. The public is influenced by activists, MSM, entertainment industry, etc. So, we need to educate the public and refute the activists. We need to get the facts out to the public, not accept and keep repeating the activists’ mantra. Here are some relevant facts:

        1. Nuclear power has been by far the safest way to generate electricity since it began 65 years ago.

        2. Over 60 years of nuclear powered submarines – people working for years in a steel tube under water – with minimal adverse health effects from radiation.

        3. Risk of severe accidents in the different energy chains (note the log scale vertical axis) – (This study predates Fukushima)

        Source: ‘What is risk? A simple explanation’ https://bravenewclimate.com/2010/07/04/what-is-risk/

        4. The cost of the back-end of the nuclear fuel cycle, which includes nuclear waste management, is about 1% of the total cost of nuclear generated electricity.

        5. Radioactive contamination is easily detected. Chemical pollutants are not.

        6. Nuclear waste toxicity declines exponentially with time (see the sketch after this list); the toxicity of chemical pollutants does not. Many chemical pollutants in the environment are far more toxic, and at much more toxic concentrations, than the radioactive contamination around nuclear accident sites.

        7. Low level radiation is beneficial, not harmful.

        8. Radiation levels from radioactive contamination around Chernobyl and Fukushima are low level. Most people should not have been evacuated. But governments were forced to act because of the deep levels of concern in the population. The concern is caused by the anti-nuclear nonsense that has been propagated for 50 years. Most people believe it.

        9. Nuclear waste has been safely managed for ~60 years. Chemical wastes have not. Most chemical pollution is not managed at all – just dumped in the environment.

        10. The regulatory requirements for nuclear waste are extreme and out of balance with the requirements for chemical wastes. The externality of health impacts of pollutants should be managed by laws and regulations that apply equally to all toxic substances. That is, the externality should be managed uniformly.
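        A minimal sketch of the exponential decline in point 6 (illustrative Python; the half-lives are standard values, and the nuclides were chosen only to contrast a fission product with a long-lived actinide):

        # Fractional activity remaining after t years: A(t)/A0 = 0.5**(t/t_half).
        for nuclide, t_half in (("Cs-137", 30.2), ("Pu-239", 24100.0)):
            for t in (30, 300, 3000):
                frac = 0.5 ** (t / t_half)
                print(f"{nuclide}: after {t:4d} y, {frac:.2e} of initial activity")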

      • Peter Lang

        Phil Salmon,

        Thank you for your comment and for the information on your background. Sorry to hear the Canadian Caribou incident disrupted your career prospects in nuclear energy.

        I agree strongly with you on this:

        There are two major fields of contemporary alarmism – radiation alarmism and the LNT hypothesis, and climate alarmism and the CAGW hypothesis. They are similar in many ways. Both are science forced to come to politically mandated conclusions. Both serve a left wing political agenda. The funny thing is that they flat out contradict and oppose each other. Nuclear generation is the only possible solution to a CO2 scare. But nuclear has already been killed by the radiation scare. But since logic and Popperian epistemological rigour have also been killed off politically, no-one seems to notice or care about this contradiction.

        However, from my perspective nuclear is needed for many reasons, but the CO2 scare is not one of them because I believe there is no valid justification for the scare. I believe any warming we can get this century will be beneficial – but cooling would be severely damaging.

      • Peter Lang

        Here is a new example of the entertainment industry scaring the population about nuclear power:

        Chernobyl drama stokes radiation concerns
        “A drama mini-series loosely based on the 1986 Chernobyl accident and broadcast by HBO has attracted large audiences. Exaggerated depiction of radiation effects accentuate the drama but are misleading and even suggest that radiation effects are contagious. It is far from being a documentary. While Chernobyl correctly shows the unprecedented steam explosion in the accident, it hints at worse being somehow just averted. As probably the most intensively-studied industrial accident in history with ample documentation of the radiological aspects, it is a pity that some asserted ‘facts’ have no basis. Anyway, the three countries involved with the accident – Ukraine, Russia and Belarus – are all continuing to expand their nuclear power capacity safely.”
        WNN 5/6/19. https://mailchi.mp/world-nuclear-news/weekly-digest-7-june-2019

        Chernobyl accident: http://www.world-nuclear.org/information-library/safety-and-security/safety-of-plants/chernobyl-accident.aspx

      • Peter Lang
        (I hope I don’t overstay my welcome here.) First, I had to delve into my background to dispel any misconceived bias on influence. Eom.

        Politicians make laws to suit their ends. History is replete with evidence of such. The state of things today is no different. Worse, it seems today is an age of extreme egoism. Phil Salmon has touched on that with, quote, “Both are science forced to come to politically mandated conclusions.” Once the world was flat and everything revolved around it, and many got burned, and humanity in general was ‘scorched’ for the best part of a thousand years, because it was politically convenient.

        I am hesitant to reply to your points before I am better informed, except perhaps, to some extent, on point 9. We know what the politically expedient solution to chemical waste was: “just dumped in the environment”. We are only now beginning to feel the backlash, so to speak. Plastics, in the news, may be a good example; chickens coming home to roost? Bad things invariably backfire (I have a list – for a book? ‘Tech tales of terror and unease’).

        The strict regulatory requirements of point 10 were drawn up by people with, hopefully, clear hindsight and wise foresight. Careful there. The ‘code of ethics’ for my warrant is very onerous, drawn up by a lawyer politician and enacted into law. When push came to shove, the politician felt he was above that law. (No way.)

      • aporiac1960

        Robert E. Ellison: “It is necessary to evacuate the population after a nuclear disaster.”

        The British authorities decided not to evacuate the residents of Windscale after a fire at the local military reactor in the 1950s, on the grounds that more harm than good would be done. It has been estimated that approximately 220 cancer deaths occurred over a 60-year period as a result of that accident (against a background level of 165,000 per year).

        Interestingly, it was not only the British authorities who decided not to evacuate. A sizeable proportion of the residents of Windscale consisted of the families of people who worked at the plant, and they deliberately and consciously didn’t self-evacuate.

        Even if a decision had been made to evacuate, the area that would have needed to be evacuated to have any impact on the 220 deaths would have been many orders of magnitude greater than any proposals that were considered at the time. In other words, the only possible motivation to evacuate would have been to perform some meaningless token gesture for political reasons. Obviously, we now live in different times and so today there would have been an evacuation because of this imperative. As you say “It is necessary to evacuate the population after a nuclear disaster”.

        You could argue that the failure to evacuate was a cynical policy decision. I don’t disagree. How many more lives of young Australian men were sacrificed on the beaches of Normandy by an equally cynical policy decision?

  43. From the Lewis and King article: “The Q-Q plots highlight that while much of the change in distributions can be described by shifts in mean, changes in the tails of the distributions should be considered explicitly. We employed a quantile regression approach, which allows trends in specific portions of the distribution to be identified, independent of the variability occurring in the remainder of the distribution (Lee et al., 2013). A quantile regression is applied to each temperature time series (annual-average and daily Tmean, Tmax and Tmin for each region for each model realization). We explore the slope of the 5th, 10th, 25th, 50th, 75th, 90th and 95th quantile regression lines for the period 1976–2095 using the mean of the multi-model ensemble. These results are summarised in Table 1, together with the standard error of the regression, which is calculated by bootstrap resampling … .”

    I am glad to see the quantile regression used instead of merely means and extremes. Granted, they performed the analysis on model outputs assuming RCP 8.5, but as the 21st century unfolds, they can add the quantile regression analysis of (a) other CO2 scenarios and (b) the actual ensembles of measurements.

    • I should add that the paper and its supplemental material are freely downloadable (no need for Sci-Hub). For the historical simulations they provide Kolmogorov-Smirnov and Anderson-Darling tests of the fitted distributions against the empirical distributions, separately by winter/summer, annual, geographic regions, Tmax/Tmin, … .
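      For anyone who wants to try the method, here is a minimal sketch of the quantile-regression-with-bootstrap approach described above (illustrative Python using statsmodels; the synthetic series and every parameter choice are assumptions, not the authors’ setup):

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.regression.quantile_regression import QuantReg

      rng = np.random.default_rng(0)
      years = np.arange(1976, 2096)
      # Toy series: a warming trend plus noise whose spread grows with time,
      # so the upper-tail quantiles trend faster than the median.
      temps = 0.02 * (years - 1976) + rng.normal(0.0, 0.3 + 0.002 * (years - 1976))

      X = sm.add_constant(years)
      for q in (0.05, 0.25, 0.50, 0.75, 0.95):
          slope = QuantReg(temps, X).fit(q=q).params[1]
          boot = []  # bootstrap standard error of the slope (cf. Table 1)
          for _ in range(200):
              i = rng.integers(0, len(years), len(years))
              boot.append(QuantReg(temps[i], X[i]).fit(q=q).params[1])
          print(f"q={q:.2f}: slope = {slope:.4f} C/yr, se = {np.std(boot):.4f}")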

  44. Ireneusz Palmowski

    Abstract
    Multiple lines of evidence point to one or more moderately nearby supernovae, with the strongest signal at ∼2.6 Ma. We build on previous work to argue for the likelihood of cosmic ray ionization of the atmosphere and electron cascades leading to more frequent lightning and therefore an increase in nitrate deposition and wildfires. The potential exists for a large increase in the prehuman nitrate flux onto the surface, which has previously been argued to lead to CO2 drawdown and cooling of the climate. Evidence for increased wildfires exists in an increase in soot and carbon deposits over the relevant period. The wildfires would have contributed to the transition from forest to savanna in northeast Africa, long argued to have been a factor in the evolution of hominin bipedalism.
    https://www.journals.uchicago.edu/doi/full/10.1086/703418

  45. “Every single body of water—from the greatest ocean to the tiniest stream, cascade, or even well—is under the jurisdiction of a dragon. Legend has it that the Goddess of Creation (Nü Wa) tasked four dragons with administering the Four Seas surrounding the Eastern Continent. Ever since, these Dragon Kings have held court in grand Crystal Palaces on the ocean floor.” https://www.shenyun.com/explore/view/article/e/zBbVt1SWfxc

    They bring us storm and flood at their whim. In their modern incarnation, Didier Sornette named extreme outliers – events beyond power-law distributions – the dragon-kings of complex dynamical systems.
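    A minimal sketch of the dragon-king idea (illustrative Python; all numbers are synthetic): a dragon-king shows up as an event lying far off the straight line that a power-law tail traces on a log-log rank-size plot.

    import numpy as np

    rng = np.random.default_rng(1)
    sizes = rng.pareto(1.5, 1000) + 1.0  # power-law-distributed bulk
    sizes = np.append(sizes, 500.0)      # one anomalous "dragon-king"
    s = np.sort(sizes)[::-1]
    ranks = np.arange(1, s.size + 1)
    # Fit log(rank) vs log(size) on the bulk, excluding the top events, then
    # ask what rank the fitted power law would assign the largest event.
    slope, intercept = np.polyfit(np.log(s[10:]), np.log(ranks[10:]), 1)
    pred = np.exp(intercept + slope * np.log(s[0]))
    print(f"largest event, size {s[0]:.0f}: power-law rank {pred:.3f} vs actual 1")

    A predicted rank far below 1 says the fitted power law alone should essentially never have produced an event of that size in the sample.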

    “The maximum urban-induced rainfall anomaly concentrates to the northeast of the city (i.e., downstream of the Lower Verde River basin), with two additional “pockets” of positive rainfall anomalies located at the northern and western margins of Phoenix (Figure 5a). Urban-induced rainfall anomaly is also observed in downwind Atlanta during the record-breaking 2009 flood event [Debbage and Shepherd, 2018]. The maximum rainfall difference exceeds 40 mm at the grid scale (i.e., accounting for approximately 40% of maximum storm total rainfall). Both New River and Cave Creek watersheds are located in the regions of positive rainfall anomalies, with increases of basin-average rainfall of 9 mm and 14 mm for New River and Cave Creek watersheds, respectively. The rainfall anomalies exhibit a strong influence on flood response over the two watersheds. For Cave Creek, the flood peak magnitude is increased by as much as two-fold, i.e., from 209 m3/s to 413 m3/s, simply due to the increased 14 mm rainfall over the watershed (Figure 5b). For New River, the flood peak magnitude increased by nearly 12% due to the 9 mm increased rainfall over the watershed. The percentage of flood peak amplification increases by 32% with a basin-average rainfall increase of 14 mm over New River.” https://agupubs.onlinelibrary.wiley.com/doi/pdf/10.1029/2019GL083363

    The form of urban centers in complex terrain can change flooding by up to a factor of 2?

    • I definitely prefer that ancient Chinese intellectual model of the climate system to our more scientific, sophisticated expert one: “hey, scumbag humans, you’re all dead in 10 years. Nah, nah, na, Nah Na”

  46. A new Tim Palmer perspective on stochastic forecasting.

    https://www.nature.com/articles/s42254-019-0062-2

    This is an example of a stochastic ENSO forecast. More useful at times of the year other than now – and in periods when one or other of the ENSO canonical patterns is fully established.

    “Better ENSO and subsequent climate predictions alone, however, are not enough to reduce the risks associated with such events. The socio-economic and political context in which climate finds expression and in which climate forecasts have potential value also need to be understood.” https://rmets.onlinelibrary.wiley.com/doi/10.1002/joc.6157

    Better seasonal forecasting goes some way to assessing seasonal risk – with a weather eye on decadal, centennial and millennial variability. As this new study reminds us, the societal task of mitigating risk is of fundamental importance. There are many ways of mitigating climatic risk through better water, soil and ecosystem management.

    1 million sand dams for 0.5 billion people by 2040 for instance.
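    As a toy illustration of what ‘stochastic’ means here (illustrative Python; the AR(1) persistence, noise level and threshold are made-up numbers, and this is far simpler than Palmer’s schemes): run an ensemble in which each member gets its own random forcing, and read the forecast off as a probability rather than a single number.

    import numpy as np

    rng = np.random.default_rng(42)
    n_members, n_months = 50, 12
    phi, sigma, x0 = 0.9, 0.25, 0.5  # assumed persistence, noise and initial index

    x = np.full(n_members, x0)
    paths = np.empty((n_members, n_months))
    for t in range(n_months):
        # Independent stochastic forcing per member lets the ensemble spread
        # grow fastest where the dynamics are least predictable.
        x = phi * x + rng.normal(0.0, sigma, n_members)
        paths[:, t] = x

    print("P(index > 0.5 at month 6):", (paths[:, 5] > 0.5).mean())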

    • Ireneusz Palmowski

      Real El Niño performed this year in March.

      • I recommend this for anyone who sincerely wants to understand ENSO.

        And Tim Palmer on climate model uncertainty? Or why opportunistic ensembles containing 100 or so deterministic solutions from different models aren’t reliable.

        https://royalsocietypublishing.org/doi/full/10.1098/rsta.2011.0161

      • I recommend this for anyone who sincerely wants to understand ENSO.

        Only if the level of understanding sought is that of a pretentious layman! Those who seek genuine scientific understanding of ENSO will soon recognize that ENSO is not even close to a “quasi-standing wave” and will ignore the recommended YouTube video when it conflates a gravity-forced eastward surge of surface waters with oscillatory Kelvin waves on the thermocline that do NOT transport water.

      • Hardly subtle or informed. The term ‘quasi standing wave’ was used by Tomas Milanovic years ago here to describe nodes of nonlinear oscillation in the Earth system. To say ENSO is not a quasi standing wave is to mistake an attempt to define these oscillatory indices found across the planet in terms of a deeper understanding of their spatio-temporal chaotic source.

        https://judithcurry.com/2011/02/10/spatio-temporal-chaos/

        Seen below is the east west sea surface height asymmetry in December 2010 – that is created by trade winds at the equator.

        When the winds falter due to the Madden-Julian Oscillation or other atmospheric instability, the thermocline flattens. This happens as an equatorially trapped Kelvin wave rather than just as a gradual movement of water to the east.

        The videos are an excellent introduction from Duke University – Sakagami, T., R.B. Brady, and R.T. Barber, El Nino Animation V1.25.
        http://www.science.earthjay.com/instruction/HSU/2017_spring/GEOL_308/lectures/lecture_05/El%20Ni%C3%B1o%20(ENSO)%20Animation%20V1.25.htm

        Go back to the videos, John – we’ll get onto advanced ENSO dynamics later.

      • To say ENSO is not a quasi standing wave is to mistake an attempt to define these oscillatory indices found across the planet in terms of a deeper understanding of their spatio-temporal chaotic source.

        Before offering pretentious “gems” about spatio-temporal chaos, take a rigorous course in dynamics. You might discover that ENSO is not just an index, but a complex physical process, and that Kelvin waves are oscillatory gravity waves with orbital motion, but no en masse transport.

        As I said – the term ‘quasi standing wave’ was used by Tomas Milanovic years ago here to describe nodes of nonlinear oscillation in the Earth system. That this is a complex dynamical system consisting of spatially and temporally coupled atmosphere, cryosphere, aquasphere, pedosphere and biosphere is perfectly obvious to an Earth system scientist.

        It is a way of understanding the world that requires a new math, as Tomas said in the link provided. At the simplest level it involves Rayleigh-Bénard convection in the atmosphere and the closed- and open-cell cloud formation I discussed above. A highly significant ocean/cloud feedback in the global energy budget, it seems.

        “We investigate complex pattern formation in a system for which we have excellent experimental control and where the underlying equations are well known: Rayleigh-Bénard convection. This is a system where a horizontal fluid layer is heated from below and cooled from above, so that above some critical temperature difference convection rolls form.” http://www.ds.mpg.de/LFPB/chaos

        “… and that Kelvin waves are oscillatory gravity waves with orbital motion, but no en masse transport.” John

        Seems made up to me – but at any rate it is so devoid of meaningful application to the complex dynamical ENSO subsystem as to be pointless. What I need as a trained and experienced Earth system scientist is to see how it works in practice. Pictures help.

        This is the latest – June 3 2019 – ENSO update from NOAA.

        “Significant weakening of the low-level easterly winds usually initiates an eastward propagating oceanic Kelvin wave.”

        https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf

        That can be seen in the evolution of subsurface temperature at the top if you know what it is.

        A rigorous course in dynamics indeed. I always wonder if the guy has any practical training and experience – or just superficial abstractions and an acerbic nature?

        “… and that Kelvin waves are oscillatory gravity waves with orbital motion, but no en masse transport.” John

        Seems made up to me – but at any rate it is so devoid of meaningful application to the complex dynamical ENSO subsystem as to be pointless. What I need as a trained and experienced Earth system scientist is to see how it works in practice. Pictures help.

        Any genuine “trained and experienced Earth system scientist” would never write these words. They unmistakably betray total ignorance of Kelvin wave kinematics, let alone dynamics, presented at a basic level inter alia by

        https://www.soest.hawaii.edu/MET/Faculty/bwang/bw/paper/wang_103.pdf

        Such are the hazards of unaccomplished sanitation engineers obsessively quoting high-level abstractions whose proper scientific meaning and import patently escape them.

      • Scientific terms have a scientific context – and the ones I used here have a context provided by the Max Planck Institute and NOAA. The Kelvin wave quote was from NOAA’s latest ENSO round up.

        And I really am a sanitary engineer – lol – there’s a CV in the about section of my WordPress site, linked at my name. I am an award-winning designer of integrated urban water systems – supply, stormwater and sewage – using nuts and bolts and cutting-edge technology. I am an engineer, hydrologist and environmental scientist. My strength is a broad background in science, technology and policy.

        My interests include water treatment technology – everything from the CRAPPER low-cost toilet to ion exchange equipment for recycling acid mine drainage for a low-cost urban water supply. My hydrological background involves an ongoing interest in global ocean and atmospheric circulation and in hydrodynamical modelling. Environmental science expands that to biogeochemical cycling – the movement of substances through global environments – and to energy technology, economic development, climate policy, environmental risk assessment and environmental law and management.

        John’s comments are purely personal disparagement and in context with little suggestion of any depth of understanding – of ENSO or dynamical complexity. Merely a pretense of it. Don’t we all know those sort of people?

      • Comparing my training and experience with that of an anonymous internet blogger begs the question. My guess is that John has zilch.

      • Once again, much verbal posturing and ad hominems without any substantive physical reasoning. Such empty, self-promoting commentary, produced compulsively each day, is a blight upon this blog.

      • You complain about a heading in a video from oceanographers at Duke University – and I back that up with the latest NOAA ENSO wrap-up, which discusses in some detail the 2019 evolution of the equatorial Kelvin wave. You whine about quasi-standing waves and spatio-temporal chaos. I refer to both Tomas Milanovic here and the Max Planck Institute. The first are nonlinear oscillators in the global fluid flow field. The second is the dynamic of complex physical systems – as opposed to the temporal chaos of models – of which the nonlinear oscillators are an expression.

        Not prepared to say what your training and experience is? I’m not surprised.

      • More polemical diversions to avoid acknowledging that equatorial Kelvin waves are propagating baroclinic waves, with negative and positive phases that oscillate the thermocline and surrounding water parcels. Initiation of such waves does not transport surface water en masse across the Pacific. This well-established scientific fact stands entirely independent of how or by whom captions are written or read… or assertions made. Competent scientists let knowledge speak for itself.

      • “The near-surface winds in the tropical Pacific, the trade winds, were near average in April. The trade winds usually blow from the east to the west, keeping warm water piled up in the far western Pacific. Changes in these winds are a critical component of the El Niño system. When they weaken in the central Pacific, surface waters can warm, and sometimes allow a downwelling Kelvin wave to form: a large blob of warmer-than-average water that moves from the west to the east under the surface of the Pacific.”
        https://www.climate.gov/news-features/blogs/enso/may-2019-el-ni%C3%B1o-update-feliz-cumplea%C3%B1os

        I’d suggest that the current state of the equatorial Pacific is more barotropic than baroclinic. But you are no scientist at all and have such a shallow understanding – pun intended – and such an acerbic manner that it hardly seems worth it.

      • [I]n the central Pacific, surface waters can warm, and sometimes allow a downwelling Kelvin wave to form: a large blob of warmer-than-average water that moves from the west to the east under the surface of the Pacific.

        But that movement is entirely confined to the orbital radius of the baroclinic Kelvin wave and reverses direction in the subsequent upwelling phase of the wave packet. That characteristic feature is even shown on p. 15 of your previously-referenced Powerpoint presentation: https://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf

        Loose wording from a PR blog aimed at a layman audience simply doesn’t cut it in any serious discussion. Reliance upon such wording in the face of widely-available rigorous explanations of Kelvin wave dynamics shows a laughable lack of scientific comprehension masquerading as superior knowledge.

        BTW, if the equatorial Kelvin waves were barotropic, they would extend throughout the water column and propagate at a speed that would cross the Pacific in days, rather than months.

      • “But that movement is entirely confined to the orbital radius of the baroclinic Kelvin wave and reverses direction in the subsequent upwelling phase of the wave packet. That characteristic feature is even shown on p. 15 of your previously-referenced Powerpoint presentation.”

        It was of course precisely that page I was referring to. But this seems to be the only thing of any relevance said – and it is incorrect.

        “A westerly wind anomaly excites downwelling Kelvin waves, which propagate into the eastern Pacific, suppressing the thermocline and causing the SST to rise; this, in turn, enhances the central Pacific westerly wind anomaly by increasing the eastward pressure gradient force in the atmosphere. This positive feedback provides a development mechanism for ENSO SST warming. Second, the cyclonic wind stress curl associated with the central Pacific westerly wind anomaly can induce upwelling oceanic Rossby waves that propagate westward. These waves are eventually reflected at the western ocean boundary, generating upwelling equatorial Kelvin waves, which propagate into the eastern Pacific and offset the warming by enhancing vertical cold advection. This negative feedback provides a mechanism for turning the coupled system to its opposite (La Niña) phase and sustaining the ENSO cycle.” https://www.sciencedirect.com/topics/earth-and-planetary-sciences/kelvin-wave

        It takes some two months to propagate from west to east in a mechanism that is fundamental to the coupled ENSO dynamic that you misunderstand – and in such an absurdly, contemptuously small-minded manner.

        And you confuse the speed of tidal motions with a presently quiescent tropical Pacific.

      • Waves have a rotational component.

        But an equatorial Kelvin wave is not generated by wind friction, and the wave propagates to the east in an El Niño.

      • Amply evident here is a pitiful inability by a quotation-fixated mind to comprehend what is being said in any rigorous scientific context. Even inadvertent nonsense about Kelvin waves “suppressing [sic!] the thermocline and causing the SST to rise” is blindly accepted at face value. And then the commentary wanders off into the amusing non sequitur: “you confuse the speed of tidal motions with a presently quiescent tropical Pacific.”

        To anyone competent in oceanic physics in an equatorial setting, it’s clear that the thermocline may become DEpressed during the downwelling phase of passage of baroclinic Kelvin waves, but that any rise in temperature must be due to other factors. A far-more-expert review of ENSO physics summarizes:

        Along the equator the Kelvin waves are also accompanied by anomalous surface currents that induce an eastward displacement of the eastern edge of the western Pacific warm pool [Matsuura and Iizuka, 2000; Picaut et al., 2002; Lengaigne et al., 2002]. The combined effects of zonal advection and thermocline depression increase the SST in the central and eastern Pacific and thus decrease the zonal SST gradient and weaken the trade winds [Lindzen and Nigam, 1987]. The weakening of the trade winds will cause more warm water to flow eastward, causing even weaker trade winds.

        https://climate-dynamics.org/wp-content/uploads/2015/05/wang04b.pdf

        The ironic aspect of the non sequitur is that it betrays a total lack of knowledge that the celerity of ALL gravity waves in water much shallower than the wavelength is c = sqrt(gh), where g is gravity and h is the water depth. Thus there’s no possibility of the claimed confusion. That I was simply referencing the difference between the water depth of the ocean (relevant to barotropic waves) as opposed to the thermocline depth (relevant to baroclinic waves) went totally uncomprehended.
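
        A back-of-envelope check of that celerity contrast, and of the crossing times cited in this thread, can be run in a few lines. This is a minimal sketch, assuming round values – a 4,000 m deep ocean for the barotropic mode, a reduced-gravity first baroclinic mode with g' = 0.05 m/s^2 acting over a 150 m upper layer, and a 15,000 km wide equatorial Pacific – none of which are taken from the sources quoted here:

          from math import sqrt

          g = 9.81          # gravity (m/s^2)
          width = 15.0e6    # assumed equatorial Pacific width (m)

          # The barotropic mode feels the full (assumed 4,000 m) ocean depth;
          # the first baroclinic mode feels a reduced gravity g' over an
          # assumed 150 m upper layer, per c = sqrt(g*h).
          c_barotropic = sqrt(g * 4000.0)      # ~198 m/s
          c_baroclinic = sqrt(0.05 * 150.0)    # ~2.7 m/s

          for name, c in (("barotropic", c_barotropic), ("baroclinic", c_baroclinic)):
              print(f"{name}: {c:6.1f} m/s, {width / c / 86400.0:5.1f} days to cross")

        On those numbers the barotropic crossing takes under a day and the baroclinic one roughly 63 days – consistent with the “days, rather than months” contrast above and with the roughly two-month propagation time cited elsewhere in this exchange.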

        Total confusion is evident in the follow-on comment, which seems to suggest that because an equatorial Kelvin wave is not generated by wind friction, it does not produce oscillatory orbital motion. To compound the error, the illustrated motion is termed “a rotational component” – a wholly inappropriate invocation of the curl of particle motion for typically irrotational waves (q.v.).

        In the face of the apparent impossibility of stopping colossal oceanographic misconceptions from being inflicted further by total amateurs, I will not waste any more time on this thread.

      • At the height of La Niña, easterly winds pile warm surface water up in the western warm pool. Where the surface current meets the western boundary at the equator it is reflected downward within the confines of the thermocline. The downwelling plume is directed as a Kelvin wave to the east. It is a turbulent flow. It is kept in balance by winds and currents in both hemispheres – south along the North American coast and north along the South American. These converge near the equator – from Coriolis forces – in easterly winds and currents that cause a shoaling of the thermocline and upwelling of cold water. A baroclinic feedback strengthens winds and currents. Ekman transport north and south of the equator results in central Pacific upwelling as the Pacific cold tongue advances.

        The Madden-Julian Oscillation is a coupled high/low pressure system that travels around the planet along the equator from west to east. Westerlies in the Madden-Julian Oscillation can disrupt the balance when conditions are ripe – which results in slow movement of warm surface water to the east in an enhanced pulse of downwelling warm water that propagates to the eastern boundary as a Kelvin wave. As described in every link so far. And as I have said.

        To quote John’s source more honestly.

        “The generated downwelling Kelvin waves propagate eastward along the thermocline to the central and eastern Pacific. The resulting rises in sea level along the South American coast are observed to occur approximately 6–7 weeks following the WWB events that generate them. Along the equator the Kelvin waves are also accompanied by anomalous surface currents that induce an eastward displacement of the eastern edge of the western Pacific warm pool [Matsuura and Iizuka, 2000; Picaut et al., 2002; Lengaigne et al., 2002].”

        This merely confirms everything I have said and quoted. John’s idea is that as El Niño emerges warm water flows on the surface to the east down a slope of 50 cm over thousands of kilometers, while Kelvin waves transport no heat or mass because they have the perfectly circular, orbital or rotational motion – one can play with words but not reality – found in surface waves generated by wind friction. It is unsophisticated, amateur nonsense – and truculently so.

      • [T]here is a clear distinction between free Kelvin waves and slow wave-like anomalies associated with ENSO. This is further emphasized by the measurements in Figure 15 that contain evidence of freely propagating Kelvin waves (dashed lines in the left panel) but clearly show them to be separate from the far more gradual eastward movement of warm water associated with the onset of El Niño of 1997 (a dashed line in the right panel). This slow movement of warm water is the forced response of the ocean and clearly not a wave that could satisfy the unforced equations of motion. The characteristic timescale of ENSO cycle, several years, is so long that low-pass filtering is required to isolate its structure.

        https://people.earth.yale.edu/sites/default/files/files/Fedorov/50_Fedorov_EqWaves_Encyclopedia_2009.pdf

      • Solving the primitive continuity and momentum equations will give a phase speed for an assumed simple vorticity. But wind-driven currents downwell at the western boundary and drive far more complex eddies in real turbulent flow below the surface to the eastern boundary at the equator.

        Evident in observations of subsurface temps following the most recent and very modest La Niña.

        It is the difference between real and idealized Earth systems. And yes – clearly distinct from the slow movement of warm surface water eastward when the trade winds falter.

  47. Ireneusz Palmowski

    The problem is that in periods of low solar activity the jet stream interferes with the ENSO cycle.
    http://tropic.ssec.wisc.edu/real-time/mtpw2/product.php?color_type=tpw_nrl_colors&prod=samer2&timespan=24hrs&anim=html5

  48. “People are rewarded for being productive rather than right, for building ever upward instead of checking the foundations. These incentives allow weak studies to be published. And once enough have amassed, they create a collective perception of strength…”
    <– I don't think I've encountered a single climate alarmist who understands 'radiative forcing' in the way someone like Richard Feynman would demand of himself. Hardly surprising. How can one understand pseudoscience?

  49. Ireneusz Palmowski

    Thunderstorms over North America.

  50. Ireneusz Palmowski

    High pressure in the Eastern Pacific does not correspond to the conditions of El Niño.

  51. I’ve returned from the moshpits of rural Dorset, only to discover that a significant sequence of comments seems to have disappeared. Perhaps the moderator(s) could provide an explanation?

    Worst of all, the legacy of the NSIDC’s late, great Drew Slater has disappeared into oblivion. Here once again for posterity is the SPIE 50 day Arctic sea ice extent forecast:

  52. Dan Yurman at Neutron Bytes interviews Dr. Jose Reyes, a co-founder of NuScale and chief designer of their small modular reactor (SMR):

    https://neutronbytes.com/2019/06/04/interview-with-nuscale-ceo-jose-reyes/

    NuScale is a decade ahead of the pack in getting an SMR design into commercial production. Their current schedule calls for the first US-manufactured SMR design to be in operation in eastern Idaho by the end of 2026.


    • “While the LNT model cannot be scientifically proven by epidemiologic evidence at very low doses or low dose rates, the preponderance of high-quality epidemiologic data is reasonably consistent with the LNT assumption.”
      https://iopscience.iop.org/article/10.1088/1361-6498/aad348/meta

      A review of 29 epidemiological studies. While the health effects of long-term exposure of large populations to small doses of ionizing radiation are uncertain, the LNT model is about minimizing exposure. At which the nuclear industry has been spectacularly incompetent.

      And you think you can change perceptions with a mouse model?

      • Robert
        And you think you can change perceptions with a mouse model?
        Were you not aware that the mouse is the animal model used in most medical research? Much physiology and biochemistry is conserved across the chordates and even more widely among living organisms. Even more distantly related species such as zebrafish, fruitflies, nematode worms, yeasts and even bacteria are used in research into biology with human medical application. This is where all your medicines come from.

        At which the nuclear industry has been spectacularly incompetent.
        I heard from a nuclear plant dosimetrist an interesting example of the extreme paranoia that now characterises radiation protection practices due to the Medieval superstitious dread of radiation arising from LNT. A spill of water occurred from what was classified as an active area. Safety requirements demanded that the people who cleaned it up wore cumbersome whole-body suits and respirators. Some calculations later showed that the radioactivity levels in the water that these workers were cleaning up, dressed like astronauts, were lower than the level of natural radioactivity in the bodies of the workers themselves. They were more likely to contaminate the water than the other way around. Many people who hold strong views on nuclear issues are unaware that we live in a radioactive world and are ourselves radioactive. More than a thousand radioactive disintegrations take place in your body every second.

      • You give me a problematic mouse model, an anecdote and ignore my mention of BED?

    • Phil Salmon,

      I agree. There are many studies of the impact of life long exposure to high natural background levels of radiation that are well above the allowable radiation exposure levels. Here’s an excerpt from an article (blog post) on one from India.

      What can we learn from Kerala
      https://bravenewclimate.com/2015/01/24/what-can-we-learn-from-kerala/

      “Kerala’s been on the radar of the World Health Organisation for over half a century and the reasons have nothing to do with population or rice or wood cooking fires or dodgy forest data. Kerala has a very high rate of background radiation due to sands containing thorium. The level ranges from about 70 percent above the global average to about 30 times the global average. For thousands of years, some of the population of Kerala have been living bathed in radiation at more than triple the level which will get you compulsorily thrown out of your home (evacuation) in Japan. The Japanese have set the maximum annual radiation level at 20 milliSieverts per year around Fukushima while some parts of Kerala have had a level of 70 milliSieverts per year … for ever.

      Scientists have been looking for radiation impacts on Keralites (people from Kerala) for decades. In 1990 a modern cancer registry was established and in 2009 a study reported on the cancer incidence in some 69,958 people followed for an average of over a decade. Radiation dose estimates were made by measuring indoor and outdoor radiation exposure and time spent in and out of doors. They haven’t just been bathing in radioactivity for thousands of years, Keralites have been eating it. An early 1970 study found that people in Kerala were eating about 10 times more radioactivity than people in the US or UK, including alpha particle emitters (from fish).

      The cancer incidence rate overall in Kerala is much the same as the overall rate in India; which is about 1/2 that of Japan and less than 1/3rd of the rate in Australia. Some 95 new cancers per 100,000 people per year compared to 323 per 100,000 per year in Australia (age standardised).

      Cancer experts know a great deal about the drivers of these huge differences and radiation isn’t on the list.
      The Kerala study has several advantages over other studies of low dose radiation. They are dealing with a mainly rural population which is less likely to be exposed to other carcinogens which could complicate the analysis. They are also dealing with a genuinely low rate of radiation exposure. This mirrors what would be the case in Fukushima if the Government hadn’t forcibly moved people. Most radiation protection standards derive from studies of atomic bomb victims who got whatever dose they got in a very short time. They may have got a dose which fits the definition of low (less than 100 millisieverts), but at an extremely rapid rate. Getting bombed just isn’t like living in a slightly elevated radiation field. In Kerala people are getting a low rate for a long time.

      With very high radiation delivery rates, such as occur with an atomic weapon, it is obvious that DNA repair capacity can be overwhelmed. I need to mention here, that atomic weapons don’t primarily kill with radiation, they kill in exactly the same way as other bombs; with a shock wave, flying debris and fire. Radiation can certainly kill people outside the lethal area of the blast, but the dose required is huge. As you get further away from the blast center, people can get 100 milliSieverts or even thousands, but getting such a dose in such a situation is nothing like getting it over a one or ten year period; just as a single 1 or 10 hour exposure to summer sun is very different from getting the same amount in very tiny increments over a month. Despite the clear methodological problems, data from the atomic bomb victims dominates radiation protection standards, primarily because, for many years, it was the only data.

      The Kerala data clearly contradicts the assumptions behind all of those radiation safety standards. People getting a dose of 500 mSv should show a measurable rise in cancer rates. They did when the dose was delivered quickly as with the atomic blasts. They don’t at Kerala.

      continued below …

    • What can we learn from Kerala Continued…

      “So the Kerala data confirms what is obvious from a modern understanding of DNA repair. Namely that radiation damage isn’t cumulative at normal background dose rates and also that it isn’t cumulative at even 30 times normal dose rates. Meaning that 70 milliSieverts a year for a lifetime does nothing. The very concepts of “annual dose” or “cumulative dose” are simply misleading in such a situation. The best available evidence is that an annual exposure to 100 milliSieverts results in an actual dose of zero because it is below a person’s capacity for perfect repair. When experts discuss these matters they always distinguish exposure, measured in Grays, from dose, measured in Sieverts, but they don’t take account of delivery rate or DNA repair because the power and mechanisms of DNA repair were unknown at the time and by the people formulating the theories and standards. It’s high time they got their house in order. The suffering caused by clearly obsolete science has been and continues to be immense.

      Researchers in 2012 from MIT confirmed that radiation damage isn’t cumulative at even 400 times background rates for six months. But they did this work in mice, and while mice are more prone to cancer than people, meaning the results should hold for people, mice aren’t people and extrapolation both of species and time introduces uncertainty. The Kerala data is substantial and unequivocal and backs a recent judgement by UK radiation expert Malcolm Grimston that the Fukushima evacuation was, and still is, “stark raving mad”.

      When the Japanese Government recently lifted the throwing-out-of-home orders on Minamisoma City in Fukushima Prefecture, because they estimated that the annual radiation level had dropped to 20 milliSieverts per year, city officials reckoned that 80 percent of residents will not return because of radiation fear.

      Perhaps the Japanese should have looked at what’s been happening at Kerala for thousands of years before deciding on their 20 milliSievert limit; nothing.

      The implication is clear. Nuclear accidents are no different from other industrial accidents. There is a clear need to worry about genuinely dangerous short lived radioactive isotopes just like we worry about flames and dense smoke from other accidents or natural calamities like bush fires. But we don’t need to waste time, energy and money cleaning up levels of radioactive contamination that don’t do anything. Sandblasting trees, roofs, driveways and especially putting valuable topsoil in black plastic bags, like they have been doing in Japan since 2011, makes as much sense as sending the army out into the bush to sandblast blackened tree trunks after a bush fire.”

      • A blog? But the 2009 study says.

        “In conclusion, the cancer incidence study in Karunagappally, India, showed no HBR-related excess of malignant tumors. Although the statistical power of the study might not be adequate due to the low dose, these findings suggest it unlikely that estimates of cancer risk at low doses are substantially greater than currently believed.”

      • Peter
        Yes the Kerala thorium sands are an important example of unusually high natural radioactivity and the absence of any negative health effects of such activity.

      • Peter Lang

        Yes. And other places with high natural radiation levels have been studied too. These places have lower cancer rates, and people are healthier and live longer than in places with lower natural radiation levels.

        Some time ago I saw a modified LNT chart. The line is linear with increasing doses at medium and high doses, but it drops to zero below a threshold and is negative at low doses. I’ve tried to find the chart to post here, but can’t. Have you seen it? If you have, could you please post it here?

    • The Chernobyl accident – UNSCEAR’s assessments of the radiation effects

      Excerpt:

      “Apart from the dramatic increase in thyroid cancer incidence among those exposed at a young age, and some indication of an increased leukaemia and cataract incidence among the workers, there is no clearly demonstrated increase in the incidence of solid cancers or leukaemia due to radiation in the exposed populations. Neither is there any proof of other non-malignant disorders that are related to ionizing radiation. However, there were widespread psychological reactions to the accident, which were due to fear of the radiation, not to the actual radiation doses.

      There is a tendency to attribute increases in the rates of all cancers over time to the Chernobyl accident, but it should be noted that increases were also observed before the accident in the affected areas. Moreover, a general increase in mortality has been reported in recent decades in most areas of the former Soviet Union, and this must be taken into account when interpreting the results of the accident-related studies.

      The present understanding of the late effects of protracted exposure to ionizing radiation is limited, since the dose-response assessments rely heavily on studies of exposure to high doses and animal experiments. Studies of the Chernobyl accident exposure might shed light on the late effects of protracted exposure, but given the low doses received by the majority of exposed individuals, any increase in cancer incidence or mortality will be difficult to detect in epidemiological studies.”

      • “Although the effects of high dose radiation on human cells and tissues are relatively well defined, there is no consensus regarding the effects of low and very low radiation doses on the organism. Ionizing radiation has been shown to induce gene mutations and chromosome aberrations which are known to be involved in the process of carcinogenesis.” https://www.ncbi.nlm.nih.gov/pubmed/29333114

        So what does an increase in radiation doses – from multiple sources – mean for billions of people?

        “Chernobyl accident exposure might shed light on the late effects of protracted exposure, but given the low doses received by the majority of exposed individuals, any increase in cancer incidence or mortality will be difficult to detect in epidemiological studies.”

        According to neo-nuclear evangelists, you can eat it for breakfast.

      • Peter
        The rural folk near Chernobyl, including many old people, were evacuated from ever-expanding areas as the radiation level considered dangerous was constantly revised lower and lower, eventually overlapping with natural activity levels. All due to the nuclear-phobic hectoring and bullying influence of the international community. A large proportion of the old people moved in this way died not long after being evacuated. These old village folk, moved from where they had lived all their lives, were stressed and their lives drastically shortened. It was a kind of holocaust attributable to the LNT hypothesis. I’ve seen the same thing happening with elderly people moving from Russia or Ukraine/Belarus to Europe and America. The culture shock and stress sometimes causes early death.

        Way more died from this than from radiation in connection with Chernobyl.

      • Peter Lang

        Phil,

        Yes. All correct. WHO, UNSCEAR and The Chernobyl Project have been reporting this for some 30 years.

    • US EPA estimates that 15,000 to 35,000 deaths per year are attributable to coal-fired electricity generation (full life-cycle analysis). Approximately zero from nuclear.

  53. Nuclear radiation is relatively harmless

    “Popular concern about nuclear radiation focusses in particular on the effect of internal radiation — that is on radiation emitted over an extended period by radioactivity absorbed into the human body itself. In the Fukushima accident attention has centred on Caesium-137 which spreads throughout the body and has a 30-year radioactive lifetime, even though at Chernobyl no casualty could be linked to it. Any accident involving internal doses 1000 times greater than any measured at Fukushima would provide a convincing demonstration of any risk — that is above a few million Bq. Such an accident happened at Goiania, Brazil, in 1987 when a Caesium-137 radiotherapy source of 20 TBq was stolen and broken open. It glowed with an enticing blue light and children painted themselves with it, spreading it around their home and kitchen, and their neighbours were invited in to see and admire. When finally resolved, 249 people had been contaminated, internally or externally. Four died of ARS including a girl with an internal radioactivity of 1000 million Bq. In addition 28 had serious burns requiring surgery. Since the accident two babies were born to women with a high internal dose, one with 0.2 million Bq who was pregnant at the time and another with 300 million Bq who gave birth 3 years 8 months later. No problems with the births have been reported. Now, more than 25 years later the total number of cancers reported with any possible link to the radiation is zero. How can this be? The internal radioactivity that spread throughout their bodies gave a dose protracted over many months which enabled the action of the repair and adaptive responses. Certainly the residents of Fukushima need have no concern whatever on account of Caesium-137 and the work of decontamination is just not necessary.


    The sight of officials in protective clothing suggests danger and kills confidence

    But residents of the contaminated regions of Fukushima have other worries. The sight of officials with meters and protective gear probing a children’s playground would be enough to frighten the most hardened parent. Except within the plant itself this gear cannot be necessary. It may be an exercise in authority and “I am an official doing something important” but it certainly damages public confidence.”

  54. This is all anthropogenic. Cesium-137 has a 30-year half-life – so there are no naturally occurring sources of any significance. One cubic meter of seawater carries about 300 to 60,000 BED (banana equivalent dose) of natural radioactivity. So nothing to get too excited about.
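
    For a rough sense of scale, one can compare by potassium-40 activity alone – a crude proxy, since BED is properly defined by dose (about 0.1 μSv per banana) rather than activity. The two input values below are assumed, literature-typical figures, not numbers from this thread:

      # Banana-equivalent comparison by K-40 activity only.
      seawater_k40 = 12.0    # Bq of K-40 per litre of seawater (assumed)
      banana_k40 = 15.0      # Bq of K-40 per banana (assumed)

      bq_per_m3 = seawater_k40 * 1000.0    # 1 m^3 = 1,000 litres
      print(f"{bq_per_m3:.0f} Bq/m^3 ~ {bq_per_m3 / banana_k40:.0f} bananas")

    That gives about 12,000 Bq/m³, on the order of 800 bananas’ worth of K-40 – inside the 300 to 60,000 BED range once other nuclides and dose weighting are allowed for.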

    But there are of course much longer lived actinides floating around in dust clouds all over the planet. And many of us are at exposure levels – even if we don’t live in Fukushima or Chernobyl – where detectable effects are found.


    https://www.arpansa.gov.au/understanding-radiation/radiation-sources/more-radiation-sources/ionising-radiation-and-health

    And if only 1% of cancer deaths – epidemiologically undetectable – were caused by anthropogenic radiation, that would be some 100,000 deaths per year out of the roughly 10 million cancer deaths occurring worldwide annually. In contrast – coal plants can be and are being made (at much less cost than nuclear) to be very clean. 🤣

  55. Peter Lang, RIE; thank you, a good informative read. (Maybe I’ll shift my worries to the next Eddy cycle peak. 🤔)

  56. Dr. Curry, the Teenage Super Sleuths have a bunch of new videos out about Global Warming. Please share them with your readers. It is important that we encourage the youth to understand the science behind climate change.

    The Teenage Super Sleuths have made some videos about your website.

  57. The Linear No-Threshold Relationship Is Inconsistent with Radiation Biologic and Experimental Data
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2663584/

    Chernobyl early firefighters: Not LNT – see slide 29 here: http://efn-usa.org/component/k2/item/693-why-fear-of-radiation-is-wrong-personally-scientifically-environmentally-wade-allison-uk-1-news
    Above 4,000 mSv 27/42 died of acute radiation sickness in 2 to 3 weeks. Below 2,000 mSv zero out of 195 died.
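
    A quick way to bound the risk implied by the zero-death group is the “rule of three”: with no events among n subjects, an approximate 95% upper confidence limit on the underlying risk is 3/n. A minimal sketch using only the figures quoted above:

      # Upper 95% bound on acute-death risk below 2,000 mSv, given 0/195.
      n = 195
      rule_of_three = 3.0 / n                  # ~0.015
      exact = 1.0 - 0.05 ** (1.0 / n)          # exact binomial bound for 0/n
      print(f"rule of three: {rule_of_three:.3f}, exact: {exact:.3f}")

    Both come out near 1.5%, against 27/42 (64%) mortality above 4,000 mSv – the step-like contrast the slide is pointing to.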

    What is wrong with using the LNT hypothesis to predict cancer risk?

    “First, there is no statistically significant data that supports the use of this hypothesis to predict cancer risk at low dose, which is why it is still a hypothesis 58 years after it was adopted. The LNT hypothesis is employed to calculate hypothetical risks. It creates uncertainty and great fear about potential cancer risks from low radiation doses.

    Second, there are enormous amounts of data on the biological effects of: a low radiation dose, repeated doses of radiation, and low radiation levels, which contradict the predictions of the LNT hypothesis (Cuttler 2013; 2014). These data were recorded over more than 115 years, from the late 1890s until the present time. Compliance with the requirements of The Scientific Method should have led the NAS to reject the LNT hypothesis instead of adopting it in 1956.

    Third, it continues to be defended as being a “conservative” means of radiation protection by requiring the minimizing of radiation exposures to as low as reasonably achievable (ALARA). This policy has led to precautionary measures, such as emergency forced evacuations, which cause many premature deaths and enormous psychological suffering due to fears of cancer.”
    http://radiationeffects.org/wp-content/uploads/2014/10/Cuttler-2014Oct_LNT-Guide-for-perplexed.pdf

    • Dangerous doses of nuclear radiation


      The mortality of early firefighters at Chernobyl is shown by crosses. The numbers give died/total in each dose range. The curve is for rats.

  58. “A finding of note is that family pedigrees living in the HNBR area were found to have an increased level of germ-line point mutations between mothers and their offspring [42]. This implied that the radioactive conditions accelerated the mutations that have been evolutionary hotspots for more than 60 000 years.” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4030667/

    A very readable review of the relevance of studies in high natural background radiation (HNBR) areas (Guarapari, Brazil; Kerala, India; Ramsar, Iran; Yangjiang, China), including radon-prone areas, to low-dose risk estimation.

    We may swap formal studies – or in Peter’s case blog sites – all day. But at the end of the day, sample sizes and controls are insufficient for high statistical power. That is certainly not to say there is no demonstrated cause for concern even with low additional (above natural sources) radiation exposure.

  59. It Is Time to Move Beyond the Linear No-Threshold Theory for Low-Dose Radiation Protection

    Abstract

    “The US Environmental Protection Agency (USEPA) is the primary federal agency responsible for promulgating regulations and policies to protect people and the environment from ionizing radiation. Currently, the USEPA uses the linear no-threshold (LNT) model to estimate cancer risks and determine cleanup levels in radiologically contaminated environments. The LNT model implies that there is no safe dose of ionizing radiation; however, adverse effects from low dose, low-dose rate (LDDR) exposures are not detectable. This article (1) provides the scientific basis for discontinuing use of the LNT model in LDDR radiation environments, (2) shows that there is no scientific consensus for using the LNT model, (3) identifies USEPA reliance on outdated scientific information, and (4) identifies regulatory reliance on incomplete evaluations of recent data contradicting the LNT. It is the time to reconsider the use of the LNT model in LDDR radiation environments. Incorporating the latest science into the regulatory process for risk assessment will (1) ensure science remains the foundation for decision making, (2) reduce unnecessary burdens of costly cleanups, (3) educate the public on the real effects of LDDR radiation exposures, and (4) harmonize government policies with the rest of the radiation scientific community.”

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6043938/

  60. LNT RIP: It is time to bury the linear no threshold hypothesis

    “The potential health effects of low doses of ionizing radiation (IR) have long been a concern as well as a source of controversy. Many have felt that the scientific questions were definitively answered by the Life Span Study (LSS) and that the effects of radiation can be modeled with a linear no threshold (LNT) model. We must remember that science—true science—is never static. Science is never settled; it is an iterative process. When a theory fails to explain new data, it must be adapted or rejected in a never-ending process of evolution. Newer insights into the cellular defenses with respect to IR have come from recent clinical and molecular studies and were recently summarized elegantly in these pages.1 We believe it is important to extend these observations since, as we will demonstrate, the existence of effective cellular defense mechanisms against IR make the LNT hypothesis untenable.”

    “When science becomes “settled,” it ceases to be science and becomes dogma. When a hypothesis is inconsistent with new data, it must be modified or rejected. LNT, though simple to apply, clearly cannot stand up to scientific or even logical scrutiny.

    LNT is no longer tenable. Using it to predict the health risks of IR in low doses is indefensible and should be abandoned.”
    https://link.springer.com/article/10.1007/s12350-019-01646-7

  61. “The committee concludes that the current scientific evidence is consistent with the hypothesis that there is a linear, no-threshold dose-response relationship between exposure to ionizing radiation and the development of cancer in humans.” https://www.nap.edu/read/11340/chapter/15#323

    There is no support at the institutional level for any but the LNT. And this simply argues for limiting exposure as much as practical.

  62. Controversy of LNT Model
    Excerpt:

    “The problem with this model is that it neglects a number of defensive biological processes that may be crucial at low doses. The research during the last two decades is very interesting and shows that small doses of radiation given at a low dose rate stimulate the defense mechanisms. Therefore the LNT model is not universally accepted, with some proposing an adaptive dose–response relationship where low doses are protective and high doses are detrimental. Many studies have contradicted the LNT model and many of these have shown an adaptive response to low dose radiation resulting in reduced mutations and cancers. This phenomenon is known as radiation hormesis. There are alternative assumptions for the extrapolation of cancer risk vs. radiation dose to low-dose levels, given a known risk at a high dose: the LNT model and the hormesis model.”
    https://www.nuclear-power.net/nuclear-engineering/radiation-protection/radiobiology/linear-no-threshold-model/controversy-of-lnt-model/
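
    The competing extrapolations are easy to state side by side. Below is a minimal sketch of the three shapes at issue – LNT, threshold, and hormetic – with arbitrary illustrative parameters; nothing here is fitted to data:

      # Excess-risk shapes in arbitrary units: dose in, excess risk out.
      def lnt(dose, slope=1.0):
          return slope * dose                      # some harm at any dose

      def threshold(dose, t=100.0, slope=1.0):
          return slope * max(0.0, dose - t)        # no harm below t

      def hormetic(dose, t=100.0, slope=1.0, benefit=20.0):
          if dose > t:
              return slope * (dose - t)            # harm above t
          x = dose / t
          return -4.0 * benefit * x * (1.0 - x)    # protective dip below t

      for d in (0, 25, 50, 100, 200, 400):
          print(d, lnt(d), threshold(d), round(hormetic(d), 1))

    The regulatory argument running through these excerpts is over which of these shapes the low-dose epidemiology can actually distinguish.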

  63. Yet another blog? You going to do this all day?

    “The Radiation Health and Safety Advisory Council supports the continued appropriate use of the ‘linear no-threshold’ (LNT) model as a regulatory tool. The effects of high doses of radiation are extensively researched and well known. Radiation effects at low doses and dose rates remain a subject of scientific research and the focus of investigation by major international organisations. There is established evidence of harm at exposures of populations at ionising radiation doses above approximately 100 mSv. However, as the dose decreases, the power of epidemiological studies becomes less and less, although there may be sensitive subgroups within the population for which increased frequency of occurrence of specific disease types may be discernible.”
    https://www.arpansa.gov.au/sites/default/files/rhsac_-_position_statement_on_the_use_of_the_lnt_1_may_2017.pdf

  64. Radiation Hormesis: Historical Perspective and Implications for Low-Dose Cancer Risk Assessment
    Abstract:
    Current guidelines for limiting exposure of humans to ionizing radiation are based on the linear-no-threshold (LNT) hypothesis for radiation carcinogenesis under which cancer risk increases linearly as the radiation dose increases. With the LNT model even a very small dose could cause cancer and the model is used in establishing guidelines for limiting radiation exposure of humans. A slope change at low doses and dose rates is implemented using an empirical dose and dose rate effectiveness factor (DDREF). This imposes usually unacknowledged nonlinearity but not a threshold in the dose-response curve for cancer induction. In contrast, with the hormetic model, low doses of radiation reduce the cancer incidence while it is elevated after high doses. Based on a review of epidemiological and other data for exposure to low radiation doses and dose rates, it was found that the LNT model fails badly. Cancer risk after ordinarily encountered radiation exposure (medical X-rays, natural background radiation, etc.) is much lower than projections based on the LNT model and is often less than the risk for spontaneous cancer (a hormetic response). Understanding the mechanistic basis for hormetic responses will provide new insights about both risks and benefits from low-dose radiation exposure.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2889502/

    • This is especially important as we are beginning to think about extended space flight. We can’t perpetuate bad science that prevents us from learning how to adequately protect astronauts while allowing them to undertake their missions.

    • pochas94

      This is true. However, space flight now and in the foreseeable future involves only tens of people per year. By far the most important reason to increase the allowable radiation limits from As Low As Reasonably Achievable (ALARA) to As High As Relatively Safe (AHARS) – where relative means relative to other electricity generation technologies – is the impact the ALARA regulations have had on the cost of electricity. They have been a primary cause of the regulatory ratcheting that has increased the cost of nuclear power by a factor of ten compared with what it could have been by now. This has disrupted the deployment of nuclear globally and caused an estimated 4.2 to 9.5 million deaths in the 35 years from 1980 to 2015.

      This first figure below shows what happened to the cost of nuclear power plants. The second shows how the deployment was disrupted. The cause was fear spread by the anti-nuclear protest movement, the MSM and entertainment industries, which led to ALARA and regulatory ratcheting.


      Figure 1. Overnight construction cost (in 2010 US $/kW) plotted against cumulative global capacity (GW), based on construction start dates, of nuclear power reactors for seven countries, including regression lines for US before and after 32 GW cumulative global capacity.


      Figure 5. Annual global capacity of construction starts and commercial operation starts, 1954–2015.

      Source: https://www.mdpi.com/1996-1073/10/12/2169/htm

      • The increased cost of electricity compared with what it could have been is negatively impacting global economic growth and the welfare of 7.4 billion people, not just the tens of people involved in space flight.

      • As an aside, I once had some business with a nuclear station and heard tales of trades people who would “burn out” their radiation badges so they had to move on. They would do an annual circuit of nuclear plants, burning out at each, and grow wealthy in doing so. I hope they stayed healthy.

        I am sure they would have stayed healthy. I’ll tell you a wee story. Once a Canadian Broadcasting Corporation news crew made a visit to a site where I worked. When they passed through the radiation scanners on the way into the site (everyone has to do it every time they arrive at or leave the site), the radiation detectors were set off. Security scanned everyone and confiscated their TV cameras – the thorium in the lenses had set off the detectors – and held them for a day or two. The point made to the alarmist media was that they were copping more radiation every day than the people working at the site.

    • Peter, you’re right again. See if you can get Ellison to read these and understand them.
      Dose Response. 2010; 8(2): 172–191.
      Published online 2010 Jan 18. doi: 10.2203/dose-response.09-037.Vaiserman
      PMCID: PMC2889502
      Radiation Hormesis: Historical Perspective and Implications for Low-Dose Cancer Risk Assessment
      Alexander M. Vaiserman
      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2889502/ full
      [T]here is a growing body of experimental and epidemiological evidence that does not support the use of the LNT model for estimating cancer risks at low doses of low linear-energy-transfer radiation (Cohen 1995; Azzam et al. 1996; Redpath et al. 2001; Boreham et al. 2006; Sakai et al. 2006; Day et al 2007; Mitchel 2007a,b; Portess et al. 2007; Sanders and Scott 2008; Tubiana et al. 2006, 2009). Instead, the results support the existence of hormetic-type, dose-response relationships with low doses and dose rates being protective and high doses causing harm (Calabrese 2008, 2009b; Pollycove and Feinendegen 2008; Scott, 2007, 2008). The literature that shows health benefits from increased exposure to ionising radiation has more than 3000 reports (Luckey 2008a). The average level of natural radiation for the Earth is 2–3 mSv/y (UNSCEAR 2000, Annex B). According to T.D. Luckey ‘we would have abundant health for any increased level up to the threshold, almost 3000 times the ambient level’ with dose rates from 3 mSv/y to 8 Sv/y (Luckey 2008a, figure 3).
      The radiation hormesis model, unlike the LNT model, assumes that adaptive/protective mechanisms can be stimulated by low-dose radiation and that they can prevent both spontaneous and toxicant-related cancers as well as other adverse health effects (Calabrese et al. 2007). Such stimulated adaptive protection can thereby improve health (Prekeges 2003).

      Br J Radiol. 2005 Jan;78(925):3-7.
      Evidence for beneficial low level radiation effects and radiation hormesis.
      Feinendegen LE1.
      https://www.ncbi.nlm.nih.gov/pubmed/15673519 abstr
      Low doses in the mGy range cause a dual effect on cellular DNA. One is a relatively low probability of DNA damage per energy deposition event and increases in proportion to the dose. At background exposures this damage to DNA is orders of magnitude lower than that from endogenous sources, such as reactive oxygen species. The other effect at comparable doses is adaptive protection against DNA damage from many, mainly endogenous, sources, depending on cell type, species and metabolism. Adaptive protection causes DNA damage prevention and repair and immune stimulation. It develops with a delay of hours, may last for days to months, decreases steadily at doses above about 100 mGy to 200 mGy and is not observed any more after acute exposures of more than about 500 mGy (0.5 Gy; for low-LET radiation roughly 50 rem, since 1 Gy ≈ 1 Sv = 100 rem).
      also – https://www.ncbi.nlm.nih.gov/pubmed/19330141

      Evidence Supporting Radiation Hormesis in Atomic Bomb Survivor Cancer Mortality Data
      Mohan Doss
      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3526329/ abstr
      A major flaw in the standard ERR [excess relative risk] formalism for estimating cancer risk from radiation (and other carcinogens) is that it ignores the potential for a large systematic bias in the measured baseline cancer mortality rate, which can have a major effect on the ERR values. Cancer rates are highly variable from year to year and between adjacent regions and so the likelihood of such a bias is high. Calculations show that a correction for such a bias can lower the ERRs in the atomic bomb survivor data to negative values for intermediate doses. This is consistent with the phenomenon of radiation hormesis, providing a rational explanation for the decreased risk of cancer observed at intermediate doses for which there is no explanation based on the LNT model. The recent atomic bomb survivor data provides additional evidence for radiation hormesis in humans.
      In summary, consideration of the effect of systematic bias in the baseline cancer mortality rate has uncovered the phenomenon of radiation hormesis in the atomic bomb survivor data. This provides a rational explanation for the observed reduced cancer incidence in the intermediate dose range among the atomic bomb survivors, whereas the LNT model is incapable of explaining this significant feature of the data.

      American Journal of Clinical Oncology:
      Post Author Corrections: November 3, 2015
      doi: 10.1097/COC.0000000000000244
      Review Article: PDF Only
      The Birth of the Illegitimate Linear No-Threshold Model: An Invalid Paradigm for Estimating Risk Following Low-dose Radiation Exposure.
      Siegel, Jeffry A. PhD; Pennington, Charles W. MS, MBA; Sacks, Bill PhD, MD; Welsh, James S. MS, MD, FACRO
      Published Ahead-of-Print
      Abstract

      This paper examines the birthing process of the linear no-threshold model with respect to genetic effects and carcinogenesis. This model was conceived >70 years ago but still remains a foundational element within much of the scientific thought regarding exposure to low-dose ionizing radiation. This model is used today to provide risk estimates for cancer resulting from any exposure to ionizing radiation down to zero dose, risk estimates that are only theoretical and, as yet, have never been conclusively demonstrated by empirical evidence. We are literally bathed every second of every day in low-dose radiation exposure due to natural background radiation, exposures that vary annually from a few mGy to 260 mGy, depending upon where one lives on the planet. Irrespective of the level of background exposure to a given population, no associated health effects have been documented to date anywhere in the world. In fact, people in the United States are living longer today than ever before, likely due to always improving levels of medical care, including even more radiation exposure from diagnostic medical radiation (eg, x-ray and computed tomography imaging examinations) which are well within the background dose range across the globe. Yet, the persistent use of the linear no-threshold model for risk assessment by regulators and advisory bodies continues to drive an unfounded fear of any low-dose radiation exposure, as well as excessive expenditures on putative but unneeded and wasteful safety measures.

      • And then there’s…
        An Oxford physics estimate using the study of survivors of the atomic bombs, based on radiation exposure as they were tracked for many decades: 28 (acute) + 3 (thyroid) + c. 78 (solid cancers) + c. 3 (leukaemia). A crude but unbiased estimate – in any case fewer than 200 deaths.
        •Radiation is like other hazards –life has defenses
        •Low-dose repair time is on the scale of a day or so
        •Doses below threshold (100mSv) cause no damage.
        •Above threshold, permanent damage (scar tissue) results. Such scar tissue may remain benign, or later become malignant, like other scars

        Examples where radiation at 2-9 millisieverts (200-900 milliRem) per year and 40 millisieverts (4 Rem) per year did not increase cancer risk
        In Taiwan, there was an incident where radioactive steel rebar was used in a building and people lived with the radiation for many years.
        Recycled steel, accidentally contaminated with cobalt-60 (half-life: 5.3 y), was formed into construction steel for more than 180 buildings, which 10,000 persons occupied for 9 to 20 years. They unknowingly received radiation doses that averaged 0.4 Sv – a collective dose of 4,000 person-Sv.
        Based on the observed seven cancer deaths, the cancer mortality rate for this population was assessed to be 3.5 per 100,000 person-years. Three children were born with congenital heart malformations, indicating a prevalence rate of 1.5 cases per 1,000 children under age 19.
        The average spontaneous cancer death rate in the general population of Taiwan over these 20 years is 116 persons per 100,000 person-years. Based upon partial official statistics and hospital experience, the prevalence rate of congenital malformation is 23 cases per 1,000 children. Assuming the age and income distributions of these persons are the same as for the general population, it appears that significant beneficial health effects may be associated with this chronic radiation exposure.
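
        As a consistency check using only the numbers quoted above (and noting the source’s own caveat that the occupants’ age and income profile is merely assumed to match the general population):

          # Person-time implied by the quoted Taiwan rates, and expected
          # deaths at the background rate over the same person-time.
          observed_deaths = 7
          observed_rate = 3.5 / 1e5        # per person-year (quoted)
          background_rate = 116.0 / 1e5    # per person-year (quoted)

          person_years = observed_deaths / observed_rate   # 200,000 p-y
          expected = background_rate * person_years        # ~232 deaths
          print(f"{person_years:,.0f} person-years: ~{expected:.0f} expected "
                f"at background rates vs {observed_deaths} observed")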

        RADIATION EXPOSURE OF AIRLINE CREWS. (100 REMS EQUALS ONE SIEVERT)
        Epidemiological studies of cancer in aircrew find no increase in cancer (other than skin cancer for those who tan too much)
        There are about 250,000 pilots and flight attendants in the world. Another 450,000 pilots of other types.
        Full time pilots and flight attendants can get two to four times the regular amount of radiation in a year.
        2.2 mSv: airline crew member, short flights for one year
        3-6 mSv: airline crew member, cross-country flights, 900 hrs/yr for one year
        10 mSv: cooking with natural gas (radon) for a year
        5-15 mSv: one full-body CT scan for about 20 minutes
        6-18 mSv: one chest CT scan for about 10 minutes
        9 mSv: airline crew member, polar flights, such as Tokyo-NYC, 900 hrs/yr for one year
        13 mSv: smoking one pack of cigarettes per day for a year
        20 mSv: nuclear plant worker, maximum 5-year average*+
        50 mSv: nuclear plant worker, maximum total exposure in one year
        50-100 mSv: changes in blood chemistry
        100 mSv: lowest clearly carcinogenic level; 1 millimort
        So airline crew flying long haul routes for ten years would get 30-90 mSv and for 20 year would get 60-180 mSv
        Where is the increased cancer? The peer-reviewed studies do not find it. Not for the 10,000 people in Taiwan and not for the airline crews.

        This is not to exonerate radiation risk, just to point out that it’s not unalloyed.

  65. “Commentary No. 27 was produced by an interdisciplinary group of radiation experts who critically assessed recent epidemiologic studies of populations exposed to low dose and low dose-rate ionizing radiation. The studies were then judged as to their strength of support for the LNT model as used in radiation protection.

    NCRP concludes that the recent epidemiologic studies support the continued use of the LNT model for radiation protection. This is in accord with judgments by other national and international scientific committees, based on somewhat older data, that no alternative dose-response relationship appears more pragmatic or prudent for radiation protection purposes than the LNT model.

    The Commentary provides a critical review of 29 high-quality epidemiologic studies of populations exposed to radiation in the low dose and low dose-rate range, mostly published within the last 10 years. Studies of total solid cancers and leukemia are emphasized, with briefer consideration of breast and thyroid cancer, heritable effects, and some noncancers, e.g., cardiovascular disease and cataracts.” https://ncrponline.org/wp-content/themes/ncrp/PDFs/Product-attachments/commentry/27/overview.pdf

    Opposed to Alexander M. Vaiserman from the Laboratory of Mathematical Modeling of Aging Processes, Institute of Gerontology, Kiev, Ukraine?

    There is a balanced argument to be had by specialists. But one thing we should all be aware of is the sample size over decades required to reach even 80% statistical power.

    “A requirement that is often underestimated is the population size. The approximate size of the population required to detect differences in excess cancers with 80% statistical power can be calculated. These data can be roughly interpreted to estimate sample size requirements for a study of HNBR areas using particular annual doses. For example, if a mean internal plus external effective dose of 6 mSv y−1 is assumed, as might be received in the HNBR areas of China or Iran, and it is agreed to study the population having reached 10 years of age (or having received 60 mSv), lifetime surveillance of about 56 000 persons would be required to reliably observe about 45 excess cancer cases (assuming an incidence according to the linear no-threshold (LNT) model) among nearly 17 000 cancers likely to occur in a similar nonexposed population. While identifying a sample of 56 000 persons is achievable, the difficulty in conducting the follow-up of that many people during the remaining 60 years or more of life would be great.”
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4030667/
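
    To see how weak the signal is, here is a minimal sketch of a standard normal-approximation power calculation for comparing cancer counts in an exposed cohort against an equally sized unexposed one. The review’s own calculation may use different methods and inputs, so this is only a naive reading of the quoted figures:

      from math import sqrt, erf

      def norm_cdf(z):
          return 0.5 * (1.0 + erf(z / sqrt(2.0)))

      def power(baseline, excess, z_alpha=1.645):   # one-sided 5% test
          # Difference of two Poisson counts: variance ~ sum of the means.
          se = sqrt(baseline + (baseline + excess))
          return norm_cdf(excess / se - z_alpha)

      # ~17,000 expected cancers, ~45 LNT-predicted excess (quoted above)
      print(f"power ~ {power(17000, 45):.2f}")      # ~0.08

    On that naive two-cohort reading, 45 excess cases against a Poisson noise floor of roughly ±185 is essentially undetectable – which is exactly the statistical-power point at issue.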

    Up to 1.1 million in HNBR regions of China. Yet Peter wants to throw around quotes and numbers that support his very selective – and politically counterproductive, not to mention pointless – neo-nuclear evangelism.

    This is before we get to confounding factors and accurate exposure estimates. Look at me – I just had 2 BED (banana equivalent dose) for lunch. It’s going to be very difficult to estimate my lifetime exposure to bananas.

  66. This one surfaced again: http://www.thebigwobble.org/2019/06/dead-gray-whale-radiation-levels-were.html . Last week I visited a local fishing village/haunt that I remember in the past for its swordfish steaks. Mercury removed that delicacy from the menu. We humans are at the top of the food chain and accumulation is a problem (it still is for birds of prey from pesticides; as it was here for the lizard, cricket, resident warblers, bees … the silent, sterile age).
    What is the opinion out there? Keep in mind that most failures in nuclear are of ‘human’ origin, and there is no sign of that ‘curse’ abating. What is the science out there on accumulation?

  67. Epidemiology Without Biology: False Paradigms, Unfounded Assumptions, and Specious Statistics in Radiation Science

    Abstract:
    “Radiation science is dominated by a paradigm based on an assumption without empirical foundation. Known as the linear no-threshold (LNT) hypothesis, it holds that all ionizing radiation is harmful no matter how low the dose or dose rate. Epidemiological studies that claim to confirm LNT either neglect experimental and/or observational discoveries at the cellular, tissue, and organismal levels, or mention them only to distort or dismiss them. The appearance of validity in these studies rests on circular reasoning, cherry picking, faulty experimental design, and/or misleading inferences from weak statistical evidence. In contrast, studies based on biological discoveries demonstrate the reality of hormesis: the stimulation of biological responses that defend the organism against damage from environmental agents. Normal metabolic processes are far more damaging than all but the most extreme exposures to radiation. However, evolution has provided all extant plants and animals with defenses that repair such damage or remove the damaged cells, conferring on the organism even greater ability to defend against subsequent damage. Editors of medical journals now admit that perhaps half of the scientific literature may be untrue. Radiation science falls into that category. Belief in LNT informs the practice of radiology, radiation regulatory policies, and popular culture through the media. The result is mass radiophobia and harmful outcomes, including forced relocations of populations near nuclear power plant accidents, reluctance to avail oneself of needed medical imaging studies, and aversion to nuclear energy—all unwarranted and all harmful to millions of people.”

    https://link.springer.com/article/10.1007/s13752-016-0244-4

  68. However, this LNT discussion is a distraction from the key point, which is that no amount of scaremongering employing false implications can change the fact that nuclear is at least 100 times safer than coal and safer than any other electricity generation technology.

    It is disingenuous to imply that new coal plants (e.g. HELE) are safer than nuclear. They are actually only marginally safer than existing coal plants. They still require the mining, processing and transport of 20,000 tonnes of coal for every tonne of uranium. The deaths per tonne of coal mined haven’t changed. They still dump their fly ash, along with the toxic substances it contains, in dumps that erode into creeks, rivers, groundwater and oceans. This hasn’t changed. They still emit toxic pollutants through their smoke stacks to the atmosphere, and this is the cause of most of the deaths. There is some improvement in these emissions over current best practice in developed countries, but it is marginal. It may reduce deaths from around 15 to perhaps around 10 deaths/TWh. But that is still 110 to 250 times more deaths/TWh than from nuclear power.
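
    For what it’s worth, the “110 to 250 times” figure reproduces with simple division if nuclear is taken at roughly 0.04–0.09 deaths/TWh – a range commonly cited in life-cycle studies, though it is an assumed input here, not a number stated in the comment:

    # Ratio check for the deaths/TWh claim. The HELE figure (~10 deaths/TWh)
    # is from the comment; the nuclear range (0.04-0.09 deaths/TWh) is an
    # assumption taken from commonly cited life-cycle estimates.
    hele_coal = 10.0
    nuclear_low, nuclear_high = 0.04, 0.09

    print(f"{hele_coal / nuclear_high:.0f}x")  # ~111x at the high nuclear estimate
    print(f"{hele_coal / nuclear_low:.0f}x")   # 250x at the low nuclear estimate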

  69. “Radiation science is dominated by a paradigm based on an assumption without empirical foundation. Known as the linear no-threshold (LNT) hypothesis, it holds that all ionizing radiation is harmful no matter how low the dose or dose rate.”

    Radiation is one cause of cell damage; such damage is most often repaired, but very infrequently it results in proliferating or heritable mutations. Most heritable mutations are not viable. Life is a lottery.

    “Cancers and heritable mutations are called stochastic (probabilistic) effects. The cancer or mutation behaves the same whether the organ received a high absorbed dose or a low one, all that changes are the odds (probability) of a cancer forming or a mutation occurring.” https://www.arpansa.gov.au/understanding-radiation/what-is-radiation/ionising-radiation/health-effects
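
    Under LNT, the stochastic-risk arithmetic is pure linear scaling. A minimal sketch, using the ICRP’s nominal risk coefficient of roughly 5.5% per sievert – a figure added here for illustration, not taken from the linked page:

    # LNT scaling sketch. The ~5.5%/Sv nominal lifetime cancer risk
    # coefficient (ICRP Publication 103) is an assumption for illustration.
    RISK_PER_SV = 0.055

    def lnt_excess_risk(dose_msv):
        """Excess lifetime cancer risk, linear in dose with no threshold."""
        return (dose_msv / 1000.0) * RISK_PER_SV

    for dose_msv in (2.4, 6.0, 60.0):  # typical background/y, HNBR/y, 10 y in HNBR
        print(f"{dose_msv:5.1f} mSv -> {lnt_excess_risk(dose_msv):.3%} excess risk")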

    The argument seems to be that because most cell damage is repaired, we may add further cellular insult with impunity.

    Complete nonsense. As is Lang’s retreat from very uncertain epidemiological studies – something that was addressed in the balanced and authoritative sources I provided. At the national and international level LNT is under scrutiny but in no danger of being overturned. It is another Lang windmill.

    Particulate, SO2 and NOx emissions from HELE coal plants are 0.7, 15.1 and 17.2 mg/m3 respectively, compared with some 100, 100 and 1500 mg/m3 for well-operated older plants.

    These generating plants are about half the cost of nuclear – despite another of Lang’s windmills. The technology is critical not only to reducing pollutant emissions in new plants but also to adapting older plants cost-effectively – and to transitioning away from cooking over wood and dung more quickly. This is where the overwhelming preponderance of health risks lies – not mining, transport or ash ponds, ffs. Despite the numbers and scenarios pulled out of his arse. I have worked in all these areas as a consultant. I know best practice.

    “To provide [electricity] in today’s world, an ‘advanced reactor’ must improve over existing reactors in the following 4 core objectives. It must produce significantly less costly, cost-competitive clean electricity, be safer, produce significantly less waste and reduce proliferation risk. It is not sufficient to excel at one without regard to the others.” Dr. Christina Back, Vice President, Nuclear Technologies and Materials for General Atomics, May 2016 testimony before the US Senate Energy and Natural Resources Committee hearing on the status of advanced nuclear technologies.

    All this to defend the unnecessary release of large quantities of extremely long lived mutagenic substances into the world? It’s a poor excuse for sanity.

  70. Coal kills two to three orders of magnitude more people per TWh of electricity supplied than nuclear. So why do commenters here believe the ALARA approach should be applied to nuclear regulation when it is not applied to other technologies? Why are the other technologies not banned or taxed out of existence because of their health externalities? What has happened to objective science and engineering in this debate?

    These papers (and others) explain the appropriate response to major nuclear accidents [1–4]:

    1. Thomas, P. Quantitative guidance on how best to respond to a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 4-15. https://www.sciencedirect.com/science/article/pii/S0957582017302665.
    2. Thomas, P.; May, J. Coping after a big nuclear accident. Process Safety and Environmental Protection 2017, 112, 1-3. https://linkinghub.elsevier.com/retrieve/pii/S0957582017303166.
    3. Nuttall, W.J.; Ashley, S.F.; Heffron, R.J. Compensating for severe nuclear accidents: An expert elucidation. Process Safety and Environmental Protection 2017, 112, 131-142. https://www.sciencedirect.com/science/article/pii/S0957582016303032.
    4. Yumashev, D.; Johnson, P.; Thomas, P. Economically optimal strategies for medium-term recovery after a major nuclear reactor accident. Process Safety and Environmental Protection 2017, 112, 63-76. https://www.sciencedirect.com/science/article/pii/S0957582017302665?via%3Dihub.

  71. “Unable to maintain post-shutdown reactor cooling for the three operational reactors, core overheating occurred, and when nuclear fuel cladding reacted with high-temperature steam, hydrogen was formed, which accumulated and exploded.”

    In the history of bad ideas – this was a doozy. The papers just above are about post disaster economic optimization.

    “Experts were drawn from three principal communities: specialists on aspects of risk and insurance; lawyers concerned with issues of nuclear law; and safety and environmental regulators.”

    So where’s the science and engineering? There are more economic ways to go.

    “Nuclear energy – the most reliable source of clean non-intermittent electricity in the United States – is under threat from economic factors that could result in the premature closure of many or all of our current reactors. If the U.S. is to maintain this precious resource, which supplies 20% of our electricity needs and 60% of our low-carbon electricity generation, we must invest in cutting-edge technology known as Accident Tolerant Fuel (ATF). ATF can extend the life of current reactors by making them cheaper and nearly meltdown proof, while simultaneously paving the way for advanced nuclear reactors that can greatly exceed the capabilities of the current fleet.” http://www.ga.com/accident-tolerant-fuel

    But these are still hugely expensive dinosaurs that have predictably – it was predicted – resulted in hundreds of accidental releases, and which have now left hundreds of thousands of tons of high-level waste sitting in leaky drums and ponds around the world. What we need are advanced designs that are cheaper, can’t melt down, burn nuclear waste as fuel, and create far less and shorter-lived waste by closing the fuel cycle – waste that cannot be used in bombs. Oh – right.

    Remembering that:

    1. even with modest exposure (6 mSv annually rather than the 2–4 mSv background we all receive), researchers would be looking for about 45 excess cancers among some 17,000 expected cancers in a population of 56,000 – we don’t know what the impact of nuclear releases has been or will be;

    2. industry generally is now safer than it was even a decade ago.

    Then Peter’s oft-repeated numbers are technical gobbledygook.

    We have comprehensive water quality standards.

    http://www.waterquality.gov.au/guidelines

    And for air quality.

    https://www.environment.gov.au/protection/publications/factsheet-national-standards-criteria-air-pollutants-australia

    These exist for the protection of health and the environment. It is unlettered nonsense to suggest that any industrial technology is not measured against objective standards.

    For the foreseeable future HELE coal generation is the standard for the developing world and will contribute to saving many millions of lives both directly and indirectly through the economic growth it brings.


    [Figure omitted] Source: ASEAN energy equation

  72. New analysis of adapting to a 1.5C target by 2050:
    https://www.aalto.fi/en/department-of-design/15-degrees-lifestyles
    “1.5-Degree Lifestyles: Targets and options for reducing lifestyle carbon footprints”

    Key findings of the report

    * 3-2-1 tonnes per person by 2030-2040-2050. Globally, citizens and society need to aim for per-person consumption-based greenhouse gas emissions targets of 2.5 tCO2e in 2030, 1.4 by 2040, and 0.7 by 2050 in order to keep global temperature rise to within 1.5 degrees. The gap analysis reveals that footprints in the developed countries studied (Finland and Japan) must be reduced by 80–93% by 2050, assuming that actions for a 58–76% reduction, necessary to achieve the 2030 target, start immediately. Even for the developing countries studied (China, Brazil, and India), a 23–84% reduction, depending on the country and the scenario, would be required by 2050. [A rough arithmetic check follows this list.]
    * What we eat, how we live and move. Nutrition, housing, and mobility have the largest impact on climate change, accounting for approximately 75% of lifestyle carbon footprints. Hotspots include meat and dairy consumption, fossil-fuel based energy, car use, and air travel.
    * Low-carbon lifestyles. Options with large emission reduction potentials include: car-free travel and commuting, ride sharing, living closer to workplaces and in smaller living spaces; renewable grid electricity and off-grid energy, heat pumps for temperature control; and vegetarian-vegan diets, and substituting dairy products and red meat with plant-based options. If these options are fully adopted, each of them could reduce per-capita footprint by several hundred kg to over a tonne annually.
    * The limits of technology. The various reduction scenarios studied indicate that most of the existing emission scenarios assume extensive use of negative emission technologies and production-side efficiency improvement. Only a few scenarios for the 1.5-degree target have focused on lifestyle changes and demand-side actions. However, the actual availability, feasibility, and costs of technologies are uncertain, and thus solely relying on their assumed extensive and broad-ranging roll-out is a risky societal decision.
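
    As a rough check of the quoted percentages (the baseline footprint here is an assumption, approximately the report’s figure for Finland):

    # Check of the reduction targets, assuming a current footprint of
    # ~10.4 tCO2e/person (roughly the report's Finland figure; this
    # baseline is an assumption for illustration).
    baseline = 10.4
    targets = {2030: 2.5, 2040: 1.4, 2050: 0.7}  # tCO2e/person, from the report

    for year, target in targets.items():
        print(f"{year}: {target} t -> {1 - target / baseline:.0%} cut")  # 76%, 87%, 93%

    These land at the upper ends of the quoted ranges; the lower ends presumably correspond to countries with smaller baseline footprints, such as Japan.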

    • Far beyond what you imagine the imperatives of climate change to be, the engine of negative emissions is economic growth. Increased agricultural productivity, increased downstream processing and access to markets build local economies and global wealth. Economic growth provides resources for solving problems: conserving and restoring ecosystems; better sanitation and safer water; better health and education; updating the diesel fleet and other productive assets to emit less black carbon, nitrogen oxides and sulfur dioxide, reducing health and environmental impacts; developing better and cheaper ways of producing electricity; and replacing cooking with wood and dung with better ways of preparing food, avoiding respiratory disease and again reducing black carbon emissions.

      And meat is not only crucial to using marginal resources to provide food security and contribute to economic growth – modern grazing techniques are themselves a powerful negative-emissions technology.

  73. This discussion prompted a bit of research on my part. Electricity prices in $US per kWh ranged from 0.33 in Germany to 0.19 in France to 0.13 in the US to 0.08 in China, for all types of fuel. The average for all European countries except France is 0.25. [1] “France is the world’s largest net exporter of electricity due to its very low cost of generation, and gains over €3 billion per year from this.” [2] France generates 75% of its electricity from nuclear.

    The local availability of hydro, natural gas or coal helps tremendously, but otherwise nuclear is attractive economically. Political and regulatory delays that inflate capital costs, however, have seriously affected nuclear.

    [1]
    https://www.statista.com/statistics/263492/electricity-prices-in-selected-countries/

    [2]
    From http://www.world-nuclear.org/information-library/country-profiles/countries-a-f/france.aspx

    • Pochas94,

      Thank you for this contribution to the discussion. It’s important to recognise that nuclear power could now be around 1/10th of its current cost if not for the disruption that began in the late 1960s and continues to this day.

      The root cause of the disruption was arguably the success of the anti-nuclear power protest movement at scaring the hell out of the public. This caused and still causes politicians and regulators to make laws and regulations that have driven up the cost of nuclear power by a factor of ten compared with what it could be now.

      Figure 1. Overnight construction cost (in 2010 US $/kW) plotted against cumulative global capacity (GW), based on construction start dates, of nuclear power reactors for seven countries, including regression lines for US before and after 32 GW cumulative global capacity.
      Source: https://www.mdpi.com/1996-1073/10/12/2169/htm

      The focus has been on regulating nuclear power on the basis of As Low As Reasonably Achievable (ALARA) rather than As High As Relatively Safe (AHARS); the ALARA approach is not applied to the other technologies.

      The cost of the health impacts externality should be applied to all electricity generation technologies. If it were, the price of electricity from these technologies would increase by approximately the following amounts ($/MWh):
      Coal 141
      Natural gas 38
      Hydro 13
      Solar 4.1
      Wind 1.4
      Nuclear 0.8

      If the cost of the health externality had been applied to all technologies for the past 50 years, nuclear power would likely now provide the bulk of our electricity, reducing electricity costs by around a factor of ten, and avoiding around a quarter to a half million deaths per year.
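
      A sketch of what internalising that externality would mean arithmetically. The adders are the ones listed above; the baseline generation costs are hypothetical placeholders, not figures from this thread:

      # Internalising the health externality: add the $/MWh adder to a
      # baseline generation cost. Adders are from the comment above;
      # baseline costs are hypothetical placeholders for illustration.
      health_adder = {"coal": 141.0, "gas": 38.0, "hydro": 13.0,
                      "solar": 4.1, "wind": 1.4, "nuclear": 0.8}
      baseline = {"coal": 40.0, "gas": 50.0, "hydro": 50.0,
                  "solar": 45.0, "wind": 40.0, "nuclear": 90.0}  # $/MWh, assumed

      for tech in health_adder:
          print(f"{tech:8s}: {baseline[tech] + health_adder[tech]:6.1f} $/MWh all-in")

      The only point of the placeholder numbers is that relative rankings can flip once the externality is priced in.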

    • La Transition Énergétique:

      “Pointless, Costly And Unfair”

      “London, 12 June: A top French economist has slammed his country’s attempts to decarbonise its economy.

      Professor Rémy Prud’homme accuses the government of wasting money on schemes that will make almost no difference to the climate and will cause great harm to the poor.

      France already has relatively low carbon dioxide emissions because it gets most of its electricity from low-carbon sources like nuclear and hydro. But despite this the French government has embarked on a programme of building renewables. As Professor Prud’homme explains:

      “We are spending billions to switch from reliable low-carbon nuclear power to unreliable low-carbon renewables. This will almost certainly increase our carbon dioxide emissions rather than reduce them.”

      And the policies put in place are hitting the poor very hard, particularly those living in rural areas.

      “The government is forcing up the price of energy everywhere. There are some subsidy schemes to reduce the impact on the poorest, but these cannot do nearly enough to soften the blow, as the Gilets Jaunes protests have shown us.”

      The protests, now in their thirtieth week, and in which more than 4000 people have been injured, began as a demonstration against fuel price increases imposed as part of the government’s decarbonisation drive.”

      GWPF Press Release: https://mailchi.mp/3fdd3cf6f435/la-transition-nergtique-pointless-costly-and-unfair

      • “France. Nuclear plants provided 71.6 percent of the country’s electricity, the lowest share since 1988. This is a decline for the fourth year in a row and 7 percentage points below the peak year of 2005 (78.5 percent). France’s load factor at 67.7 percent was the fifth lowest in the world.” https://www.worldnuclearreport.org/IMG/pdf/20180902wnisr2018-lr.pdf

        France’s aging nuclear fleet seems far from reliable – more problematic and uncertain, with escalating cost and risk, and with no clear path to cost-competitive alternatives other than imported gas, apart from some minor increases in wind and solar, well within reasonable penetration levels for existing technology.

        There is a quick view of the global state of play at the bottom of this page.

        https://www.worldnuclearreport.org/

      • edimbukvarevic

        Madness!

        Absolutely correct. It is a clear demonstration of the madness of cult beliefs – such as the nuclear phobia that so many are gripped by.

        It’s frustrating that apparently intelligent people can’t recognise or acknowledge the bleeding obvious.

        Nuclear is high cost because of the massive costs imposed by 50 years of ridiculous and unjustifiable regulations.

        Nuclear regulations are based on the ALARA rather than the AHARS approach; this applies only to nuclear, not to competing technologies.

        The LNT hypothesis is wrong for low-level radiation. If this were corrected, and regulations were based on AHARS instead of ALARA, and the regulatory and design approval requirements were changed accordingly, we could move much more rapidly to SMRs and with them return to the rapid learning rates that existed up to about 1967, or perhaps even faster. Then costs could fall by around 25% per doubling of cumulative global capacity, or faster – cf. the learning rate for US air safety (deaths per passenger mile), which was 87% from 1960 to 2013 (see Note [AII] in Appendix C here: https://www.mdpi.com/1996-1073/10/12/2169/htm). Similarly, the learning rate for solar PV (with persistent strong public support, favourable regulatory environments and high financial incentives) has remained high at 10 to 47%.
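
        The learning-rate arithmetic works like this (a minimal sketch: the 25%-per-doubling rate is from the comment; the starting cost and capacity are hypothetical):

        import math

        # Learning curve: cost falls by LR with each doubling of cumulative
        # capacity. Starting cost and capacity are hypothetical placeholders.
        LR = 0.25              # 25% reduction per doubling (per the comment)
        b = math.log2(1 - LR)  # learning exponent, ~ -0.415

        def overnight_cost(capacity_gw, c0=5000.0, cap0=10.0):
            """$/kW after cumulative capacity grows from cap0 to capacity_gw."""
            return c0 * (capacity_gw / cap0) ** b

        for cap in (10, 20, 40, 80, 160):
            print(f"{cap:4d} GW -> ${overnight_cost(cap):,.0f}/kW")

        On these assumptions a factor-of-ten cost reduction takes about eight doublings of cumulative capacity (0.75^8 ≈ 0.10), which is the arithmetic behind the “could be 1/10th of current cost” claim.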

        I’ll repeat for those who have trouble understanding: nuclear is the safest way to generate electricity, by one to two orders of magnitude. How ridiculous is it, then, to advocate for coal instead?

        If not for the ridiculous anti-nuclear bias, the external cost of the health impacts of the different technologies would be internalised. Then, over time, nuclear would replace just about all other technologies. It would also supply all our transport fuels (gasoline/petrol, diesel, jet fuel, etc.).

        What a disaster the anti-nuclear phobia has been, and still is, for the world.

      • LNT is the model for national and international regulators – and, it seems, for the great preponderance of nuclear scientists. I have cited well-balanced reports to this effect, as opposed to blog sites and quite underwhelming authorities to the contrary. The sky-dragon-slayer economist at the top of the thread is another example.

        As for 10% of current costs: costs are nowhere near that low even in economies with low-paid but productive workforces and standardized designs.

        A reduction of 90% in materials? A 90% increase in productivity? Or is this decline the result of a magical acronym invented by Peter Lang? There is a thing called a sanity test in engineering. This fails it.

  74. “The cost of new nuclear plants is high, and this significantly constrains the growth of nuclear power under scenarios that assume ‘business as usual’ and modest carbon emission constraints. In those parts of the world where a carbon constraint is not a primary factor, fossil fuels, whether coal or natural gas, are generally a lower cost alternative for electricity generation.”
    http://energy.mit.edu/wp-content/uploads/2018/09/The-Future-of-Nuclear-Energy-in-a-Carbon-Constrained-World.pdf

    The MIT study recommends ways to reduce costs in the US – and these have very little to do with regulatory reform. Reduction to 10% of current capital costs is another delusional fantasy.

    The solution is standardization and production efficiencies – in factories especially.

    Modest carbon constraint – btw – is the Paris fallback for the parts of the planet where power demand is growing most strongly. Fossil fuels in modern plants save far more lives than more expensive options.

  75. HNBR

    https://judithcurry.com/2019/05/25/week-in-review-science-edition-102/#comment-893756

    Stochastic risk – and modern fossil fuel pollutants.

    https://judithcurry.com/2019/05/25/week-in-review-science-edition-102/#comment-893756

    Epidemiology.

    https://judithcurry.com/2019/05/25/week-in-review-science-edition-102/#comment-893669

    Modern fossil fuel generation at half the cost of nuclear is the rational choice for energy and for saving lives. Yet these sky dragon slayers have an evangelistic fixation on nuclear for reducing CO2 emissions. If only they could convince the world that you can safely eat radiation for breakfast. And magically reduce costs with a made-up Peter Lang acronym.
