by Judith Curry
A few things that caught my eye this past week.
Climate modeling suggests Venus may have been habitable [link]
Earthlike habitable planet discovered around the closest star to the Sun [link]
Piecing together the Arctic’s sea ice history back to 1850 [link]
Yes, climate change threatens #biodiversity. But the biggest threats to life remain guns, nets, and bulldozers [link] …
35 New Scientific Publications Confirm Ocean Cycles, Sun Are Main Climate Drivers: [link]
There Is No Scientific Method [link]
#climatechange feeds Chesapeake Bay algal blooms via droughts and downpours [link]
Climate change threatens species — but agriculture, hunting and fishing are far worse, researchers say [link]
Is the detection of sea level rise imminent? Wow, compare press release [link] with paper [link]
Sizing up the tsunami thread [link]
Seeing ‘Frankenstein’ through the lens of climate change [link]
Must science be testable? A defense of Popper [link]
Good article about the drought of U.S. landfalling hurricanes [link]
Academic Orthodoxy Is A Bigger Threat Than Climate Change [link]
Tweets can be used to create real-time maps of floods [link]
Should scientific research be funded publicly? [link]
Observed vs ‘Real’ Global Temperature. What thermometers do & don’t yet show! [link]
Ahem! Observed precip record too noisy to attribute obs changes in extreme US precip to human-caused climate change. [link]
More adjustments: “In the case they haven’t been adequately corrected, the Little Ice Age may appear colder than reality” [link]
How do volcanic eruptions affect climate? Found out in our new Nature Collection [link]
New Solar Research Raises Climate Questions — Interview with Prof Valentina Zharkova [link]
The anomalous change in the QBO in 2015-16 [link]
Greenland ice sheet ‘summit’ plunged to record low July temperature. So what? [link]
‘Effective radiative forcing from historical land use change’ [link]
NSIDC on Arctic Sea Ice: ‘A new record low September ice extent now appears to be unlikely.’ [link]
What would it take to achieve the Paris temperature targets? [link]
Kelvin Helmholtz #clouds resemble breaking #ocean waves [link]
Here is Rutgers Snow Lab analysis of NH snow cover. Not much change since mid-1990s. [link]
The tyranny of simple explanations [link]
“Intellectual orthodoxy is a bigger threat than climate change” [link]
Interesting new course. Everything is fucked: the syllabus [link]
Regarding the Kealey podcast — http://www.libertarianism.org/media/free-thoughts/science-doesnt-need-public-funding?utm_content=buffer33394&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer — I am a big fan of him, a true gentleman. In fact I am working on a book on this with him. My focus is how pro-AGW biased federal funding has screwed up climate science.
See this of mine as an example, NSF denies the existence of recent long-term (dec-cen) natural variability, so funds no research thereon: http://www.cato.org/blog/nsf-climate-denial?utm_content=buffer2695b&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer. They fund gobs of research on short term variability, to try to explain away the hiatus. This is fundamental, policy-driven funding bias.
Governments should not fund research where they have a pre-existing policy interest.
David, almost all federal research agencies let government policies decide funding, despite Eisenhower’s warning in his JAN 1961 farewell address to the nation about the danger that a “scientific-technological elite” might try to take control of US policy by that very method.
I think it is the other way around. The government has taken control of the scientific-technological elite, making them a lot less elite.
Quid pro quo arrangement: The US NAS controls review of budgets of federal research agencies.
The power that controls Earth’s climate and human destiny will be revealed at the London GeoEthics Conference on 8-9 Sept 2016.
To prepare for the unveiling, please read the last paragraph of Aston’s Nobel Lecture (12 DEC 1922) and page 7 of Kuroda’s biography describing an embarrassing question from a ~30 year old physicist to Aston’s lecture at the Imperial University of Tokyo (3 JUN 1936).
Hyperlinks in the abstract (word.docx or .pdf) give the history of the “SECRET” to be exposed in London on 8-9 SEPT 2016.
New Solar Research Raises Climate Questions — Interview with Prof Valentina Zharkova
Zharkova paper is here
outlining a prediction based on three cycles (21-23).
I have looked into solar activity for some years now; cycle activity usually comes in packets of smaller groups of 4-5 cycles and larger ones of 9-10 cycles (see graph below), hence I would suggest that a multi-decadal prediction based on three cycles is just not credible.
The best one can do (or at least what I did some 13 years ago) is to outline a general long-term envelope, as here:
which produced what I described as an
‘incredible coincidence’: predicting SC24 to a decimal point, though anything within ±5% could be considered a success.
However, the state of science is such that unless research is done under the umbrella of an academic or science research institution, there is very little if any recognition and absolutely no chance of acceptance.
prediction based on three cycles (21, 22 & 23)
Thanks, Vukcevic. Politicians are now publicly on notice that global cooling is very likely ahead.
The Zharkova paper has already been debunked in the peer-reviewed literature. It does not reflect past solar activity correctly based on a number of proxies, including IIRC 14C and 10Be. So it cannot predict the future except by chance.
Should not be much of a surprise.
The envelope in my graph is OK back to 1800, fails totally for 1760-1800, then ‘staggers’ back to Galileo’s early records, as shown in this independent (Svalgaard’s) rendition of the formula
It just shows that the sun goes through periodic ‘spasms’ of activity which cannot be easily explained, and even less reliably predicted.
Oh stop. The hypothesis can now be tested: we are going into a minimum and we get to see the results, the best kind of validation. As such, I fail to see how any other paper can invalidate the predictions.
If you follow precautionary logic to its conclusion, this is more of a threat than warming, far more, however likely or not. The LIA did happen after all, despite Mickey’s best efforts to revise it from history.
There is little doubt about two things:
– Sunspot activity is going into a prolonged minimum; the most optimistic forecast is for a minimum like that of the early 1900s, while some think it may be as severe as the Maunder. I think a Maunder-type minimum is unlikely, but a Dalton-type (early 1800s) is on the cards.
It is Zharkova’s method which is questionable, one more of the current crop of ‘johnny-come-latelies’. The situation in 2003 was the opposite: no one was predicting a downturn in solar activity, and NASA’s top experts were predicting the current cycle to be the highest ever. My suggestion that their prediction of a rise in solar activity might be wrong was ridiculed by their top man.
– Global temperature cooling is also very likely, but since 75% of the surface is oceans with high thermal capacity, it may take some time before it becomes a convincing downturn.
– However, land temperatures are responding faster; a downturn in the CET average may have already started
The NCAR SLR paper makes no sense, and the PR less so. The Pinatubo effect lasted at most 18 months, not the decade of the 1990s. The paper says the second decade of satellite altimetry shows a slower rate than the first decade, but does not put error bars around that finding. At least it admits there has been no detectable SLR acceleration yet, as predicted by models. Another small climb-down.
I love precise data with no error bars.
Oh, sorry – it is a model with no known error bars.
Just model some error bars.
Or, as Michael Mann suggests, throw it all away as pointless, and watch 24 hour news TV.
Let me help you by posting the caption’s explanation for your modeled curves.
PS Note that this graph from the press release is not provided in the paper itself. The paper’s supplemental materials display a different graph.
About a mile and a half downthread, I see that you eventually posted the caption for these modeled curves. Good on ya. But it shoulda been posted along with the graph, IMO.
They would avoid all this if they would just tune their models to match observational expectations.
Actually, a climb up
“Intellectual orthodoxy is a bigger threat than climate change” [link]
Yep. Doing away with the fact/value and fact/theory dichotomies seems to be the holy grail of the “new authoritarians” these days.
Whenever we get articles like this, I always say that the skeptics need to step up their game. Their ideas so far have been very poor to the extent of being easily refutable with basic science or observations. If Salby is your best effort, you are in trouble.
This crowd greeted Salby with a standing ovation. I hope you are well otherwise.
Name a better skeptic non-AGW theory then. You can’t. This is the point.
You feel a strong need for a theory describing something that we don’t understand. I feel that we should collect enough data first, then start working on a theory. You consistently put the cart before the horse.
Well, there is a theory that explains why we are 1 C warmer than pre-industrial times with a remaining imbalance and rising OHC, and so far no rivals for it. Even as far back as 1980, the significant and unprecedented warming of 0.7 C by now was predicted with this same theory.
GIANT NATURAL FLuCuAtIonS
Giant natural fluctuation models and anthropogenic warming
…While climate sceptics have systematically attacked AW, up until now they have only invoked GNFs. This has now changed with the publication by D. Keenan of a sample of 1000 series from stochastic processes purporting to emulate the global annual temperature since 1880. While Keenan’s objective was to criticize the International Panel on Climate Change’s trend uncertainty analysis (their assumption that residuals are only weakly correlated), for the first time it is possible to compare a stochastic GNF model with real data. …
Jim D says “…the skeptics need to step up their game. Their ideas so far have been very poor to the extent of being easily refutable with basic science or observations.”
What an amazing statement! We have had over 800,000 comments here, which at a guesstimate of 5 sentences per comment gives 4 million sentences of debate. Jim’s last sentence is arguably the most foolish of these millions. Any contenders for this title?
That he believes this yet spends his time here is interesting psychologically. Is there a name for this?
I’ve seen these comments which is why I am so sure that not a single semblance of a good alternative idea has come out of the skeptic side yet. I am still patiently waiting for it to show up. A good sign is that they are starting to knock down the goofy ideas now.
Your theory predicted not only a 0.7 C temperature rise, but also 0.5, 1.2, 1.5, an ice free Arctic ocean by 2016, and our children will not know what snow was. It failed to predict any pause. I won’t stoop that low.
It was already an established theory in 1980. If someone knew about it in 1950, they could have predicted a good fraction of a degree by the time CO2 had risen to 400 ppm (it was about 310 ppm then), and they would have been right despite the unprecedented nature of such a rise. Likewise today, we could say by the same principle that if it reaches 700 ppm, the climate won’t stabilize until it is 3 C warmer than now. It seems unprecedented, but there it is.
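The arithmetic behind that claim can be sketched with the standard simplified logarithmic relation ΔT ≈ S · log2(C/C0). This is a rough check only: the per-doubling sensitivity S = 3 °C and the 280 ppm pre-industrial baseline are assumed values for illustration, not results from this thread.

```python
import math

def equilibrium_warming(c_final, c_initial, sensitivity_per_doubling=3.0):
    """Simplified logarithmic relation: dT = S * log2(C_final / C_initial)."""
    return sensitivity_per_doubling * math.log2(c_final / c_initial)

# Equilibrium warming committed by 400 ppm relative to an assumed ~280 ppm pre-industrial:
dt_400 = equilibrium_warming(400, 280)   # ~1.5 C
# Equilibrium warming committed by 700 ppm:
dt_700 = equilibrium_warming(700, 280)   # ~4.0 C
print(round(dt_400, 1), round(dt_700, 1), round(dt_700 - 1.0, 1))
```

With these numbers the thread’s figures roughly check out: about 1.5 °C eventually committed at 400 ppm, and about 4 °C at 700 ppm, i.e. roughly 3 °C beyond the ~1 °C already realized.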
Professor Curry previously provided a link to a recent peer reviewed publication in a reputable journal which explains the observed temperature rises since the Industrial Revolution.
The authors seemed to adopt a scientific approach to the point where they admitted that they were surprised by the results of their research. However, they accepted facts. As I recollect, they tested their findings against another location, as verification. Seemed reasonable and correct to me.
Seek and Ye shall find.
Adding CO2 to air heats nothing. Removing CO2 from air cools nothing.
When I read threads with comments by JimD, it’s usually out of curiosity as to why other people read (and reply to) JimD’s comments.
Jim you are on course for a sophistry award.
Ever hear of the null hypothesis? ROFL
The null hypothesis says that the temperature today should be about the same as it was in 1800. Oops.
“The climate won’t stabilize until …” No. The climate won’t stabilize, period. It never has and it never will. Climate is an inherently chaotic system.
Exactly! This would be like a climate last seen 40 million years ago, and it might not be stable either because of all the tipping points such a large change can produce. It is a major step in climate, and there will be aftershocks going forwards.
Jim D: “the skeptics need to step up their game. Their ideas so far have been very poor to the extent of being easily refutable with basic science or observations.”
that is not a true statement. The skeptic claim that climate sensitivity is lower than the median estimate in IPCC AR(3), for example, was not refuted. IPCC has itself come out with lower estimates than the median estimate of AR(3).
The skeptic claim that extra CO2 would be good for terrestrial and marine life has also not been refuted.
Some skeptic claims may have been refuted, but by no means all.
Skeptical sensitivity estimates are within the IPCC error bars, so no refutation is necessary. Skeptics talk about life and extra CO2 but don’t consider climate change, so that kind of thing is beside the point until they also consider climate change as the IPCC does.
Jim D: The null hypothesis says that the temperature today should be about the same as it was in 1800.
That’s one null hypothesis. Another is that the spectral density function of natural mean temperature fluctuations has been constant over the last few thousand years and remains the same. That null hypothesis has not been refuted.
The standard deviation is considered to be about 0.2 C (e.g. Lovejoy), so we are now at 5 standard deviations from the mean. How many standard deviations are needed to refute the null hypothesis, and say something else is happening now?
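For what a 5-sigma excursion means under the simplest possible null, here is a quick check assuming a pure Gaussian with the quoted 0.2 °C standard deviation. Note this understates the probability if the fat tails Lovejoy discusses are real; it is a sketch of the textbook calculation, nothing more.

```python
import math

def gaussian_tail(z):
    """One-sided tail probability P(X > z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

sigma = 0.2            # assumed natural-variability standard deviation, C
anomaly = 1.0          # warming since pre-industrial, C
z = anomaly / sigma    # 5 standard deviations
p = gaussian_tail(z)
print(f"z = {z:.0f}, one-sided p = {p:.1e}")  # ~2.9e-07 under a pure Gaussian null
```

A fat-tailed distribution would assign a much larger probability to the same excursion, which is why the choice of null matters here.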
Skeptics talk about life and extra CO2 but don’t consider climate change, so that kind of thing is beside the point until they also consider climate change as the IPCC does.
You backed off from your claim of “easily refutable”.
I don’t know about all skeptics, but plenty have now written that the combined effects of 135 or so years of increasing CO2, increasing global mean temperature, and increasing rainfall have been beneficial on the whole to plant life. That is not easily refutable.
When I referred to the science, I was talking about the science behind the warming and the WG1-related parts. If you want to accept the warming and go into the biological/economic effects of it, I can’t comment much except to point at WG2 and WG3. I find a lot of people don’t even accept the full warming potential of BAU, so we are not there yet with even starting the discussion on WG2 and WG3, which is what I was getting at.
Jim D: If you want to accept the warming and go into biological/economical effects of it, I can’t comment much except to point at WG2 and WG3.
You wrote that skeptics do not consider climate change. That was false.
Not at the same time as CO2 benefits. Do you see them do that? Have they said it is still a benefit when it is 3 C warmer, and perhaps precipitation patterns have shifted?
Jim D: How many standard deviations are needed to refute the null hypothesis, and say something else is happening now?
You do not have even the beginning of knowledge about the spectral density function or autocorrelation function of a time series.
Try seeing what Lovejoy says. He comes at it from a statistical viewpoint, and for him 5 standard deviations is enough, fat tails too.
Jim D again fails to grasp what a skeptic is. It’s not someone with a better or different theory than the existing one, it’s someone who says the existing one doesn’t hold water.
Precipitation extremes. Two interesting conclusions. Climate models cannot resolve them, since even the finest grids are a factor of 8 too coarse. And there is no statistically reliable observational evidence for any increase. The egregiously bad 2014 National Climate Assessment said the opposite.
The current research process in ecological studies:
1. Assume that climate change is happening in a specific location (no actual data from that location is required).
2. Plug that assumption into a model.
3. Collect your PhD.
You have nailed it. Mann-made climate change!
Found that abstract to be quite interesting considering the numerous sources which ‘attribute’ the increases (observed?) in precipitation to AGW.
Then my mind went to the question (assuming the observations correct) as to what has caused the increases.
As Rud suggests, the starting point, of course, is: is there an increase?
An interesting paper indeed, with not surprising conclusions. I’m glad that it was published.
“An interesting paper indeed, with not surprising conclusions. I’m glad that it was published.”
How do you reconcile the results with alternatives which indicate that ‘extreme events’ are attributable to AGW?
Doing precipitation/runoff modeling for the Chesapeake is valuable stuff, even when there is no demonstrable connection to climate change. Nonetheless, I suspect the climate “hook” helped get the research funded and the paper published.
I despair that we have trained an entire generation of researchers and reviewers to close their eyes to the lack of actual, supporting (climate change) data in published papers.
Figured I should actually look up the supporting grant for the paper. It was a small part of the large, multi-year NOAA support for GFDL (at $2MM per year). So the climate “hook” came pre-selected.
Second paragraph of above comment stands.
The paper’s focus was on extreme *precipitation*.
Yes. Extreme ‘precipitation’ events (rain bombs?). Alternative sources attribute, for example, increases in precipitation in the central U.S. (16% if memory serves; I can locate the source later if you need) to the A in AGW. This paper seems to refute that by saying: ”In part because of large intrinsic variability, no evidence was found for changes in extreme precipitation attributable to climate change in the available observed record.”
My read. Yes, it’s changed, but not due to CC (and so therefore not the A in AGW).
Interestingly, if it’s changed must it not be due to ‘climate change’ whether man caused or not (presuming climate time scales)? Not expecting an answer to this, just interesting wording.
Yes, one of the problems here is ambiguity and vagueness. One person’s extreme weather is another’s sunny day at the beach.
One hopes the body of the paper (which is paywalled, so I haven’t read it) defines terms. However, the part which appeals to me — and which tends to confirm my own prior thinking — is that “extreme precipitation” events are so inherently noisy that it’s difficult to determine trends, much less attribute causality to anthropogenic influences. That may also be true of other types of extreme weather. Hurricanes perhaps? They seem to come and go in waves. Very “random” (but technically, chaotic). It’s my impression that it’s widely accepted that there’s essentially no evidence that tornado frequency/intensity has increased with temperature.
Things like drought intensity and duration or heat waves seem a little more robust to me. At the same time, I don’t personally have high confidence that we’re seeing unambiguous evidence of change in those patterns.
In sum, I read this abstract and take away a message that I like: Let’s not overstate the case. A lay opinion; it may take a century or longer to obtain a statistically significant positive result … assuming tightly defined criteria, of course.
If it’s easy to locate, I would like to see it. Much better to deal with direct claims directly. :)
Yes, and if I were in full apologist/advocate mode, I’d respond: Maybe they’re just not looking in the right places. And in truth, they may not be. It would be interesting to read the whole paper and see how they address the possibility of a false negative.
On this topic, I’m not much of one for “if not this, then that”. My POV is that AGW isn’t an argument that humans are 100% responsible for any and all conceivable weather events. At best, we’re having some effect on changes. Attributing our portion of it is not trivial, lots of competing methods and opinions. I tend to read such articles and say, “Ok, that’s interesting, filed for future reference.”
I guess I’d have to see the exact quote in context. I think I kinda understand what you’re getting at though.
Sorry this took so long. Internet outage. Apologies if writing appears shaky but my hands are settling down over time. :)
My memory was a bit off about percentages, but here’s the info and link: “Since 1900, average annual precipitation over the U.S. has increased by roughly 5%. This increase reflects, in part, the major droughts of the 1930s and 1950s, which made the early half of the record drier. There are important regional differences. For instance, precipitation since 1991 (relative to 1901-1960) increased the most in the Northeast (8%), Midwest (9%), and southern Great Plains (8%), while much of the Southeast and Southwest had a mix of areas of increases and decreases.”
And of course generalized attribution: “Evidence of long-term change in precipitation is based on analysis (for example, Kunkel et al. 2013) of daily observations from the U.S. Cooperative Observer Network. Published work shows the regional differences in precipitation. Evidence of future change is based on our knowledge of the climate system’s response to heat-trapping gases and an understanding of the regional mechanisms behind the projected changes (for example, IPCC 2007).”
I’ve not asked for the paper yet, but plan to shortly.
A section of the Kunkel abstract: “The causes of the observed trends have not been determined with certainty, although there is evidence that increasing atmospheric water vapor may be one factor. For hurricanes and typhoons, robust detection of trends in Atlantic and western North Pacific tropical cyclone (TC) activity is significantly constrained by data heterogeneity and deficient quantification of internal variability. Attribution of past TC changes is further challenged by a lack of consensus on the physical linkages between climate forcing and TC activity. As a result, attribution of trends to anthropogenic forcing remains controversial. For severe snowstorms and ice storms, the number of severe regional snowstorms that occurred since 1960 was more than twice that of the preceding 60 years. There are no significant multidecadal trends in the areal percentage of the contiguous United States impacted by extreme seasonal snowfall amounts since 1900. There is no distinguishable trend in the frequency of ice storms for the United States as a whole since 1950.” http://journals.ametsoc.org/doi/full/10.1175/BAMS-D-11-00262.1
Mentally, I’m extrapolating the increase in water vapor to be due to GW although not directly stated here.
No worries, these things happen. Thanks for the refs, skimming through the quotes the overall message I’m getting is consistent with the recent paper we’re discussing: no statistically significant trends. That could be a combination of poor data quality and/or that the signal simply hasn’t yet emerged from the noise.
In terms of total precipitation, I’d expect that to be the easiest to detect. My lay opinion is that it’s the more robust of these kinds of weather-event predictions. It makes physical sense that more WV in the atmosphere would tend to lead to “extreme” precipitation events and/or violent storms, but my impression (and this could be due to simple ignorance on my part) is that there is no clear consensus on how to define or test for those things.
Sheesh. I didn’t include the link for the first quote. It’s here: http://nca2014.globalchange.gov/report/our-changing-climate/precipitation-change
I’m surprised only that you suggest the NCA’s statement that “Since 1900, average annual precipitation over the U.S. has increased by roughly 5%” is not statistically significant, especially since only the areas defined as “Southeast and Southwest” showed a mixture of increases and decreases, and all others show increases of up to 9%.
But other than that I can only suggest that (for lack of a better definition)
“there is no clear consensus how to define or test for those things”: the number of events exceeding the 100-year and 500-year extreme precipitation thresholds seems to be increasing. Maybe even a 1000-year event there in S. Carolina.
“Results show that extreme precipitation events increased for most of the CONUS with the exception of the west region.”
http://link.springer.com/article/10.1007/s10584-015-1453-8 (darn paywall)
Perhaps I should have been more clear. I think that the total precipitation data and theory about their increase are the most robust. When we start dipping into things like frequency of discrete events, my general sense is that the data don’t clearly support the case for an increase in them. That doesn’t mean that I think the theories are necessarily wrong, just difficult to define and test because weather is noisy. There’s a big difference between looking at mean annual rainfall, and rainfall outliers fitting some definition of extreme. It’s going to be easier to get a statistically significant result from the former than the latter.
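That last point, that a trend in the mean is easier to detect than a trend in extremes, can be illustrated with a toy Monte Carlo. Everything here is made up for illustration (synthetic “daily rainfall” drawn from a Gaussian, a small imposed trend, a crude OLS t-test ignoring autocorrelation); it demonstrates only the statistical effect, not real precipitation behavior.

```python
import random
import statistics

random.seed(42)

def ols_slope_t(y):
    """t-statistic of the OLS slope of y against the time index (no autocorrelation correction)."""
    n = len(y)
    x = list(range(n))
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    resid = [yi - ybar - slope * (xi - xbar) for xi, yi in zip(x, y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope / se

def detection_rate(stat_fn, trend=0.001, years=60, trials=200, tcrit=2.0):
    """Fraction of synthetic records whose annual statistic shows a 'significant' upward trend."""
    hits = 0
    for _ in range(trials):
        annual = []
        for t in range(years):
            # 100 fake daily rainfall amounts; the mean drifts up by `trend` per year
            daily = [random.gauss(10.0 * (1 + trend * t), 5.0) for _ in range(100)]
            annual.append(stat_fn(daily))
        if ols_slope_t(annual) > tcrit:
            hits += 1
    return hits / trials

rate_mean = detection_rate(statistics.fmean)  # trend seen through the annual mean
rate_max = detection_rate(max)                # same trend, seen through the annual maximum
print(f"detected in mean: {rate_mean:.2f}, in annual max: {rate_max:.2f}")
```

On these made-up numbers the identical underlying trend is flagged far more often in the annual mean than in the annual maximum, simply because the extreme statistic is much noisier.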
Darn paywall indeed. So Wu (2015) used GHCN-D over 1951-2013, and they say, “In addition to mean precipitation, all precipitation events are divided into three categories: light, moderate, and heavy based on percentile thresholds. […] These changes were a result of both a shift in the mean state and the shape of the precipitation data distribution.”
It’s kind of difficult for me to argue that their result is flawed. I can’t even read the supplemental because it’s in an MS Word format my Linux box doesn’t recognize. Grrr.
The van der Wiel et al. (2016) abstract doesn’t tell us what historical record they use, the period of their analysis nor how they defined “extreme” precipitation. They do say:
It is shown that, for these models, integrating at higher atmospheric resolution improves all aspects of simulated extreme precipitation: spatial patterns, intensities and seasonal timing. In response to 2×CO2 concentrations, all models show a mean intensification of precipitation rates during extreme events of approximately 3-4% K−1. However, projected regional patterns of changes in extremes are dependent on model resolution. For example, the highest-resolution models show increased precipitation rates during extreme events in the hurricane season in the CONUS southeast, this increase is not found in the low-resolution model. These results emphasize that, for the study of extreme precipitation there is a minimum model resolution that is needed to capture the weather phenomena generating the extremes. Finally, the observed record and historical model experiments were used to investigate changes in the recent past. In part because of large intrinsic variability, no evidence was found for changes in extreme precipitation attributable to climate change in the available observed record.
So … it *could* be that they found differences in extreme events that were statistically significant in and of themselves, but not with good enough agreement to the model simulations for making an attribution. I don’t know how else to reconcile the seemingly different conclusions.
Bit of an older paper, but Groisman et al. (2005) has some good descriptions of the theory, considerations and challenges involved doing this kind of study.
Thanks Brandon. She responded posthaste with a copy of the work, so you’ve given me something to sink my teeth into.
These comments are about two different studies: one the bay and the other CONUS.
Greenland summit record cold. Typical WaPo: Greenland cooling is weather; Greenland warming is climate change. Except Greenland has been slightly net gaining ice the last two years, rather than losing as previously, a fact not mentioned by the usual suspects.
Re: the drought of U.S. landfalling hurricanes.
I predicted some time ago that a hurricane landfall in the months before the presidential election would push voters to support Democrats. Prediction confirmed: Hurricane Trump has already landed.
No hurricane has ever been as destructive.
The mentioned “tsunami thread” treads much too heavily on the actual threat.
QBO – from a PhD student in Oklahoma:
Another graph from same source:
Current QBO cycle showing complete disregard to the model we know. As ——– pointed out, closest is 1960:
Current QBO cycle showing complete disregard to the model we know.
Kinda makes one wonder how these errors accumulate in the GCMs.
August 11, a very hot day in models, a record cold day in Europe:
400+ppm is good for plants.
400+ppm is good for plants.
And frost is bad.
Sorry – the picture I linked to keeps changing daily.
It’s chilly in Europe, brrrrrrrrrrrrr.
Snow in the Alps in mid August.
Demonstrating that England is not Europe, it has been very hot here during August (for England). Today it is far too hot to work in my south-facing garden, and we had to abandon the beach hut as it is also south-facing.
Mind you, the Met Office in London is insisting that down here in the west country today there will only be brief intervals of sunshine. Hmmm.
We are supposed to be getting some very hot weather from Spain the next few days as a result of a Spanish plume.
Tony. What’s the outlook for Brexit? (You might want to reply on the politics thread.)
‘It’s been very hot here during August.’
Say, Tony, do yer think the English climate is changing?
O/t Great English cyclists!
I did say it was hot… for England. It’s been a pretty normal summer overall: some heat, some cloud, some rain, some wind.
If you have fingers of frigid arctic weather jabbing down onto the agricultural fields of the northern hemisphere during early August on any sort of consistent basis… it’s just a wonderful break from all that awful sunbathing. And those degenerates on nude European beaches will have to put their Speedos back on to keep their macaronis warm.
Same ol’ CET see-saw, Tony …
Prof. Sanjay Srivastava’s worldview and focus appears to follow that of IPCC’s former chair Rajendra Pachauri.
More data from the past which shows this period of warmth is NOT unique.
I am going to keep harping on this to expose how misleading the AGW enthusiasts are when presenting their case to the public.
Five cornerstones from where I come from on the climate issue are as follows:
1. Past history shows this period in the climatic record is not unique.
2. Past history shows that each and every time the sun enters a period of PROLONGED minimum solar activity, global temperatures have responded by falling. I have listed (in the past) the criteria, which were last met in the period 2008-2010. With that said, I think there is an excellent chance of these criteria being met presently, and this time the duration could be much longer.
3. There is a GHG effect, but I maintain it is more a result of the climate/environment rather than the cause of it.
4. If one looks at the climate just since 1950 to the present (to take a recent period of time) and factors in solar, volcanic activity, global cloud cover and ENSO versus the resultant temperature changes, one will find a very strong correlation.
5. Temperature data of late must be met with suspicion. I maintain satellite temperature data is the only valid temperature data.
Remember if global cloud coverage should increase and snow coverage/sea ice coverage should increase in response to prolonged minimum solar conditions that would accomplish the albedo to increase. Even a .5% to 1% increase would wipe out all of the recent warming.
Albedo is hard to change, and at the same time it takes very little change in it to have climatic effects.
It is similar to Ice Age conditions versus Inter-Glacial conditions: hard to go from one regime to the other, but at the same time the change required is very minimal. It is a balancing act which most of the time is in balance, but every so often factors conspire to throw it out of balance, as we know when we look at the climatic history of the earth.
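For what it’s worth, the albedo claim above can be put to a back-of-envelope check (a rough sketch only; it assumes the “.5% to 1%” means an absolute change in planetary albedo and uses a solar constant of 1361 W/m²):

```python
# Rough check: radiative effect of a small absolute increase in albedo.
# Assumptions: solar constant 1361 W/m^2; global-mean top-of-atmosphere
# insolation is S/4; "0.5% to 1%" read as absolute albedo change.
S = 1361.0
mean_insolation = S / 4.0  # ~340 W/m^2

for d_albedo in (0.005, 0.01):
    d_forcing = -d_albedo * mean_insolation  # reduction in absorbed solar, W/m^2
    print(f"albedo +{d_albedo}: {d_forcing:.1f} W/m^2")
# about -1.7 to -3.4 W/m^2, the same order of magnitude as the ~2.3 W/m^2
# total anthropogenic forcing estimated in IPCC AR5
```

On these assumptions the claim is at least dimensionally plausible; whether such an albedo change would actually occur is a separate question.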
CLIMATIC HISTORY, which is being totally ignored by the AGW movement, has to be kept in the forefront, and I am going to do that each and every time I combat their notion that this period of time in the climate is somehow unique.
I wonder what it is going to take to get the truth out about this period of time in the climate, which is in no way unique?
Agree completely. Additionally, one can cite:
the absence of any consistent correlation between temperatures and atmospheric CO2 concentrations over geological time scales.
the consistent failure of ‘projections’ of IPCC’s computer models of climate.
the absence of the tropical tropospheric ‘hot spot’ predicted by those same models.
the inability of those models to tell us where, when, & how much precipitation will occur.
The current warming has been ongoing for hundreds of years since the end of the Little Ice Age.
The Minoan, Roman and Medieval Warm Periods (all of which were at least as warm as the present one) occurred in the presence of much lower atmospheric CO2 concentrations (well before the industrial revolution).
Six major glaciations commenced at times when the atmospheric CO2 concentrations were greater than now.
The atmospheric CO2 concentration in the distant past has been at least an order of magnitude greater than at present. If there were a ‘tipping point’ the Earth would have no oceans and we would not be here to discuss the issue.
Exactly, gyptis 444. AGW is a scam and has caused much harm to the science of climate by gearing studies to CO2 versus the climate rather than to the solar/geomagnetic effects that are behind the reasons why the climate changes.
These factors conspire to drive the terrestrial components that control the climate toward or away from thresholds, which in turn has various effects upon the climate when these components are pushed to extremes:
Global cloud coverage
Global sea ice coverage
Global snow coverage
ENSO /Sea surface temperatures globally
Yep. Never been any doubt that temp in our geological epoch is a dropped spaghetti. There are no lines, and the present warming bump is garden variety. The most drastic cooling and warming trends occurred between 15 and 8 thousand years ago. There is no Anthropocene.
All widely known for many decades and never challenged. The knowledge is simply suspended or obscured for the purpose of advancing an agenda. It’s there, but it’s not there. Like the man on the stair in the rhyme:
Yesterday, upon the stair,
I met a man who wasn’t there.
He wasn’t there again today,
I wish, I wish he’d go away.
6. Historical records show that warmer climates are better for humans and the biosphere.
Are CO2 emissions harmful? What’s the evidence?
Re: The Damage Function
The damage function is an essential input for estimating the Social Cost of Carbon (SCC). Without a valid damage function, SCC estimates are meaningless.
IPCC AR5 WG3 Chapter 3 mentions ‘Damage Function’ 18 times http://www.ipcc.ch/pdf/assessment-report/ar5/wg3/ipcc_wg3_ar5_chapter3.pdf . Some examples:
“Damage functions in existing Integrated Assessment Models (IAMs) are of low reliability (high confidence).” [3.9, 3.12]
“Our general conclusion is that the reliability of damage functions in current IAMs is low.” [p247]
“To develop better estimates of the social cost of carbon and to better evaluate mitigation options, it would be helpful to have more realistic estimates of the components of the damage function, more closely connected to WGII assessments of physical impacts.”
“As discussed in Section 3.9, the aggregate damage functions used in many IAMs are generated from a remarkable paucity of data and are thus of low reliability.”
Here’s an example of an argument to overstate the damage function:
“Box 3.9 – Uncertainty and damages: the fat tails problem
“Weitzman (2009, 2011)… emphasized the existence of a chain of structural uncertainties affecting both the climate system response to radiative forcing and the possibility of some resulting impacts on human wellbeing that could be catastrophic.”
What does this mean: “some resulting impacts … could be catastrophic”? Does it mean catastrophic for a few people (well, so is a fatal car accident for the remaining family members), or catastrophic for human civilisation, or something in between? I’ve argued before that there is a lack of convincing evidence that catastrophic impacts for a large proportion of humanity are a credible threat. Therefore, policy analysis should be about the economic consequences and the net costs after mitigation.
However, without a valid damage function, the analysis cannot produce credible results. As AR5 WG3 points out, the damage function is highly uncertain. Therefore, the SCC estimates, and the whole justification for mitigation policies, are not credible. No policies, other than ‘No Regrets’ policies, are justifiable, on a rational basis, on currently available evidence.
Continuing to burn fossil fuels is also a policy. Obviously you have no regrets.
Brandon R Gates
Thank you for your question
I am not sure what you mean by “no regrets”, so I’ll explain what I am advocating for and hope this answers your question.
My primary concern is to get reliable electricity supply to all people on the planet as cheaply as possible. The cheaper the better, because cheap electricity raises productivity > GDP growth rate > standard of living > human wellbeing > life expectancy > health > communications > education, and much more. IMO, the environmental impacts of fossil fuel use are secondary. However, I don’t like wasting a precious, limited resource, because it will be needed by future generations for many uses other than generating electricity. Therefore, I urge people to advocate for policy action to undo the mass of bad laws and regulations (i.e. unjustifiable on the basis of objective evidence) that have grossly inflated the cost of nuclear power (by up to a factor of ten) and delayed its deployment over the past 50 years. If you haven’t seen it, I showed the cost blow-out here: https://judithcurry.com/2016/03/13/nuclear-power-learning-rates-policy-implications/
Once we address that issue, we can return to rapid learning rates. If the pre-1970 learning rates (rate of cost reduction per doubling of capacity) and the pre-1976 deployment rates prevail from 2020 to 2040, the wholesale cost of electricity could reduce by 50% and nuclear could replace the energy equivalent of 70% of the electricity produced by coal by 2040. That could avoid 4 million deaths and 200 Gt CO2. Not a bad result eh? All we have to do is explain the benefits and get the public on side. Then the politicians can be empowered to fix the mess. :)
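The “learning rate” arithmetic invoked here is simple to sketch (an illustration of Wright’s “law” as commonly stated; the function name and the worked figures are mine, not from the comment):

```python
import math

def learning_curve_cost(initial_cost, learning_rate, doublings):
    """Unit cost under a Wright's-'law' learning curve: each doubling of
    cumulative capacity cuts unit cost by `learning_rate` (0.22 = 22%)."""
    return initial_cost * (1.0 - learning_rate) ** doublings

def doublings_to_halve(learning_rate):
    """Doublings of cumulative capacity needed to halve unit cost."""
    return math.log(0.5) / math.log(1.0 - learning_rate)

# e.g. at a 22% learning rate (the PV figure cited downthread),
# unit cost halves after about 2.8 doublings of cumulative capacity:
print(round(doublings_to_halve(0.22), 1))  # → 2.8
```

So a claimed 50% cost reduction over two decades implicitly asserts both a learning rate and a deployment rate fast enough to deliver the required number of doublings in that window.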
I should also have responded to this:
I don’t see that as a policy. I see intervention to stop it and to stop or incentivise other technologies as a policy.
The best policy is to allow competitive markets to find the least-cost way to supply electricity to meet the requirements of consumers. The more we interfere in markets, the more we reduce their ability to meet demand and requirements at least cost. I am not saying no regulations. But I am arguing to remove the mass of unjustified impediments on nuclear and the irrational, hugely costly incentives for renewables. Also, do all we can to remove all incentives and distortions that reduce the ability of the market to supply power to meet all the many requirements of different consumers throughout the world at least cost.
Peter, I hear ya. But the CAGW/green crowd don’t want that. They don’t want affordable plentiful energy for humanity, period. They dress it up nicely in global warming concerns, or fossil fuel concerns, but that is the gist of it. And Nuclear power achieves exactly that, so they will NEVER support it.
Energy from “renewable” sources will make the poor poorer and the rich richer. Obviously you have no regrets.
What I meant by you having “no regrets” is that you don’t apparently see any (“significant”?) issues with burning fossil fuels which warrant urgent reduction in same. I could be mistaken … it happens.
We might agree on more of those points than you realize. I suspect, but cannot say with any confidence, that there’s a lot of regulatory red tape at the US NRC which was put there by anti-nuclear political pressure based on the fallout (if you’ll pardon a bad pun) of things like TMI.
No, not at all. We’re pretty much of the same mind.
I was of course speaking figuratively. Burning fossil fuels is, however, a collective decision that results in an action with some set of consequences. Thus in my way of seeing things, BAU is a de facto policy of no-policy.
Going back to the final paragraph of your original post in this subthread: However, without a valid damage function, the analysis cannot produce credible results. As AR5 WG3 points out, the damage function is highly uncertain. Therefore, the SCC estimates, and the whole justification for mitigation policies, are not credible. No policies, other than ‘No Regrets’ policies, are justifiable, on a rational basis, on currently available evidence.
My point is this: without a valid damage function, the analysis that continuing to burn fossil fuels is a-ok cannot produce credible results. No BAU scenarios, other than “no regrets”, are justifiable on a rational basis, on currently available evidence.
Or in short: uncertainty cuts both ways. If I’m in the dark, so are you.
Competitive markets suck at negative externalities. But that would probably be my saltwater economic sensibilities speaking.
Thank you for telling an entire, inhomogeneous, group of people what they don’t want. I can do sweeping generalization too: if it weren’t for the strawmen they create, climate “skeptics” would never win any arguments.
Probably just frustration. There’s clearly a large core of CAGW types whose behavior strongly supports this conclusion. Most of the rest don’t say a thing when they spout their crypto-socialist nonsense.
Perhaps you should open your eyes.
Current BAU includes a number of initiatives that are driving the exponential decline in the price of solar PV. The fact that PL wants to “remove the […] irrational, hugely costly incentives for renewables” illustrates this.
Of course, they aren’t “hugely costly” in absolute terms, and they certainly aren’t “irrational”. They are a perfectly rational attempt to manipulate Wright’s “law”/”learning curve” for the purpose of driving that exponential decline in the price of solar PV mentioned above.
PL has embraced the “learning curve” for the sake of nuclear in the post he linked above; his refusal to admit its place in solar PV is probably the same intellectual dishonesty he’s always accusing anybody who disagrees with him of.
I suspect you mean “that continuing to burn fossil fuels is [NOT] a-ok”.
But if current BAU, including the incentives for solar PV, can solve the problem, then you don’t need either a “valid damage function” or further interference with the free market.
If we accept that dumping CO2 into the system represents a risk beyond supposed climate change, then no-regrets, or even low-regrets incentives for fossil-neutral power are indicated.
The fact that none of the risks can be pinned down well enough for a “valid damage function” just means that sweeping, high-impact reforms aren’t indicated. The sort of reforms pushed by the core of watermelons who appear to care more about socialist bureaucracy and world government than climate. (Else they wouldn’t engage in arm-waving, distraction, and name-calling when people try to talk about lower-impact options.)
I’m not sure I agree, but what I do agree with is that the type of “competitive markets” that evolved under the system of competing nation-states in Western Europe over the last few centuries do.
There’s a good reason for that, IMO: the nation-states that hosted this early “free-market” capitalism set boundary conditions which encoded the externalities they cared about (via, e.g. tariffs, subsidies, volume limits, etc.). So the traditions of competitive markets grew up within those frameworks.
What those nation-states cared about, primarily, was their own competitive position (tax base and civilian production of war resources) first vis-à-vis the Muslim world, then one another.
There are, IMO, ways in which competitive markets could be incented to deal with the fossil-carbon issue. However, IMO most economists are like the carpenter who has only a hammer, and can’t visualize using a screwdriver rather than hammering on a screw.
The loudest voices tend to be the most heard. Sometimes that’s unfortunate.
I’ve got a bag full of skeptic nonsense which goes mostly unchallenged in some fora. Speaking strictly for myself, correcting wingnuttery can be tiresome. Life’s too short. This may be a partisan failing on my part, but I’d rather reserve my energies for my opposition than nominal allies.
I’m not omniscient. Perhaps you could recognize that about both you and me once in a while.
I guess the way that I see it is that without those subsidies, the (true) cost decline wouldn’t have happened, and solar PV’s penetration wouldn’t have increased. However, if BAU is a troublesome term, I’m content to call it something else: continuance of fossil fuels being the majority energy source. A bit wordy, but that’s what BAU means to me.
Perhaps. It’s really up to him to defend himself against that charge should he wish. Speaking strictly for myself, I have my own arguments lamenting the favorable treatment of solar and wind over nuclear power, and will even go so far as to say that much of the opposition to fission power isn’t rational.
It’s correct as originally written. I’m saying that if he doesn’t know the damage function, he doesn’t know that it’s ok to continue burning fossil fuels.
That’s pretty close to my own personal position, with the exception of “further interference with the free market”. I say that because I do believe that there are clear and present detriments to fossil fuel use (coal in particular) which have zip to do with CO2 emissions/AGW concerns. I argue that the market has not properly priced those negative externalities.
I don’t agree with that. I tend to see not being able to pin down risks (e.g. high uncertainty) as a risk in and of itself. I think it’s a perfectly rational decision to say, “Not knowing the consequences of this continued action (say, unfettered burning of fossil fuels), it would be more prudent to exercise caution and burn less of them”.
I’ll stipulate that there are some outspoken activists who think in these terms, and that I don’t agree with their desires. Speaking strictly for myself, a worldwide socialist bureaucracy is not an end in and of itself. That said, as a practical matter I consider it likely that a global bureaucracy will be necessary to obtain near-zero emissions before the end of the century and that wealth redistribution to poorer nations will also be required to obtain that goal.
Skipping over the European trade portion, it exceeds my macroeconomic chops.
Name a couple. I’m open to new ideas, and don’t have strong opinions TBH. Well ok, I think cap and trade is a bad idea. I’m dubious of revenue neutral carbon taxation.
I can’t agree with you. Lie down with dogs, get up with fleas.
Of course, I end up spending a lot of my time opposing people I don’t want to be tarred with (e.g. Flynn)
This was in response to your “[t]hus in my way of seeing things, BAU is a de facto policy of no-policy.”
The term “BAU”, standing for “business as usual” is one of those sources of semantic bait-and-switch that make the whole conversation so problematic.
This is my target. The “[c]ontinuance of fossil fuels being the majority energy source” is not “business as usual”. That “continuance” is in reality totally dependent on its being profitable. “Business as usual” is the flow of business investment into the perceived most profitable option.
If the progression of present business leads to other options becoming more profitable, investment will tend to switch to them, and things will change.
I would agree that “much of the opposition to fission power isn’t rational.” But I don’t agree that there’s anything “irrational” about the present level of support for solar PV.
Well, when it comes to non-CO2 issues with coal, I’m not interested in discussing it. It seems appropriate to me that coal-fired power should be prohibited from filling people’s air with pollutants. CO2 not being defined as a pollutant.
Well, you’d have to justify that in terms of the risk of the high-impact reforms.
Which would, in turn, have to be compared to the likelihood of solving the problem without those reforms.
The question being, are you open to solutions that don’t require things like “a global bureaucracy”?
Or is the “global bureaucracy” really part of your objective?
So you say.
OK, here’s my favorite option: set up a system where any fossil fuels burned for energy have to have a certain percentage of similar fuel completely from fossil-neutral sources included.
Let’s take gas (just as an example, and with the caveat that other types of fuel would require similar systems):
For each unit of gas burned for energy, require at least 1/1000, 0.1%, to come from sources based entirely on ambient carbon.
Each year, the percentage required increases by 25%: thus the next year it would be 0.125%, the next 0.15625%, and so on. To make it simpler, every third year adjust it to double the previous third year: thus the third year’s 0.1953125% is adjusted to 0.2%.
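A minimal sketch of that schedule, as I read it (the helper below is a hypothetical illustration, not part of any actual proposal):

```python
def mandated_fraction(year):
    """Required fossil-neutral fraction of fuel after `year` years.

    Starts at 0.1% (0.001), grows 25% per year, and is snapped to an
    exact doubling every third year (so the three-year growth factor is
    exactly 2, not 1.25**3 ~= 1.953), per the scheme described above.
    """
    doublings, remainder = divmod(year, 3)
    return 0.001 * (2 ** doublings) * (1.25 ** remainder)

# How long until the mandate reaches 100%?
year = 0
while mandated_fraction(year) < 1.0:
    year += 1
print(year)  # → 30, i.e. roughly three decades to full fossil-neutrality
```

Because the growth is exponential, roughly half of the total replacement happens in the final doubling period (the last three years of the schedule).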
The big advantage of a system like this is that the actual cost, to the energy consumer, would be small, while the amount of money available to pay for this ambient carbon-sourced fuel would be much larger than the cost of fossil fuel.
When I say fuel from sources based on ambient carbon, I’m talking about entirely using non-fossil energy for the actual process. For instance, if it uses bio-gas, all the energy used in growing the biologicals would have to come from non-fossil sources. No using fossil fuels for tractors.
The advantage of this is that it would provide a high-priced market for many/most forms of fossil-neutral fuels immediately, while the volumes would be small enough that immature and experimental processes could meet them, and the impact to overall energy prices would be small.
As the required percentage increases exponentially, Wright’s “Law” would help the costs to come down with increasing volumes.
At the same time, there would be no “picking winners” by government(s). Any technology that could compete could get in the game.
Obviously, this is a very simplistic description of what would be a much more complex system of regulation. However, IMO it has the potential to provide a “level playing field” for many or most of the technologies, as well as leveraging Wright’s “law”/”learning curve” without “picking winners”.
I just spotted your misrepresentation, strawman, and intellectual dishonesty here:
The lie: “his refusal to admit its place in solar PV”. Contrary to your dishonest misrepresentation, I fully acknowledge the learning curve for solar. The learning rate for PV is about 22% (with huge subsidies). However, after 40 years of huge subsidies it still generates a minuscule proportion of global electricity, let alone global energy. Over the same time period, nuclear reached 16% of global electricity at a far lower cost (total system cost, of course). The subsidies for solar are about 100 times higher than for nuclear.
Your frequent intellectual dishonesty doesn’t enhance your credibility.
Well, they can’t! (for reasons explained many times, but you deny the relevant facts).
The market is not free. It is highly distorted by policies that have been imposed over 50 years – including the extremely high subsidies for solar. Your attempted dodge they aren’t “hugely costly” in absolute terms is disingenuous and misleading. Solar supplies negligible energy, whereas nuclear supplies much more. A fair comparison must be on the basis of $/MWh of subsidy, like this (from EIA):
Brandonrgates in reply to AK said:
Absolutely correct. And that is the most important reason why AK’s and Segrest’s defense of the subsidies and advocacy that solar is viable are wrong.
The difference with nuclear is that, if not for the impediments imposed on nuclear since the late 1960s, nuclear could have substituted for 74,000-190,000 TWh of coal and gas generated electricity by 2015, thereby avoiding around 74-170 Gt CO2 emissions and 4.5-8.9 million deaths.
Solar cannot do that.
brandonrgates in reply to AK
That is not a rational argument. We know that cheap energy is good for humanity. If you want to impose policies to make energy more expensive you need to justify it rationally. You could apply your argument to anything: e.g. if you don’t know the damage function, you don’t know that it’s ok to continue building communications networks.
AK said and you said it’s pretty close to your own personal position:
What other risks? CO2 is a huge net benefit if you exclude the hypothesized climate damages.
That’s a route to sustained poverty, or at least a slowing of the rate of improvement.
The alternative that has demonstrably improved human wellbeing fastest is not socialism. It is lightly regulated markets, fair competition, and reward for initiative and commercial risk-takers. Wealth is spread from rich to poor nations by globalization, multinational corporations, free movement of labour and skills, and free trade, not principally by socialist-driven aid.
The needed incentive is to remove the protectionism and market distortions that are blocking progress.
Well, I like dogs well enough that fleas is just the price I have to pay. Which is not to say that when it gets bad enough I don’t haul out the flea medicine.
I understand the instinct. At the same time, I find it perfectly acceptable to tell my interlocutor, “Don’t associate me with those bottom-feeders, I don’t agree with them and they don’t speak for me.” Guilt by association arguments are fallacious. A rhetorical gift if you will.
Ok. I’d be the last person on the planet to argue for 99.9999% of energy generation on this planet to be done at a loss.
Well yes, that’s how markets work.
I wouldn’t have predicted you for holding that POV. See, stereotyping gets even me in trouble from time to time.
Bearing in mind that you’re not interested in discussing it, about the only way I see of accomplishing that is to sequester everything which comes out of the flue. I suppose that could be done in such a way that the CO2 comes out. OTOH, filtered CO2 could be handy as a feedstock for algae. Of course, upon burning the algae, carbon neutrality goes away … so I’m left with thinking coal is just not a good idea for the future. It had a good run and did a lot of good. I think we can, and should, do better.
Which is ultimately all guesswork. The other alternative is to simply try stuff on a small scale and see how it works, or doesn’t. That is essentially what’s happening already.
That would be my preference.
Only if I’m running the show. Which isn’t likely. So no.
My side of this argument has generally been saying big things need to happen yesterday since the 1990s. How well has that actually worked out?
Ok, I’ve read your idea. Applied across the board it would bring us to carbon neutrality by 2048 if the program started next year, which I simply cannot dislike. OTOH, because it’s an exponential function, 48% of the replacement happens in the final three years. An accelerating function makes sense, but that one is pretty extreme.
As you note, the regulatory requirements would be complex. There would need to be substantial penalties for non-compliance or cheating. What concerns me is that energy providers would simply pass the cost of fines on to consumers.
I’m mulling over the level playing field. First thing which occurs to me is that the electrical grid in the US is already at 13% renewable, and 19.5% nuclear. Clearly, not all utilities generate at those ratios. However, the ones which are generating above the target are rightfully going to argue that the power they’re selling consumers puts them into compliance, therefore they don’t need to feed any of their coal or gas plants a carbon-neutral fuel. You see where I’m going with this; companies are going to want to buy and sell swaps, which would essentially turn into cap and trade.
There are pluses, the main one being that it lets the market more or less figure out for itself how to meet the mandate. I definitely don’t hate it.
It’s a logical argument based on the premise that *neither* you or I know the “true” damage function.
And we also know that some have fewer detriments than others. The difference between coal and nuclear should be a no-brainer.
Nope, all I need is desire and power.
No. It is what it is regardless of what you or I include in or exclude from the analysis.
Is “climate change” falsifiable?
If “everything” is explained as being caused by “climate change”, can that even be a scientific theory? Or is that just confirmation bias of political expectations? Can “climate change” be defined sufficiently to be testable?
What if we define it as: majority anthropogenic catastrophic global warming over more than 30 years?
i.e. that > 50% of the observed global warming over a period of more than 30 years is quantitatively due to anthropogenic causes?
We now have > 36 years of data over the satellite era since 1979.
The tropical tropospheric temperature has been described as the “anthropogenic signature”. John Christy’s Feb 2nd 2016 testimony shows that the mean global climate model warming rate is “only” 300% of the actual measured satellite and balloon temperature rise.
Have global climate models now been falsified?
What predictive tests would need to be met to restore global climate models to being scientific theories?
David L. Hagan,
To me the term “climate change” is meaningless. The climate is always changing, so it is a fact that it changes. It’s proven by overwhelming evidence. Discussing it is a waste of time and research resources – a diversion from what is relevant.
I’d urge that the focus be redirected to researching whether or not GHG emissions are a significant threat at all. Some argue human-caused GHG emissions pose a threat to human civilisation. Others argue the consequences will be less catastrophic but could still be catastrophic for some populations and regions. Others argue that the damages without mitigation policies to cut emissions will be a small impost on global GDP. And still others doubt that GHG emissions are doing more harm than good. (I am moving, slowly, towards the last of these.) Still others argue we can handle the problem by adaptation, improving human wellbeing, increasing the GDP of the world, and improving resilience to all risks. We can do the latter best without UN and government command-and-control policies.
Therefore, I’d urge that most of the research effort should be on objective research to develop a sound estimate, with acceptable uncertainty, of the costs and benefits of increasing GHG concentrations in the atmosphere.
The lack of “anthropogenic” is my biggest problem with the label. But in the grand scheme of things, these kinds of semantic niggles are the least of my concerns.
What are your main concerns? Do you rate the lack of convincing evidence that GHG emissions are doing more harm than good as a key concern?
*Potentially* false complacency is one. Tragedy of the commons is another. Free rider problems is yet another.
More the psychology that we need to see “convincing evidence” (whatever that really means, but it sure *sounds* rational) of damage before doing anything meaningful to prevent it from getting worse. But “everyone” knows that adaptation to come what may is cheaper than emissions reduction, so I’m probably just being a worry-wart again.
Your comment suggests to me you believe GHG emissions are dangerous, or doing more harm than good. Am I correct? If so, what evidence causes you to believe that? (refer to my comment here: https://judithcurry.com/2016/08/13/week-in-review-science-edition-50/#comment-803352 )
I also interpret (perhaps misinterpret) that you feel governments should implement policies to reduce GHG emissions, rather than enable policies that will allow that to happen while also doing a whole host of other good, like the policies I suggested to remove market distortions. Am I correct? If so, what makes you believe policy intervention to drive the policies certain groups advocate is better than removing impediments that are blocking progress to achieve the same goal?
Sorry for the typos, I pressed “send” too soon.
I think unmitigated GHG emissions are a *potential* hazard … an as-yet largely unrealized risk. One of my biggest concerns is that because of the perception that there’s no clear-cut net damage from GHG emissions to date, we’re being lured into a false sense of security. Your question itself concerns me … I take it that you wouldn’t be motivated to call for urgent mitigation actions unless damage was clearly already happening. I don’t think it’s prudent to be reactive … there’s a lot of “inertia” in the climate system, and stepping on the brake after the collision has begun in earnest may not be the wisest course of action.
I’d always be for doing the optimal solution. Problem is knowing what’s optimal. I doubt there will ever be unanimous decisions on that point.
I take it you’re a nuke advocate. As it happens, so am I. I also think wind and solar are becoming increasingly viable options, and that all options need to be on the table.
No worries, I think I parsed everything correctly.
You misunderstand what I am saying. I am not saying that because there haven’t been net-negative consequences of warming to date (in fact increasing GHG emissions and warming have been massively beneficial so far) the trend will continue indefinitely. I am saying:
1. The planet is near the coldest it’s been in 540 million years
2. Life thrived when warmer
3. Climate changes abruptly; always has and always will
4. Life thrives through rapid warming (some species suffer and many others benefit)
5. The planet can warm a long way, much more than projected, without catastrophic consequences for life
6. Life and humans adapt to warming faster than the climate changes (especially with humans ability to engineer changes to improve productivity of food growth)
7. Sea level rise is an insignificant cost given the rates of sea level change and infrastructure development and replacement
8. Much more land will become productive as the planet warms.
9. The planet is unlikely to get out of the current ice age until North and South America separate. That could be tens of millions of years.
As far as I know there is a lack of evidence to support what are, in reality, assumptions that GHG emissions are dangerous. The damage functions are not supported by convincing evidence. They are largely assumptions that warming is bad. If you can provide links to convincing evidence that warming would be significantly damaging overall, please provide it. Not just cherry-picked bits and pieces. We need the whole of planet economic impact for health, agriculture, sea level rise, freshwater, storms, ecology, etc. (e.g. see Richard Tol Figure 3 here: http://link.springer.com/article/10.1007%2Fs10584-012-0613-3 , free version here: http://www.copenhagenconsensus.com/sites/default/files/climate_change.pdf )
The three charts and my comments here are one line of evidence: https://judithcurry.com/2016/07/12/are-energy-budget-climate-sensitivity-values-biased-low/#comment-796768 These suggest to me there is no rational or objective basis to believe there is a significant threat of catastrophic or dangerous consequences of GHG emissions. Therefore, it’s just an economic cost-benefit analysis. So far the cost-benefit analyses suggest mitigation, other than No Regrets policies, would be a major mistake http://anglejournal.com/article/2015-11-why-carbon-pricing-will-not-succeed .
There are many other lines of evidence, but let’s discuss that one first. Can you show my interpretation is substantially wrong (with valid evidence, not assumptions and innuendo that “warming must be bad”)?
Exactly! Optimal is the economically rational approach based on best available evidence. The rational policy response is No Regrets policies. That is, they must be economically beneficial without taking assumed (but unsupported) climate change impacts into account.
I do not believe solar and wind are economically viable or ever will be (except for niche fringe applications). I doubt they are likely to ever supply a large proportion of global energy. However, I agree all options should be on the table, without distortions, production subsidies, and other incentives that favour one type of technology over others. We need to remove the masses of interventions that are preventing the world from having electricity supply that meets requirements at lowest cost.
Good to know.
… as well as a numbered bullet list of reasons why you think increasing CO2 isn’t dangerous and/or may be a net benefit. For the umpteenth time I’ll note that none of your reasoning is accompanied by the kind of detailed cost-benefit figures you frequently demand that others show you.
I doubt my Internet search skills are superior to yours. If you’ve not found what you’re looking for by now, you’re simply not going to be convinced. And let me be clear, it’s *your* decision whether to be swayed by evidence or not. There’s not a darn thing I can do to force you to accept whether or not some body of evidence is convincing to *you* or not.
Slapping a different label on the policy doesn’t make them any more or less “correct” a decision.
No. I believe that there are too many unknowns and that the domain is too complex for me, one lay person, to confidently say that you are substantially wrong. I can say that the onus is on you to demonstrate that you are substantially correct, and that you consistently fail to present the kind of detailed evidence and analysis you demand of others.
See again what I said about the problem of knowing what is optimal. If I’m in the dark so are you.
Also good to know.
That sounds nice, but it presumes that there’s some objectively non-distorted view of things from which to base decisions.
That’s nonsense. I have provided them. You obviously don’t bother reading them or don’t understand (probably both).
Apparently you don’t like evidence (supported by reasoned argument and evidence) that doesn’t agree with your beliefs.
Then why do you need me or anyone else to give you further information?
[snork] Yeah, right. I *want* to believe that we’re facing potentially catastrophic effects of a rapidly changing climate unless we urgently engage in a challenging and expensive globally-concerted effort to wean ourselves off fossil energy. I desperately want to believe it is true. I’ll die of disappointment unless I’m right about it.
David L. Hagen,
No, but it can be a strawman.
Is global warming testable by:
In particular, global climate models have predicted an “anthropogenic signature” of strongly enhanced tropical tropospheric temperatures.
John Christy shows the global climate predictions as 300% of actual since ~1980. Does that disconfirm GCW?
What else is predictable and testable?
The very last link. After reading the piece, I’m beginning to wonder if I am a visionary since I came to precisely the same conclusion about my majors, psychology and sociology, 50 years ago. The more things change, the more they stay the same.
Piecing together the Arctic’s sea ice history back to 1850
I’ve seen these charts, but the most striking feature of all of them is:
“State of the ice Unknown”
As with temperature reconstructions, infilling is an unverifiable assumption.
“As with temperature reconstructions, infilling is an unverifiable assumption.”
Take infilling temperatures over Greenland.
For the most part, all the stations are at the edge. So you have
three methods to infill.
What you DON’T know is that there are actually records from the interior
of Greenland. They are hard to find, but if you know where to go you can get them. Then you check your infill against those records.
And with kriging you effectively do the same thing with cross-validation.
As for the ice?
Just wait: there are even MORE RECORDS being digitized, and photos as well, so the current effort at infilling will get tested… silly TE
Why infill at all? Do we have to pretend that we have data when we don’t have them?
What you DON’T know is that there are actually records from the interior
of Greenland. They are hard to find, but if you know where to go..
Super secret hidden data?
But even if there were QC-able super secret data, that means it’s not infilling.
The whaling charts have big holes of unknowns (unknown, assume icy).
“But even if there were QC-able super secret data, that means it’s not infilling.”
Do you even understand?
“As with temperature reconstructions, infilling is an unverifiable assumption.”
That it’s unverifiable.
There are two ways to interpret that:
unverifiable in principle, or in practice.
Let’s take a simple “infilling”.
First, what is “infilling”? Infilling is estimating values at locations
where you don’t have data, using data from measured locations.
The most famous example of infilling is CET
So you have an area with a few known temperatures at a few locations,
and you predict or estimate that all of the “unsampled” locations have the same values.
That is the simplest form of infilling. Kriging is more complicated, so
let’s just stick with simple averaging. The area described by CET is given a single value, even though there are infinitely many spots not sampled.
Is this infilling unverifiable in principle or in practice?
Well, first understand what the infilling means: what it means is that the best estimate of the unsampled locations is the “average”. That is easy to test if you don’t believe it.
1. Propose a better estimate.
2. Go to 100 locations that are not sampled and sample them. Then compare the actual measurements to the predicted average and to your proposed improvement.
And what you will find is that it is a pretty dang good estimate.
And your worry that some super secret cold areas were squirreled away
in unsampled locations was just plain nuts.
Or, if you THINK AHEAD:
1. Take the sampled locations and split them into multiple piles.
2. Using each pile, construct your infilled field.
3. Using the held-out data, test your infilling.
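The hold-out test described in the steps above can be sketched in a few lines. This is a minimal illustration, not anyone’s actual method: the station values below are randomly generated stand-ins (not real Greenland records), and simple averaging stands in for the infill scheme being tested.

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical annual-mean station temperatures (deg C); made-up values.
stations = [-10.0 + random.uniform(-3.0, 3.0) for _ in range(30)]

# Split the sampled locations into piles: one held out, one for training.
random.shuffle(stations)
held_out, training = stations[:10], stations[10:]

# Simplest infilling: predict the training-set average everywhere.
prediction = sum(training) / len(training)

# Test the infill against the held-out "unsampled" locations.
mae = sum(abs(t - prediction) for t in held_out) / len(held_out)
print("predicted %.2f C, mean absolute error %.2f C" % (prediction, mae))
```

The same idea, applied leave-one-out with a kriging predictor instead of a flat average, is the cross-validation mentioned above.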
Or get lucky:
after you finish your infilling, you find data nobody archived before.
And the data is not secret.
and no links for you, go do your damn search
TE, CG — Infilling is common in spatial problems and has proven its value over and over: ore estimation, contaminant hydrology, hazardous site remediation, mapping. Just remember, as has been noted numerous times in the past, it is an estimation problem first and foremost. It is also worth keeping in mind that essentially the same underlying game (interpolation) is found in the finite difference and finite element methods underpinning not a little bit of technology. Not comfortable? Don’t fly, don’t go into buildings, don’t drive a car, don’t wander into the wilderness… ;o)
As for cross validation…it might help to look here:
In approach, cross-validation reminds one of the jackknife.
However, if Greenland’s data are predominantly on the coast, then cross-validation will not speak to the interior. However, my assumption is Mosh was noting the existence of some interior data. It is also worth noting that such data would have to pair with other locations at ranges less than the correlation function/variogram range parameter. I haven’t looked for a while, but if memory serves me right, that for BE is over 1000 km.*
* FYI I also think that this range is actually too large for other reasons but that has been brought up before and is not critical in the present context.
Mosh: repeat after me, ” k-r-i-g-i-n-g”. Glad to see you up and about. Go with the Afrikaans pronunciation.
“However, my assumption is Mosh was noting the existence of some interior data.”
Yes. There is some that we use in our recon (CRU lacks this), and some that I have for doing testing.
Also, some more I recently found: mostly short series, a few years here and there, but enough to test.
Getting access to it is the trick
Getting access to it is the trick
Funny thing getting data from some organization/people. Also a funny thing sometimes getting access to a country to take measurements.
Glad to see you are able to stir the embers on the BEST side.
TE, CG — Infilling is common in spatial problems and has proven its value over and over.
Yes, and when there is a basis for believing a homogeneous distribution, it may even be accurate.
But when I examine high resolution land surface temperatures, and high resolution land surface temperature trends, I find large INhomogeneity.
Further, Greenland is a pretty good example. Interior surface trends should be lower than peripheral trends, because the interior is 1 to 3 kilometers higher than the coastal peripheral stations. And Arctic warming from sondes and satellite indicates decreased warming with height.
Yes, and when there is a basis for believing a homogeneous distribution, it may even be accurate.
Well, you know, NUGGETS, sample support, etc. are not exactly new news, particularly in some of the fields I mentioned. There are ways to approach these problems, but at the end of the day the results are still estimates. Now discontinuities, real or perceived, are interesting….
Also, CG asked: why infill? To me a big consideration is to get around the very big problem of biased samples, though I think that there is some potential utility in working with them when the limitations are thoroughly explored. (Thoroughness also applies to different interpolation schemes.) In a nutshell, I would explore a smorgasbord of approaches in an exploratory phase, fully expecting iteration.
Basically, skinning any cat is a messy job; how messy depends on the cat, the skinner, the skinner’s knowledge of cats, and the skinning technique used. That is to say, you demand too much. Finally, in the case of global temperature data, we have a very, very big, unwieldy cat.
Further, Greenland is a pretty good example. Interior surface trends should be lower than peripheral trends, because the interior is 1 to 3 kilometers higher than the coastal peripheral stations.
One interpolates temperatures and not trends. However as an aside, you are implicitly touching on an interesting and important topic–the time dependence of the spatial correlation function or variogram. In due course we will get to that point.
Sorry about the bold. Just imagine it off after NUGGET.
…because the interior is 1 to 3 kilometers higher than the coastal peripheral stations.
This gets to another interesting point for me. First, in the case of BE (and in the US data which I have examined), one approaches the estimate by first removing, to a degree (for me, 2nd order), the latitude and elevation dependence of the temperature, producing a surface. That partially takes care of the elevation question. Why partially? Well, elevation can entail mountains, and these complicate matters: think drainage winds, orographic precipitation, inversions, etc. Refined infilling will eventually have to incorporate things like this.* The other big factor is, again, time dependence of the correlation. There is still a lot of interesting and useful work to be done.
*A reasonable next step is maybe to look at things in terms of physiographic regions. I know that for the USHCN data [annual average temps at locations by year] the different regions have significantly different correlation functions (variograms).
Anyway you might be happier to view the glass as half full rather than half empty.
MWG, thank you.
I do note the automated stations around Greenland:
These date from the early nineties – not far enough back to be of use for longer-term trends.
I’m also somewhat curious why they aren’t included in GHCN or public quality controlled data sets. Perhaps because it’s so difficult to keep them clear of accumulated snow.
“Getting access to it is the trick.” Science is all about not sharing.
“In part because of large intrinsic variability, no evidence was found for changes in extreme precipitation attributable to climate change in the available observed record.”
Not sure the units in Fasullo’s paper make sense. From the chart JCH posted, which appears in the press release but not in the paper, it seems simulated thermal expansion from the start of 2005 to the end of 2013 was 14 mm (going from 23 to 37 mm on the vertical axis). This is essentially double the per-year figure from Llovel 2014: 0.77 mm/year and 0.64 W/m², over the same period.
The results from Llovel are backed up by a very recent paper which finds 0.71 W/m² over 2006-2015, but as total imbalance, not just ocean heat uptake.
Additionally, figure 2 in Fasullo appears to show that simulated ocean heat content at the start of 1999 remained 20 zettajoules (1 ZJ = 10^21 joules) below the pre-Pinatubo level. Maybe I’m misunderstanding something, because this appears to contradict the graph in the press release, which shows that by then the contribution to sea level from ocean heat was well above the 1991 value (as do most real-world datasets).
The ratio between heat and sea level also seems off. 1 W/m² is about 16 zettajoules over a year. The ratio in Llovel, for 0-2000m, was 1.2 mm/year per W/m². In other words you’d need 1 / 1.2 = 0.83 W/m², or 13.3 ZJ, to change sea level 1 mm. In his figure 2 it seems the energy needed is less than half: 6 ZJ per mm.
I’m not an expert in this stuff so maybe I botched a number somewhere. Also, it seems water near the surface needs less energy to expand, and in the 18 months or so Pinatubo affected climate the decline in heat would take place mostly there.
Ok, scratch the last part: water near the surface needs MORE energy to expand, not less. At least going off Llovel, the ratios are:
Full ocean (not 0-2000m as I said): 1.22 mm/year per W/m²
Below 2000m: 1.66 mm/year per W/m²
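As a sanity check on the arithmetic in these comments, the W/m² to zettajoule to mm conversions are one-liners. The physical constants are standard; the 1.22 and 1.66 mm/year-per-W/m² ratios are the Llovel 2014 figures quoted above:

```python
# Back-of-envelope check of the W/m^2 <-> ZJ <-> mm conversions.
EARTH_AREA_M2 = 5.10e14    # Earth's surface area, m^2
SECONDS_PER_YEAR = 3.156e7
ZJ = 1e21                  # one zettajoule, in joules

# 1 W/m^2 sustained over the whole planet for a year, in ZJ:
zj_per_year = 1.0 * EARTH_AREA_M2 * SECONDS_PER_YEAR / ZJ
print("1 W/m^2 over a year = %.1f ZJ" % zj_per_year)  # ~16.1 ZJ

# Convert an expansion efficiency (mm/yr per W/m^2) into ZJ needed per mm:
def zj_per_mm(mm_per_year_per_wm2):
    return zj_per_year / mm_per_year_per_wm2

print("full ocean:  %.1f ZJ/mm" % zj_per_mm(1.22))  # ~13.2, cf. 13.3 above
print("below 2000m: %.1f ZJ/mm" % zj_per_mm(1.66))  # ~9.7
```

So the "16 zettajoules over a year" and "13.3 ZJ per mm" figures quoted in the comment are mutually consistent.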
From the PR explanation for the graph:
The graph shows how sea level rises and falls as ocean heat content fluctuates. After volcanic eruptions, the Earth cools and, in turn, the heat content in the ocean drops, ultimately lowering sea level.
The solid blue line is the average sea level rise of climate model simulations that include volcanic eruptions. The green line is the average from model simulations with the effect of volcanic eruptions removed, and it shows a smooth acceleration in the rate of sea level rise due to climate change.
It then goes on to say:
The blue line between the start of the satellite record and present day makes a relatively straight line — just as we see from actual satellite observations during that time — indicating that the rate of sea level rise has not accelerated. But in the future, barring another major volcanic eruption, scientists expect sea level to follow the gray dotted line, which is on the same accelerating path as the green line below it …
Figure 3: Sea level rise associated with ocean heat storage and the sum of all contributions estimated from LE budgets and cryospheric contributions
Yes, the press release graph and figure 3 of the paper both seem to indicate that by the end of the 1990s the Pinatubo effect on ocean heat had almost disappeared. But figure 2 seems to indicate that more than half of the Pinatubo plunge still remained (3mm out of 5) at the beginning of 1999. Maybe I’m misinterpreting figure 2.
It also seems to me the altimetry record (red line) in figure 3 debunks the whole paper: it had caught up with the simulated, non-Pinatubo rise by the beginning of 1994. In other words, the Pinatubo effect had disappeared in about 30 months. This seems in line with the temperature record, which recovered the 1990 level by 1993-94.
I’m busy getting ready for my weekly visit from one of the best double bass players in the city… adding three new songs. We have fun.
My hunch is Figure 2 presumably washes out ENSO/natural variability, which likely caused the upward trend in the red line in figure 3. Figure 3. does not include the GMSL just prior to Pinatubo.
There are also graphs in the supplemental material.
Interesting. From link –
“We try to show that this temperature value is likely substantially higher than currently observed global temperatures,. . . “
There’s obviously an opening for someone to adjust current temperatures upwards, from actual to modelled. Who needs to read thermometers anymore?
Michael Mann pointed out that we don’t really need models, either –
“What is disconcerting to me and so many of my colleagues is that these tools that we’ve spent years developing increasingly are unnecessary because we can see climate change, the impacts of climate change, now, playing out in real time, on our television screens, in the 24-hour news cycle,”
Just watch 24 hour news TV, and obey your local cult leader. This is science?
More like delusional psychosis!
Should scientific research be funded publicly?
Some basic research should be publicly funded. It should be awarded funds on the same basis as all other research areas. Funding greater than that should be properly justified on the basis of evidence that GHG emissions are doing harm, or are likely to do harm in future. We should fund research to determine the damage function, because that is essential for determining whether CO2 emissions will do harm, and if so how much per degree of global warming and per degree of global cooling avoided.
What would it take to achieve the Paris temperature targets? [link]
The analysis is based on ECS = 2.5 K per CO2 doubling. What would the answer be if ECS is 1.7 K or 2 K?
If I am interpreting the climate sensitivity frequency plot in Figure S4 correctly, it shows the mode is around 3.5 C per doubling.
Figure S4. An illustration of the parameter distribution for ISAM for the Parameter Constrained Ensemble used to assess the likelihoods of exceeding climate targets.
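The arithmetic behind that question can be sketched with the standard logarithmic forcing relation, equilibrium warming = ECS × log2(C/C0). This is an illustration only: the 560 ppm level (one doubling of a 280 ppm preindustrial baseline) is a round number, not the paper’s scenario.

```python
import math

# Equilibrium warming for CO2 concentration c_ppm relative to a
# preindustrial baseline c0_ppm, given an equilibrium climate
# sensitivity (ECS) in K per CO2 doubling.
def equilibrium_warming(ecs_k, c_ppm, c0_ppm=280.0):
    return ecs_k * math.log2(c_ppm / c0_ppm)

# At exactly one doubling, the warming equals the ECS by definition:
for ecs in (1.7, 2.0, 2.5, 3.5):
    dt = equilibrium_warming(ecs, 560.0)
    print("ECS %.1f K -> %.2f K at 560 ppm" % (ecs, dt))
```

This makes the stakes of the sensitivity assumption explicit: the same concentration pathway implies roughly 30% less equilibrium warming under ECS = 1.7 K than under the paper’s 2.5 K.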
Piecing together Arctic history to 1850
(emphasis on piecing – an environment that can change radically in an hour)
I think the Arctic size debate is a cultural pathology.
Like other size debates, bigger is better. :)
Seriously, I don’t see how this issue gets resolved
I assume we would panic if there were a 12% increase per decade.
If we achieve purity the Arctic will settle into a state of Grace.
Alarm is permanent,
It feeds the machine.
Like the perpetual state of war we find ourselves in.
One way to stop living in fear is just to decide not to.
Frig ‘climate change’ and the neurotics who preach it.
As for the scientific method and truth stuff: my favorite candidate so far to illuminating these questions resides around Howard Hunt Pattee’s ideas about what he called “semantic closure”.
There’s a research gate account:
A recent post on one of the climate blogs (no, I made no notes, so I can’t give a reference) included a graph of sea ice that was supposed to be part of the first IPCC assessment report (reference links were also provided). The discussion was that the first satellite ice-extent measurements began in the early 70s. In 1979, or thereabouts, with new satellites, the technology changed to that currently in use.
The basic story was that from the earliest data until 1979, Arctic sea ice extent was rapidly increasing. This was a significant factor leading to the claims, calls for action, articles, and books saying that another glaciation cycle was beginning. After 1979 the ice extent stopped growing and soon began declining again. The graph seemed clear enough that at the beginning of the earlier satellite data, Arctic sea ice extent was at least as low as currently.
By the second assessment report, the early satellite data was dropped from IPCC consideration, with sea ice estimates beginning only in 1979. There seems to be no indication that such pre-1979 data, if it actually exists, was used in that new 1850-to-present compilation. Does anyone know if there is any such data?
Steven Mosher probably has an idea. An earlier comment suggests that it is being looked at and we should see some publication soon.
Tony Heller has blogged on this. Akasofu wrote a peer-reviewed paper on it in 2010. See essay Northwest Passage for a review of some of the pre-satellite-era historical information. There appears to be a quasi-periodic Arctic ice cycle of about 60-70 years: thirty increasing, thirty decreasing. 1979 was coincidentally an absolute high, and 2007 a likely low. 2012 was weather. Northabout having circumnavigation difficulties at present shows ice recovery as part of this cycle.
Thank you for your nod to the plight of the Chesapeake.
Save the bay.
Don’t forget to vote.
Ok, Fasullo’s additional information says: ‘GMSL contributions from OHC are estimated based on the conversion of 3·10^22 J = 5 mm provided in Ref 38’, which is this paper:
Indeed they give 6 ZJ/mm for 0-700m, which is what matters for Pinatubo. From 700 to 3000m it’s 14.3 ZJ/mm, and below that it’s 11 ZJ/mm. I cannot find a figure for the full column.
The ratios in Llovel 2014 were 13.3 ZJ/mm for the full column and 9.7 below 3000m. Perhaps someone can make sense of these figures – it seems to me the Church and Llovel papers are at odds.
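The size of the apparent mismatch is easy to put in numbers. One caveat the comment itself hints at: Fasullo’s 3×10^22 J = 5 mm conversion is the 0-700 m figure, while Llovel’s 13.3 ZJ/mm is full-column, so the two are not strictly like-for-like. The figures below are those quoted in the comment:

```python
# Compare the ZJ-per-mm conversions quoted in the comment above.
ZJ = 1e21  # one zettajoule, in joules

# Fasullo's stated conversion: 3e22 J of ocean heat = 5 mm of GMSL rise.
fasullo_zj_per_mm = 3e22 / 5 / ZJ          # -> 6.0 ZJ/mm (0-700 m layer)

# Llovel 2014 full-column ratio quoted in the comment:
llovel_full_zj_per_mm = 13.3

print("Fasullo conversion: %.1f ZJ/mm" % fasullo_zj_per_mm)
print("ratio vs Llovel full column: %.1fx"
      % (llovel_full_zj_per_mm / fasullo_zj_per_mm))
```

A roughly 2x gap between a 0-700 m ratio and a full-column ratio is at least partly expected, since cold deep water expands less per joule; whether it fully explains the discrepancy is the open question here.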
The Briggs Essay (“There is no scientific method”) is awful. I’ll consider a single quote here: Every working scientist, including the most rabid empiricist, uses mathematics, and mathematics is not scientific in any common sense of that word.
“common sense” of a word? When has “common sense” in any of its forms produced new public, reliable knowledge of the shared world? Round Earth? Heliocentric planetary system? Medicines that work? Vaccines?
There is no short list of attributes or exhortations that unequivocally separates science from other attempts at epistemology, but the scientific methods include:
1. measuring what is talked about (and improving measurements);
2. writing down observations in as near to “real time” as possible, instead of later from recall;
3. describing research methods in as complete and accurate detail as possible to permit (indeed encourage) others to replicate the procedure;
4. deriving logical conclusions from statements of (proposed) knowledge and testing whether the subsequent research disconfirms them;
5. compiling large amounts of empirical results (“periodic table”, “Balmer’s Series”, etc);
6. seeking organizing principles for compiled empirical results: “induction”, “abduction”, hypothesizing, etc.;
7. public presentation of descriptions of methods and results, (i.e. avoidance of secrecy and cults), including patents, and public retractions of errors;
8. mathematical modeling of quantitative relationships in empirical data (a particular subset of 4, perhaps);
9. open debate of competing/conflicting theories and evidence.
I’ll note in passing that this is a criticism of the essay, not of William Briggs. His new book in the Springer series (Uncertainty: The Soul of Modeling, Probability and Statistics) has five 5-star ratings on Amazon. I am sort of at odds with the idea that probability and statistics provide good models for all of uncertainty; at best (which happens), they are good models for random variability, where “random variability” is the variability that is non-predictable and non-reproducible. What makes probabilistic and statistical modeling possible and useful is the fact that random variations often fall out in reproducible distributions. But I don’t have a book. Briggs’ book can be investigated here: https://www.amazon.com/Uncertainty-Soul-Modeling-Probability-Statistics/dp/3319397559/ref=sr_1_3?s=books&ie=UTF8&qid=1471189082&sr=1-3&keywords=william+m+briggs#customerReviews
The essay by Massimo Pigliucci (Must science be testable? A defense of Popper ) contains this gem: it is unlikely to the highest degree that anyone will ever be able to come up with small sets of necessary and jointly sufficient conditions to define ‘science,’ ‘pseudoscience’ and the like.
So there you have it! I must be right
Science must be funded, not be testable …
MM, you might like the intro and first chapter of The Arts of Truth. I tried (without drawing bright lines) to shade from true and false (= a true not-true) through grey ‘untrue’ intermediates like pseudoscience to outright prevarication. It draws on a lot of philosophy beyond just Popper. The rest of the book is factual illustrations in various big theme categories of ‘truth’ difficulties, except the last substantive chapter, which is a global wrap of the various ‘truth’ themes using CAGW as an overarching example.
Matthew, Hadn’t you noticed that Pigliucci was there paraphrasing Larry Laudan and then criticizing his use of his (likely correct) judgment as ground for a rebuttal of Popper’s mature thoughts on the demarcation problem? You make it sound like someone is a relativist here. But both Pigliucci and Laudan are strongly critical of relativism. They seemingly disagree on their assessment of Popper’s achievement regarding the demarcation problem.
Pierre-Normand Houle: Hadn’t you noticed that Pigliucci was there paraphrasing Larry Laudan and then criticizing his use of his (likely correct) judgment as ground for a rebuttal of Popper’s mature thoughts on the demarcation problem?
I had merely selected out of the essay a passage that I largely agree with. I like the whole essay.
You make it sound like someone is a relativist here.
I do not know what you mean in this context by a “relativist”. Scientific methods can be used to determine whether propositions are accurate, with respect to publicly debatable standards of accuracy; or useful, with publicly debatable standards of utility; but not, strictly speaking, whether they are “true” or “false”. For example, is it “true” that for every “action” there is an equal and opposite “reaction”? Every? It has worked really well so far. Is it “true” that the measured speed of light will always be independent of the relative velocities of the light source and the measuring instrument? It has worked really well so far.
Something for the 21st-century weathertellers to consider is found in the article, There Is No Scientific Method:
With no access to billions in government funding, I predicted this back in January:
20 inches of rain falls, hundreds rescued – The Daily Advertiser
The best models of the Climate Change Establishment are incapable of such prediction accuracy. We all should be 97% certain by now that wasting billions of dollars more on filing cabinets full of GCM pseudoscience — based on giving CO2 mystical powers that cannot be observed in nature — is more than a waste of time… it’s indulging modern-day fortunetelling at public expense and making a science out of academic flimflam.
“Sizing up the tsunami threat”
Interesting article, but I was curious why it didn’t evaluate the potential for a cross-Atlantic tsunami. I researched online the present thinking about the possibility of a megatsunami originating from the Azores. While I too believe the probability of a 1,000 foot tsunami is extremely small as recent papers indicate, it was curious this article didn’t mention it.
Recent research seems to indicate that the eight large collapses in this area that sedimentary evidence seems to support were actually composed of multiple sub-units: a series of smaller collapses. If this is so, that would seem to increase the probability of a disastrous event happening in our lifetimes, while reducing the chances of a mega-disaster. I’m not sure it’s comforting to know the Boogie Man isn’t fifty feet tall, but that there’s a whole clan of them hiding in my closet.
Read up on the Cascadia fault. The risk dwarfs the Azores’. And it is geologically overdue.
One big Boogie Man or a clan of 50, they are still imaginary and therefore not scary.
‘While climate sceptics have systematically attacked AW, up until now they have only invoked [giant natural fluctuations]’
Notice the references: none. He cannot find a single instance of a sceptic who “only” invokes natural fluctuations to explain the warming we’ve seen since the 19th century – apart from Keenan. It’s not even clear whether Keenan himself thinks this, i.e. that the warming is entirely natural. Lovejoy’s entire argument is an attack on a strawman.
Anyway, the debate is somewhat moot, as even assuming all the warming is manmade, with volcanoes and solar as the only natural forcings because they’re what we can measure, gives sensitivity values below 2ºC. (Not to mention that there are a lot of things to the climate debate besides sensitivity.) Yes, one can get higher values from other methods, but the thermometer record, which is what Lovejoy is discussing, is not relevant for these other methods (paleo, GCMs).
Painting with too broad a brush. Skeptics come in various types. The smart ones do not dispute that CO2 is a greenhouse gas, because it is provable that it is. They do not assert only natural variability. But they question the exclusion of natural variability from the temperature rise from ~1975-2000, point to increasingly obvious model deficiencies, point to observational sensitivities about half those of models, point to greening, and conclude that CAGW is actually just AGW and no problem.
Ristvan my reply.
The smart ones look at the historical climatic record, see that this period of time is in no way unique in the climate, and conclude the whole AGW movement is a scam.
This will be proven before this decade is out.
The climate is controlled by solar/geomagnetic factors not CO2.
The GHE being a result of the climate.
Solar/ geomagnetic factors then influencing the items that control the climate to move them in a direction which would either warm or cool the climate.
Cloud Coverage Globally
Sea Surface Temperatures Globally
Atmospheric Circulation Patterns
Snow Coverage Globally
Sea Ice Coverage Globally
From the article:
Giant tarpaulins are being installed in the Italian Alps to help prevent glaciers from melting. Researchers say they can reduce the sun’s temperature by as much as 90 percent. (Aug. 12)
Sanjay Srivastava is fuccked. Sorry Prof. your profanity can’t cure your erectile dysfunction
Everything is fuccked (Not the foul mouth psychology class)
The PDO is positive… new rules:
Should be August 14:
Tiny hints of upwelling off South America, La Niña odds can still fog a mirror… PDO and AMO look solid:
Niño 3.4’s journey to a La Niña event keeps derailing:
RE: What would it take to achieve the Paris temperature targets?
These guys live in the virtual reality of their climate models. The Paris temperature targets are fantasy. The world has already warmed by 1.5 C since 1750.
The Paris agreement will save the world!
RE: Piecing together the Arctic’s sea ice history back to 1850
“there is no point in the past 150 years where sea ice extent is as small as it has been in recent years.”
Mr. Fetterer, have you heard of the Little Ice Age that ended around 1850? Look at the Greenland temperature in the past 10,000 years. See those warm periods. What do you think happened to the sea ice? Mr. Fetterer, are you sure you’re a climate scientist not a tennis player?
Arctic sea ice vanishing! Believe me I’m Fetterer the climate scientist
NINO 3.4 hits levels not seen since early 2013.
Click on the image to get the latest chart.
> What thermometers do & don’t yet show!
Summary: real thermometers fail to match models, therefore they are wrong.