by Nic Lewis
The two strongest potentially credible constraints, and conclusions.
Part 1 of this article discussed the nature and validity of emergent constraints on equilibrium climate sensitivity (ECS) in GCMs, drawing mainly on the analysis and assessment of 19 such constraints in Caldwell et al. (2018), who concluded that only four of them were credible. An extract of the rows of Table 1 of Part 1 detailing those four emergent constraints is given below.
| Name of constraint | Year | Correlation in CMIP5 | Description |
|---|---|---|---|
| Sherwood D | 2014 | 0.40 | Strength of resolved-scale mixing between BL and lower troposphere in tropical E Pacific and Atlantic |
| Brient Shal | 2015 | 0.38 | Fraction of tropical clouds with tops below 850 mb whose tops are also below 950 mb |
| Zhai | 2015 | –0.73 | Seasonal response of BL cloud amount to SST variations in oceanic subsidence regions between 20–40° latitude |
| Brient Alb | 2016 | –0.71 | Sensitivity of cloud albedo in tropical oceanic low-cloud regions to present-day SST variations |
Two of those four constraints, Sherwood D and Brient Shal, were analysed in Part 2 and found wanting. In this final part of the article I discuss the remaining two potentially credible constraints, Brient Alb and Zhai – which have much higher correlation with ECS than do Sherwood D and Brient Shal – and formulate conclusions.
Brient Alb is based on the correlation in CMIP5 models between ECS and the relationship of shortwave (SW) reflection by low clouds over tropical oceans (TLC) with SST. The authors found that estimates of the strength of that relationship derived from either deseasonalized or interannual variability correlated better with ECS than those based on seasonal or intra-annual variability. They used deseasonalized variability, which is primarily driven by large-scale phenomena such as El Niño-Southern Oscillation. The observational constraint, from CERES data over 2000–15, is quite tight. All but one of the models from NCAR, GFDL, GISS, UKMO and MPI, perhaps the best-known modelling centres, are ruled out by the Brient Alb emergent constraint.
The paper involves sophisticated statistical methods, as one might expect with Tapio Schneider being a joint author. Not many climate science papers involve concepts such as Kullback-Leibler divergence. The authors use this divergence measure to weight models, as they are doubtful that fitting a linear relationship between a proposed emergent constraint and ECS – as is done in many emergent constraint studies – is appropriate. Brient & Schneider make the important point that:
procedures that … first infer a linear relation (regression line) between ECS and variables … from models and then use that linear relation to constrain ECS given observations… can be strongly influenced by ‘‘bad’’ models that are not consistent with the data but exert large leverage on the inferred slope of the regression line. If the slope of the regression line is strongly constrained by bad models, such a procedure can, misleadingly, yield very narrow ECS estimates that could not be justified by focusing on ‘‘good’’ models, which are broadly consistent with the data. By contrast, our multimodel inference procedure assigns zero weight to models that are inconsistent with the data.
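The leverage problem described in that passage is easy to demonstrate with a toy calculation. The numbers below are invented purely for illustration (they are not actual CMIP5 metric or ECS values): a single model whose metric lies far from the observed value can dominate an ordinary least-squares slope even when the remaining models show essentially no relationship.

```python
import numpy as np

# "Good" models (invented numbers): metric x near the observed value,
# ECS y showing essentially no relationship with x
x_good = np.array([-0.10, -0.05, 0.00, 0.05, 0.10])
y_good = np.array([3.2, 2.9, 3.1, 2.8, 3.0])

# One "bad" model: metric far from observations, high ECS -> large leverage
x_all = np.append(x_good, 1.0)
y_all = np.append(y_good, 5.0)

slope_good = np.polyfit(x_good, y_good, 1)[0]   # ~ -1.0: no real relation
slope_all = np.polyfit(x_all, y_all, 1)[0]      # ~ +1.9: the outlier sets the slope

print(f"slope without bad model: {slope_good:.2f}, with: {slope_all:.2f}")
```

Regressing through the single high-leverage point flips and strengthens the apparent relationship, which is exactly why a regression-based constrained ECS range can look misleadingly narrow.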
Regression of ECS on the strength of the relationship between TLC reflection variability and SST with models weighted by how “good” they are, using either Brient & Schneider’s model weightings or those derived directly from model likelihoods given the observational probability density, explains almost none of the intermodel variance in ECS. That justifies their rejection of the usual linear relationship assumption.
However, Brient & Schneider’s Kullback-Leibler-divergence-based weighted model-averaging approach assumes that the uncertainties in the model and observational estimates of the TLC reflection–SST relationship should be identical. The divergence measure penalises differences in the widths (and shapes) of the observation-derived and model-derived TLC reflection–SST relationship estimates as well as differences in their means. In my view the equal-uncertainty assumption is not valid, and accordingly the justification given for using the divergence measure does not stand up.
Aside from statistical issues, model-averaging is unsatisfactory given that many models are closely related to and/or have similar characteristics to other models, but their weightings are not reduced to reflect that. For instance, the method gives substantial weightings to both IPSL-CM5A-LR and IPSL-CM5A-MR, two high-sensitivity models that differ only in their resolution.
More fundamentally, if a constraint is satisfied by one or more low sensitivity models and one or more high sensitivity models, how can it be considered to give useful information about ECS? The models are not a random sample. In such a case, whether there are more models with high sensitivity than low sensitivity that satisfy the constraint depends arbitrarily on development decisions by modelling centres, and their choices as to which offshoot model variants to include in CMIP ensembles. Accordingly, the fact that more of the constraint-satisfying models have high ECS than have low ECS constitutes very weak evidence that ECS is high. Brient Alb is such a case: both IPSL-CM5A-LR and IPSL-CM5B-LR, respectively a high (4.1°C) and low (2.6°C) ECS model, closely satisfy the observational constraint. Moreover, the later, lower sensitivity, CM5B model has improved representation of the convective boundary layer and cumulus clouds, of tropical SW CRF and of mid-level cloud coverage.
Brient & Schneider’s results in fact change little if a more reasonable method of weighting models that does not penalise differences in the uncertainty between the model and observational estimates of the TLC reflection–SST relationship is used. However, if in addition the IPSL-CM5A-LR model is replaced by a second copy of IPSL-CM5B-LR – giving a 2-to-1 weighting in favour of IPSL’s more advanced, improved CM5B model over the earlier CM5A version rather than vice versa – the resulting weighted CMIP5 model ECS uncertainty distribution differs little from the raw unweighted distribution, apart from ECS values below 2.5°C being less likely.
It seems doubtful that the Brient Alb constraint actually provides much constraint on ECS. It does suggest that current models with an ECS of below 2.5°C are poor at simulating the observed TLC reflection–SST relationship, but that may be unrelated to their lower than average sensitivity. These conclusions are consistent with those of the study’s authors.
The Zhai constraint is very similar to that in Brient Alb, except that it uses seasonal variability in the extent of marine low clouds rather than deseasonalized variability in their total SW reflection. Zhai et al. also studied the sub-tropics (20°–40°S and 20°–40°N) rather than the tropics (30°S–30°N) and identified low cloud regions using a different method. Also, the ECS for the NorESM1-M model is significantly wrong in the Zhai scatter plots, which suggests that their regression estimates, at least, might be somewhat inaccurate.
Moreover, the linear relationship that the Zhai study fits between the seasonal-variability-derived relationship of low cloud reflectivity with SST and ECS is dominated by “bad” models that are inconsistent with the observational constraint. This is exactly the problem that led Brient & Schneider to eschew fitting a linear relationship. However, it appears that the constrained best estimate for ECS that Zhai et al. derive is simply the unweighted mean and standard deviation of ECS values for the seven models having seasonal-variability-derived relationships of low cloud extent with SST that are consistent with their observational estimate. That method makes much less allowance for uncertainty than does Brient & Schneider’s methodology, and accounts for the Zhai constraint on ECS being narrow.
Worryingly, the assessment of consistency with the observational estimate of the relationship of low cloud extent with SST based on seasonal variability differs greatly between the Zhai and Brient studies for several CMIP5 models. If Brient & Schneider’s assessment of consistency with the observational constraint had been substituted for Zhai et al.’s for the four models for which they differ radically, the resulting constrained ECS range would have a virtually identical median to that of their CMIP5 model-ensemble.
The Zhai methods have more shortcomings than those used by Brient & Schneider for their very similar emergent constraint, and the radical difference for four CMIP5 models in the two studies’ assessment of consistency with the observational constraint from seasonal variations is a major concern. Brient & Schneider found that models were not very good at reproducing the seasonal cycle in low cloud reflection, and the correlation with ECS for their seasonal variability measure was relatively low – much lower than Zhai et al. found. These issues, taken together, severely dent the credibility of the Zhai et al. constrained ECS estimate.
Summary and conclusions
It is fairly clear that all potentially credible emergent constraints on ECS in climate models that have been investigated really constrain SW low cloud feedback (Qu et al. 2018). Even the Cox constraint, which is based on fluctuation-dissipation theory, is strongly dominated by SW cloud feedback. That is also likely to be the case for emergent constraints that are proposed in future, since low cloud feedback is the dominant source of inter-model variation in ECS.
The fairly detailed Caldwell et al. (2018) review identified four emergent constraints that were potentially credible, although it did not investigate them in detail. Of these, more detailed examination casts doubt on the credibility of the Sherwood D and Brient Shal constraints, which in any event each explain only about 15% of the ECS variance in CMIP5 models.
The Brient Alb and Zhai emergent constraints are very similar; they both involve the variation of low cloud SW reflection with SST. Zhai makes much less allowance for uncertainty than does Brient Alb, hence its narrow constrained ECS range. However, in Brient Alb seasonal variations – as used in Zhai – were considered to produce a less satisfactory constraint than deseasonalized variations, since models are relatively poor at reproducing the observed seasonal cycle. There are also several cases where the two studies’ assessment of consistency with the observations of a model’s seasonal variations in low cloud reflectivity differ radically. Substituting the Brient Alb assessment for Zhai’s in those cases would bring the median Zhai constrained estimate into line with the unconstrained median of the CMIP5 model ensemble. Therefore, the Zhai constraint seems unreliable.
The main implication of the Brient Alb emergent constraint is that the relationship between deseasonalized tropical low-cloud SW reflection and SST in the low-sensitivity inmcm4 and GISS-E2 models is far from that observed. However, it is not clear that has much to do with low ECS as such, since the observed relationship is well matched by the MRI-CGCM3 and IPSL-CM5B-LR models, which both have bottom-quartile ECS values.
There is ample reason to doubt that the response of low-level cloudiness to environmental conditions, and hence low cloud feedback, is realistic in CMIP5 models. A recent review of shallow cumulus in trade-wind regions, a major contributor to low cloud feedback in CMIP5 models – which parameterize clouds – had this to say:
In models with parameterized convection, cloudiness near cloud-base is very sensitive to the vigor of convective mixing in response to changes in environmental conditions. This is in contrast with results from high-resolution models, which suggest that cloudiness near cloud-base is nearly invariant with warming and independent of large-scale environmental changes.
It goes on to question:
whether the strongly negative coupling between low-level cloudiness and convective mixing in many climate models (as shown in Sherwood et al. 2014; Brient et al. 2015; Vial et al. 2016; Kamae et al. 2016) may be a consequence of parameterizing the convective mass flux in a manner that does not sufficiently account for its link to the mass budget of the subcloud layer.
This concern is supported by Zhao et al. (2016), who found that by varying the cumulus convective precipitation parameterization in the new GFDL AM4 model they could engineer its climate sensitivity over a wide range without being able to find any clear observational constraint that favoured one version of the model over the others. The fact that developing aspects of a model can leave its satisfaction of observational constraints unaltered but drastically change its sensitivity seems to fatally undermine the emergent constraint approach, at least in relation to all constraints for which this can occur.
More generally, Qu et al. (2018) point out that any proposed ECS constraint should not be taken at face value, since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.
There is another serious problem with emergent constraints that do not involve the response of the climate system to increasing greenhouse gas concentrations over multidecadal or longer periods. It is that climate feedback strength in GCMs depends strongly on the pattern of SST increase, particularly in the tropics, one reason being that tropical marine low clouds respond to remote as well as to local SST. In simulations by atmosphere-only CMIP5 models driven by evolving SST patterns (AMIP simulations), if the observed historical evolution of SST patterns is used feedback strength is much greater (climate sensitivity is lower) than if the historical evolution of SST simulated by coupled CMIP5 models is used. And feedback strength in AMIP simulations driven by the evolving SST pattern from simulations involving an initial abrupt increase in CO2 concentration is even lower.
The dependence of sensitivity on the SST warming pattern, in GCMs at least, implies that even if a valid, strong emergent constraint on ECS in coupled GCMs were found, and there were no shortcomings in the atmospheric models of GCMs that satisfied the constraint, that would be insufficient to constrain real-world ECS. Doing so would also require establishing that the long-term evolution of tropical SST patterns in coupled GCMs forced by increasing greenhouse gas concentrations is realistic, notwithstanding that CMIP5 historical simulations do not match the observed warming pattern.
It is interesting that Tapio Schneider, joint author of the Brient Alb paper and someone with considerable mathematical/statistical abilities, advocates caution regarding emergent constraint studies. Such caution is amply justified. It seems doubtful that emergent constraints will be able to provide a useful, reliable constraint on real-world ECS unless and until GCMs are demonstrably able to simulate the climate system – ocean as well as atmosphere – with much greater fidelity, including as to SST warming patterns under multidecadal greenhouse gas driven warming.
 An emergent constraint on ECS is a quantitative measure of an aspect of GCMs’ behaviour (a metric) that is well correlated with ECS values in an ensemble of GCMs and can be compared with observations, enabling the derivation of a narrower (constrained) range of GCM ECS values that correspond to GCMs whose metrics are statistically-consistent with the observations.
 Caldwell, P, M Zelinka and S Klein, 2018. Evaluating Emergent Constraints on Equilibrium Climate Sensitivity. J. Climate. doi:10.1175/JCLI-D-17-0631.1, in press.
 The four studies involved are:
Brient, F., T. Schneider, Z. Tan, S. Bony, X. Qu, and A. Hall, 2015: Shallowness of tropical low clouds as a predictor of climate models’ response to warming. Climate Dynamics, 1–17, doi:10.1007/s00382-015-2846-0, URL http://dx.doi.org/10.1007/s00382-015-2846-0.
Brient, F., and T. Schneider, 2016: Constraints on climate sensitivity from space-based measurements of low-cloud reflection. Journal of Climate, 29 (16), 5821–5835, doi:10.1175/JCLI-D-15-0897.1, URL https://doi.org/10.1175/JCLI-D-15-0897.1.
Sherwood, S.C., S. Bony, and J.L. Dufresne, 2014: Spread in model climate sensitivity traced to atmospheric convective mixing. Nature, 505(7481), 37–42.
Zhai, C., J. H. Jiang, and H. Su, 2015: Long-term cloud change imprinted in seasonal cloud variation: More evidence of high climate sensitivity. Geophysical Research Letters, 42 (20), 8729–8737, doi:10.1002/2015GL065911.
 SST: sea-surface temperature.
 Tapio Schneider developed RegEM, an impressive ridge-regression based algorithm for infilling missing data. A less satisfactory variant of RegEM has been much used by Michael Mann for paleoclimate proxy-based reconstructions.
 Kullback-Leibler divergence is a measure of relative entropy, which can be used to measure how similar two probability distributions are.
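 As a minimal illustration of how such a divergence can be turned into model weights – a sketch of the general idea only, not Brient & Schneider’s actual computation, and with invented numbers – the closed-form KL divergence between two one-dimensional Gaussians can be used directly:

```python
import math

def kl_gaussian(mu_p, sd_p, mu_q, sd_q):
    """Closed-form KL divergence D(P||Q) for one-dimensional Gaussians."""
    return (math.log(sd_q / sd_p)
            + (sd_p**2 + (mu_p - mu_q)**2) / (2 * sd_q**2)
            - 0.5)

# Hypothetical observational estimate (mean, sd) of a metric, and two models
obs = (0.0, 1.0)
model_a = (0.2, 1.0)   # close to the observations
model_b = (3.0, 1.0)   # far from the observations

# Turn divergences into normalised weights: small divergence -> high weight
raw = [math.exp(-kl_gaussian(mu, sd, *obs)) for mu, sd in (model_a, model_b)]
w = [r / sum(raw) for r in raw]
print(w)  # model_a receives nearly all the weight

# A model with the right mean but the wrong spread is also penalised
print(kl_gaussian(0.0, 2.0, *obs))  # positive, so the model is down-weighted
```

The last line illustrates the point made in the next footnote: the divergence is positive, and the model is down-weighted, even when the model and observational means coincide exactly.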
 Brient & Schneider’s method thus down-weights a model whose estimate has a larger or smaller uncertainty than the observational estimate even if the model’s and the observational mean estimates are identical.
 Brient and Schneider justify using a divergence measure based on the similarity of the model and observational estimate PDFs on the basis that “they are estimated from time series of the same length L so that their sampling variability can be expected to be equal if a [statistical] model is adequate”. But the observational estimate uncertainty includes measurement and related errors that are not present in the model estimate uncertainty (although these appear to be relatively unimportant in this case), while only the model estimates sample decadal/multidecadal climate system internal variability, which very possibly affects the TLC reflection–SST relationship. Moreover there is little overlap between the periods used to estimate the TLC reflection–SST relationship from model simulations and observations, and there were three major volcanic eruptions during 1959-2005 but none during the 2000-2015 observational period. Volcanic eruptions have complex, major effects on atmospheric circulation and may well temporarily disrupt the TLC reflection–SST relationship. More fundamentally, whether a GCM realistically simulates the actual TLC reflection–SST relationship and whether it simulates climate system internal variability (which will impact the uncertainty in the estimate of its TLC reflection–SST relationship) are two quite different matters, the second of which appears to have little relevance to the emergent constraint involved. Therefore, there is no reason to expect the uncertainty of the model and observational estimates of the TLC reflection–SST relationship to be the same, and comparing model and observational estimate PDFs (as opposed to just comparing their central measures) does not seem to me appropriate here.
 Tapio Schneider has often acknowledged this point.
 Weighting models by the likelihood of the observed TLC reflection–SST relationship at the model’s best estimate (mean) of it, widening the observational uncertainty to allow for the average uncertainty of the model estimate means, is a more reasonable approach. Their raw (unweighted) model ECS uncertainty 17-83% and 5-95% ranges of 2.4–4.25°C and 1.85–4.8°C can be closely matched by assigning each model’s ECS a standard deviation of 0.5°C. If one then weights the models on the proposed basis, the uncertainty ranges become 2.9–4.45°C and 2.3–4.9°C, very close to Brient & Schneider’s weighted ranges. However, if IPSL-CM5A-LR is replaced by a second copy of IPSL-CM5B-LR, the 17-83% and 5-95% ranges become 2.65–4.4°C and 2.15–4.85°C. Moreover, the median weighted ECS estimate is 3.6°C, well below the weighted posterior mode of 4.05°C that Brient & Schneider point to, and not much different from the raw model median ECS of 3.45°C.
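 The kind of calculation described in this footnote can be sketched as a weighted Gaussian mixture. The ECS values and weights below are invented for illustration; only the 0.5°C per-model standard deviation is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented model ECS best estimates (deg C) and likelihood-based weights
ecs = np.array([2.1, 2.6, 3.0, 3.4, 3.8, 4.1, 4.5])
weights = np.array([0.02, 0.10, 0.20, 0.25, 0.20, 0.15, 0.08])

# Give each model's ECS a 0.5 deg C standard deviation (as in the text) and
# draw a Monte Carlo sample from the resulting weighted Gaussian mixture
idx = rng.choice(len(ecs), size=200_000, p=weights / weights.sum())
sample = rng.normal(ecs[idx], 0.5)

lo17, med, hi83 = np.percentile(sample, [17, 50, 83])
print(f"17-83% range: {lo17:.2f}-{hi83:.2f} deg C, median {med:.2f} deg C")
```

Replacing one model’s entry in `ecs` and `weights` with a copy of another’s, as done with IPSL-CM5A-LR and IPSL-CM5B-LR in the text, shifts the resulting percentile ranges in exactly the manner the footnote describes.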
 Tapio Schneider, personal communication, 2018.
 Although in Brient Alb the correlation of seasonal TLC SW reflection variability with ECS was relatively low, the seasonal cycle is stronger in the sub-tropics than in the tropics. However, much of the marine low cloud regions are situated in the 15°–30° zones, so Brient Alb should have captured most of the seasonal variability in overall tropical and sub-tropical low cloud reflection.
 Also, the model numbering differs slightly between Zhai et al. Table 1 and their Figures 2 and 3, which raises the possibility of models having been mixed up in their calculations.
 In particular, CSIRO-Mk3-6-0, MRI-CGCM3, HadGEM2-ES and NorESM1-M.
 A crude revised central estimate is 3.4°C, being the median ECS of the 7 models (CGCM3.1, HadCM3, CanESM2, IPSL-CM5A, MRI-CGCM3, NCAR-CAM5, NorESM1-M) whose seasonal variability lies within the uncertainty range for the observational estimate, after substituting the Brient & Schneider consistency assessment for the 4 models where it differs radically. CGCM3.1 is a CMIP3-only model; its ECS is 3.4°C but excluding it would increase the constrained median ECS to 3.5°C, in line with the unconstrained median ECS for Zhai’s CMIP5 models (including HadCM3, which is both a CMIP3 and CMIP5 model) of 3.45°C.
 Qu, X., A. Hall, A. M. DeAngelis, M. D. Zelinka, S. A. Klein, H. Su, B. Tian, and C. Zhai, 2018: On the emergent constraints of climate sensitivity. Journal of Climate, 31 (2), 863–875, doi:10.1175/JCLI-D-17-0482.1.
 Cox, P. M., C. Huntingford, and M. S. Williamson, 2018: Emergent constraint on equilibrium climate sensitivity from global temperature variability. Nature, 553, 319.
 Vial, J., Bony, S., Stevens, B., & Vogel, R. (2017). Mechanisms and model diversity of trade-wind shallow cumulus cloud feedbacks: a review. Surveys in geophysics, 38(6), 1331-1353
 Zhao, M., Golaz, J. C., Held, I. M., Ramaswamy, V., Lin, S. J., Ming, Y., … & Guo, H. (2016). Uncertainty in model climate sensitivity traced to representations of cumulus precipitation microphysics. Journal of Climate, 29(2), 543-560
 Gregory, J. M., and T. Andrews, 2016: Variation in climate sensitivity and feedback parameters during the historical period. Geophys. Res. Lett., 43: 3911–3920.
Zhou, C., M. D. Zelinka and S. A. Klein, 2016: Impact of decadal cloud variations on the Earth’s energy budget. Nature Geoscience, 9, 871–874.
 Except during the first two or three decades of abrupt CO2 increase simulations.
 See Tapio Schneider’s illuminating blog post on 24 January 2018 at climate-dynamics.org/Statistical-inference-with-Emergent-Constraints/
Moderation note: as with all guest posts, please keep your comments civil and relevant.
Thanks for this, Nic–very clear and accessible.
I guess you can apply these constraints to the Roman and Medieval warm periods and explain why those upper bounds were constrained and why those warm periods ended and how these constraints played a part.
If not, throw them all out and look for something else, like, it snows more when oceans get warmer and more thawed and the more snowfall increases ice volume and more ice weight increases ice flow and the increased ice extent limits the upper bound of temperature and causes cooling.
A Roman or Medieval or Modern warm time is a time with less ice extent. A cold period in between warm periods is a time with more ice extent. This Emergent constraints posting does not address how the ice extent is influenced by the constraints. It looks like Nic Lewis has plenty of problems with the Emergent constraints without me saying anything.
What is an emergent constraint? Is that something new that did not exist before the increase in burning of fossil fuels?
Past climate cycles were bounded, nothing new is needed to bound future climate cycles. Whatever has worked, will work again!
Reblogged this on Climate Collections.
Emergent constraints don’t work, as Nic has shown. The models themselves don’t work. And cannot, because computational intractability forces large grid cells, which forces parameterization of important phenomena like convection cells, which forces parameter tuning to best hindcast (for CMIP5 explicitly YE2005 back three decades to 1975), which drags in the attribution problem. See a 2015? guest post at WUWT, The Trouble with Climate Models, for details and illustrations.
I see Nic’s West Country neighbor Chris has invited ristvan to umpire his latest game of mathematical solitaire.
Unfortunately it’s far too wet and cold to play cricket over here at present. Oh for the days of constant warming, something we here in the UK have not known this century.
RS, not umpire. Alternative calculation routes providing more confirmation for Monckton’s basic error finding. We had been in previous rather sharp disagreement (see for example my post here on his ‘Irreducibly Simple Equation’). We are now in agreement on 1.45-1.55 or thereabouts for ECS.
Lewis and Curry 2014 worked strictly from AR5 IPCC values. Gave an IPCC ‘approved’ way to use the energy budget ECS method to show CMIP5 modeled ECS was ~2x high. The important point for Monckton’s method was they also calculated the ‘IPCC value’ TCR. The resulting ECS/TCR ratio gives the ‘not yet evident’ 1.25x that IPCC cannot argue with—their own data; the energy budget method is well established, with several papers since Otto 2013 reaching similar conclusions. Lewis and Curry also varied time frames somewhat to ensure their result was robust, and even gave the uncertainty intervals around their mode estimates. All posted here previously.
Lewis 2015 also posted here simply reran that analysis using more up to date aerosol forcings from Stevens. 1.5.
My contribution in addition to remembering those things was just decomposing the AR4/AR5 forcings and remarking that several papers had observational GCM precipitation ~2x modeled, which has obvious wvf consequences. See for example Wentz, Science, 2007, Dai, J. Climate 2006, and Wynant GRL 2006 for CMIP3 precipitation comparisons cited in the Climate Chapter of ebook The Arts of Truth as prepub reviewed by Richard Lindzen. Or see essays Humidity is Still Wet and Cloudy Clouds in ebook Blowing Smoke for AR5 equivalents. Regards.
Also, you might provide the links to the three original WUWT Monckton posts so denizens can see the quasi realtime peer review that took place, plus the Spencer/Monckton reply posts over at Roy’s. If you want to play, provide the original source material links for denizens here rather than vvattsupwiththat sideshow commentary.
Wanted to personally thank Nic Lewis for these three guest posts, and Judith for hosting them. I had noticed that ‘observational constraints on models’ were mostly trying to bolster the high end of the IPCC ECS range. Automatically suspect. But I had neither the chops nor the energy to figure out what was really going on. Now, thanks to you both, I ‘got it’ and know where to refer as I engage others elsewhere on the key CAGW idea, ECS.
Bottom line, it’s warmunist game over and all that is left scientifically is mop-up operations on fringe outlier counters. As here.
Thanks for your comment, ristvan. Glad you found the three articles useful.
The last time I read something important about the GCMs was in 2009. Nothing has changed. Please read the following reference, it is very good, and very apropos.
Lahsen, M. (2005). Seductive simulations? Uncertainty distribution around climate Models. Social Studies of Science 35 (6): 895-922. DOI: 10.1177/0306312705053049
Istvan, do let us know when you move on, from quasi realtime peer review, better known as bloggerell, to actually doing actual science in really peer-reviewed scientific journals.
As you have some tens of thousands to choose from, folks are beginning to notice your failure to publish anything in any of them.
Happy Easter from Rabbett Run !
Suck it up, Russell!
Why would anyone other than an academic employed by a university or govt lab bother to publish in the pal-reviewed scientific journals?
“…bother to publish in the pal-reviewed scientific journals?”
Recognition and acclaim. No different than people in Hollywood; politics; or mass school shooters. “Listen to me!”
Science, as you know, is a process which does not confer nor deny recognition. It’s a tool, used sometimes to scratch a curious itch. People of all flavors can use the tool, most, not well. “The poor workman always blames his tools.” They remain: a poor workman.
The “pal-review” literature acts like a newspaper clipping, suitable for scrapbooks, to be brought out at family/tribal occasions, having meaning for oneself and maybe a few others; mostly, a “yes”, an “ah hah”, an “oh”, didn’t you look cute, at that time, and the voices trail off. Occasionally, at elevated tones: “the sound and fury signifying nothing.”
The charades and games played for pal-reviewed papers: is it number of publications? appearing in “quality” journals? cited many times by others? doesn’t prevent fraud like: Chen, at the Ohio State University Cancer Center, manufactured data, bad outcomes left out, etc. 18 years of NIH cancer research up in smoke, open to question.
Boy, does this remind me of the Climate cabal.
Why would anyone…bother to publish in the pal-reviewed scientific journals?
Because, Professor Curry, you and the rest of your department rightly taught them to take what they learned from your courses, and apply it to advancing the state of the art.
At least that’s what they teach in these parts.
Did they also teach to hide your methods and data?
curryja: Why would anyone other than an academic employed by a university or govt lab bother to publish in the pal-reviewed scientific journals?
To reach a wider audience.
Besides the journals, academic publishers also publish edited volumes and monographs, which are displayed at professional conferences and listed in their mailed and emailed promotions.
Judith is right. Overall literature quality is poor and bias prevalent.
I’m not an academic and I would publish in a “pal” reviewed journal.
I don’t know about you but my pals always, always push me harder than I push myself. Dang, I can’t tell you how many times Zeke has pushed me to look at X, look at Y, look at Z. Same with Nick Stokes, Robert Way, Cowtan, Victor Venema.
In general though, published work should be REVIEWABLE. That means data as used and code as run. Rud doesn’t do that.
He publishes words and then refers to these publications as the final word on a matter.
Contrast this with Nic who actually publishes his code and data.
One is science, the other is masterbation
I will let this one through, just once, with my comment. Data analysis is one aspect of science; anyone presenting a new analysis of data should of course make their data (and preferably code) available. A higher level aspect of science is building a model that explains data, and makes predictions. Yet a higher level is synthesis and assessment of data and models in the evaluation of theories and hypotheses and even posing new hypotheses (no, this is not cut and paste science, unless you think the IPCC is doing cut and paste science). This latter is what Javier does, what Rud does, what I (mostly) do, what the IPCC does. This is the difference between Climate Audit and Climate Etc. At Climate Audit, the focus is on the data and analysis methods. At Climate Etc, there is more of a focus on synthesis and assessment.
Science is a process, and in highly politicized fields such as climate science, the official peer review process can get in the way of scientific progress with ‘gate keeping’. There is too much careerism and politics that is motivating many individual scientists and the institutions that support them (including editors and publishers of scientific journals). This is why many of the best medical doctors eschew ‘evidence based medicine’ that is driven by consensus meetings of physicians that decide what treatments and medications are insurable for specific symptoms/illnesses (largely pressured by the pharmaceutical industry). There are too many unknowns, and these other physicians are pushing the knowledge frontier and treating the whole patient.
Peer review is rather a joke, it is the rare reviewer that spends more than 2 hours reading and reviewing a journal article. And too many scientists are too wrapped up in defending their own papers and pet ideas, and are hardly objective reviewers. How many papers have you seen published in Nature Climate Change that did not survive the first week of their press release owing to post-pal review from the larger community of scientists (including blogs)?
Science is an ongoing process. Social media has opened up scientific dialogue so that a scientist is not just getting feedback from their immediate collaborators and the casual reviews mandated by the journal editor, but from a very wide range of perspectives. Social media is breaking down silos in the evaluation of scientific research.
Criticizing people for not joining in data 'masturbation' when they are synthesizing and assessing a body of research is just pointless. So please avoid making such comments about people who post here.
Judith gives herself too little credit as an educator.
If Georgia Tech’s extension school could teach Jimmy Carter enough remedial calculus to run a nuclear submarine, surely its atmospheric science courses can benefit those of her commentariat in greatest need of them.
She need but assure them that their final exams will not be pal reviewed.
Peer review is rather a joke, it is the rare reviewer that spends more than 2 hours reading and reviewing a journal article.
Not in my field. Not in my experience.
I wonder how we will respond to a critical A.I. review of science papers?
A.I. is replacing the paralegal profession. Paralegals spend most of their time building or reviewing legal documents and the process is very similar to the peer review process.
If nothing else it will really speed up the process:
“A new study by LawGeex pitted 20 experienced lawyers against the LawGeex AI, which was designed to solve legal problems.
The competition was to see who could spot errors in contracts with the greatest speed and accuracy. Welcome to the riveting, high-stakes world of contract law, people.
The lawyers and AI were given five non-disclosure agreements (NDAs) and asked to find the errors.
Needless to say, humanity got rekt.
The human lawyers found 85% of the errors, while the AI found 94%.
And while the best human lawyer did manage to equal the AI in terms of speed, overall it was an absolute washout.
It took the humans an average of 92 minutes to get through the task. The AI breezed through in 26 seconds.
Also, the lawyers drank 12 coffees, to the AI’s zero.
Steven, can you recommend one of those Pay for Play Journals of Last Resort that accepts anything for publication that has a check for $1500 attached? We have forgotten the name of the one BEST used, after being snubbed by the Team journals.
I am confident that Mosh has submitted his work to lots of legitimate, non-pal-review scientific journals that don't require payment.
He will pop up any time now and tell us which ones; then you will feel really humbled.
My recollection is that they didn’t have to pay, Tony. It was a freebie, to get the famous journal’s initial issue off the ground.
Don’t knock masturbation, Mosh (it’s sex with someone you love)…
This reverential tone regarding science and peer review is very naive. All the prestigious house organs of science now acknowledge serious problems. I did a very careful exposition at SOD’s. Only those who are young and naive have any excuse for not knowing that there is rampant positive-results bias and, more importantly, selection bias in most of science. It’s a result of deep cultural flaws, one of which is an ideological bias that science is truth. Nonsense.
Particularly sarcastic non-scientists are particularly culpable for their ignorance.
Pat Cassen, my experience agrees with Judith’s. It’s rare for reviewers to take the time required. I’ve reviewed a lot of papers but got tired of papers being accepted despite my recommendations. The worst of it is papers that don’t contrast their work with that of others and provide balance to the selling of their work.
Judith makes a claim about how much time people spend on a private, unauditable process; perhaps her two-hour estimate is merely personal.
All the reviews I read took more than two hours. I take more than two hours.
While I hate to self-reference, I don’t want to copy the important comments either. https://scienceofdoom.com/2017/12/24/clouds-and-water-vapor-part-eleven-ceppi-et-al-zelinka-et-al/#comments
Here’s a succinct sample: “Here’s another very good read on preregistration of trials. Pull quote: ‘Loose scientific methods are leading to a massive false positive bias in the literature.’”
There are perhaps a dozen others with references.
Steve: Your experience differs from mine. Reviewing is unaudited and uncompensated and takes time away from your own research. Industrial people usually can’t take time to do it. You will only do it if you think you can learn something important or if you already know the work well (much in the literature is repetitive). Most papers are very small improvements on well known work or are essentially marketing for the researchers, their models, codes, or theories.
That’s a long comment thread you point at with rather inexplicable pride.
Which bits are of interest here?
The parts where you repeatedly demonstrate an inability to follow basic norms of behaviour?
Or perhaps where you dive straight into a boilerplate critique of a paper without bothering to actually understand it?
Or are there some other parts of your commentary you are particularly proud of?
Do help – even your most avid admirers may baulk at the suggestion of wading through your entire oeuvre in search of pearls.
More stuff from the medical area, which cannot translate to climate science unless a case is made, and there has been no case made.
If Ioannidis were to actually look at climate science, I sort of suspect he would spend most of his time on bad skeptic papers. Like some of the sea level papers out of Australia.
VTG reverts to consensus enforcement again and focuses on tone rather than substance. Ankle biting is not going to convince anyone.
I point to that comment thread because it contains many long comments with references to positive results bias and the deep cultural dysfunction of the scientific community. I don’t have the time to repost them here.
JCH, if you actually read the material, you would see that it discusses science and academia generally and describes a culture where overselling and overconfidence are almost required. It is perfectly applicable to the vast amount of climate science that uses GCMs to produce findings. I would estimate that at least half of climate science papers rely on them for their conclusions.
So what? You’ve established nothing. The examples are mostly medical science, and the rewards and motivations there are starkly different.
I am with JCH, on this one. Medical science is one of the hard sciences. They know how to properly use thermometers. No adjustments necessary.
When hundreds of millions are riding on thermometer readings (results), people will cheat. In many cases, nobody is watching. It’s a different world. They also get protection from Congress, because money talks. In the clinical trials example, they went from nobody watching to somebody watching. Apparently that made a big difference.
In climate science there are often multiple groups working on the same patient. People are watching. Makes it harder to cheat.
There is no Ioannidis in climate science. Only one real attempt has been made, and you mock it: Berkeley Earth.
You already know what answer you want, which makes you corrupted and pointless.
JCH, I know something about the medical literature, and it’s getting better because people acknowledge the problems and it’s a field where consensus enforcement barely exists. Climate science is much worse because there is denial of the problems, strong consensus enforcement, and politically motivated stealth attempts to shape the science.
My pal JCH is correct. When a lot of money, fame and glory is on the line, some people will be tempted to cheat. Of course, when people are motivated by an irrational fear that CO2 is going to burn up the planet, it will not affect their honesty or contribute to confirmation bias.
Everybody knows that the climate sigh-in-tists are way more altruistic and less greedy than the over-priced medical researchers, who actually produce stuff that saves lives and consequently generates profits.
He protests that I mock Berkeley Earth. Just having a little fun with my pal Steven. It’s the arrogant establishment climate science Team that gave BEST the stiff arm.
Don Monfort, I’m not sure you are disagreeing with me. Whether it’s noble-cause corruption, political-viewpoint corruption, or simple venality, the effect is the same — bad science.
In any case, there is a critical difference. Medical science is actively taking steps such as pre-registration of trials (one of my comments was on this) and improved statistical methods. Climate science is in total denial that there is a problem (at least in public — Climategate shows that they are pretty two-faced about this denial), and it seems to refuse to rely on independent statisticians. That’s a huge problem.
The thread at SOD is instructive on many things, aside from the acknowledgement by the most prestigious house organs of science that there is a serious problem. Prof. Dessler showed, in my view, a remarkable ability to refuse to respond to relevant criticisms (sourced from the literature) and to fall back on his own authority. It was disappointing. I would not let a vendor or university professor get away with that in my arena. When it’s a matter of public health or safety, we need to be insistent and persistent and expose the issues.
JCH, you are just hurling insults. You responded to nothing specific: not the scores of references on the subject at SOD, nor the reference I quoted here in this thread. You should try harder to be persuasive. I can’t meaningfully respond to vacuous arguments.
Ioannidis was not one of my dozen or so references. He is largely irrelevant and just a small part of the attempt to address the problem. Positive results and publication bias have been well documented for at least 20 years. The evidence is becoming irrefutable.
When Mosher wrote, “I don’t know about you, but my pals always push me harder than I push myself. Dang, I can’t tell you how many times Zeke has pushed me to look at X, look at Y, look at Z. Same with Nick Stokes, Robert Way, Cowtan, Victor Venema,”
He’s describing the kinds of peer reviews that happen on blogs, not those that happen in peer review for papers in journals. For one, he’s naming the reviewers… but I suspect more importantly he interacts closely with those pals on issues. None of that is how “peer review” works.
So your work was pal reviewed?
She doesn’t have any pals on the Team. They took her chair.
No, Professor Curry resigned.
Carling Hay took on the IPCC in 2015 and won. They will be reducing 20th-century SLR to ~1.1 mm/yr from the Ivory Tower’s ~1.9 mm/yr in the next IPCC report. Professors Curry and Koonin have opposed that on the internet. Koonin probably opposed it in the recent courtroom tutorial.
The opposition to good science can be overcome.
Cooling the past. Reducing SLR history. Keys to success. Has a nice symmetry to it. If the future doesn’t suit your purpose, change the past.
She has been bullied, shunned and libeled for straying from the reservation. You know that. You are part of it.
A blog article on climate is better understood as a review article. Technically it doesn’t need peer review, as there is no new science – what science there is has been peer reviewed. One intention is to communicate to a larger or smaller audience – or just yourself – and in greater or lesser complexity – paths to new hypotheses about nature and the universe.
Each of us is capable of it. Including a trained environmental scientist and engineer like myself. Environmental science relies on teams with multi-disciplinary knowledge and skills from all walks of life. At its heart there should be the richness and promise of science and technology in a fruitful resolution of wicked problems.
You seek to get Ristvan into your own circles, your games, so you can impose your rules on him, for attempted control and then silencing.
Science does not work that way. It usually works through progressive improvements to hypotheses. No part of the scientific method demands peer review, though it often worked well until its disgusting bastardisation, as revealed in Climategate. Shame on those who have not corrected the miscreants and brought peer review back to a useful state.
I can relate to such undercurrents as you play, through similar experiences to those we have seen from Ristvan. You reach a stage of seniority whereby picking up your marbles and leaving is more effective than playing the loaded game. Geoff
The article starts with “The two strongest potentially credible constraints, and conclusions.”
This is the wrong approach. No constraint is better than another, and none should be just ignored. They are all useful because they all hold models up to observations. In fact, the more constraints the better, to discount the poorer models and promote the better ones. What is needed is a collective approach to emergent constraints rather than the singling out of one, so this whole piece went in the wrong direction by trying to find a single magic bullet.
There is no right and wrong. 2+2 can equal whatever pleases. And you get a gold star for imagining what you wish it could be.
That is disarmingly succinct and correct, jim2.
Discounting evidence is wrong. You only end up misleading yourself. Take all the constraints together as evidence.
Jim D wrote, “Discounting evidence is wrong.”
Is that because all evidence is equal?
Evidence is evidence. Science relies on evidence.
Re-defining peer-review: A coconut is a coconut. Science relies on coconuts. We get to decide what’s coconuts and what ain’t.
Jim D wrote, “Evidence is evidence.”
If it doesn’t fit, you must acquit!
Laughable evidence – like the evidence the cool phase of the stadium wave (AMO) is coming to save the libertarians from the progressives.
I am saying, emergent constraints are all evidence. Don’t just throw them out. Use it all to get a more robust measure. Physics is full of examples of finding more constraints for values of quantities, and not eliminating them.
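The "use it all" suggestion can be made concrete with a toy sketch. Below is a minimal, hypothetical inverse-variance pooling of constraint-based ECS estimates in Python; the numbers are invented placeholders, not values from any paper discussed in the post, and the independence assumption baked into the formula is exactly what Caldwell et al. caution is doubtful when constraints are derived from overlapping model ensembles.

```python
# Hedged sketch (placeholder numbers, not from the papers discussed):
# pooling several emergent-constraint ECS estimates by inverse-variance
# weighting, naively treating them as independent Gaussian estimates.

def combine_estimates(estimates):
    """estimates: list of (mean, std) pairs, one per constraint."""
    weights = [1.0 / (s * s) for _, s in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    std = (1.0 / total) ** 0.5  # combined 1-sigma, valid only if independent
    return mean, std

# Hypothetical constraint-based estimates (ECS mean in K, 1-sigma):
constraints = [(3.2, 0.8), (3.5, 1.0), (2.9, 0.7)]
print(combine_estimates(constraints))  # pooled estimate, tighter than any input
```

The caveat in the code comment is the crux of the disagreement above: because the constraints are computed from largely the same CMIP5 models and often the same cloud mechanisms, they are not independent, so a naive pooling like this overstates how much "using it all" actually tightens the answer.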
“Laughable evidence – like the evidence the cool phase of the stadium wave (AMO) is coming to save the libertarians from the progressives.”
Nature is coming to save the libertarians. What we’ve needed saving from is not nature but the progressives using nature as some mutant version of itself.
The progressives wish to save us from mutant nature while the libertarians just want to get along with it. Nature is doing Okay. Accepting nature is like accepting government and progressives. We can fight all three or get along with them.
A nice outcome would be that progressives would learn to accept nature. But that’s like a libertarian expecting them to be more capitalistic.
One emergent constraint on climate sensitivity is Christopher Monckton.
Don Monfort | April 1, 2018 at 3:51 am
Re-defining peer-review: A coconut is a coconut.
Science relies on coconuts.
We get to decide what’s coconuts and what ain’t
philsalmon | April 1, 2018 at 8:59 am
One emergent constraint on climate sensitivity is Christopher Monckton.
Has anything known to science ever constrained Christopher Monckton?
Thanks for the publicity, rustle.
Pingback: Weekly Climate and Energy News Roundup #310 | Watts Up With That?
Heaven help the republic that credences intelligencers as preposterous as SEPP.
I have not read the paper that Nic Lewis is reviewing here, and I have only quickly read his analysis, and thus I must attempt to suppress my inclination to jump to conclusions of my own. I do, however, see some general patterns running through both the climate-related papers being analyzed/critiqued by Nic and Nic’s analyses and critiques of these papers.
I would first have to ask whether the methods of emergent constraints are commonly used outside of climate science. It would appear to me that such a method could be applied with something less than a priori objective and physically sensible rules for the selection of metrics to use. More subjective rules, I would suppose, could be composed to obtain a desired end result. Nic’s analysis here seems to bear this out.
What I see as a recurring situation in these analyses is that a more detailed look at the results, with sensitivity testing or at least from a sensitivity viewpoint, can (too) often show problems with the authors’ selection of methods or data used in their papers. It is almost as though these authors are sometimes convinced of the correctness of the answer to a question on climate, and all they are required to do is find support for that answer. Under such tendencies it would be easy for authors to “stop” looking once the preferred conclusion is reached.
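The worry about subjective metric selection can be illustrated with a toy Monte Carlo sketch (my own construction with made-up numbers, not anything from Caldwell et al. or Nic Lewis's analysis): with only around 30 CMIP5 models, screening many candidate metrics against ECS will throw up sizeable correlations by chance alone, even when no metric has any physical link to sensitivity.

```python
# Hedged toy illustration of the metric-selection hazard: pure-noise
# "metrics" screened against fake model ECS values still yield a large
# maximum correlation, purely through multiple testing.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def max_chance_correlation(n_models=30, n_metrics=100, seed=1):
    rng = random.Random(seed)
    ecs = [rng.gauss(3.0, 0.8) for _ in range(n_models)]  # fake model ECS values
    # Each candidate "metric" is independent noise with no link to ECS.
    return max(abs(pearson(ecs, [rng.gauss(0.0, 1.0) for _ in range(n_models)]))
               for _ in range(n_metrics))

print(max_chance_correlation())  # a sizeable spurious correlation from noise alone
```

With 30 models the sampling spread of a true-zero correlation is roughly 1/sqrt(29) ≈ 0.19, so the best of 100 noise metrics typically lands near |r| ≈ 0.4–0.5 — comparable to the 0.38–0.40 correlations of some constraints in the table above, which is why a priori physical justification of the metric matters so much.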
It would indeed be interesting to see if methods used in climate research are common to any other disciplines.
OTOH, what other field is trying to computer-simulate a hugely complex, open, dissipative heat engine with multiple negative and positive feedbacks – and is given endless truckloads of cash to do so?
Another question is: if an “emergent constraint” emerges from simulation that correlates with temperature, could/should this constraint/correlation not be tested against high-quality historic instrumental data?
Both the results of testing – and silence on the subject of such testing – would be equally informative.
My Google search on “emergent constraints” referred most often to emergency restraints for patients in psychotic episodes. The only model-related references were for climate science. I do not find that result humorous; rather, it made me wonder further what other disciplines are using the methods described here by Nic Lewis. Does anyone reading here have an example?
The decline in tropical low cloud cover should be a negative feedback to the weakening of indirect solar since the mid-1990s.
Trying to make these radiative models work is a waste of time.