by Judith Curry
The National Academies has published a new edition of its book:
ON BEING A SCIENTIST
A GUIDE TO RESPONSIBLE CONDUCT IN RESEARCH
The scientific enterprise is built on a foundation of trust. Society trusts that scientific research results are an honest and accurate reflection of a researcher’s work. Researchers equally trust that their colleagues have gathered data carefully, have used appropriate analytic and statistical techniques, have reported their results accurately, and have treated the work of other researchers with respect. When this trust is misplaced and the professional standards of science are violated, researchers are not just personally affronted—they feel that the base of their profession has been undermined. This would impact the relationship between science and society.
On Being a Scientist: A Guide to Responsible Conduct in Research presents an overview of the professional standards of science and explains why adherence to those standards is essential for continued scientific progress. Like the previous editions published in 1989 and 1995, this guide provides an overview of professional standards in research. It further aims to highlight particular challenges the science community faces in the early 21st century. While directed primarily toward graduate students, postdocs, and junior faculty in an academic setting, this guide is useful for scientists at all stages in their education and careers, including those working for industry and government. Thus, the term “scientist” in the title and the text applies very broadly and includes all researchers engaged in the pursuit of new knowledge through investigations that apply scientific methods.
In the past, beginning researchers learned the standards of science largely by participating in research and by observing other researchers make decisions about the interpretation of data and the presentation of results and interactions with their colleagues. They discussed professional practices with their peers, with support staff, and with more experienced researchers. They learned how the broad ethical values we honor in everyday life apply in the context of science. During that learning process, research advisers and mentors in particular can have a profound effect on the professional and personal development of beginning researchers, as is discussed in this guide. This assimilation of professional standards through experience remains vitally important.
However, many beginning researchers are not learning enough about the standards of science through research experiences. Science nowadays is so fast-paced and complex that experienced researchers often do not have the time or opportunity to explain why a decision was made or an action taken. Institutional, local, state, and federal guidelines can be overwhelming, confusing, and ambiguous. And beginning researchers do not always get the best advice from others or witness exemplary behavior. Anonymous surveys show that many researchers admit to engaging in irresponsible practices or have witnessed others doing so.1
Furthermore, changes within science have complicated efforts to ensure that every researcher has a solid grounding in the professional codes of science. Though support for research has grown substantially in recent years, exciting opportunities have continued to multiply faster than resources, and the resulting disparity between opportunities and resources has further reduced the time available to researchers to discuss professional standards. As research has become more interdisciplinary and multinational, it has become more difficult to ensure that communication among the members of a research project is sufficient. Increased ties among academic, industrial, and governmental researchers have strengthened research but have also increased the potential for conflicts. And the rapid advance of technology—including digital communications technologies—has created a wealth of new capabilities and new challenges.
In this changing environment of the early 21st century, a short guide like On Being a Scientist can provide only an introduction to the responsible conduct of research. Readers are thus encouraged to use the “Additional Resources” section of this guide, which lists many valuable publications, Web sites, and other materials on scientific ethics and professional standards, to find further material that explores this discourse. The challenges posed particularly by the increasing number of global and multinational ties within the science community will be further addressed in a subsequent publication of the National Research Council.
Established researchers have a special responsibility in upholding and promulgating high standards in science. They should serve as role models for their students and for fellow researchers, and they should exemplify responsible practices in their teaching and their conversations with others. They have a professional obligation to create positive research environments and to respond to concerns about irresponsible behaviors. Established researchers can themselves gain a new appreciation for the importance of professional standards by thinking about the topics presented in this guide and by discussing those topics with their research groups and students. In this way, they help to maintain the foundations of the scientific enterprise and its reputation with society.
As these practices are frequently questioned here, and sometimes strong assertions are made of this or that practice (e.g., someone actually posted recently that it isn’t obligatory to offer an alternate hypothesis after demonstrating falsification of a current one!), any thoroughgoing starting point on this topic is welcome.
I will read with interest.
Did I misread what you wrote or did you actually say that a theory with proven errors must be accepted as correct until a better theory is presented to replace it?
I’m of the belief that a proof is weak if it asserts errors without an alternate hypothesis lacking these errors.
Doesn’t make the original correct — what is a ‘correct’ hypothesis?
Makes the so-called disproof inadequate, and leads to serious question of the demonstration of error.
So a math error pointed out by a mathematician or an analysis program coding error pointed out by a programmer should be ignored because they cannot provide an alternate theory? I’m sure you cannot mean that. Are you meaning that their input should not be considered unless it is presented in the form of an alternative theory to the one they detect a process error in? That seems unlikely.
Or perhaps, you are meaning that folks that detect errors should remain silent until they can get their comments published in a peer reviewed magazine?
Supplying an alternative theory only makes sense when the underlying components of the original theory have no errors but only the interpretation of those components is in question. If a theory has errors, it is busted. All it takes is the error to exist to make it wrong.
Fair examples, and fair comment.
Of second-rate analyses.
Mathematical errors, such as are found all over in “Slaying the Sky Dragon”; transposition errors, such as the single pair of swapped digits that snuck into a section of an IPCC report on glaciers and propagated into a faulty conclusion attributed to Pachauri; programming errors that pass compilation yet lead to invalid results; or design errors that generate statistics through experimentation but do not meaningfully reflect the hypothesis: all the little quibbles of technique ought of course be thoroughly beaten down in the fires of skeptical inquiry, if not by their originator then certainly by decent critical readers.
Errors in propositional logic, errors in reasoning, Type I and Type II errors in statistics when revealed by better datasets and more precise methods: the list is endless of what could falsify a hypothesis on technical grounds, coming from someone without the wherewithal to offer a sound hypothesis in the alternative.
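The Type I and Type II errors mentioned above can be made concrete with a small simulation. This is a sketch added for illustration, not part of the original discussion: it runs a simple two-sided z-test many times, first when the null hypothesis is true (counting false rejections, Type I) and then when it is false (counting failures to reject, Type II). The sample size, effect size, and threshold are all illustrative assumptions.

```python
import random
import statistics

def z_test_rejects(sample, mu0, sigma, z_crit=1.96):
    """Two-sided z-test: reject H0 (true mean == mu0) if |z| > z_crit."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

random.seed(0)
trials, n, sigma = 10_000, 30, 1.0

# Type I error: H0 is true (true mean really is 0) but the test rejects it.
type1 = sum(
    z_test_rejects([random.gauss(0.0, sigma) for _ in range(n)], 0.0, sigma)
    for _ in range(trials)
) / trials

# Type II error: H0 is false (true mean is 0.3) but the test fails to reject.
type2 = sum(
    not z_test_rejects([random.gauss(0.3, sigma) for _ in range(n)], 0.0, sigma)
    for _ in range(trials)
) / trials

print(f"Type I rate  ~ {type1:.3f} (nominal 0.05)")
print(f"Type II rate ~ {type2:.3f} at effect size 0.3, n = {n}")
```

The point of the sketch is that both error rates are properties of the testing procedure, not of any particular alternate hypothesis: the Type I rate tracks the chosen threshold, and the Type II rate shrinks only with more data or a larger true effect.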
And yet you go too far by a little.
I do not offer an opinion for the sake of courtesy or to be completely logically correct (though I might satisfy myself by mere logic and be a useless lump). I offer for your consideration a lesson I was taught in the utility of a decent critic: to improve on what comes to one, and to shield oneself from error somewhat too.
In all logic, you need not accept this pearl.
If these decent readers above cannot also offer a hypothesis that explains the observations without succumbing to these errors (an obligation some take up eagerly, I believe), held to standards as strictly applied to their own pet hypothesis as to the ideas of others, nor an explanation of why no hypothesis ought be offered under the given conditions, then, although they ought not remain silent when they spot mistakes, yes, it does diminish their accomplishment and credibility.
If I find errors and cannot imagine an alternate conclusion that might work to explain observations, then I seriously question whether I’ve done my error-checking correctly, and retrace my steps, and check them with others who I do as little as possible to bias toward my views until they separately confirm them.
Many valid theories began chock full o’ errors of technique (and grammar, and spelling, and historical references), and a decent skeptic treats every syllable of a proposal as an opportunity to uncover the next error not yet found, I believe.
Any explanation is made better by the fiercest examination.
Which standard simple technical analysis fails to meet, if it is not equipped to either better the explanation, or better the understanding of why there is none.
“Irreducible complexity,” for example, may be a valid alternate hypothesis. “Too little data yet to know,” might be another. Anyone who can’t at least carry these two hypotheses around to apply to a problem isn’t warming up my trust muscles.
You still have not defended your assertion that it is unethical to find errors in a theory without having an alternate theory at hand. It is a ridiculous idea.
If you’ve never seen even more ridiculous claims of error stacked many layers deep heaped on stronger and better argument, if you’ve never wished there was a mechanism to help winnow out the feeblest of the objections from the strongest to improve the dialectic, if to you every claim of falsification stands on its own as valid just by being put forward, then you must be right.
A critic is there to criticize.
A critic of a play does not write an alternative play.
A critic of a novel does not write a new novel.
And a critic of AGW is not required to counter, with a new theory of climate, the fallacies and false assumptions that have gripped the climate science community and led to its CO2 obsession.
Your side failed to show how climate has done anything outside the range of historical variability. Your side chose to use Mann, Briffa, hide the decline and suppression of competing theories even to this day.
Skeptics are not required to show why you have failed any reasonable test of your hypothesis.
I have a side?
I’m skeptical of all equally.
I’ll point out with the best hypothesis extant the faults in other hypotheses offered by the same standards applied evenly to all, even if there are weaknesses in all.
If it happens that a Mann or Briffa AGW hypothesis remains stronger despite the very obvious and acknowledged weaknesses, how much does that say about the strength of the counter argument?
Usually we’re all wrong in some way.
Bart R: This sounds like your personal aesthetic in science, but I’ve never heard of this as being part of science and neither has anyone else here.
Please quote or link an authoritative source backing your claim.
Quote a link or authoritative source backing an ethic?
Name the Nobel prize in science won for falsifying a claim without offering a superior alternative hypothesis.
By Leavitt’s Rule of 48 (sourced in science fiction, as good a source of ethics as any), any mathematician can overthrow a physicist with arithmetic or simple counting; a computer can crush a sociologist with calculation; a small child can reveal the emperor to have no clothes; a preacher can reveal a theory to be at odds with his scripture.
Wait a second, back up there. What’s that last one?!
That last one is in part why there’s an obligation to offer an alternate hypothesis. Why do you suppose there are people offering Intelligent Design? What would you think of the situation where they could simply say “Evolution got it wrong,” without their idea being held up to skeptical review and their bias thus revealed?
It is in my experience obligatory for a scientist practicing good science to offer an alternate hypothesis in a formal falsification argument, in part to show that the analyst has the necessary expertise to reach appropriate conclusions, to confirm the thought processes used to develop the disproof, to help further understanding of the discussion in the reader, in part to allow the reader to judge the context of the claim of falsification; this obligation in no way is superior to the obligation to disclose contrary evidence such as bad counting, bad math, or bad typing; this obligation does pretty much crush such claims of ‘falsification’ as, “it just doesn’t feel right to me,” or “I don’t like him, so he’s wrong.”
A hospital has an obligation to protect patient confidentiality; this in no way overrides the obligation to give decent medical care.
Arguing against one obligation because there’s another that can conflict is simply shirking.
No one here has heard of it, huh?
How sad is that?
“a preacher can reveal a theory to be at odds with his scripture.
Wait a second, back up there. What’s that last one?!
That last one is in part why there’s an obligation to offer an alternate hypothesis.”
That last part was a non-scientific statement, and has no implications on the proper practice of science.
You are confusing:
a) the proposition of hypotheses within science , with
b) the proposition of alternatives to science (e.g. religion)
This occurs because you are taking as a postulate a statement from science fiction, and pretending that it is science. Fair irony in that.
“Why do you suppose there’s people offering Intelligent Design?”
To provide a non-scientific response to the demand of people like you for an alternate hypothesis. This is one of those ‘two wrongs don’t make a science’ scenarios.
“What would you think of the situation where they could simply say “Evolution got it wrong,” without their idea being held up to skeptical review and their bias thus revealed?”
I would think that behaviour is not scientific, but only because ‘simply saying evolution is wrong’ does not falsify evolution. Whether or not they offer an alternate hypothesis is absolutely irrelevant to science.
A bald claim of disproof is not scientific, whether an alternate hypothesis is offered or not. A supported claim of falsification, or a demonstration that a hypothesis is not falsifiable, or has not been tested, is scientific, whether an alternate hypothesis is offered or not. Offering an alternate hypothesis is only relevant to your non-scientific aesthetic, and the science fiction ethos that you apparently draw it from.
And when you use it as you do, to disregard genuine scientific criticism of your favored hypothesis, it goes from being non-scientific to anti-science.
“That last part was a non-scientific statement, and has no implications on the proper practice of science.”
Which is thrown into sharp relief and illustrated amply when the non-scientific alleged hypothesis of Intelligent Design is held up to the light of skeptical inquiry.
How else does science dispose of these objections? Can a scientist just claim, “I’m a scientist and you’re not because of my Ph.D. being in a science and yours being in something else?” What of Creationist physicists who argue against Evolution on scriptural grounds, or worse, who argue against Evolution while hiding the origin of their speciousness?
Where is your cut-off for what is an allowable basis of falsification? Is the ‘scripture’ of a Keynesian any less doxological than that of a Methodist?
What method do you commend to apply to ideas to distinguish when it’s science and when not, and who is the scientist and who isn’t? Aren’t you then judging the person, not the idea?
I’ve set my cut-off at the support of an alternate hypothesis, due my experience, which has informed my opinion.
If it is not the obligation of the person who found the technical error, then it certainly resides somewhere in science to reconcile what to do in the outcome.
The problem with obligations shared by all is that they get owned by no one.
Certainly the originator of the hypothesis is put under obligation when disproved; they, however, may be long dead by the time their error is found.
And what is with treating obligation as dishonorable, or punitive, or discouraging in any way? That sounds like the ethics of the scurrying and fearful, not the bold attitude and values of inquirers adventuring into discovery.
I don’t know the name of the first person to wonder whether Newton’s mechanics really was universally applicable, and to claim there would be an error in assuming his was the only framework there could be, but I know Albert Einstein’s name, and the names of others who proposed viable non-Newtonian frameworks. This doesn’t make Newton less impressive.
It does make those feckless questioners who didn’t formulate anything to match Newton a little less memorable, even though they weren’t wrong.
Likewise, many questioned the plum-pudding model of the atom, whose names I cannot recall; I do remember Rutherford’s name.
If dying can be considered falsification, then many falsified early theories about diabetes before Banting & Best. I don’t remember the names of the doctors treating those who suffered prior to the discovery of insulin’s role.
“Which is thrown into sharp relief and illustrated amply when the non-scientific alleged hypothesis of Intelligent Design is held up to the light of skeptical inquiry.”
Evolution is not proved by falsifying ID, anymore than evolution is falsified by asserting ID.
“How else does science dispose of these objections? ”
Perhaps you should look into that. It does not involve demanding that creationists provide an alternate hypothesis. It cannot. Presence or absence of hypothesis A has no bearing on the falsification of hypothesis B.
“What of Creationist physicists who argue against Evolution on scriptural grounds, ”
That is religion, not science. If they want to argue that evolution does not fit within their religious paradigm, they are perfectly entitled to. This has no bearing on the status of evolution within science.
“… or worse, who argue against Evolution while hiding the origin of their speciousness?”
Not a problem. If they can falsify evolution without appealing to non-science information and methods, using only scientific data and methods, then that is perfectly good. More power to anyone who can operate effectively within both worlds. Most of our fundamental science was the work of religious people. There is not necessarily any problem with that.
“Where is your cut-off for what is an allowable basis of falsification? Is the ‘scripture’ of a Keynesian any less doxological than that of a Methodist?”
Neither is the basis for scientific falsification. You are confusing many types of non-science (including but not limited to, religion, science fiction, blog argument, etc) with science.
“What method do you commend to apply to ideas to distinguish when it’s science and when not, and who is the scientist and who isn’t? Aren’t you then judging the person, not the idea?”
“I’ve set my cut-off at the support of an alternate hypothesis, due my experience, which has informed my opinion. ”
You can inform your opinion any way you like. The way you have chosen is not science. You have no warrant to complain about others, such as creationists, who also choose to inform their opinion in a way that is not science. Neither of you should be operating under the impression that what you are doing is related to science.
Demanding an alternate hypothesis as a precondition for falsification of a hypothesis is not scientific. It is not even logical outside of science.
Imagine if a man were arrested for murder, and answered the charge by proving that he was out of the country at the time of the crime and thus could not have committed it. Falsified the accusation.
Under your … uh … ‘logic’ … of obligation, the poor fellow would be stuck in the frame unless and until another suspect was identified. And if (as is quite often the case) another suspect was never found, well then you’d just execute the guy, alibi or no.
It certainly would make police work easier. If a crime is committed, just go out and arrest whoever you like. Or better yet, arrest whoever you don’t like. Then stop looking. Don’t accept any proof that the accused could not have committed the crime. Tell them that if they want to prove that your accusation is wrong, THEY will have to find the real killer.
Sorry, but no. Falsification does not rest upon assertion of an alternate hypothesis. It rests upon facts and logic and whether or not those are consistent with the hypothesis.
Pardon the elisions following.
Someone supplies a hypothesis that better fits the data than Evolution, no matter the source, I’d be anti-scientific to not seriously consider it and re-examine Evolution.
Not, of course, that ID meets this standard, and indeed every version of ID proffered gets regularly trounced on logical bases, and then usually Evolution is held up as the alternative hypothesis. That this has become so routine as to be perfunctory and redundant to actually come out and say doesn’t change the fact that it’s right there as the answer to the IDists’ gape-jawed and amazed question, “Then what do we believe?”
Multiple competing hypotheses permit Bayesian treatment by statistics. It’s true this approach is only almost-universally accepted, and it’d be unethical to not use every best tool and practice to arrive at the best understanding of knowledge and theory as it stands, but sure, you could assert that the presence of Rutherford’s theory of the atom ought have no bearing on the falsification of the previous theory. Rutherford thought differently, but hey, he’s only Rutherford using Rutherford’s approach and values and beliefs and systems. What are Rutherford’s ethics compared to JJ’s personal ethical view? He’s not the boss of JJ.
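The Bayesian treatment of multiple competing hypotheses mentioned above can be sketched in a few lines. This is an illustrative toy added here, not anything from the thread: two hypotheses about a coin (fair, p=0.5, versus biased, p=0.8) are compared by weighting each prior by the likelihood of the observed data; all the numbers are assumptions chosen for the example.

```python
import math

def posterior(priors, likelihoods):
    """Normalize prior * likelihood over a set of competing hypotheses."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def binom_like(p, k, n):
    """Binomial likelihood of k heads in n flips for head-probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Two competing hypotheses about a coin: H1 "fair" (p=0.5) vs H2 "biased" (p=0.8).
priors = [0.5, 0.5]
heads, flips = 14, 20  # illustrative observed data

likes = [binom_like(0.5, heads, flips), binom_like(0.8, heads, flips)]
post = posterior(priors, likes)
print(f"P(fair   | data) = {post[0]:.3f}")
print(f"P(biased | data) = {post[1]:.3f}")
```

The design point is that this machinery only exists when at least two hypotheses are on the table: the posterior for each hypothesis is defined relative to the full set being compared, which is one way of reading the claim that alternatives permit a Bayesian treatment at all.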
As JJ has the divine power to determine where religion ends and science begins, JJ pretty much ought to be the one to whom we all bring our ethical and scientific questions. What is the last atom before the troposphere ends? What is the exact end of the visual spectrum? If the Pope stubs his toe on a fold in a red carpet and expostulates, “Damn rug!” is it held damned in Heaven?
This is not meant as mockery, but reductio ad absurdum, a rhetorical device employed with the greatest respect.
Your equivocations do nothing to dispel the increasing obfuscation descending onto your ethical model due your ethical preconceptions, JJ.
You seem to want ethics to be easy, simple and logical; it is anything but. You want to have an ethic where no two parts compete, so one ethical claim conflicting with another means one or the other doesn’t exist. That’s not how ethics work.
Ethics. It’s hard.
Science up and take it.
It’ll do you good.
Oh, and as it was left unsaid:
Police Hypothesis: “You’re the killer.”
Null Hypothesis: “Innocent until proven guilty.”
Police Evidence: “We just know it was you!”
JJ, you just know these ethics you embrace are the right and only ethics, and all others are not real. That’s all the evidence you’ve demonstrated, and it doesn’t credit you with much depth of apprehension of the field.
Most scientists can name off the top of their head their favorite reference, whether it be a nice old hardbound copy of the CRC Handbook of Chemistry and Physics, or a well-thumbed Statistical Thermodynamics.
Name the ethics text that leaps to mind for scientists. The NAS pamphlet? Their University’s Mission Statement? The Hippocratic Oath?
Lawyers (perhaps ironically) have whole ethics libraries; the only course at Harvard Bill Gates flat out failed before he quit was Business Ethics.
What does science have that goes toe to toe with the ethical investment of any other field, considering that it is science that is at the front edge of human endeavor, not business or law, medicine or veterinary service, plumbing or (sorry) engineering?
The school ethics committee? Please.
You are approaching a topic with which you have not demonstrated the technical competence to reach the conclusions you are called upon to make.
Quote a link or authoritative source backing an ethic?
Bart R.: Yes, seriously. And you can’t.
Fairness, honesty, and openness are ethics too. The NAS text we are discussing mentions them on p. 3 as ethical values on which scientific research is based.
But no mention of your claim here, there or anywhere I suspect. Quoting science fiction and waving your hands a lot doesn’t count either.
What reference to authority or link does the NAS text use to support fairness, honesty or openness?
Indeed, I will offer in support of my claim the same authority, top of page 2:
“Over many centuries, researchers have developed professional standards designed to enhance the progress of science and to avoid or minimize the difficulties of research. Though these standards are rarely expressed in formal codes, they nevertheless establish widely accepted ways of doing research and interacting with others. Researchers expect that their colleagues will adhere to and promote these standards. Those who violate these standards will lose the respect of their peers and may even destroy their careers.”
Start with the below, noting the year and publisher and work outward from there. (And yes, I get that by itself this could be viewed by some merely as a reference about Bayes Theorem with no ethical imputation whatsoever, to which I say Leavitt’s Rule applies.)
Jerzy Neyman, Egon Pearson (1933). “On the Problem of the Most Efficient Tests of Statistical Hypotheses”. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 231: 289–337.
Bart R: This is just more handwaving.
Quote a real and specific passage that supports your claim that there is a recognized “ethic” or “obligation to offer an alternate hypothesis” in science.
There is no obligation for a technician to offer an alternate hypothesis.
If you want to be a technician instead of a scientist, that’s fine with me.
If you want me to support people to succumb to ethics by the letter of some law, then you’re barking up the wrong tree here.
Bart R: Again, no quote, more handwaving.
If the “obligation to offer an alternate hypothesis” existed in science — like the obligation to provide information sufficient for reproducibility — it would be easy for you to quote a passage from wiki or some other obvious place.
But it doesn’t and you can’t because this “obligation” only exists in your head, not in science.
End of discussion.
Nice defense of the low ground you make.
Please, feel free to keep it.
“It is in my experience obligatory for a scientist practicing good science to offer an alternate hypothesis in a formal falsification argument,”
Don’t say ‘experience’ when you mean ‘opinion’. You are speaking of your opinion here, and your opinion, insofar as it relates to practicing good science, is factually incorrect.
“in part to show that the analyst has the necessary expertise to reach appropriate conclusions, to confirm the thought processes used to develop the disproof, ”
This is not science. A scientist does not judge the man, he judges the facts and the reasoning. A scientist does not determine if conclusions are appropriate by evaluating the ‘expertise’ of the concluder. A scientist judges conclusions on the merits of the facts and reasoning presented.
“to help further understanding of the discussion in the reader, in part to allow the reader to judge the context of the claim of falsification;”
If the reader is a scientist, he does not judge the ‘context of the claim’, he judges the claim on its own merits.
Furthermore, if one were to wish to consider ‘expertise’ for any non-scientific purpose, presence or absence of an alternate hypothesis in no way establishes expertise or lack thereof. To the contrary, genuine expertise often causes a scientist to refuse to offer a hypothesis, understanding that the status of the knowledge on the subject does not support one.
On the other hand, others often try to gin up the appearance of expertise by pretending to know more than they do via the assertion of hypotheses. They often then ‘prove’ their hypothesis by pointing to the lack of alternate hypothesis, then appeal to themselves as authorities as the only ones with sufficient expertise to have a hypothesis. This is not science, but it might win you the Stanley Cup.
” this obligation in no way is superior to the obligation to disclose contrary evidence such as bad counting, bad math, or bad typing; ”
Agreed. The ‘obligation’ you refer to is not an obligation at all, and is decidedly inferior to scientific reasoning.
“this obligation does pretty much crush such claims of ‘falsification’ as, “it just doesn’t feel right to me,” or “I don’t like him, so he’s wrong.””
“it just doesn’t feel right to me,” or “I don’t like him, so he’s wrong.”” are not scientific claims of falsification. Your ‘obligation’ to provide alternate hypotheses is not necessary to ‘crush’ this nonsense. The scientific method does not permit such ‘claims’.
Nor does it permit demands for alternative hypothesis as a precondition to acceptance of demonstrated falsification. What you are offering with your ‘obligation’ is an anti-science solution to a non-scientific problem. Please stop pretending that any of that has anything to do with the proper scientific practice.
“A scientist does not judge the man, he judges the facts and the reasoning. A scientist does not determine if conclusions are appropriate by evaluating the ‘expertise’ of the concluder.”
And.. I look back over the evidence of Climate Etc. itself, and find data in the actual posts here of people who talk about science and claim to be doing so ethically by some measure, and find counterexample after counterexample to your claims.
“A scientist judges conclusions on the merits of the facts and reasoning presented. “
Which reasoning is generally stronger with an alternate hypothesis present.
Granted, some reasoning doesn’t need strengthening in and of itself. I can clearly see 1+1=3 is unreasonable, and if the alternate hypothesis 1+1=2 is for some reason not proffered, then it may be a mere curiosity.
If however I concur with the critic who tells us 1+1=3 is wrong, only to find later the same critic saying, “because everyone knows 1-1=3,” then I find myself in the position of supporting someone who is right for the wrong reasons. If they then went on to say, “And Bart R supports me,” (more fool they), then you may see cause for concern.
If someone finds errors in hypotheses out of habit or boredom or for simple obsession with mathematical correctness, or because they happen to have noticed them first, that’s great.
If they have an agenda in the form of an alternate hypothesis and do not disclose it, that’s not so great.
Look at the esteemed Oliver Manuel; he is to be lauded for full, complete and detailed disclosure of his point of view from the outset whenever he criticizes another’s theory. Or says hello.
“And.. I look back over the evidence of Climate Etc. itself, and find data in the actual posts here of people who talk about science and claim to be doing so ethically by some measure, and find counterexample after counterexample to your claims.”
Then be sure and remind them that they are not being scientific, as I am painfully doing with you.
“Which reasoning is generally stronger with an alternate hypothesis present.”
No, that is not the case.
“Granted, some reasoning doesn’t need strengthening in and of itself.”
No. No reasoning may be strengthened by the mere existence of another hypothesis.
“If however I concur with the critic who tells us 1+1=3 is wrong, only to find later the same critic saying, “because everyone knows 1-1=3,” then I find myself in the position of supporting someone who is right for the wrong reasons.”
Only because you see science as ‘supporting people’ rather than ‘evaluating propositions with facts and logic’. If you “… concur with the critic who tells us 1+1=3 is wrong,” for any reason other than because you checked the math yourself, then you are not being scientific.
“If someone finds errors in hypotheses out of habit or boredom or for simple obsession with mathematical correctness, or because they happen to have noticed them first, that’s great. If they have an agenda in the form of an alternate hypothesis and do not disclose it, that’s not so great.”
Nonsense. Errors are errors. If said errors falsify a hypothesis, they do so whatever motive the person who found them had for finding them. They don’t cease to be errors just because you don’t happen to like what the guy who found them stands for.
Motive and bias can have an effect on the way that science is practiced, but they do not trump facts and reasoning in the determination of the falsification of a hypothesis. Nor is any of this dependent upon or improved by the anti-science demand for an alternate hypothesis.
“Then be sure and remind them that they are not being scientific, as I am painfully doing with you.”
I painfully remind you that you are not being scientific, if you reject data out of hand without rationale proportional to the data offered, and further prescribe action absent full treatment of the description of the data.
It’s true I was too glib and quick in my earlier reply to you, but blogs tend to have that effect.
You aver that a “scientist does not judge the man, he judges the facts and the reasoning. A scientist does not determine if conclusions are appropriate by evaluating the ‘expertise’ of the concluder. A scientist judges conclusions on the merits of the facts and reasoning presented.”
I cite in partial answer the following:
Jeroen P. van der Sluijs | February 10, 2011 at 3:51 pm
For comparison: here you can download the Code of Conduct for Scientific Practice of the Association of Netherlands Universities:
The section on Scrupulousness, page 5, Best Practice I.9 notes admirably, “A scientific practitioner ensures that he maintains the level of expertise required to exercise his duties. He does not accept duties for which he lacks the necessary expertise. If necessary, he actively indicates the limits of his competence.”
Such codes of competent practice are repeated in profession after profession. To dismiss their application as some version of ad hom on the part of the concluder demonstrates a fundamental and broad ignorance of the topic of ethics and the meaning of expertise.
I’ve already addressed that “too little data to know” is by my lights a valid hypothesis, to address your sophistic “..expertise often causes a scientist to refuse to offer a hypothesis, understanding that the status of the knowledge on the subject does not support one.”
Your whole screed, while it contains some approximately true statements, is unsupported by these measures. By this measure, are you attempting to gin up the appearance of your expertise in ethics by trying to sound like you know anything of the field?
I apologize if this sounds like an attack on you, JJ. It isn’t. There’s nothing personal in it. I don’t know you, and for all I know you’re a great person. Your ethical method as expressed simply sucks. It’s the method that I’m discussing, not you. This is not ad hom. This is about sloppy and second rate application of mere logic when science as a discipline is more demanding than you pretend.
Facts and reasoning may be manipulated. The scrupulous scientist will choose openness in disclosing their own hypothesis if they have one, fairness in venturing a hypothesis if they are competent to formulate one (even if it is “too little is yet known”), and honesty in saying when they see an error but lack the competence to meet the obligation to fully address the context of that error.
If you want this in the terms expressed by the NAS.
So you are just hand waving and hoping no one will notice.
Do you really think it will work?
Until the ethic particle is discovered under a microscope, or the morality wave equation solved for the general case, all ethical arguments reduce to handwave and hope.
I do hope people notice.
That’s how ethics works, by notice.
I’m personally glad to have had offered to me the stirring and detailed defense of what I had always thought of as shoddy, partisan behavior, and now know to have a slacker ethical basis. Say what you will about slack, at least it’s an ethos.
Yes, hand waving. You are good at it.
In this milieu, and considering the source, very high praise indeed.
That particular transposition of digits was an “error”? If you believe that, you’ll believe anything.
If I’m to ask that question of Pachauri, then I must ask it of Johnson and Coffman, by the same standards.
Decorum of the blog house rules precludes, I think, such an insinuation.
Science deals only with the imperfect. Nothing is proved in science. Theories are only more or less useful. To fault a scientist for not proving their case commits a blunder since proofs, in this vale of tears, only show up in mathematics.
We use Newtonian physics all the time and are the better for it. Science doesn’t reveal eternal truths: it improves our lives.
Nothing is proved in science? Here you do a great disservice to mathematical and theoretical physicists who actually have mathematically derived (proved) many things. You seem to confuse the limitations that come with a particular framework with the proof itself. The result of the proof is subject to any conditions / limitations implicit or implied in terms of its applicability, but it is still mathematically proven to hold under the specified conditions.
Should be “explicit or implied.”
Inductive proofs are the domain of Science.
” under the specified conditions.”
Like the appearance of a black swan.
I was thinking more along the lines of homogeneous matter, inertial frame, …
I believe you to be fundamentally wrong about the “alternate hypothesis” assertion. We have a theory, climate change, and it makes lots of specific predictions. These predictions are the basis for policy prescriptions. It is perfectly valid to point out that certain of these predictions are a) typos or made-up numbers (take your pick), like the Himalayan glacier vanishing act, b) subject to wide disagreement between models, c) not supported by the data, like Hansen’s 1988 model forecast, d) other. Also, it is quite worth pointing out that certain assumptions of the models are unsupported experimentally (like climate sensitivity) or are shaky (like the input aerosol forcing). This is all about the details, not the general concept. None of the scientist critics debate the general concept that some warming should occur. To point out these problems does not require an “alternate” theory – the alternate theory is simply that the science is too shaky for the purposes for which it is being tortured, and many things need refining.
You’ve posted, I think, the most sensible comment on my opinion, and I thank you.
It’s relevant, explicit, gives detailed examples, and makes clear your alternate hypothesis and position. You don’t hide anything, don’t appear to be taking personal profit, and open yourself to honest and full debate of the validity of your claims.
Which is no more than I ask when I say an alternate hypothesis is obligatory when a claim of error is made.
So much better than, “that’s just handwaving.”
I point out that “the science is shaky for the purposes for which it is being tortured,” is a variation on both of the acceptable hypotheses I offered above (“Irreducible complexity,” or “Too little data yet to know.” )
So it seems we are in complete accord in every detail, except what to call it.
I’m fine with that, too. Names never bother me.
I would challenge your assertion that ” an alternate hypothesis is obligatory when a claim of error is made”. Rather I believe that in science, as in life, there is no requirement to provide a viable alternative hypothesis when one demonstrates an initial hypothesis is incorrect. I would argue that to require that a researcher come up with a viable alternative hypothesis prior to publishing data to disprove a current hypothesis is inherently “unscientific” and in my personal opinion selfish.
Science is about sharing ideas and advancing knowledge as a community. Holding back critical knowledge from your peer community simply because you are unable to come up with a viable alternative-hypothesis would appear, on the surface, to be counter to ethical science. By holding back your results you hold back the advance of science while knowingly allowing your peers to waste valuable time and resources answering a question that has already been answered.
Since you ask for examples I will simply point out that entire fields of physics are split between theoreticians and experimentalists. I used to share a coffee room with experimental physicists who developed models and tests to help support or disprove hypotheses generated by the theoreticians. They would argue, quite strenuously, that they were engaged in science and they certainly did not include alternative hypotheses with their results. Rather they shared results within the peer community and allowed the entire community to work together in the development of improved hypotheses.
You would be right to challenge my position, given what you understand to be my position, based on what you say about it.
Can someone publish without fulfilling the obligation I assert?
Sure. Why not?
Indeed, the obligation to point out errors, or to offer additional significant data that supports or throws into doubt a theory, is far stronger in my view than the obligation to offer an alternative hypothesis. (In my personal experience, I know of several dozen researchers who, under the pressure to publish original work, which means original data, routinely suppress publication of data they know would be important until such time as they can use it themselves, a practice I deem highly unethical.)
That said, the offer of data, counter-argument, error-checking, auditing, commentary or critique coming from someone with a hidden agenda would be the opposite of ethical; those practices of ambush and sniping while keeping one’s actual hypothesis under cover form, to my mind, dishonesty.
Do you disagree?
I will admit at this point to being mildly confused by your reply. I agree with most of what you have written and we seem to be on the same side of the primary issue but I disagree with your idea of a “hidden hypothesis”. I would suggest that regardless of one’s motives data trumps all. If the data and the underlying transformations used on the data are presented then one’s hidden motives (or not so hidden motives) don’t matter. The commentary may be biased but the data itself will tell true. As is evidenced by the bun fight between Dr. Steig and Dr. O’Donnell, because the data was fully presented, at the end of the day the truth will out regardless of the motives of the actors. It is only when data is hidden that science is not served.
At the risk of being even more confusing..
There’s a well known demonstration in Game Theory of the impossibility of democracy.
It starts with a handful of statements that must be true of a democratic decision process, and then proves that, for any combination of these necessary conditions, once all but one of them have been shown true, the remaining one must be false.
The subject of Agenda Management is what makes democracy mathematically impossible.
Select the same data and decisions to be made (underlying transformations) in different orders to a set of parties coming to the table with different interests (be it experiment, reviewers, analysts), and you will get different outcomes.
Science has to have an unmanaged agenda to remain consistent.
Just the data alone will not trump all.
Motives will sway outcomes if order of presentation is allowed to be managed.
In this case, presentation either includes alternate hypothesis or it is more susceptible to agenda management and what seems like inescapable logic to the parties caught in the agenda trap is merely the outcome the manager chose (perhaps innocently) for them to come to.
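The agenda-management point can be made concrete with a toy sketch. The voters, preferences, and sequential-voting scheme below are my own illustrative assumptions (a textbook Condorcet-cycle setup, not data from any study): with cyclic preferences, whoever sets the order of pairwise comparisons effectively chooses the winner.

```python
from itertools import permutations

# Three voters with cyclic ("Condorcet cycle") preferences over options A, B, C.
# Each list is ordered best-to-worst. Illustrative data only.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def pairwise_winner(x, y):
    """Majority vote between two options; each voter backs whichever ranks higher."""
    votes_x = sum(1 for prefs in voters if prefs.index(x) < prefs.index(y))
    return x if votes_x * 2 > len(voters) else y

def run_agenda(agenda):
    """Sequential pairwise voting: the winner of each round meets the next option."""
    champion = agenda[0]
    for challenger in agenda[1:]:
        champion = pairwise_winner(champion, challenger)
    return champion

# Same voters, same preferences; only the order of comparisons changes.
for agenda in permutations("ABC"):
    print(agenda, "->", run_agenda(list(agenda)))
```

Every option wins under some agenda, so the “inescapable logic” of any one sequence of comparisons is really the agenda-setter’s choice.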
What good reason could there possibly be, one that outweighs the risk of compounding an already fraught communication with ambiguity, to exclude the context an alternate hypothesis provides from the professional opinion a scientist offers?
Ethics ran into the problem of pious fraud centuries ago.
What to do to safeguard oneself against committing an error of bias so profound that one doesn’t even notice when one is actively fabricating false evidence?
What to do to safeguard against a trusted and distinguished member of the community who has fallen into this trap?
One builds seemingly pointless or illogical-sounding steps into procedures, that act as self-checks.
Once again I must disagree. While I understand where you are coming from I would argue that if the original data AND the transformations placed on the data are presented then the reader has enough information to make an informed decision regardless of any hidden motives by the authors. Frankly, the choice of the transformations and their order would reveal most biases in any case. The requirement for an alternate hypothesis needlessly complicates the process. When I was a young researcher we described it as research to develop the “process knowledge” necessary to establish hypotheses. Admittedly, much of the research was flawed as we had insufficient understanding of the processes being examined to design the right experiments, but they were flawed in an understood manner because we documented what we were doing.
With respect to the field of climate change, I will readily acknowledge that in those cases where only the transformed data are presented, the risk of agenda management is high, which is why it is so important that the raw data and the order of transformations be documented. In those journals where this documentation is not required, I will readily concede that you are correct and an understanding of the “motives” of the authors is a requirement.
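A toy sketch of why the *order* of transformations must be documented (the two steps and the numbers below are purely illustrative assumptions of mine): the same two ordinary cleaning operations, applied to the same raw data in different orders, yield different results, so a reader given only “the data and the methods” without their order cannot reproduce the work.

```python
# Two ordinary data-cleaning steps, applied to the same raw series in both orders.
# Functions and numbers are illustrative only.

def demean(xs):
    """Subtract the series mean from every point."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def clip_negative(xs):
    """Replace negative readings with zero (e.g., treating them as sensor noise)."""
    return [max(x, 0.0) for x in xs]

raw = [1.0, -3.0, 2.0]

print(clip_negative(demean(raw)))  # demean first, then clip
print(demean(clip_negative(raw)))  # clip first, then demean
```

The first order gives [1.0, 0.0, 2.0]; the second gives [0.0, -1.0, 1.0]. Same raw data, same documented methods, different published series.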
On a different front, I will take the time to look into the Game Theory concepts you suggested. My education in game theory is woefully out of date (I last read a meaningful paper on the subject in the early 1990s). My personal opinion is that if the subject of agenda management is supposed to make democracy mathematically impossible, then someone had better do more research on agenda management. It is a safe rule that when a theory conflicts with reality, it is the theory that has to adapt, not reality; and while our western democracy is not perfect, it does function and is therefore not “impossible”.
Fair enough, where a different illogical-sounding, pointless-seeming series of steps is applied with self-checking power equivalent to that of the alternate hypothesis; though I think mere declaration of original data and methods is not enough to qualify.
I wouldn’t trust most advanced professionals like lawyers, doctors or advertisers if all they could do was regurgitate data and cite a method.
There is an ineffable quality in the synthesis of something new in one’s craft, be it a photograph or a sculpture or an hypothesis, that reveals the mind behind the crafting hand.
Sounds unscientific, I know.
But what is more unscientific than ethics, after all?
Let me include some relevant words of my hero Feynman:
These brilliant minds
Were bound in poli-finance chains.
May they have freedom.
I just scanned the first ~20 pages.
I am as disappointed as Kim (above).
Climategate exposed deceit and data manipulation in science.
NAS condoned the practice, issuing reports critical of those who reported or discussed the misuse of government science.
The problem that NAS faces today – loss of trust in government science – will not be solved by issuing slick brochures on responsible conduct.
Oliver K. Manuel
As an economist, I’ve lived in this way. As a government policy adviser in Queensland (Australia), I discovered after having had my promotion blocked several times that I was considered a threat by the most senior officers because of my “honesty, integrity, intellect and analytical rigour.” So I empathise with Steve McIntyre et al. The Feynman approach is not always career enhancing, but is the only credible and creditable one.
Great quote. If only this was culture of climate science. It is terribly discouraging to see researchers hide failed verification statistics, truncate data so they don’t have to explain the Divergence Problem (AR4 still presents truncated data!) and all of the gymnastics that go on trying to pretend the surface temp record is reliable. If I was a climate scientist or dendrochronologist, I would be speaking out on these issues publicly.
The chapter on “RESPONDING TO SUSPECTED VIOLATIONS OF PROFESSIONAL STANDARDS” looks interesting, but appears to coddle fear when confronting violations of scientific standards. I think scientists should be fearless when condemning practices like truncating data or splicing data to “hide the decline.”
For a scientist, as for anyone, a good reputation is vital.
Feynman was concerned about being correct, not about being published. An example from the referenced book http://books.nap.edu/openbook.php?record_id=12192&page=10 has a basic question. My answer to the question posed is that the data is included. My standard for excluding a single data point is if you have a specific identifiable cause of the data point being corrupted which you have documented. This is done for each suspect data point. In this example, nobody knows which data points are corrupted, so either use them all with very specific information about the observed problem or throw them all out. I think Feynman’s position would support this approach.
There is a theory in sociology holding that every sub-society within a society is a reflection of that society. This means that any profession will have its share of crooks, thieves, drug addicts, exemplary citizens, etc. Whether it be the profession of lawyer, scientist, nurse, police, military, etc.
You can teach what being a good scientist is and means. But most likely people will just give the answers required to pass and forget them as soon as possible.
One thing I’ve learned in my course on the history of science is that debates in science are rarely civil. In fact they are usually settled after the death of those who were most invested in them. The story of the Piltdown Man and Wegener’s theory of continental drift comes to mind.
However, one of my science heroes is the physicist Robert Millikan, who was certain Einstein’s particle theory of light was incorrect. Millikan spent years attempting to disprove Einstein.
Millikan failed and he failed and he failed to disprove Einstein, but he was always honest about it. Fifty years later he wrote that his work “scarcely permits of any other interpretation than that which Einstein had originally suggested, namely that of the semi-corpuscular or photon theory of light itself.”
Note that Millikan did not go into a huddle with his buddies and scheme to delete inconvenient data or refuse to share his own data or prevent other people from being peer reviewed because it was inconvenient for his scientific convictions.
We need more Feynmans and Millikans today.
I agree. I had the good fortune to know a few other great scientists: Carl Rouse, Raymond Davis, Paul Kazuo Kuroda, Hannes Alfven, Stig Friberg, Glenn Seaborg, Barry Ninham
I particularly admired the late Nobel Laureate Raymond Davis for sticking to the principles of science despite enormous pressure to find the “right number of solar neutrinos.”
Dr. Davis continued to report that there were just too few solar neutrinos for H-fusion to be the main source of solar luminosity:
The Solar Neutrino Puzzle
Near the end of Professor Davis’ life, a large team of researchers (~178 coauthors) reported that solar electron neutrinos oscillate away before they reach our detectors.
I expect Professor Davis’ results will still be in textbooks long after oscillating neutrinos are removed as a passing mistake.
With kind regards,
Oliver K. Manuel
There are Feynmans and Millikans around these days. Maybe not of the same stature, but most scientists are good scientists. They can’t all be like the realclimate gang of thugs.
Judith Curry and Roger Pielke Sr. are great examples of good scientists. Most are like them, except that in the last 20 years of climate science it is the bad breed of scientist that has grabbed the spotlight. The bad thing is that those scientists are pretty young, so they will be around for a while. So the debate is not about to get more civil any time soon.
In the theory of constraints one is encouraged to ponder what stands between where one is and where one wishes to be. To identify the constraints against your progress toward your goal. Further it encourages devising ways to remove that constraint.
So here is the goal: Life, liberty, the pursuit of happiness, a chicken in every pot, and a car in every garage.
* The constraint: Integrity
* The solution: Lose it
How hard was that?
Indeed it does. In a meritocracy, which is what the scientific establishment is best described as, it’s more often the man (or woman) with the sharpest elbows who makes it to the top.
Thanks, Professor Curry.
From my experiences, science is a journey of continuous “truthing”: Seeking a better understanding each day and knowing that you will never have the whole truth!
That is also the only sane way that I have found for living. What a joyful way to live: Live a life of continuous discovery! Why not?
The requirements are a.) Rigorous honesty, b.) Admission of powerlessness, and c.) Time spent each day in quiet meditation or reflection.
The fruits of continuous “truthing” are: a.) Humility, b.) Merging of Science and Spirituality, c.) Joy of Living, d.) Acceptance of Death, e.) Open-mindedness, f.) Reduction of ego, g.) Concern for others, and h.) Reverence for the power that controls the universe.
It is late. I appreciate the opportunity to share what I have learned On Being A Scientist.
With kind regards,
Oliver K. Manuel
I read the first paragraph and concluded this was missed by pretty much everyone in WG1
Politics and science make a toxic combination. This doesn’t appear to directly address that issue (“Competing Interests, Commitments, and Values” should cover this, but they talk more about financial conflict of interest), but if you allow people to wear scientist hats and activist hats at the same time, you’re going to get serious problems. They need to address this head-on.
I also found the section on “Competing Interests”, p43, of some interest in view of the dollars associated with AGW. I believe our society holds those individuals with MBAs and their employers to higher standards than those with PhDs and their institutions. Perhaps it’s time to require financial accountability statements from scientists and universities who choose to lobby and advocate.
Perhaps it is a blind spot for folks in the academic world. Internally, that world is highly political. Focusing on financial conflict of interest misses a major factor in human interaction within groups. All people are not uniformly motivated by a desire to amass wealth. There is a status/security factor that often overrides, to some variable degree, the desire for wealth. Some folks are most strongly motivated by a desire for status. Their goal is to be at the top of whatever social structure they identify with. Others are motivated by a desire for security. They will choose a more secure path than one with potentially higher financial rewards.
The combination of status seekers and security seekers in a group can sometimes be toxic. Status seekers attack others whom they perceive to threaten their current status or potential rise in status. Security seekers will avoid antagonizing the status seekers. In corporate environments, this combination is often exemplified by an attitude that “successful completion” of a project means someone else gets blamed for the failure.
I see in the climate research community a strong presence of the status/security dynamic. Those who feel they are on top of the field appear to attack those who might threaten their authority. Those who simply want to do their science keep their heads down and avoid conflict with the leaders.
This is a pervasive dynamic in human interaction. Limiting discussion of conflict of interest to that of financial gain may be a little short sighted.
A booklet of standard practices is a good thing. Gives the Team a checklist to tick off as they break them.
The fact that such a booklet is necessary only points to the greater deficiency, and that is the near-absolute lack of training that scientists receive in the philosophy of science. At best, most learn the same cook-book recitation of ‘the scientific method’ that we teach eighth graders.
Precious few are introduced to the philosophical underpinnings. Epistemology? What? Hume, Wittgenstein, Popper, Gödel. Who? Occasionally you get a whiff of Kuhn, but it is very much slanted to practice, and what generally comes from that is the notion that ‘whatever it is we scientists are doing is science, and whatever we scientists claim is fact (and I’ve got the puck so let’s go write some software that will make a stick and get this game on)’.
Leads people to counting NAS member signatures, looking for ‘multiple lines of evidence’, and demanding an alternate hypothesis as a precondition to invalidating the one they assert as fact without test. And calling it all ‘science’. Knowledge is precious and hard to come by. If you don’t understand why that is, you don’t realize it when you’re just making shit up.
As the expression goes, if you can’t give understanding, give instruction.
Or, “Recti cultus pectora roborant” (roughly, “the cultivation of what is right strengthens the heart”), in the hopes formulaic instruction and hard work instill a better philosophy than lip-service can.
Unfortunately, they left out the instruction on what to do when you find the other instructions difficult to follow, or an inconvenient obstacle to your goal:
Consult the philosophy of science, so you will understand that you are giving up science, and understand what it is you are giving up when you do that.
That instruction was left out of this booklet, just as it is left out of the rest of the education the vast majority of scientists receive.
This gives us ‘scientists’ doing Team Play jousting with ‘scientists’ doing PNS, and everybody claiming ‘science’ all the while.
Philosophy of science? When you’re planning research, it’s pretty much strategy and tactics integrated into an overall methodology (sorry, academics, if I bring an industrial view to it). Better to have a broad stable of tactics (and come up with new ones), so you have a broader range of implementable research strategies.
Girma | February 9, 2011 11:03 pm
The Native American “dog soldiers” of the plains had an unusual form of integrity. They would spear a sash to the ground; the other end they attached to themselves. There they would fight. To run would be dishonorable.
I have not yet read the links to this article. I shall, but it is late night. On brief review, however, it appears to speak to the individual. That is necessary, but will not be sufficient unless Girma’s conditions are met.
The entire chain of command, from the individual on up, should share the honesty, incorruptibility and integrity of the scientist at the cutting edge. (In a spear, the haft is as important as the blade; the blade has no power without the haft.)
A project has three major factors: Quality (meeting requirements), Cost and Schedule. Short-change one and the others deteriorate.
I see it includes a brief discussion on the ethics of using tricks to hide declines.
Requirement #1 : scrupulous honesty.
Anyone found compromising this should be thrown out of the profession. That would include, at a minimum, the Climategate Crooks.
The booklet appears to be a good first introduction to proper scientific practices for students, but somehow I expected more.
As an example the chapter “The Researcher in Society” fits on one page, and one page of this booklet is short (excluding the case study on Agent Orange, which adds little). Being so short it cannot even really touch the issues discussed here in several chains in great length.
Similarly, the presentation stops at every point before reaching issues that go beyond the almost self-evident. On the other hand, I know that reaching a deeper level is really demanding and would make the booklet so heavy that its present purpose would be defeated. This guide has its purpose, which is not to discuss the more difficult issues.
I actually expected a much longer document. But this is really helpful for educating graduate students.
I would think all students, faculty, nay humans period should be educated about honesty.
In some ways I expected more too. However, the brevity reinforces the message that scientific conduct is based on commonsense and common ethics:
I don’t see how the behavior of the top scientists in Climategate measures up to these standards at all. Likewise, the overwhelming response of most of the scientific community to ignore, dismiss, or excuse this behavior.
It speaks to me of a corruption that goes deep in today’s science.
I agree that the brevity gives correct emphasis to the fact that general principles are essential. It is essential that:
– Scientists are honest and open about all their findings, both those supporting their conclusions and, even more importantly, those that raise doubts.
– They do not adjust the data to agree better with the hypothesis in any fashion. This does not mean that outliers could never be left out, but it means that this must be done only through controlled procedures that are described, and that even then the changes are listed. Similar requirements apply to all other methods used in handling data, when there are uncertainties concerning factors affecting the empirical results.
– Everybody must be fair and just in all actions affecting other scientists (and also non-scientists).
Little more than these three principles is really needed, when their implications are understood and they are adhered to rigorously.
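The “controlled procedure” for outliers in the second principle can be sketched minimally. The function, names, and readings below are my own illustrative assumptions, not anything from the NAS guide: a point is dropped only when a specific documented cause is recorded, and every exclusion is listed alongside the cleaned series.

```python
def clean_series(readings, documented_faults):
    """Drop only readings whose index has a recorded fault; log every exclusion."""
    kept, exclusion_log = [], []
    for i, value in enumerate(readings):
        if i in documented_faults:
            exclusion_log.append((i, value, documented_faults[i]))
        else:
            kept.append(value)
    return kept, exclusion_log

# Illustrative data: one reading coincides with a logged instrument fault.
readings = [9.8, 10.1, 55.0, 9.9, 10.0]
documented_faults = {2: "power fault logged during this sample"}

kept, log = clean_series(readings, documented_faults)
print(kept)  # the outlier is removed only because its cause is documented
print(log)   # the exclusion and its stated cause are published with the result
```

The point of the log is that the cleaned series never travels without the list of changes, so a reader can reverse or audit every exclusion.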
It is more about principles than about detailed rules. In pure science the peer control lessens the need for detailed rules. In applied science the specific rules are more important. Here “applied” means everything that affects directly the world outside science even when the same work is simultaneously basic science.
It can’t be too specific; one thing researchers like to do is be in control of how they do research. I don’t fault them for this – it’s nominally reasonable and almost necessary. Some things, however, are pretty much universal, like getting statisticians to develop your statistical approach…
“Details that could throw doubt on your interpretation must be given, if you know them.” Richard Feynman
Let me follow Feynman’s advice:
1) Presentation with “value-added”:
2) Another presentation of the same data:
3) Still another presentation of the same data:
“In 2002, the editors of the Journal of Cell Biology began to test the images in all accepted manuscripts to see if they had been altered in ways that violated the journal’s guidelines. About a quarter of the papers had images that showed evidence of inappropriate manipulation. The editors requested the original data for these papers, compared the original data with the submitted images, and required that figures be remade to accord with the guidelines. In about 1 percent of the papers, the editors found evidence for what they termed “fraudulent manipulation” that affected conclusions drawn in the paper, resulting in the papers’ rejection.”
This is clearly unacceptable. But would you join the campaign to withdraw all drugs and medical treatments until all these graphs are fixed? (Yes, I'm sure some would.)
Only those drugs influenced by the corrupt practice.
In climate science the answer was to ignore the corrupt practices, ignore those who suspected the corrupt practices, and to blame anyone who pointed out the corrupt practices.
I would, and I'm sure the FDA would join in very loud requests not only for any drugs whose testing results have been manipulated to be withdrawn, but for full audits/criminal investigations/legal proceedings to be started immediately.
Would you be happy with the same for climate science?
First, however, we need to actually investigate the evidence, and clean off the cheap layers of whitewash.
Complying with FOIA requests and ending stonewalls in Virginia comes to mind. Actually reading the e-mails and investigating comes to mind, as well. Getting Steig out of the peer review loop until his recent actions are well understood would be a good idea as well.
Personally, I would extend this to all new drugs that do not exhibit much improvement compared to older drugs in use (the generic one, for example) or to the “no drug” baseline.
An example is the flu vaccines from the last swine flu panic: given that I have very rarely been really sick from the flu, I was hesitant. The fact that some controversy surrounded the proposed vaccine, with severe secondary effects reported and then disconfirmed, made me hesitate more.
Then I got the statistics on mortality attributed to actual swine flu. At that point, force would have been needed to make me accept the vaccine, even if it was offered for free… even if most studies pointed to the efficacy of the new vaccine for preventing swine flu…
The situation regarding cAGW is quite similar… A lot of noise, scare tactics to promote costly (for the public) policies, and then, digging into the theory, you find that the deltaT/deltaCO2 sensitivity is quite uncertain, that it is not certain that CO2 is the main driver, that the proposed measures will not change T much even with sensitivities in the upper range of predictions, and finally that the ill effects of increased temperature are not well defined and certainly very inhomogeneous, with winners and losers. Equivalent to an expensive, uncertain vaccine that targets a virus whose mortality at this point is uncertain but seems, all other risks considered, very low. No thanks.
Ah! But these were just the findings from 2002. And where is the evidence that such practices have been rooted out?
Until we look further back, we have to assume that the standards for reviewing were equally incompetent and therefore unsafe. Billions are spent on drugs that have probably never been tested properly; money that could have been better spent on important research like climate science.
Why aren’t there hundreds of active blogs investigating medicine?
“This is clearly unacceptable. But would you join the campaign to withdraw all drugs and medical treatments until all these graphs are fixed (yes I’m sure some would).”
Sentence one – declare something unacceptable.
Sentence two – make exaggerated straw man personal attack against those who would not accept it.
It does look really good. However, you have not provided the relevant information.
For this post, the NAS corruption and context is obviously important information. Otherwise, you could leave the false impression that the world's climatologists — as opposed to the American National Academy of Sciences scientists — are what you think is the important audience. I'll assume this was not your intention.
The context is that NAS has spent the past couple of years cleaning itself up. These guidelines are targeted at NAS scientists and the corruption in NAS– not the world’s trained climatologists.
NAS is mostly about providing reports for the U.S. government for the purpose of policy making. That is why it was created, and that is still its main purpose. It is public knowledge that NAS has identified that its scientists have a pro-petroleum-sector bias and a preponderance of conflicts of interest related to direct financial ties (read: not necessarily direct funding, so please don't get into that issue with me again; it's about ties, associations, lobbying, influence and PR) to companies or industry groups with a direct interest in the outcome of the study. Almost half of the panels examined had scientists with readily identifiable biases, not offset by peer review.
This problem with NAS science is public knowledge, and it is acknowledged by NAS. Anyone can read the publicly available CSPI review of NAS. This situation is what NAS is mostly responding to with these published guidelines.
CSPI, not NAS, is what motivated the production of these guidelines. CSPI is an independent public interest advocacy group that advocates sound science.
Almost ¼ of NAS scientists were found to have direct conflicts of interest. This is not denied by NAS or anyone else. Main ties have been to the petroleum and traditional energy industry.
NAS does not employ the world's climatologists. On the contrary, in recent times, NAS non-climatologist scientists have been dupes for the Bush and other governments that have used their heavily industry-biased 'science' as 'evidence' that AGW is not happening. This previous NAS reporting and 'advice' to government and the public has unfortunately been clearly shown to be out of sync with objective science, and significantly corrupted.
In 2010, post NAS cleanup of itself, its position on climate change (after what it describes as its objective, unbiased review of the science) is that ’97–98% of the climate researchers most actively publishing in the field support the tenets of ACC (Anthropogenic Climate Change) outlined by the Intergovernmental Panel on Climate Change’, that AGW is real, occurring, and poses significant risks.
The newly accountable NAS says that, at best, 3% of climate scientists are not in agreement with the current science.
It’s not possible to praise NAS for promoting quality science practices with their badly needed new accountability, and also dismiss their most current statement on the science and IPCC reporting of the science and recommendations. Not with any consistency.
Martha, I take particular issue with your statement “It’s not possible to praise NAS for promoting quality science practices with their badly needed new accountability, and also dismiss their most current statement on the science and IPCC reporting of the science and recommendations. Not with any consistency.”
I am not praising NAS. Rather, I am saying that I like the document on responsible conduct, which was prepared by the NAS COSEPUP committee. I like rather less their statements about the IPCC and climate science. I am interested in arguments and statements and documents; I am not particularly interested in who makes the statements. However, when institutions consistently say things that I think are ill advised or incorrect, I will comment about that institution. In this post, I am not making a comment about the NAS, but rather about this document.
Almost ¼ of NAS scientists were found to have direct conflicts of interest. This is not denied by NAS or anyone else. Main ties have been to the petroleum and traditional energy industry.
I suppose the other 75% use no energy at all? I don’t care if someone has a conflict of interest or not, all I care about is the quality of the work and its interpretation. BTW, ego is a big problem (for industry and academia) as well, but you NEVER see that as a disqualifying factor.
The biggest screw up I ever personally saw was driven by an internationally recognized technical expert’s ego. He just wouldn’t admit he didn’t know.
Contains ad hominem rubbish statements. Kindly provide references to documented proof of such statements and percentages. It would also help if you could define "climatologists" in your view. (Sigh.)
“…when I use a word it means exactly what I intend it to mean nothing more or nothing less”. –Humpty Dumpty to Alice, Lewis Carroll in “Through the Looking Glass”
The lack of ringing comparisons about how well the ‘team’ complies with these basics is telling.
I just received a memo from NOAA; they are preparing their integrity guidelines. They provided these links:
Remarks by Jane Lubchenco, NOAA administrator
President Obama’s memorandum
John Holdren’s memorandum
Consensus science does not listen to or engage anyone who disagrees.
That does not fit with your description in “On Being a Scientist”
Dr. Curry, you are trying to not be like that and that is commendable.
But, you have not responded in any way to any of my posts.
The climate models and climate theory have not properly accounted for the albedo effect of Arctic Ocean effect snow. The cold and snow last winter and this winter were not properly predicted.
This has not been touched on in your threads, at least not in any that I have read.
I would like for you to comment on this Report
Alex, I did a whole post on the winter weather.
This topic may be worth another post, in terms of warmer weather = (?) more midlatitude snow.
“The scientific enterprise is built on a foundation of trust”
Right out of the chute, the propaganda starts.
The scientific enterprise is built on evidence.
Science is built on evidence. The scientific enterprise is built on trust.
OK. Give me an example of a specific scientific enterprise that is not based on evidence.
Actually we see that the scientific enterprise is built on wringing large grants from funding bodies, usually government, and by playing politics.
Excellent observation Andrew! Another comment I saw somewhere, “Nature is not the final authority, It is the only authority.”
In modern science, the model is the final authority. Nature is typically too messy, and natural observations are either explained away or adjusted to match the model.
I think the statement is correct in the sense that this is an early hurdle to pass, but not the only one. In the end, there would be no peer review (or data / analysis presentation) needed if trust were the central tenet of scientific progress. Just write your conclusions and publish.
I would argue that bringing trust into the equation is ultimately detrimental to scientific progress. Trust is always liable to be misplaced; therefore it can't be foundational to any scientific inquiry.
Trust may be useful at some point, but it’s for individuals to decide when and how much, using their own judgement.
Agree about the research role of trust (none). It comes into play in the broader arena, when trying to convince people (who aren’t your peers or scientifically oriented) of something. By definition, this part is a political effort.
Two points, yet again:
First, the expectation that the National Academies (which represent several fields of study, not just physical science) are going to take time out to highlight possible research malpractices they did not find in their own hearings is pretty unrealistic.
More than that, having taken a research ethics course required with an NIH fellowship, I'll let you know that research misconduct is a HUGE problem in the biological sciences. Every session the lecturer would repeat the basic mantra 'don't cheat', sometimes in more uncertain terms than others. There are pretty significant stakes in this field (large private grants and valuable patents), and so many government/university PIs are not engaged in public research, but have their own for-profit start-up companies looking to make a buck.
Also, biological sciences/technology research has one of the fastest-growing research budgets of any field, as personal healthcare continues to be at the forefront of scientific application.
So for an across-the-board research-basics perspective piece, I'd expect that climate science, where the VAST majority of scientists are working within good research standards, compares rather favorably.
If there is a quibble I would take with climate science research, it is the continuing blurring of lines between objective scientist and political advocate. It would seem that if the National Academies (a Congressionally mandated scientific advising organization) were to take up such a call, however, they themselves would have a hard time walking such a fine line.
Second, it seems ironic to me that some contributors are not only willing to accept the imperfection of science, but play up such imperfections as reasons why we can't make specific statements with respect to a human-induced increase of the atmospheric greenhouse effect, while still expecting perfection from a national scientific organization.
How does that make sense?
The process of doing science is imperfect, even in its most rigorous form. How can we expect the process of assessing an imperfect process to be anything but imperfect? Imperfections will include all of the same imperfections that organizations like the National Academies endure, ranging from ego-centered self-preservation to just honest mistakes.
That should be the expectation. And when mistakes are identified, they should be sent to such organizations under the assumption that these mistakes are quite honest and can be taken care of suitably. Otherwise, the whole assessment process comes to a stop… as we've seen with respect to climate science the past couple of years.
So for an across-the-board research basics perspective piece, I’d expect that climate science, where the VAST majority of scientists are working within good research standards.
One of my favorite topics: "How bad is good?" If it's bad methodology, but you like the results, is it good? If the conclusions are overgeneralized / underdocumented in their limitations, is that still good? If the data / process / methodology is not widely available for detailed scrutiny, is that still good?
Good is good, everything else isn’t. The researchers themselves should be the first to point out any shortcomings no matter how apparently small they are – they are basic limitations on interpreting the work. What appears to be inconsequential may turn out to be crucial.
‘The researchers themselves should be the first to point out any shortcomings no matter how apparently small they are – they are basic limitations on interpreting the work.’
Again, I find it ironic that the fundamental trust we have in the outcome of the scientific method is being challenged in such a fundamentally circumstantial way without any evidence for doubt on the main thesis of my comment.
Can you provide evidence that pointing out basic limitations doesn't/hasn't happened for the VAST majority of climate scientists?
I mean, Climategate was pretty interesting, but that was a sampling of at most 12-15 researchers in a field of over 10,000. Not only that, those 12-15 were also some of the most outspoken, politically motivated researchers.
As far as sampling goes, I'd say there is a sufficient argument that Climategate highlighted a group of researchers who are systematically different from the VAST majority of climate/earth science/atmospheric science researchers. The fact that Pielke Sr. can find MORE papers from the climate science community that he agrees define the uncertainties, drawbacks and faults of current research methods might also be a good indication of this fact. Or maybe even publications like the journal Climatic Change taking an entire issue to highlight the very glaring problem of 'divergence' in paleoclimate.
Most research in every field is good research. Some of it is bad and unfortunately some bad apples can ruin a basket. Even without good reason it seems.
I find it ironic that the fundamental trust we have in the outcome of the scientific method is being challenged in such a fundamentally circumstantial way
Who trusts? Science isn’t about trust. Being an analyst isn’t about trust. Decision making isn’t about trust either.
In case you hadn’t noticed (and apparently you haven’t), a comprehensive discussion or disclosure of limitations is almost never published in any paper in any experimental science field. The first thing you have to do if you’re doing a technical analysis of the state of the science is go through the papers in detail, and add in the unwritten limitations which can be identified paper by paper.
Every scientist trusts that the results published in a paper or presented at a conference are not fraudulent. If we didn't, why would we read papers or go to conferences and spend time with a bunch of people we already knew were liars?
This fact doesn't mean scientists aren't skeptical of results published in papers or presented at conferences.
Moreover, your comment,
‘In case you hadn’t noticed…, a comprehensive discussion or disclosure of limitations is almost never published in any paper in any experimental science field.’
shows me you’re likely basing your opinions either on one or two papers you’ve read or just regurgitating something you’ve heard someone else say. Just about every paper I’ve read recently points out facets of the research story line that the data cannot directly speak to. If those statements in the original draft, they are usually insisted in inclusion by reviewers.
Instead of attacking this process from afar, why don’t you get into the fray and give it a shot?
Maxwell, you said-
‘In case you hadn’t noticed…, a comprehensive discussion or disclosure of limitations is almost never published in any paper in any experimental science field.’
shows me you’re likely basing your opinions either on one or two papers you’ve read or just regurgitating something you’ve heard someone else say. Just about every paper I’ve read recently points out facets of the research story line that the data cannot directly speak to.
Likely, perhaps, but not in actuality. I spent some time on the technical committee for an international conference in one of my areas of expertise, and was a reviewer for several years. As for how many refereed climate articles I’ve read, that’s probably only 100 or so. It isn’t important to the publication for ALL the limitations to be in the papers, but it’s very important in understanding how to use the results.
As for pointing out facets of the research line, that isn’t the same as considering all the limitations. Rather than writing a tome on where limitations can creep in in general and with climate research in particular, I’ll just point to the epidemiology field, which has a pretty good handle on most of them (and still has lots of real errors and limitations of unknown origin).
As clarification, I’m not even saying the CRU work is fraudulent (or any other). Ignoring other commitments, the biggest problem I have with getting more involved in the area, as opposed to doing periodic checks of the quality of the work (and here I no doubt have a very different definition of quality from most) is wrestling with what question is desired to answered by all this work. Outside of developing science knowledge, I see no point in pursuing an answer to an ill formed (or, in this case undecided) public policy question. Additionally, I see no indication that the question(s) relevant to public policy will be answerable in any particular time frame, if at all.
‘Outside of developing science knowledge, I see no point in pursuing an answer to an ill formed (or, in this case undecided) public policy question.’
That’s fine. In fact, I think I agree with that statement. I also think based on that fact you shouldn’t do such research.
Others, however, have made the decision for themselves that such research is worthwhile, either monetarily or morally. I don't think it's up to the National Academies to highlight this fact as problematic to being a good scientist, as long as it's not adversely affecting the quality of the research itself. I see no way that we can make such a conclusion at this point myself, which is why I don't see the principled problem you might see.
I also think you’re stuck on this idea that papers should provide the avenue for assessment of research methods. In some cases this is more true than others as there are several journals I frequently read on techniques in spectroscopy and laser physics, reviewing assets and drawbracks to different techniques.
Most of the conversations on flaws in research, as you should know as a conference organizer, happen at conferences and meetings where fellow researchers can scrutinize the findings and methods of the community at large or specific researchers. Very little of that information becomes public knowledge because conferences are expensive to attend and publications related to conferences usually only provide abstracts on presented research.
So yes, it can be hard for an outsider to gain insight into the limitations on specific research in any field. Most of the discussions on limitations are ad hoc, informal discussions between scientists working as collaborators or even professional meetings not easily accessible to the public.
But even from that point of view, I don’t think it’s worthwhile to speculate that because such limitations are not presented in published form (journal articles) SOMETIMES, they are not discussed at all or that researchers are unaware of them. That’s pretty spurious reasoning and the antithesis of the scientific process.
Believe it or don’t, I think we are close to agreement (and have been). Yes, the most heated discussions of limitations and interpretations tend to occur in the halls at conferences. While on the technical comittee, I viewed this as not being an organizer per se, but was called on to do certain aspects of organizing. I think that’s symptomatic of where the bulk of our disagreement derives – I’m using words slightly differently than how you take them.
As for my point about limitations, I mentioned in a post that it isn't necessary or practical, from an experimental science practice point of view, to put the details in a published paper. On the other hand, when it comes to making important decisions, the limitations come into play. This is where I think the transparency and detail of known and potential limitations comes in. I'm unusual in this, even for industry – hence my nickname "Missouri". If I were using the results in making a decision, I would require all details, including raw data and metadata. I preferred to have the code and intermediate results as well. It isn't that I thought the researchers were committing fraud, incompetent, lax, etc. I frequently deal with possibility spaces, and it's central to understand which possibilities have been conclusively ruled out, which ones are almost ruled out (some of these will be due to limitations of the work), and which ones are still untouched.
The conference hallway discussions tend to center around certain types of issues / limitations (interesting ones, especially ones that could refute a paper). Not whether the analysis is more indicative of EDA vs. by design, for instance. This isn't a broad critique of climate science or science in general, just an example of a boring but potentially significant point for decision making. So we're discussing two different parts of the role of scientists. For my part, my research teams have made all information broadly available since around 1990 (raw data all the way through the research report; computers make this easy now). Prior to that date it was archived in hard copy.
Damn, this is going to hurt sales of my upcoming book, “All I Really Need to Know About Scientific Integrity I Learned in Kindergarten.”
Much reading, neatly bureaucratized. If you are serious about ethics and science, you should read Sam Harris's "The Moral Landscape: How Science Can Determine Human Values."
Quotes from my hero, Nikola Tesla (10/07/1856 – 07/01/1943)
On climate models:
“Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.”
Address to the IPCC at Cancun:
“The scientists of today think deeply instead of clearly. One must be sane to think clearly, but one can think deeply and be quite insane.”
The most dangerous thing to a scientist is a cherished theory or belief. This is the source of the temptation to not report negative results, hide declines, shade upticks in warming with strong red colors, and remove "outliers" without justification. If one is simply fascinated with the phenomena and with the data itself, there is much less temptation to go down these slippery slopes.
‘The most dangerous to a scientist is a cherished theory or belief. This is the source of the temptation to not report negative results, hide declines, shade upticks in warming with strong red colors, and remove “outliers” without justification.’
Yes, and yet we’ve figured out a way to get around this problem for a solid few hundred years now.
I understand the push of a few contributors to make every single scientist a proxy for 'the team', but it's really off base here. I mean, Craig, couldn't we just take your quote and turn it back onto your work? You've been just as enveloped in pushing divergence (correctly, as far as I can tell) as some have been with the 'hockey stick'. When the shades of grey start to show up in these situations, how can we tell in the moment who has fallen prey to the demon of your comment and who is truly 'right'?
I did not say it is inevitable to find corruption where one has a big idea, only temptation. This temptation needs to be tempered by Feynman’s advice to try to prove your own idea wrong. Ultimately, the decision about big ideas (theories) is fought out between those with competing ideas. The problem today is that politics and saving the world have made it very uncomfortable for those who challenge certain ideas.
Have you read someone doing that?
Here it is:
that’s a good one. One person engaging in possibly fraudulent behavior must mean that all persons of that group are fraudulent.
I mean, that’s not a fallacy or anything…
How about one rule “Thou shalt not lie.”
It’s worse than I thought! In the past I had looked for scientific standards ( http://socratesparadox.com/?p=178) and when I could not find much, I thought that I just hadn’t looked hard enough. Consider the following paragraph from “On Being as Scientist.”
“Science is largely a self-regulating community. Though government regulates some aspects of research, the research community is the source of most of the standards and practices to which researchers are expected to adhere.”
Would anyone of us be happy with SEC, FDA or any other government regulatory agency taking this approach? Well, some of us might, but most of us wouldn’t.
Sorry, science is about discovering new knowledge, and it is very easy to be wrong. When legal authorities have tried to get involved in scientific misconduct, sloppiness gets viewed by the FBI as forgery (check out the David Baltimore postdoc case; can't remember her name). It should not go that far.
B. Kindseth: Science itself is not a regulating agency. It is, as the quote says, "largely self-regulating", just as our society is largely self-regulating. Both work effectively and efficiently because there is a strong foundation of trust between the individuals involved.
Without trust, every time someone said or wrote something on which we needed to rely, we would have to drop what we were doing and go verify their claim, and almost nothing would get done.
Of course we can’t totally rely on trust, therefore we do have regulatory agencies and we also check other people’s work, but that said, modern society, including science, could not exist without trust.
When I worked as an engineer, one of the managers had a saying: "In God we trust, show me the data." President Reagan stated, "Trust, but verify." The Climategate affair is a display of abuse of trust. If the "scientists" had been willing to share their data and methods, it wouldn't have happened.
Check out the link below to the Netherlands Universities Code of Conduct. http://www.vsnu.nl/web/show/id=120790/langid=42
But if we need to verify something (a non-trust-relying activity), science is the process that gets us there. We as a society are constantly checking on each other, to verify stuff. Without this perpetual mistrust, society doesn't work either. Note our overcrowded prisons full of the untrustworthy. And date/timestamps. And bank statements. SS numbers. Certificates. Photos. Fingerprints. Usernames and passwords… etc. We thrive on mistrust at least as much as trust.
Bad Andrew: To be sure, as I indicated earlier, verification is a necessary part of science and society.
My point, though, is that we rely on trust far more than verification, and to the extent that we can, this allows us to be much more efficient and effective.
Trusting and being trusted are more than moral ideas; overall they make life much easier at every level. This is why it’s important for scientists and citizens to be trustworthy. This is why it’s bad when scientists and citizens abuse trust.
I agree that earning and maintaining a trust in each other is very good.
And that betraying and exploiting a trust is very bad.
Fortunately, the scientific process is about evidence.
The scientific process is about the cumulative evidence. The scientists have in general trust that the others do not cheat, but they do not trust that the others are right. Being skeptical of every result that has not been verified thoroughly by several independent studies (unless the result is easy to prove) is a basic feature of the scientific process. This general skepticism gives any specific type of proof less significance. Therefore the principles are most important; detailed rules help by making the process more efficient, but they are not as essential.
A corollary of my previous comment is that much of the knowledge used from climate science is not yet at the level of well-established scientific knowledge. Even for results which have not been contested, the evidence is often too thin. This is not a problem for science itself, as it is really an unavoidable part of science. It is, however, a big problem for the use of the results of climate science in decision making. Climate science has produced very much preliminary knowledge waiting for better confirmation. This preliminary knowledge is not worthless and is often rather strong, but using it is problematic, as its reliability is difficult to estimate. This is a situation of PNS (post-normal science). Science proceeds in parallel, not undisturbed, but proceeds anyway.
For the NAS, the same PNS-type problem appears in many fields of research. All research related to health issues is very severely influenced by these problems. When the results of research are used in decision making, the need for formal rules becomes important. Then conflicts of interest must be taken seriously and steps taken to reduce their improper influence. In climate issues the direct conflicts of interest are less common, but political ideologies have to some extent taken their role. There are, however, no comparable possibilities for declaring the political views of scientists.
“The scientific process is about the cumulative evidence.”
“cumulative evidence” is still evidence.
“The scientists have in general trust that the others do not cheat”
Which works great until someone starts cheating.
Without trust, every time someone said or wrote something on which we needed to rely, we would have to drop what we were doing and go verify their claim, and almost nothing would get done.
Trust is crucial in everyday life and business, as you point out. On important points, however, there is "due diligence" – verification that what you have been told is indeed the case. Not to follow due diligence can be grounds for a suit, and it's certainly grounds for losing your job. But there is no point or obligation in verifying things which don't matter much, except perhaps if you're trying to help establish the trustworthiness of someone's other information which you will be relying on in important matters.
I know a scientist who gets his research funding from the public sector. He is a climate skeptic: a principled scientist, straightforward and honest, and vocal about his skepticism. But he admits that in order to get his annual funding, he always finds a way to link his research to climate change; otherwise he’d be out of a job. This is reality, folks. I wonder how many scientists out there do the same thing?
I do not find anything untoward about that. Why shouldn’t a sceptic receive public money to do research linked to climate change? If he is vocal about his scepticism, then he isn’t hiding anything or pretending otherwise.
Maybe he can come up with some useful information and do us all a favour.
For comparison: here you can download the Code of Conduct for Scientific Practice of the Association of Netherlands Universities:
Rules of the game are such that instead of evidence- and falsification-based science, business as usual and anti-science are encouraged. There are multiple positive feedbacks at play, increasing corruption. And as we know, it is difficult (for most humans, impossible) to get a man to understand something when his salary depends upon his not understanding it.
Rules of scientific conduct should promote and encourage falsification as the sharpest tool of scientific method. Scientist’s salary should depend upon it.
Scientist, prove thy hypothesis wrong!
Can this be used as a checklist to see if the Team follows it?
On a completely different note:
Oh, you mean _this_ problem with ethics?
I have a sense that this thread exhibits too much consensus on the basic principles of scientific conduct. There is a need to stir the pot.
Here is an alternative view. It’s based on my vague recollections of science history, and so it would be unfair of anyone to demand that I show actual proof.
First, I’m a huge admirer of Feynman, but regarding his admonition to scientists to be as objective as possible, to find and disclose every possible reason why one’s conclusions might be wrong, and to observe all the other niceties he prescribes, all I can say is that he probably made those statements right after he smoked his first joint. (Incidentally, some people claim that marijuana has no permanent effect on cognitive performance. However, Feynman won a Nobel Prize, tried pot, and never won another Nobel after that. The facts speak for themselves).
The truth is (and I’m only being half facetious) that if scientists through history had adhered rigorously to all those rules, we would still be in the Dark Ages. In the service of their own egos and self-promotion, the great heroes of scientific advance have bent and occasionally broken those rules: Newton for mechanics, Mendel for heredity, Darwin for evolution, Einstein for relativity, and Arrhenius for global warming. (Well, I’m not sure about Darwin, but there’s historical evidence to document the transgressions of the rest).
In fact, the great innovators have often pushed their ideas and dismissed valid reservations about them in striking contrast to the objectivity that is claimed to be the proper standard. Without that persistence, their contributions would probably have been long delayed.
All right – the point I’m trying to make is that science often operates as a tension between the desire to prove oneself right and the demands of conscience to proceed no further than the evidence allows. Each of us as scientists does have a responsibility to be objective and candid, but the historical record also reminds us that pushing the boundaries can be the hallmark of a talent capable of great scientific advance, and we sometimes have to accept each individual for the entirety of his or her character. Few individuals who constantly doubt the merits of their contributions are likely to take those contributions to the limit of their potential.
Science is inherently self-correcting. Feynman is right that each scientist should consider the possibility of error and self-deception, but ultimately error correction remains primarily in the hands of the scientific establishment as a whole – to some extent through peer review of papers and grant proposals, but far more importantly from success or failure by others in replicating the results of an individual.
I can speak personally of these concepts from the perspective of biomedical science, which, as others have already suggested, has been far more contaminated by scientific misconduct than climate science (at least at the level reported in blogs and the media). Each of these fields staggers toward a true understanding of nature in fits and starts, but each appears to have yielded valuable insights that help us deal more effectively with the world we live in, a conclusion I draw from considerable familiarity with the details of each of these sciences.
I would sum it up as follows: Adhering to the rules has been essential to the process. Bending the rules has been an equally essential ingredient.
I think you’ll find that great discoveries come from thinking outside the box, and not from ‘bending the rules’.
Actually, bending the rules and a lack of rigour militates against new discoveries, as it allows our beliefs and preconceptions to hide things from us. Very often, rigorous testing gives us new insights, when it reveals to us how things really work as opposed to how we thought they worked.
Well, I thought outside the box, and it didn’t seem to work with you.
History, I believe, validates the last two sentences in my earlier comment.
Can you provide some specific examples then?
See the “Rediscovery” paragraph in Mendel.
Also, Isaac Newton, although never to my knowledge shown to have engaged in dishonest activity, epitomized the scientist who is more interested in proving himself right than finding the truth. Being a genius, he turned out to be right about most things, although not about alchemy and mysticism.
You could just reference Bernoulli for some exquisite examples.
Bending the rules is not science, it is fraud!
Fraud has never been an essential ingredient in anything.
Here is another of Feynman’s morsels
I agree strongly with you.
Great results have very often come from being stubborn in the face of evidence. The scientist has reached an intuitive belief that a new approach may be valuable, but faces strong “proofs” against it, both from his own reasoning and from others. For further progress it may be important that the idea is published, but that would not be possible if the scientist did not downplay the apparent problems.
My experience from observing others during the 10 years I was active in theoretical physics was that the great new ideas were commonly ones that many people had thought about, but most were too rational not to dismiss them. They believed the evidence and followed the standard rules. Those who were able to break the rules succeeded.
I must add that in perhaps 99.9% of cases, breaking the rules leads to nothing useful. Thus the point is breaking the rules in the right place, and that is where the great scientists differ from the good ones.
I’m a bit puzzled about the expression of ‘bending the rules’.
Did Newton, Darwin, Einstein at the time of writing use data they had manipulated themselves to show they were right?
Or did they use the evidence, which was there for all to see at that time, to come to their conclusions?
In answering this question, I would caution against interpreting what they did based on our much-advanced point of view at the beginning of the 21st century.
Vic – I don’t know whether Newton et al occasionally omitted data from their reports when it failed to confirm their pet theories. They may or may not have – in the case of Mendel that I linked to above, it seems likely that he may have fudged some of the evidence in order to prove he was right (which he was).
More important, though, is the substantial evidence that these and other great figures in science arrived at a set of conclusions, often partly based on intuition, that they then clung to in the face of contrary evidence – confident that their conclusions were correct and that the contradictions would eventually be resolved. Their response to challenge was not “Well, maybe I was wrong after all, and I should acknowledge that possibility”, but rather “I don’t care about the conflicting evidence – I know I’m right and I’m going to prove it”. Rather than attempt to prove themselves wrong (as Feynman urged), they dared others to try. It was this confidence and perseverance in the face of uncertainty that helped them to prevail. In critical areas, their confidence turned out to be justified. In some other areas, it didn’t, although it took evidence amassed by the scientific establishment as a whole rather than their own self-monitoring to prove them wrong.
In the context of current climate science, the confidence expressed by some high profile figures mirrors this same phenomenon. My point is that it is not a fault, but rather a character trait with disadvantages and advantages, and for the talented scientist, the advantages are likely to predominate. To criticize individuals for this element of their character is misguided, in my view. In some cases, it may also be ideologically or politically motivated, but I won’t judge that for specific critics. If corrections are to occur, it will be for science as a whole to provide them, rather than depending on each individual scientist to self correct.
Somewhere I have a copy of “The Art of Scientific Investigation” which gives many historical examples of how developments were not the result of a classic scientific investigation. The author points out that sometimes the value in the investigators hunch or theory is not that it is correct, but that it gives a determination to explore a particular area, without which progress wouldn’t be made. Even if the hunch is wrong, important discoveries can be made by following a new line.
My interpretation of Feynman’s statements is somewhat different from what I’ve read posted here, so I’ll give a tangential illustration of what I believe is Feynman’s point. If I had an important industrial problem to resolve and I could run one and only one experiment to elucidate the causal structure(s) involved, I would follow a simple but tedious procedure. First I would design what I thought was the best experiment. Then I would generate a matrix of all possible outcomes of the experiment. This could be several pages. Then I would go through each possible outcome and try to give it a physical interpretation. This was challenging, since some outcomes which seemed nonsensical at first glance could actually be interpreted by postulating a physical mechanism which I hadn’t thought of when I designed the experiment. Sometimes I found the “best” experiment I designed wasn’t adequate to give interpretable results, but the exercise would give me a better understanding of possibilities I had originally missed, so I could redesign the experiment to take them into account; some of these mechanisms originally not thought of turned out to be important. This exercise was valuable for the specific instances in which I used it, but it had much more value to me in that I became much better at designing experiments without following this procedure. I naturally follow Feynman’s advice when interpreting results, and I think his point is that the scientist will learn more from the exercise: things she hadn’t thought of will come to mind and color the interpretation of the results, leading to better thinking and especially better interpretation skills.
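The outcome-matrix procedure described above is easy to mechanize. Here is a minimal Python sketch; the response names and outcome levels are invented for illustration, not taken from any real experiment:

```python
from itertools import product

# Hypothetical responses measured in a single experiment, each with its
# possible qualitative outcomes (names and levels invented for illustration).
responses = {
    "yield":       ["up", "down", "unchanged"],
    "temperature": ["up", "down", "unchanged"],
    "purity":      ["up", "down"],
}

def outcome_matrix(responses):
    """Enumerate every possible combination of outcomes, one dict per row."""
    names = list(responses)
    return [dict(zip(names, combo))
            for combo in product(*(responses[n] for n in names))]

matrix = outcome_matrix(responses)
print(len(matrix))  # 3 * 3 * 2 = 18 rows, each needing a physical interpretation
```

Going through the rows one by one and asking what physical mechanism could produce each is the tedious but instructive step the comment describes; rows that resist interpretation are a signal to redesign the experiment.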
Dear Dr Curry,
I do not know if this is the appropriate thread to place my comment, but I would like to ask you a question. ( or two)
Given that your stated objective is to build bridges and effect a reconciliation between the ‘opposing sides’ of the debate, how far do you think you have achieved this objective?
It seems to my mind that there is a greater, not lesser, divide between the ‘factions’. But I don’t see this debate as ‘warring factions’, although that is how it is being perceived. I see it as science: science in all its gory glory.
To be a real scientist, one must first be sceptical of oneself and one’s theories and methods. Otherwise one will not get results borne out by verification. Each and every one of us must question ourselves and root out our own entrenched preconceptions, be relentless in determining how we arrived at those preconceptions, and be prepared to ditch them when they do not stand up to reality. How else are we to arrive at an approximate explanation of things?
In science, there are only sceptics: all scientists must be sceptical, and in prime place, of themselves. There is no place in science for the denial of science, for that is a denial of the method by which we understand our world, and is an absurdity.
This ‘divide’ is a false illusion. And reconciliation is not possible under the current conditions. It will only be possible if all players move to a new level.
I wonder how many ‘denizens’ of this blog are really willing to be open-minded sceptics?
I can only contribute as a philosopher by profession. Ethical standards, Occam’s Razor and conceptual analysis are some tools to play with.
I would like to begin with a proposition: “Humankind is altering the Earth’s atmosphere by the addition of chemical particles and radiative gases, through the practice of burning fossil fuels, and through the degradation of important ecosystems, such as forests.”
Given that our understanding of science is one of cause and effect, what will be the effects of this alteration to our atmosphere?
I appreciate that the interface between science and policy is a difficult and lonely place, but must you really be so partisan? Objectivity is a basic requirement in scientific endeavour and must be practised by all scientists, and that is a statement that cannot be refuted.
But then I
Sarah, on the building bridges issue, I would say that both “sides” seem to be fracturing somewhat, with extreme positions on either side becoming increasingly isolated. Amongst scientists who ignore the political debate, the divide is a false illusion, I agree. Amongst the scientists who participate in the political debate, the divide is as strong as ever (see the recent alarmist vs deniers thread). Interesting that you find me partisan; which party or side do you see me on? I am mainly trying to provoke the people on both extremes to stop being so partisan and acknowledge that uncertainty is a big part of their disagreement.
Dear Dr Curry,
Which ‘side’ do I think you are on? I would prefer to give you the benefit of the doubt and trust in your impartiality. But at the same time you must be aware that you are seen at times as a ‘champion’ for the ‘denialists’, and that this raises questions about your impartiality.
I fully support your intention to create open debate and even more so, your objective to realise a reconciliation. But what, rather than who, are you attempting to reconcile? Certainly not opposing scientific hypotheses, there are none.
So are we talking about reconciliation on policy disputes? On that issue there are only two policy options; we either take measures to stabilize atmospheric concentrations of GHGs, or we allow their atmospheric concentrations to increase. There is no third option here. That may sound simplistic, but it is, nevertheless, basically correct.
If we acknowledge uncertainty, isn’t that the same as saying that we don’t know enough to act? That we don’t have sufficient information to make informed policy decisions? But act we must. Either we stabilize our emissions on the premise that increasing emissions is likely to lead to disruption of our climate system, or we allow emissions to increase on the premise that there is insufficient reason not to do so.
Policy seems to be moving in the direction of allowing emissions to increase unabated. And however uncertain the premise on which that is based, that is the decision we are taking. Is this an informed decision? I am doubtful. It seems to assume basically two things: that our actions have no consequences and that climate sensitivity is low.
Given the nature of the uncertainties, are we not taking a great gamble on the consequences of our policy decisions if we refuse to take the science into account?
Is acknowledging uncertainty sufficient reason to ignore what we do know?
I think “skeptic”, “denialist” or any of the other words being used are political ones. In my view, the issues you refer to arise not from the science, which is a developing field and will sort itself out in the course of time. As for Dr. Curry’s position, my sense is that she supports good general standards of academic research. In my view, the standards for the political questions should conform to my industrial decision making standards if they are to be used for important decisions, and the field doesn’t seem up to that standard yet.
I think your line of questioning is the result of how the problem is being defined politically- Problem – CO2 is rising due to emissions; Solution – decrease emissions. On the other hand, if the problem is the planet may become too hot, then there are many more solutions than the one. At least in industry, defining a problem in such a way that it also defines the solution is considered erroneous, the problem is redefined in a more general way so there is an array of solutions. I don’t see why this should also not be true for the political arena.
Sarah, first off, there ARE opposing hypotheses, the main one being that most of the climate variability over the 20th century is caused by natural variability. Both sides agree (with the exception of a fringe few who deny the existence of the greenhouse effect) that adding more CO2 will increase surface temperature; the issue is how much, and how much relative to natural climate variability? The IPCC AR4 says “most” (>50%) of the warming in the latter half of the 20th century is explained by AGW; skeptics say less than 50% (hardly an irrational position given the large uncertainties). So there are a lot of scientific issues to resolve; dismissing this uncertainty and this disagreement is not useful, and has caused much unnecessary conflict and distraction from the real policy issues at hand.
The science (which is uncertain) is one element that goes into decision making. Values (what constitutes “dangerous”? who are winners and who are losers, etc), economics and predictions of technological innovation are also key pieces of information that go into the decision making. And then there is politics.
I have never said that we don’t have sufficient information to make policy decisions. I have been discussing robust decision making strategies in the context of decision making under uncertainty.
The IPCC AR4 says “most” (>50%) of the warming in latter half of the 20th century is explained by AGW; skeptic say less than 50% (hardly an irrational position given the large uncertainties).
Actually, that is an irrational position. If the uncertainties are so large, then how can one confidently claim that AGW accounts for only <50% of recent warming?
the issue is the confidence level used by the IPCC: very likely, which implies >90% certainty in their conclusion. The skeptics’ main point is that there is insufficient justification for this high confidence level, and that there are plausible explanations whereby the amount is less than 50%.
OK, I don’t accept that the IPCC has overstated its position, but for the sake of argument, the point I’m trying to make is this: if the uncertainty is as great as you say, then it may well be “plausible” that the amount is less than 50%, but skeptics seem to assume much more certainty than that, and allow much less possibility of the IPCC being correct, which under the same scenario must be equally plausible.
There are indications that the IPCC has actually understated its position. But would it be ‘alarmist’ of me to say so?
Many people would no doubt consider it so. Personally I think the IPCC’s statement is indeed extremely conservative.
For them to be correct, the amount of warming due to CO2 only has to be 0.4C. Even allowing for the fact that there is warming still in the pipeline, this would surely indicate a climate sensitivity below the lower end of the IPCC’s stated range of 2-4.5C.
Seems to me they are over-compensating for uncertainty, if anything.
Dear Dr Curry,
You say “ I have never said that we don’t have sufficient information to make policy decisions. I have been discussing robust decision making strategies in the context of decision making under uncertainty.”
This is the second time you have said this to me, and I am still no wiser as to what you actually mean by it. Please excuse my exasperated tone, but could you be plainer and tell me whether you consider there to be sufficient evidence to pursue emission control with the objective of stabilising atmospheric concentrations of human-produced GHGs, or not.
If so, why and if not why not.
And if not then please could you tell me if there is sufficient evidence to justify allowing GHGs concentrations to continue to rise unabated.
There is a third option, which is to say that we simply do not know enough, that the whole question is simply too uncertain, has too many social, economic and political ramifications which need to be untangled, and that we need more information. Which seems to be your point. But this hardly forms a basis for ‘robust decision making strategies’, as you claim, if while we are busy untangling these perceived uncertainties we are at the same time actively engaged in decisions that allow GHGs to continue to rise. That’s not a basis for robust strategy; that’s called ‘pussyfooting around’.
Your opinion matters. Do we take action to stabilize emissions based on the information we have? Or do we delay taking this action on the basis of lack of sufficient information and continue to act, as a result, to allow emissions to rise?
On the thread Decision Making Under Uncertainty Part I, I discussed the problem with this either/or type of decision model. In my testimony, I introduced my ideas on robust decision making. For further information on these ideas, see the post on Jeroen van der Sluijs http://judithcurry.com/2011/01/31/lisbon-workshop-on-reconciliation-part-iii/
So your question poses a false dilemma. Provided that we identify sufficient co-benefits to reducing CO2 emissions (energy security, economics, pollution and human health), then it makes sense to limit emissions. Based on what we know about climate change and the uncertainties, but most importantly on our limited ability to decarbonize in a substantial way anytime soon, decarbonization is not going to happen quickly in the short term. The challenge to getting the ball rolling is to identify no-regrets options (e.g. energy conservation, introduction of green energy where feasible, and other policies that have benefits in terms of economics, security, and health). So that is how I view the problem and the solution. My personal interest and activities in this general area of “solutions” are in developing robust adaptation strategies and reducing vulnerabilities in the developing world through better use of advanced forecast information on extreme events combined with emergency management measures.
I have Part II started for decision making under uncertainty, but I am too busy to finish it right now (I got waylaid last November by Heretic, when I originally thought I had time to work on this).
Your comment is in many places in very strong contradiction with my thinking.
You write about only two options and say that there is no third option. My view is rather that neither of those two options exists and that we have only the third. The world is not yes or no; it is “how much”.
This denial of quantitative thinking is a general and serious problem. The quote “The perfect is the enemy of the good” has been attributed to Voltaire. This is very much true in the real world, and very much valid for climate policies. Trying to solve one problem completely is damaging to the more general well-being of humankind.
The uncertainties are not a reason for not acting, but the uncertainties of the consequences of our acts are a valid reason for not attempting more than we can handle.
The ‘third option’ is often erroneously referred to as ‘doing nothing’. The science is, purportedly, too uncertain to take steps to stabilize emissions, as there are opposing theories as to why the climate is changing, differences of opinion as to how atmospheric concentrations of GHGs will affect the climate, and various viewpoints on whether changes will be good or bad, beneficial or dangerous.
Also to be considered are the ideas that suggest that humans are sufficiently intelligent to come up with an array of ‘solutions’ to the problem of global warming, if indeed it proves to be a problem. Who knows what technological fixes may become available to us should the worst-case scenarios be borne out over time?
And at the heart of the matter is our dependence on fossil fuels, and how the idea of stabilizing our emissions from burning fossil fuels will impact just about everything we do and believe in: what I think Dr Curry referred to as our ‘values’.
These uncertainties make us cautious- as you say.. “ but the uncertainties of the consequences of our acts are a valid reason for not attempting more than we can handle.”
What you seem to be saying here is that until we know what we are doing, for certain, we should not attempt to address the question of whether to stabilize emissions or not. We must wait until we are more certain as to the consequences of our actions and then decide. We must have meaningful dialogue to resolve the above concerns before we take any decision to stabilize emissions. That is: ‘do nothing’ about emission control until we have explored all the ramifications.
Meanwhile, of course, emissions continue to rise. So we are acting; we are making a decision. What interests me is: is this an informed decision or not? Are we acting in this way because there is no reason to do otherwise? Or maybe because rising emissions are so ‘institutionalized’ that we are simply unable to prevent their rise? Or is it because this is considered to be the best course of action based on the best available information?
How high is the level of uncertainty behind this decision?
There is no third option. ‘Doing nothing’ about emissions is a euphemism for allowing emissions to rise.
On the question of uncertainty, I would suggest that if it is as high as is suggested, then there is as much cause for concern regarding the decision to allow emissions to rise as there is in taking measures to stabilize them. Just as you want good reason for one decision, I want equally good reason for the other.
I want to know what we are betting the farm on.
The third option is no regret energy policies, motivated by concerns about energy security, pollution and health, and economic considerations. Many (but not all) energy policies that would address these issues would also reduce carbon emissions.
Agreed. I guess it all depends on how the argument is pitched. The end result is the same: reduced carbon emissions. Doesn’t really matter how we achieve that, as long as we do.
I think you have done much to facilitate this from your dark and lonely place at the interface, and I will continue to give you the benefit of the doubt as to your impartiality. But I will not compromise on the science in exchange for political expediency, and I trust that neither will you.
A great place to start with no regret energy policies would be to stop any energy system that requires direct operating subsidies.
Another great place to start is stop going with systems that are inherently unreliable and clutter up huge land areas.
Another would be to halt developing any system so unreliable that it requires 50% – 100% backup on a regular basis.
I consider doing nothing a decision on equal footing with other choices, not as automatically preferred when uncertainties are large. No regret choices, when they can be identified, are preferred to doing nothing.
It is, however, important that the consequences of the proposed actions are analyzed sufficiently before they are implemented, whatever “sufficiently” means in each specific case. This means that more emphasis should be given to the analysis of alternatives, including those which appear at the moment to be only the second choice. When more is known of the consequences, it is possible to act faster and with fewer errors when action is found necessary, or when elections lead to a new leadership with a greater willingness to act. Stopping research on options that one administration doesn’t want to implement may lead to bad choices by the next government.
Here is the IPCC statement:
For the period from 2010 to 20120, if the global warming rate is greater than about 0.15 deg C per decade, I will accept “Man-made Global warming. For the same period, if the global warming rate is less 0.15 deg C per decade, I will reject it.
Why don’t we define a criterion that we can use to verify “man-made global warming”?
So far, there is no evidence for man made global warming because the warming rate from 1970 to 2000 is nearly identical to that from 1910 to 1940.
And there is little warming since 2000.
“For the period from 2010 to 20120, if the global warming rate is greater than about 0.15 deg C per decade, I will accept “Man-made Global warming.”
Of course, that would result in an average global surface temp of about 300C. :)
Not 20120, I mean 2020.
For the period from 2010 to 2020, if the global warming rate is greater than about 0.15 deg C per decade, I will accept “Man-made Global warming.
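The proposed criterion reduces to a simple computation: fit a linear trend to a decade of annual global-mean anomalies and compare the slope to 0.15 deg C per decade. A minimal Python sketch, using invented placeholder anomalies rather than real observations:

```python
# Hypothetical annual global-mean temperature anomalies (deg C) for 2010-2019;
# the numbers are invented placeholders, not real observations.
years = list(range(2010, 2020))
anoms = [0.00, 0.02, 0.03, 0.06, 0.08, 0.09, 0.12, 0.13, 0.15, 0.17]

def decadal_trend(years, anoms):
    """Ordinary least-squares slope, converted to deg C per decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anoms))
             / sum((y - my) ** 2 for y in years))
    return slope * 10.0  # per-year slope -> per-decade

rate = decadal_trend(years, anoms)   # ~0.19 deg C/decade for these invented numbers
accepts_criterion = rate > 0.15
```

With these made-up numbers the criterion would be met; the point is only that the proposed test is objective and trivially checkable once a dataset and a trend estimator are agreed upon.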
Dear Dr Curry,
This probably is more to do with your ‘uncertainty monster’ and how scientists handle this rather than conduct and trust. But, in so far as misplaced certainty tends to produce misplaced expectations and, therefore, a misreading of product, by scientists, as well as lay people, I think it has some relevance. Anyway, it’s something that has preoccupied me over the last couple of days (I always like to see the best in people and would rather attribute folly where others might see malice!):
I tend to think of uncertainty and, indeed, certainty, from a philosophical, psychological and genealogical point of view. An analogy: mathematicians, and, by extension, physicists, love perfect circles and tend to see them everywhere but, of course, there are no such things – just as there are no ‘atoms’, no ‘particles’, no ‘things’ – these being necessary and useful shorthand for what is, in a sense, overdetermined: convenient fictions, a semiotics which, in itself, has nothing to do with reality – they are merely idealised concepts to which nature is indifferent. Similarly, assuming – as I do – that the greenhouse effect is a well-grounded theory, it is thus, to speak in metaphor and not in metaphor, in ‘the laboratory’ – but in the real world, where we encounter the uncountable variables of ‘stochasticism’, the ‘certainty’ of this physics can mean very little. Indeed, with ‘chaos theory’ even causation can break down. It is, however, natural that scientists, and, much more so, lay people, cannot resist the tendency to transfer the certainty of theory, of the laboratory, onto this intolerable disorder – they fight the ‘uncertainty monster’ by attempting to hoop perfect circles around its neck. But it is just that: an illegitimate transfer, a projection or, at best, a useful illusion.
So when a scientist or a lay person says, clumsily enough, ‘the science is in’, they are merely confusing ideation with reality, projecting on to the world the comforting certainties they find at their desks and in their laboratories, because they can’t help wishing it were true. The alleged ‘patronising arrogance’ of certain of these actors is, however, based on their ultimate knowledge that it is not true. Stronger spirits, yours for instance, perhaps, can sustain these uncertainties!
I quite enjoyed Roger Pielke Jr’s post yesterday on Jonathan Haidt’s talk on “Ideological Diversity in Academia”, and I believe many here may be interested in it, including Judith. The video of Haidt’s talk is long (20+ minutes) but well worth watching. While he refers mainly to the field of social psychology, I believe he touches on themes that are extremely relevant to many fields of study, including climate science. In the words of the New York Times’ John Tierney, who covered the speech:
“Dr. Haidt argued that social psychologists are a ‘tribal-moral community’ united by ‘sacred values’ that hinder research and damage their credibility.”
Well, Dr Haidt is in error.
Social psychologists are mere recorders of data.
And data, dear friends, is what it is all about.
But observational data aside.
Let’s examine the thornier scientific parameter of values.
And what that word means.
If you think that social psychologists are simply recording devices that do not filter their data through ideological systems, I have a bridge to sell you.
The Scientific Method: Standards of Proof
Scientific “standards of proof” are unique to each scientific discipline.
1. Newtonian physics: In Newtonian physics — the discipline with which I am most familiar — the required measurement precision is (compared to other disciplines) extreme. In a college physics lab course, when testing Newtonian hypotheses — Force = (mass)(acceleration); or Conservation of Momentum — errors greater than 1% are unacceptable. Another example: In the Millikan oil-drop experiment — designed to measure the electron’s charge — 1,000+ measurements are averaged to dramatically increase the precision of the result.
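The averaging step described above — many repeated measurements tightening the estimate — can be sketched in a few lines (synthetic numbers, not Millikan’s actual data; the “true value” and noise level are assumptions for illustration):

```python
import random
import statistics

# Simulate repeated noisy measurements of one quantity and show that
# averaging improves precision: the standard error of the mean shrinks
# as 1/sqrt(N), which is why 1,000+ readings beat a handful.
random.seed(42)

TRUE_VALUE = 1.602  # arbitrary stand-in for the quantity being measured
NOISE_SD = 0.05     # assumed per-measurement scatter

def estimate(n_measurements: int) -> float:
    """Average n noisy readings of the same quantity."""
    readings = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n_measurements)]
    return statistics.mean(readings)

print(f"10 readings:   {estimate(10):.4f}")
print(f"1000 readings: {estimate(1000):.4f}")  # typically much closer to 1.602
```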
2. Climate science: What are the “standards of proof”? A portion of climate science appears to be (is claimed to be) “simple physics”. We can measure, to great precision:
– the atmospheric concentration of CO2 and other GHGs;
– the reflective & absorptive characteristics, as a function of wavelength, for the GHGs;
– the specific heat and mass of the earth’s intermediate-term heat-storage media — the oceans (primarily) and the atmosphere;
– the quantity of heat absorbed by phase changes (ice melt) and by chemical/biological processes.
This “simple physics” model becomes a bit more complex when we factor in the “positive feedbacks” — changes in reflectivity as snow and ice melt; as vegetation shifts; etc. Even here, we are assured that the positive & negative feedbacks are “well understood” and are incorporated into the climate mathematical/computer models.
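The feedback arithmetic gestured at above can be made concrete with the standard zero-dimensional feedback algebra (a textbook sketch, not any particular climate model’s implementation — the parameter values below are the commonly quoted ballpark figures, used here only for illustration):

```python
# Back-of-envelope sketch of how a net feedback fraction f amplifies the
# no-feedback response: dT = lambda0 * dF / (1 - f), where lambda0 is the
# no-feedback sensitivity (~0.3 C per W/m^2) and dF a radiative forcing.
def equilibrium_warming(forcing_wm2: float, lambda0: float = 0.3,
                        feedback: float = 0.0) -> float:
    """Equilibrium temperature change (C) for a given forcing (W/m^2)."""
    if feedback >= 1.0:
        raise ValueError("net feedback fraction must be < 1 for a stable response")
    return lambda0 * forcing_wm2 / (1.0 - feedback)

dF = 3.7  # commonly quoted forcing for doubled CO2, W/m^2
print(equilibrium_warming(dF))                # no-feedback response, ~1.1 C
print(equilibrium_warming(dF, feedback=0.5))  # same forcing, net positive feedback
```

The point of the sketch is only that the “simple physics” part sets the no-feedback number, while the contested part is the feedback fraction f.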
3. This all sounds convincing, and the average-air-temperature measurements for the decades 1980–1989 and 1990–1999 do validate the predicted temperature increase: at least 0.12°C per decade, and no more than 0.32°C per decade (80% confidence level). However, for the decade 2000–2010, the measured temperature increase is approximately 0.02°C per decade. The AGW “signal” has been swamped by the “noise” of climate phenomena not understood, or not incorporated into the computer models.
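The signal-versus-noise problem in that last decade can be illustrated with a toy trend fit (synthetic data, not real temperatures; the trend and variability values are assumptions chosen only to show the effect):

```python
import math
import random

# Fit a linear trend to a short noisy series and compute the slope's
# standard error: over a single decade, plausible interannual noise
# easily produces a standard error larger than the trend itself.
random.seed(0)

TRUE_TREND = 0.02 / 10  # assumed 0.02 C/decade, expressed in C per year
NOISE_SD = 0.1          # assumed year-to-year variability in C

years = list(range(10))
temps = [TRUE_TREND * t + random.gauss(0, NOISE_SD) for t in years]

n = len(years)
xbar = sum(years) / n
ybar = sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, temps)) / sxx
resid = [y - (ybar + slope * (x - xbar)) for x, y in zip(years, temps)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

print(f"fitted trend: {slope * 10:+.3f} C/decade")
print(f"std. error:   {se * 10:.3f} C/decade")  # typically exceeds the trend
```

This is the quantitative sense in which a small signal is “swamped”: the estimate is not wrong, it is simply indistinguishable from zero over so short a record.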
4. Again, I’d like to return to this question: What is the appropriate climate-science “standard of proof”? Must we wait for 2 or 3 decades, before the AGW “signal” clearly dominates the “noise”? Many citizens are unwilling to act — to disrupt their comfortable lives — until a “standard of proof” is achieved.
The above post is by
August 28th, 2010
I think some very good statisticians will disagree with what I’m about to say. I don’t agree that any projections to the future can ever prove anything here. This type of modeling effort can only validate its models over the data space. Nobody ever has data from the future. That it isn’t obviously incorrect in 10 years is very different from saying it’s validated.
As Tomas pointed out in his discussion of chaos, it’s a theoretically and computationally intractable problem.
For climate scientists, this question is extraordinarily difficult to resolve. Firstly because the socio/economic/psychologic/politic arena is not their area of expertise.
And secondly, because your ‘standard of proof’ requirement is, frankly, nigh on impossible to achieve, given the level of uncertainty involved in the study of chaotic systems such as our climate.
If you wish to prevaricate for 2 to 3 more decades before acting, then I am afraid it may be too late to act, according to accepted mainstream understanding.
So what we do with that information is another area entirely. It is entirely up to us.
The extent to which scientific findings are accepted and absorbed by citizens depends on basically two factors: firstly, the level of study, and thus the understanding of physics; and secondly, the degree of openness to critical thinking, which is basic to, and a requisite of, the scientific method – the so-called Sceptic.
A third influence may be termed ‘propaganda’, for want of a better term. This implies that free thought processes have become subverted by external influences and are thus subject to bias or partiality. This is otherwise known as ‘conditioning’ and is a well-researched and documented phenomenon. It is also very common.
Those citizens who are unwilling to act are exposing themselves, and others, to a future scenario in which the climate becomes unpredictable. This is what Dr Curry means by ‘uncertainty’.
Thanks, Professor Curry, for bringing the climate debate closer to the root of the problem: The National Academy of Sciences (NAS).
It is my understanding that Congress turned its responsibility for budget review of federal research agencies over to NAS.
NAS has used that position to reform government science into the government propaganda machine that former President Dwight D. Eisenhower warned about in his farewell address to the nation on 17 January 1961:
That is probably the natural direction that any government program will take as the result of requests for successively larger appropriations of funds.
With kind regards,
Oliver K. Manuel
Might I humbly suggest a reading of Max Weber?
This post is addressed to Larry G Girma above. I am going to ignore Harold.
Wow, I need to find a way to pay more attention here. Your focus on nurturing appreciation for uncertainty is near and dear to me. My long term focus and expertise is in computer/data aspects of what we don’t know… how to represent the unknown, calculate with unknown data present, visualize uncertainties and more… in practical and pragmatic ways. In my “real world” I’m working toward a comprehensive initiative to ensure we have a complete set of tools and methods for handling uncertainty well.
A few brief thoughts having skimmed some of the comments here:
I suspect Bart was just trying to be assertive in claiming an alternative hypothesis is always needed. If we all could learn to accept “We don’t know” as a valid null hypothesis, that little problem is solved.
Seems to me perhaps one of the challenges to a healthy perspective on uncertainty in science is this: we have an unstated underlying assumption, that a knowable solution/formula/method exists. If instead our hypotheses, experiments and investigations offered appropriate weight to “unknowable” perhaps we would do better.
I’ve discovered part of this is cultural. Western cultural logic is generally boolean: True/False (sometimes ternary).
Another very real logic model (if disconcerting to us Westerners!) comes from India. It is a seven-state model that also incorporates an additional bit of humility/uncertainty across the board. The seven states are:
“in some ways it is”
“in some ways it is not”
“in some ways it is and it is not”
“in some ways it is and it is indescribable”
“in some ways it is not and it is indescribable”
“in some ways it is, it is not and it is indescribable”
“in some ways it is indescribable”
…i.e. all possible combinations of “is”, “is not” and “indescribable.” Personally (and as a computer/data architect!), I hope we don’t need to get that complicated most of the time!
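For the programmatically inclined, the seven states really are just the non-empty combinations of the three basic predications, which a few lines can enumerate (a playful sketch of the scheme described above, often identified with the Jain saptabhangi; the phrasing follows the list given here):

```python
from itertools import combinations

# Every non-empty subset of the three predications "is", "is not",
# and "indescribable" yields a state: 2**3 - 1 = 7 in total.
PREDICATES = ("it is", "it is not", "it is indescribable")

states = [
    " and ".join(combo)
    for r in range(1, len(PREDICATES) + 1)
    for combo in combinations(PREDICATES, r)
]

for s in states:
    print(f"in some ways {s}")

assert len(states) == 7
```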
Now please excuse me while my head explodes.
Mr Pete, check out my Italian flag post, for three-valued logic: evidence for, evidence against, and uncommitted belief (a bin for uncertainty, ignorance, indeterminacy)
Actually, check out my entire uncertainty series; see this tag.
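A minimal sketch of that Italian flag representation, for the data-architecture minded: belief in a claim is split into evidence for (green), evidence against (red), and an uncommitted remainder (white) that collects uncertainty, ignorance and indeterminacy. The class and field names below are my own, purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class ItalianFlag:
    """Three-valued belief: green (for), red (against), white (uncommitted)."""
    evidence_for: float      # green share, in [0, 1]
    evidence_against: float  # red share, in [0, 1]

    def __post_init__(self):
        if not (0.0 <= self.evidence_for
                and 0.0 <= self.evidence_against
                and self.evidence_for + self.evidence_against <= 1.0):
            raise ValueError("evidence shares must be non-negative and sum to <= 1")

    @property
    def uncommitted(self) -> float:
        """White: whatever belief is committed to neither side."""
        return 1.0 - self.evidence_for - self.evidence_against

claim = ItalianFlag(evidence_for=0.5, evidence_against=0.2)
print(f"{claim.uncommitted:.2f}")  # the remaining 0.30 stays uncommitted
```

The design point is that the white bin is explicit rather than being forced into either the for or against column, which is exactly what a two-valued scheme cannot express.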